Updates from: 04/04/2023 01:11:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/billing.md
Usage charges for Azure AD B2C are billed to an Azure subscription. You need to
A subscription linked to an Azure AD B2C tenant can be used for the billing of Azure AD B2C usage or other Azure resources, including additional Azure AD B2C resources. It can't be used to add other Azure license-based services or Office 365 licenses within the Azure AD B2C tenant. + ### Prerequisites * [Azure subscription](https://azure.microsoft.com/free/)
active-directory-b2c Manage Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-users-portal.md
Previously updated : 02/24/2023 Last updated : 03/30/2023 + # Use the Azure portal to create and delete consumer users in Azure AD B2C
This article focuses on working with **consumer accounts** in the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**.
+1. In the left menu, select **Azure Active Directory**. Or, select **All services** and search for and select **Azure Active Directory**.
1. Under **Manage**, select **Users**. 1. Select **New user**. 1. Select **Create Azure AD B2C user**.
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
Before you create your Azure AD B2C tenant, you need to take the following consi
- An Azure account that's been assigned at least the [Contributor](../role-based-access-control/built-in-roles.md) role within the subscription or a resource group within the subscription is required. + ## Create an Azure AD B2C tenant >[!NOTE] >If you're unable to create an Azure AD B2C tenant, [review your user settings page](tenant-management-check-tenant-creation-permission.md) to ensure that tenant creation isn't switched off. If tenant creation is switched on, ask your _Global Administrator_ to assign you a _Tenant Creator_ role.
active-directory-domain-services Administration Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/administration-concepts.md
Previously updated : 01/29/2023 Last updated : 03/23/2023
By default, a managed domain is created as a *user* forest. This type of forest
In an Azure AD DS *resource* forest, users authenticate over a one-way forest *trust* from their on-premises AD DS. With this approach, the user objects and password hashes aren't synchronized to Azure AD DS. The user objects and credentials only exist in the on-premises AD DS. This approach lets enterprises host resources and application platforms in Azure that depend on classic authentication such as LDAPS, Kerberos, or NTLM, while removing any concerns about synchronizing password hashes.
-For more information about forest types in Azure AD DS, see [What are resource forests?][concepts-forest] and [How do forest trusts work in Azure AD DS?][concepts-trust]
- ## Azure AD DS SKUs In Azure AD DS, the available performance and features are based on the SKU. You select a SKU when you create the managed domain, and you can switch SKUs as your business requirements change after the managed domain has been deployed. The following table outlines the available SKUs and the differences between them:
active-directory-domain-services Change Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/change-sku.md
Previously updated : 01/29/2023 Last updated : 03/23/2023 #Customer intent: As an identity administrator, I want to change the SKU for my Azure AD Domain Services managed domain to use different features as my business requirements change.
To complete this article, you need the following resources and privileges:
## SKU change limitations
-You can change SKUs up or down after the managed domain has been deployed. However, if you use a resource forest and have created one-way outbound forest trusts from Azure AD DS to an on-premises AD DS environment, there are some limitations for the SKU change operation. The *Premium* and *Enterprise* SKUs define a limit on the number of trusts you can create. You can't change to a SKU with a lower maximum limit than you currently have configured.
+You can change SKUs up or down after the managed domain has been deployed. However, the *Premium* and *Enterprise* SKUs define a limit on the number of trusts you can create. You can't change to a SKU with a lower maximum limit than you currently have configured.
-For example:
-
-* You can't change down to the *Standard* SKU. Azure AD DS resource forest doesn't support the *Standard* SKU.
-* Or, if you have created seven trusts on the *Premium* SKU, you can't change down to the *Enterprise* SKU. The *Enterprise* SKU supports a maximum of five trusts.
+For example, if you have created seven trusts on the *Premium* SKU, you can't change down to the *Enterprise* SKU. The *Enterprise* SKU supports a maximum of five trusts.
For more information on these limits, see [Azure AD DS SKU features and limits][concepts-sku].
active-directory-domain-services Compare Identity Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/compare-identity-solutions.md
Previously updated : 01/29/2023 Last updated : 04/03/2023 #Customer intent: As an IT administrator or decision maker, I want to understand the differences between Active Directory Domain Services (AD DS), Azure AD, and Azure AD DS so I can choose the most appropriate identity solution for my organization.
Although the three Active Directory-based identity solutions share a common name
* **Azure Active Directory (Azure AD)** - Cloud-based identity and mobile device management that provides user account and authentication services for resources such as Microsoft 365, the Azure portal, or SaaS applications. * Azure AD can be synchronized with an on-premises AD DS environment to provide a single identity to users that works natively in the cloud. * For more information about Azure AD, see [What is Azure Active Directory?][whatis-azuread]
-* **Azure Active Directory Domain Services (Azure AD DS)** - Provides managed domain services with a subset of fully-compatible traditional AD DS features such as domain join, group policy, LDAP, and Kerberos / NTLM authentication.
+* **Azure Active Directory Domain Services (Azure AD DS)** - Provides managed domain services with a subset of fully compatible traditional AD DS features such as domain join, group policy, LDAP, and Kerberos / NTLM authentication.
* Azure AD DS integrates with Azure AD, which itself can synchronize with an on-premises AD DS environment. This ability extends central identity use cases to traditional web applications that run in Azure as part of a lift-and-shift strategy. * To learn more about synchronization with Azure AD and on-premises, see [How objects and credentials are synchronized in a managed domain][synchronization].
When you deploy and run a self-managed AD DS environment, you have to maintain a
Common deployment models for a self-managed AD DS environment that provides identity to applications and services in the cloud include the following: * **Standalone cloud-only AD DS** - Azure VMs are configured as domain controllers and a separate, cloud-only AD DS environment is created. This AD DS environment doesn't integrate with an on-premises AD DS environment. A different set of credentials is used to sign in and administer VMs in the cloud.
-* **Resource forest deployment** - Azure VMs are configured as domain controllers and an AD DS domain that's part of an existing forest is created. A trust relationship is then configured to an on-premises AD DS environment. Other Azure VMs can domain-join to this resource forest in the cloud. User authentication runs over a VPN / ExpressRoute connection to the on-premises AD DS environment.
* **Extend on-premises domain to Azure** - An Azure virtual network connects to an on-premises network using a VPN / ExpressRoute connection. Azure VMs connect to this Azure virtual network, which lets them domain-join to the on-premises AD DS environment. * An alternative is to create Azure VMs and promote them as replica domain controllers from the on-premises AD DS domain. These domain controllers replicate over a VPN / ExpressRoute connection to the on-premises AD DS environment. The on-premises AD DS domain is effectively extended into Azure.
With Azure AD DS-joined devices, applications can use the Kerberos and NTLM prot
| Great for... | End-user mobile or desktop devices | Server VMs deployed in Azure |
-If on-prem AD DS and Azure AD are configured for federated authentication using ADFS then there is no (current/valid) password hash available in Azure DS. Azure AD user accounts created before fed auth was implemented might have an old password hash but this likely doesn't match a hash of their on-prem password. Hence Azure AD DS won't be able to validate the users credentials
+If on-premises AD DS and Azure AD are configured for federated authentication using AD FS, then there's no (current/valid) password hash available in Azure AD DS. Azure AD user accounts created before federated authentication was implemented might have an old password hash, but it likely doesn't match a hash of their on-premises password. As a result, Azure AD DS can't validate the user's credentials.
## Next steps
active-directory-domain-services Concepts Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-forest-trust.md
Previously updated : 01/29/2023 Last updated : 03/02/2023
For example, when a one-way, forest trust is created between *Forest 1* (the tru
* Members of *Forest 2* can't access resources located in *Forest 1* using the same trust. > [!IMPORTANT]
-> Azure AD Domain Services resource forest only supports a one-way forest trust to on-premises Active Directory.
+> Azure AD Domain Services only supports a one-way forest trust to on-premises Active Directory.
### Forest trust requirements
active-directory-domain-services Concepts Resource Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-resource-forest.md
- Title: Resource forest concepts for Azure AD Domain Services | Microsoft Docs
-description: Learn what a resource forest is in Azure Active Directory Domain Services and how it benefits your organization in hybrid environments with limited user authentication options or security concerns.
-------- Previously updated : 01/29/2023---
-# Resource forest concepts and features for Azure Active Directory Domain Services
-
-Azure Active Directory Domain Services (Azure AD DS) provides a sign-in experience for legacy, on-premises, line-of-business applications. Users, groups, and password hashes of on-premises and cloud users are synchronized to the Azure AD DS managed domain. These synchronized password hashes are what give users a single set of credentials they can use for the on-premises AD DS, Microsoft 365, and Azure Active Directory.
-
-Although password hash synchronization is secure and provides additional security benefits, some organizations can't synchronize user password hashes to Azure AD or Azure AD DS. Users in an organization may not know their password because they only use smart card authentication. These limitations prevent some organizations from using Azure AD DS to lift and shift on-premises classic applications to Azure.
-
-To address these needs and restrictions, you can create a managed domain that uses a resource forest. This conceptual article explains what forests are, and how they trust other resources to provide a secure authentication method.
-
-## What are forests?
-
-A *forest* is a logical construct used by Active Directory Domain Services (AD DS) to group one or more *domains*. The domains then store objects for users or groups, and provide authentication services.
-
-In an Azure AD DS managed domain, the forest only contains one domain. On-premises AD DS forests often contain many domains. In large organizations, especially after mergers and acquisitions, you may end up with multiple on-premises forests that each then contain multiple domains.
-
-By default, a managed domain is created as a *user* forest. This type of forest synchronizes all objects from Azure AD, including any user accounts created in an on-premises AD DS environment. User accounts can directly authenticate against the managed domain, such as to sign in to a domain-joined VM. A user forest works when the password hashes can be synchronized, and users aren't using exclusive sign-in methods like smart card authentication. In addition to users who can directly authenticate, users in other on-premises AD DS environments can also authenticate over a one-way forest trust from their on-premises AD DS to access resources in a managed domain user forest.
-
-In a managed domain *resource* forest, users also authenticate over a one-way forest trust from their on-premises AD DS. With this approach, the user objects and password hashes aren't synchronized to the managed domain. The user objects and credentials only exist in the on-premises AD DS. This approach lets enterprises host resources and application platforms in Azure that depend on classic authentication such as LDAPS, Kerberos, or NTLM, while removing any concerns about synchronizing password hashes.
-
-Resource forests also provide the capability to lift-and-shift your applications one component at a time. Many legacy on-premises applications are multi-tiered, often using a web server or front end and many database-related components. These tiers make it hard to lift-and-shift the entire application to the cloud in one step. With resource forests, you can lift your application to the cloud in a phased approach, which makes it easier to move your application to Azure.
--
-## What are trusts?
-
-Organizations that have more than one domain often need users to access shared resources in a different domain. Access to these shared resources requires that users in one domain authenticate to another domain. To provide these authentication and authorization capabilities between clients and servers in different domains, there must be a *trust* between the two domains.
-
-With domain trusts, the authentication mechanisms for each domain trust the authentications coming from the other domain. Trusts help provide controlled access to shared resources in a resource domain (the *trusting* domain) by verifying that incoming authentication requests come from a trusted authority (the *trusted* domain). Trusts act as bridges that only allow validated authentication requests to travel between domains.
-
-How a trust passes authentication requests depends on how it's configured. Trusts can be configured in one of the following ways:
-
-* **One-way** - provides access from the trusted domain to resources in the trusting domain.
-* **Two-way** - provides access from each domain to resources in the other domain.
-
-Trusts can also be configured to handle additional trust relationships in one of the following ways:
-
-* **Nontransitive** - The trust exists only between the two trust partner domains.
-* **Transitive** - Trust automatically extends to any other domains that either of the partners trusts.
-
-In some cases, trust relationships are automatically established when domains are created. Other times, you must choose a type of trust and explicitly establish the appropriate relationships. The specific types of trusts used and the structure of those trust relationships depend on how the AD DS directory is organized and whether different versions of Windows coexist on the network.
-
-## Trusts between two forests
-
-You can extend domain trusts within a single forest to another forest by manually creating a one-way or two-way forest trust. A forest trust is a transitive trust that exists only between a forest root domain and a second forest root domain.
-
-* A one-way forest trust allows all users in one forest to trust all domains in the other forest.
-* A two-way forest trust forms a transitive trust relationship between every domain in both forests.
-
-The transitivity of forest trusts is limited to the two forest partners. The forest trust doesn't extend to additional forests trusted by either of the partners.
-
-![Diagram of forest trust from Azure AD DS to on-premises AD DS](./media/concepts-resource-forest/resource-forest-trust-relationship.png)
-
-You can create different domain and forest trust configurations depending on the AD DS structure of the organization. Azure AD DS only supports a one-way forest trust. In this configuration, resources in the managed domain can trust all domains in an on-premises forest.
-
-## Supporting technology for trusts
-
-Trusts use various services and features, such as DNS to locate domain controllers in partnering forests. Trusts also depend on NTLM and Kerberos authentication protocols and on Windows-based authorization and access control mechanisms to help provide a secured communications infrastructure across AD DS domains and forests. The following services and features help support successful trust relationships.
-
-### DNS
-
-AD DS needs DNS for domain controller (DC) location and naming. DNS provides the following support for AD DS to work successfully:
-
-* A name resolution service that lets network hosts and services locate DCs.
-* A naming structure that enables an enterprise to reflect its organizational structure in the names of its directory service domains.
-
-A DNS domain namespace is usually deployed that mirrors the AD DS domain namespace. If there's an existing DNS namespace before the AD DS deployment, the DNS namespace is typically partitioned for AD DS, and a DNS subdomain and delegation for the AD DS forest root is created. Additional DNS domain names are then added for each AD DS child domain.
-
-DNS is also used to support the location of AD DS DCs. The DNS zones are populated with DNS resource records that enable network hosts and services to locate AD DS DCs.
-
-### Applications and Net Logon
-
-Both applications and the Net Logon service are components of the Windows distributed security channel model. Applications integrated with Windows Server and AD DS use authentication protocols to communicate with the Net Logon service so that a secured path can be established over which authentication can occur.
-
-### Authentication Protocols
-
-AD DS DCs authenticate users and applications using one of the following protocols:
-
-* **Kerberos version 5 authentication protocol**
- * The Kerberos version 5 protocol is the default authentication protocol used by on-premises computers running Windows and supporting third-party operating systems. This protocol is specified in RFC 1510 and is fully integrated with AD DS, server message block (SMB), HTTP, and remote procedure call (RPC), as well as the client and server applications that use these protocols.
- * When the Kerberos protocol is used, the server doesn't have to contact the DC. Instead, the client gets a ticket for a server by requesting one from a DC in the server account domain. The server then validates the ticket without consulting any other authority.
- * If any computer involved in a transaction doesn't support the Kerberos version 5 protocol, the NTLM protocol is used.
-
-* **NTLM authentication protocol**
- * The NTLM protocol is a classic network authentication protocol used by older operating systems. For compatibility reasons, it's used by AD DS domains to process network authentication requests that come from applications designed for earlier Windows-based clients and servers, and third-party operating systems.
- * When the NTLM protocol is used between a client and a server, the server must contact a domain authentication service on a DC to verify the client credentials. The server authenticates the client by forwarding the client credentials to a DC in the client account domain.
- * When two AD DS domains or forests are connected by a trust, authentication requests made using these protocols can be routed to provide access to resources in both forests.
-
-## Authorization and access control
-
-Authorization and trust technologies work together to provide a secured communications infrastructure across AD DS domains or forests. Authorization determines what level of access a user has to resources in a domain. Trusts facilitate cross-domain authorization of users by providing a path for authenticating users in other domains so their requests to shared resources in those domains can be authorized.
-
-When an authentication request made in a trusting domain is validated by the trusted domain, it's passed to the target resource. The target resource then determines whether to authorize the specific request made by the user, service, or computer in the trusted domain based on its access control configuration.
-
-Trusts provide this mechanism to validate authentication requests that are passed to a trusting domain. Access control mechanisms on the resource computer determine the final level of access granted to the requestor in the trusted domain.
-
-## Next steps
-
-To learn more about trusts, see [How do forest trusts work in Azure AD DS?][concepts-trust]
-
-To get started with creating a managed domain with a resource forest, see [Create and configure an Azure AD DS managed domain][tutorial-create-advanced]. You can then [Create an outbound forest trust to an on-premises domain][create-forest-trust].
-
-<!-- LINKS - INTERNAL -->
-[concepts-trust]: concepts-forest-trust.md
-[tutorial-create-advanced]: tutorial-create-instance-advanced.md
-[create-forest-trust]: tutorial-create-forest-trust.md
active-directory-domain-services Create Forest Trust Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-forest-trust-powershell.md
+
+ Title: Create an Azure AD Domain Services forest trust using Azure PowerShell | Microsoft Docs
+description: In this article, learn how to create and configure an Azure Active Directory Domain Services forest trust to an on-premises Active Directory Domain Services environment using Azure PowerShell.
+++++++ Last updated : 04/03/2023+++
+#Customer intent: As an identity administrator, I want to create an Azure AD Domain Services forest and one-way outbound trust from an Azure Active Directory Domain Services forest to an on-premises Active Directory Domain Services forest using Azure PowerShell to provide authentication and resource access between forests.
+++
+# Create an Azure Active Directory Domain Services forest trust to an on-premises domain using Azure PowerShell
+
+In environments where you can't synchronize password hashes, or you have users that exclusively sign in using smart cards so they don't know their password, you can create a one-way outbound trust from Azure Active Directory Domain Services (Azure AD DS) to one or more on-premises AD DS environments. This trust relationship lets users, applications, and computers authenticate against an on-premises domain from the Azure AD DS managed domain. In this case, on-premises password hashes are never synchronized.
+
+![Diagram of forest trust from Azure AD DS to on-premises AD DS](./media/create-forest-powershell/forest-trust-relationship.png)
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create an Azure AD DS forest using Azure PowerShell
+> * Create a one-way outbound forest trust in the managed domain using Azure PowerShell
+> * Configure DNS in an on-premises AD DS environment to support managed domain connectivity
+> * Create a one-way inbound forest trust in an on-premises AD DS environment
+> * Test and validate the trust relationship for authentication and resource access
+
+If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+> [!IMPORTANT]
+> Managed domain forests configured with a forest trust don't currently support Azure HDInsight or Azure Files. Default managed domain forests without a trust do support both of these additional services.
+
+## Prerequisites
+
+To complete this article, you need the following resources and privileges:
+
+* An active Azure subscription.
+ * If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
+ * If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
+
+* Install and configure Azure PowerShell.
+ * If needed, follow the instructions to [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-az-ps).
+ * Make sure that you sign in to your Azure subscription using the [Connect-AzAccount][Connect-AzAccount] cmdlet.
+* Install and configure Azure AD PowerShell.
+ * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Azure AD](/powershell/azure/active-directory/install-adv2).
+ * Make sure that you sign in to your Azure AD tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need the [Domain Services Contributor](../role-based-access-control/built-in-roles.md#contributor) Azure role to create the required Azure AD DS resources.
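+
+As a quick reference, the following minimal sketch shows the sign-in steps for both PowerShell modules from the prerequisites; run them once per session before the commands later in this article:
+
+```powershell
+# Sign in to Azure Resource Manager (Az module)
+Connect-AzAccount
+
+# Sign in to Azure AD (AzureAD module)
+Connect-AzureAD
+```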
+
+## Sign in to the Azure portal
+
+In this article, you create and configure the outbound forest trust from a managed domain using Azure PowerShell. Some validation and monitoring steps use the Azure portal, so to get started, first sign in to the [Azure portal](https://portal.azure.com).
+
+## Deployment process
+
+It's a multi-part process to create a managed domain forest and the trust relationship to an on-premises AD DS. The following high-level steps build your trusted, hybrid environment:
+
+1. Create a managed domain service principal.
+1. Create a managed domain forest.
+1. Create hybrid network connectivity using site-to-site VPN or Express Route.
+1. Create the managed domain side of the trust relationship.
+1. Create the on-premises AD DS side of the trust relationship.
+
+Before you start, make sure you understand the [network considerations, forest naming, and DNS requirements](tutorial-create-forest-trust.md#networking-considerations). You can't change the managed domain forest name once it's deployed.
+
+## Create the Azure AD service principal
+
+Azure AD DS requires a service principal to synchronize data from Azure AD. This principal must be created in your Azure AD tenant before you create the managed domain forest.
+
+Create an Azure AD service principal for Azure AD DS to communicate and authenticate itself. A specific application named *Domain Controller Services* with an ID of *6ba9a5d4-8456-4118-b521-9c5ca10cdf84* is used. Don't change this application ID.
+
+Create an Azure AD service principal using the [New-AzureADServicePrincipal][New-AzureADServicePrincipal] cmdlet:
+
+```powershell
+New-AzureADServicePrincipal -AppId "6ba9a5d4-8456-4118-b521-9c5ca10cdf84"
+```
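+
+Optionally, confirm that the service principal now exists. This is a minimal sketch using the `Get-AzureADServicePrincipal` cmdlet, filtering on the application ID shown above:
+
+```powershell
+# Look up the Domain Controller Services service principal by its well-known application ID
+Get-AzureADServicePrincipal -Filter "AppId eq '6ba9a5d4-8456-4118-b521-9c5ca10cdf84'"
+```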
+
+## Create a managed domain
+
+To create a managed domain, you use the `New-AzureAaddsForest` script. This script is part of a wider set of commands that support managed domains, including creating the one-way outbound forest trust in a following section. These scripts are available from the [PowerShell Gallery](https://www.powershellgallery.com/) and are digitally signed by the Azure AD engineering team.
+
+1. First, create a resource group using the [New-AzResourceGroup][New-AzResourceGroup] cmdlet. In the following example, the resource group is named *myResourceGroup* and is created in the *westus* region. Use your own name and desired region:
+
+ ```azurepowershell
+ New-AzResourceGroup `
+ -Name "myResourceGroup" `
+ -Location "WestUS"
+ ```
+
+1. Install the `New-AzureAaddsForest` script from the [PowerShell Gallery][powershell-gallery] using the [Install-Script][Install-Script] cmdlet:
+
+ ```powershell
+   Install-Script -Name New-AzureAaddsForest
+ ```
+
+1. Review the following parameters needed for the `New-AzureAaddsForest` script. Make sure you have the prerequisite **Azure PowerShell** and **Azure AD PowerShell** modules, and that you have planned the virtual network requirements to provide application and on-premises connectivity.
+
+ | Name | Script parameter | Description |
+ |:--||:|
+ | Subscription | *-azureSubscriptionId* | Subscription ID used for Azure AD DS billing. You can get the list of subscriptions using the [Get-AzureRMSubscription][Get-AzureRMSubscription] cmdlet. |
+ | Resource Group | *-aaddsResourceGroupName* | Name of the resource group for the managed domain and associated resources. |
+ | Location | *-aaddsLocation* | The Azure region to host your managed domain. For available regions, see [supported regions for Azure AD DS.](https://azure.microsoft.com/global-infrastructure/services/?products=active-directory-ds&regions=all) |
+   | Azure AD DS administrator | *-aaddsAdminUser* | The user principal name of the first managed domain administrator. This account must be an existing cloud user account in your Azure Active Directory. The user, and the user running the script, are added to the *AAD DC Administrators* group. |
+ | Azure AD DS domain name | *-aaddsDomainName* | The FQDN of the managed domain, based on the previous guidance on how to choose a forest name. |
+
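+   If you're not sure which subscription ID to use for *-azureSubscriptionId*, the following minimal sketch lists the subscriptions available to your account (it assumes the **AzureRM** module referenced above):
+
+   ```azurepowershell
+   # List your subscriptions with their names and IDs
+   Get-AzureRmSubscription | Select-Object Name, Id
+   ```
+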
+ The `New-AzureAaddsForest` script can create the Azure virtual network and Azure AD DS subnet if these resources don't already exist. The script can optionally create the workload subnets, when specified:
+
+ | Name | Script parameter | Description |
+ |:-|:-|:|
+ | Virtual network name | *-aaddsVnetName* | Name of the virtual network for the managed domain.|
+ | Address space | *-aaddsVnetCIDRAddressSpace* | Virtual network's address range in CIDR notation (if creating the virtual network).|
+ | Azure AD DS subnet name | *-aaddsSubnetName* | Name of the subnet of the *aaddsVnetName* virtual network hosting the managed domain. Don't deploy your own VMs and workloads into this subnet. |
+ | Azure AD DS address range | *-aaddsSubnetCIDRAddressRange* | Subnet address range in CIDR notation for the Azure AD DS instance, such as *192.168.1.0/24*. Address range must be contained by the address range of the virtual network, and different from other subnets. |
+   | Workload subnet name (optional) | *-workloadSubnetName* | Optional name of a subnet in the *aaddsVnetName* virtual network to create for your own application workloads. VMs and applications can also be connected to a peered Azure virtual network instead. |
+ | Workload address range (optional) | *-workloadSubnetCIDRAddressRange* | Optional subnet address range in CIDR notation for application workload, such as *192.168.2.0/24*. Address range must be contained by the address range of the virtual network, and different from other subnets.|
+
+1. Now create a managed domain forest using the `New-AzureAaddsForest` script. The following example creates a forest named *aaddscontoso.com* and creates a workload subnet. Provide your own parameter names and IP address ranges or existing virtual networks.
+
+ ```azurepowershell
+ New-AzureAaddsForest `
+ -azureSubscriptionId <subscriptionId> `
+ -aaddsResourceGroupName "myResourceGroup" `
+ -aaddsLocation "WestUS" `
+ -aaddsAdminUser "contosoadmin@contoso.com" `
+ -aaddsDomainName "aaddscontoso.com" `
+ -aaddsVnetName "myVnet" `
+ -aaddsVnetCIDRAddressSpace "192.168.0.0/16" `
+ -aaddsSubnetName "AzureADDS" `
+ -aaddsSubnetCIDRAddressRange "192.168.1.0/24" `
+ -workloadSubnetName "myWorkloads" `
+ -workloadSubnetCIDRAddressRange "192.168.2.0/24"
+ ```
+
+   It takes quite some time to create the managed domain forest and supporting resources. Allow the script to complete. Continue to the next section to configure your on-premises network connectivity while the managed domain forest provisions in the background.
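+
+   If you'd rather check on the deployment from PowerShell than the portal, one hedged option is to query the resource provider directly; Azure AD DS uses the *Microsoft.AAD/DomainServices* resource type:
+
+   ```azurepowershell
+   # Show the managed domain resource; its properties include the provisioning state
+   Get-AzResource -ResourceGroupName "myResourceGroup" -ResourceType "Microsoft.AAD/DomainServices"
+   ```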
+
+## Configure and validate network settings
+
+As the managed domain continues to deploy, configure and validate the hybrid network connectivity to the on-premises datacenter. You also need a management VM to use with the managed domain for regular maintenance. Some of the hybrid connectivity may already exist in your environment, or you may need to work with others in your team to configure the connections.
+
+Before you start, make sure you understand the [network considerations and recommendations](tutorial-create-forest-trust.md#networking-considerations).
+
+1. Create hybrid connectivity between your on-premises network and Azure using an Azure VPN or Azure ExpressRoute connection. The hybrid network configuration is beyond the scope of this documentation, and may already exist in your environment. For details on specific scenarios, see the following articles:
+
+ * [Azure Site-to-Site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
+ * [Azure ExpressRoute Overview](../expressroute/expressroute-introduction.md).
+
+ > [!IMPORTANT]
+ > If you create the connection directly to your managed domain's virtual network, use a separate gateway subnet. Don't create the gateway in the managed domain's subnet.
+
+1. To administer a managed domain, you create a management VM, join it to the managed domain, and install the required AD DS management tools.
+
+   While the managed domain is being deployed, [create a Windows Server VM](./join-windows-vm.md) and then [install the core AD DS management tools](./tutorial-create-management-vm.md). Wait until the managed domain is successfully deployed before you join the management VM to it in a later step.
+
+1. Validate network connectivity between your on-premises network and the Azure virtual network.
+
+   * Confirm that your on-premises domain controller can connect to the management VM using `ping` or remote desktop, for example.
+ * Verify that your management VM can connect to your on-premises domain controllers, again using a utility such as `ping`.
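+
+   As a hedged sketch, you can run these checks from the management VM with built-in PowerShell cmdlets. The IP address is an assumption; substitute one of your on-premises domain controllers:
+
+   ```powershell
+   # Test basic reachability and SMB connectivity to an on-premises domain controller
+   Test-Connection -ComputerName 10.1.0.4 -Count 2
+   Test-NetConnection -ComputerName 10.1.0.4 -Port 445
+   ```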
+
+1. In the Azure portal, search for and select **Azure AD Domain Services**. Choose your managed domain, such as *aaddscontoso.com* and wait for the status to report as **Running**.
+
+ When running, [update DNS settings for the Azure virtual network](tutorial-create-instance.md#update-dns-settings-for-the-azure-virtual-network) and then [enable user accounts for Azure AD DS](tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds) to finalize the configurations for your managed domain.
+
+1. Make a note of the DNS addresses shown on the overview page. You need these addresses when you configure the on-premises Active Directory side of the trust relationship in a following section.
+1. Restart the management VM for it to receive the new DNS settings, then [join the VM to the managed domain](join-windows-vm.md#join-the-vm-to-the-managed-domain).
+1. After the management VM is joined to the managed domain, connect again using remote desktop.
+
+ From a command prompt, use `nslookup` and the managed domain name to validate name resolution for the forest.
+
+ ```console
+ nslookup aaddscontoso.com
+ ```
+
+ The command should return two IP addresses for the forest.
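+
+   If you prefer PowerShell for this check, `Resolve-DnsName` returns the same information:
+
+   ```powershell
+   # PowerShell equivalent of the nslookup check above
+   Resolve-DnsName -Name aaddscontoso.com -Type A
+   ```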
+
+## Create the forest trust
+
+The forest trust has two parts - the one-way outbound forest trust in the managed domain, and the one-way inbound forest trust in the on-premises AD DS forest. You manually create both sides of this trust relationship. When both sides are created, users and resources can successfully authenticate using the forest trust. A managed domain supports up to five one-way outbound forest trusts to on-premises forests.
+
+### Create the managed domain side of the trust relationship
+
+Use the `Add-AaddsResourceForestTrust` script to create the managed domain side of the trust relationship. First, install the `Add-AaddsResourceForestTrust` script from the [PowerShell Gallery][powershell-gallery] using the [Install-Script][Install-Script] cmdlet:
+
+```powershell
+Install-Script -Name Add-AaddsResourceForestTrust
+```
+
+Now provide the script the following information:
+
+| Name | Script parameter | Description |
+|:--|:|:|
+| Azure AD DS domain name | *-ManagedDomainFqdn* | FQDN of the managed domain, such as *aaddscontoso.com* |
+| On-premises AD DS domain name | *-TrustFqdn* | The FQDN of the trusted forest, such as *onprem.contoso.com* |
+| Trust friendly name | *-TrustFriendlyName* | Friendly name of the trust relationship. |
+| On-premises AD DS DNS IP addresses | *-TrustDnsIPs* | A comma-delimited list of DNS server IPv4 addresses for the trusted domain. |
+| Trust password | *-TrustPassword* | A complex password for the trust relationship. This password is also entered when creating the one-way inbound trust in the on-premises AD DS. |
+| Credentials | *-Credentials* | The credentials used to authenticate to Azure. The user must be in the *AAD DC Administrators* group. If not provided, the script prompts for authentication. |
+
+The following example creates a trust relationship named *myAzureADDSTrust* to *onprem.contoso.com*. Use your own parameter names and passwords:
+
+```azurepowershell
+Add-AaddsResourceForestTrust `
+ -ManagedDomainFqdn "aaddscontoso.com" `
+ -TrustFqdn "onprem.contoso.com" `
+ -TrustFriendlyName "myAzureADDSTrust" `
+ -TrustDnsIPs "10.0.1.10,10.0.1.11" `
+ -TrustPassword <complexPassword>
+```
+
+> [!IMPORTANT]
+> Remember your trust password. You must use the same password when you create the on-premises side of the trust.
+
+## Configure DNS in the on-premises domain
+
+To correctly resolve the managed domain from the on-premises environment, you may need to add forwarders to the existing DNS servers. If you haven't configured the on-premises environment to communicate with the managed domain, complete the following steps from a management workstation for the on-premises AD DS domain:
+
+1. Select **Start | Administrative Tools | DNS**
+1. Right-select the DNS server, such as *myAD01*, then select **Properties**
+1. Choose **Forwarders**, then **Edit** to add additional forwarders.
+1. Add the IP addresses of the managed domain, such as *10.0.1.4* and *10.0.1.5*.
+1. From a local command prompt, validate name resolution using **nslookup** of the managed domain name. For example, `nslookup aaddscontoso.com` should return the two IP addresses for the managed domain.
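+
+If you prefer to script this change, the following sketch uses the `DnsServer` module that's included with the DNS Server role and RSAT. The IP addresses are the example values from the steps above; substitute your managed domain's DNS addresses:
+
+```powershell
+# Run on the on-premises DNS server: add forwarders for the managed domain
+Add-DnsServerForwarder -IPAddress 10.0.1.4, 10.0.1.5
+
+# Validate name resolution for the managed domain
+Resolve-DnsName -Name aaddscontoso.com
+```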
+
+## Create inbound forest trust in the on-premises domain
+
+The on-premises AD DS domain needs an incoming forest trust for the managed domain. This trust must be manually created in the on-premises AD DS domain; it can't be created from the Azure portal.
+
+To configure inbound trust on the on-premises AD DS domain, complete the following steps from a management workstation for the on-premises AD DS domain:
+
+1. Select **Start | Administrative Tools | Active Directory Domains and Trusts**
+1. Right-select the domain, such as *onprem.contoso.com*, then select **Properties**
+1. Choose the **Trusts** tab, then **New Trust**
+1. Enter the name of the managed domain, such as *aaddscontoso.com*, then select **Next**
+1. Select the option to create a **Forest trust**, then to create a **One way: incoming** trust.
+1. Choose to create the trust for **This domain only**. You already created the managed domain side of the trust with the `Add-AaddsResourceForestTrust` script.
+1. Choose to use **Forest-wide authentication**, then enter and confirm a trust password. This must be the same password you specified when you created the managed domain side of the trust.
+1. Step through the next few windows with default options, then choose the option for **No, do not confirm the outgoing trust**. You can't validate the trust relationship because your delegated admin account for the managed domain doesn't have the required permissions. This behavior is by design.
+1. Select **Finish**
+
+## Validate resource authentication
+
+The following common scenarios let you validate that the forest trust correctly authenticates users and grants access to resources:
+
+* [On-premises user authentication from the Azure AD DS forest](#on-premises-user-authentication-from-the-azure-ad-ds-forest)
+* [Access resources in the Azure AD DS forest as an on-premises user](#access-resources-in-azure-ad-ds-as-an-on-premises-user)
+ * [Enable file and printer sharing](#enable-file-and-printer-sharing)
+ * [Create a security group and add members](#create-a-security-group-and-add-members)
+ * [Create a file share for cross-forest access](#create-a-file-share-for-cross-forest-access)
+ * [Validate cross-forest authentication to a resource](#validate-cross-forest-authentication-to-a-resource)
+
+### On-premises user authentication from the Azure AD DS forest
+
+You should have a Windows Server virtual machine joined to the managed domain. Use this virtual machine to test that your on-premises user can authenticate on a virtual machine.
+
+1. Connect to the Windows Server VM joined to the managed domain using Remote Desktop and your managed domain administrator credentials. If you get a Network Level Authentication (NLA) error, check that the user account you used isn't a domain user account.
+
+ > [!TIP]
+ > To securely connect to your VMs joined to Azure AD Domain Services, you can use the [Azure Bastion Host Service](../bastion/bastion-overview.md) in supported Azure regions.
+
+1. Open a command prompt and use the `whoami` command to show the distinguished name of the currently authenticated user:
+
+ ```console
+ whoami /fqdn
+ ```
+
+1. Use the `runas` command to authenticate as a user from the on-premises domain. In the following command, replace `userUpn@trusteddomain.com` with the UPN of a user from the trusted on-premises domain. The command prompts you for the user's password:
+
+ ```console
+ Runas /u:userUpn@trusteddomain.com cmd.exe
+ ```
+
+1. If the authentication is successful, a new command prompt opens. The title of the new command prompt includes `running as userUpn@trusteddomain.com`.
+1. Use `whoami /fqdn` in the new command prompt to view the distinguished name of the authenticated user from the on-premises Active Directory.
+
+### Access resources in Azure AD DS as an on-premises user
+
+Using the Windows Server VM joined to the managed domain, you can test the scenario where on-premises users access resources hosted in the managed domain forest when they authenticate from computers in the on-premises domain. The following examples show you how to create and test various common scenarios.
+
+#### Enable file and printer sharing
+
+1. Connect to the Windows Server VM joined to the managed domain using Remote Desktop and your managed domain administrator credentials. If you get a Network Level Authentication (NLA) error, check that the user account you used isn't a domain user account.
+
+ > [!TIP]
+ > To securely connect to your VMs joined to Azure AD Domain Services, you can use the [Azure Bastion Host Service](../bastion/bastion-overview.md) in supported Azure regions.
+
+1. Open **Windows Settings**, then search for and select **Network and Sharing Center**.
+1. Choose the option for **Change advanced sharing** settings.
+1. Under the **Domain Profile**, select **Turn on file and printer sharing** and then **Save changes**.
+1. Close **Network and Sharing Center**.
+
+#### Create a security group and add members
+
+1. Open **Active Directory Users and Computers**.
+1. Right-select the domain name, choose **New**, and then select **Organizational Unit**.
+1. In the name box, type *LocalObjects*, then select **OK**.
+1. Select and right-click **LocalObjects** in the navigation pane. Select **New** and then **Group**.
+1. Type *FileServerAccess* in the **Group name** box. For the **Group Scope**, select **Domain local**, then choose **OK**.
+1. In the content pane, double-click **FileServerAccess**. Select **Members**, choose **Add**, then select **Locations**.
+1. Select your on-premises Active Directory from the **Location** view, then choose **OK**.
+1. Type *Domain Users* in the **Enter the object names to select** box. Select **Check Names**, provide credentials for the on-premises Active Directory, then select **OK**.
+
+ > [!NOTE]
+ > You must provide credentials because the trust relationship is only one way. This means users from the managed domain can't access resources or search for users or groups in the trusted (on-premises) domain.
+
+1. The **Domain Users** group from your on-premises Active Directory should be a member of the **FileServerAccess** group. Select **OK** to save the group and close the window.
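+
+If you prefer PowerShell for the OU and group creation steps, the following minimal sketch uses the `ActiveDirectory` RSAT module on the VM joined to the managed domain. The distinguished names assume the example *aaddscontoso.com* forest. Adding the on-premises *Domain Users* member is easiest in the UI as shown above, because the lookup crosses the one-way trust:
+
+```powershell
+# Create the OU and a domain-local group in the managed domain
+New-ADOrganizationalUnit -Name "LocalObjects" -Path "DC=aaddscontoso,DC=com"
+New-ADGroup -Name "FileServerAccess" -GroupScope DomainLocal -Path "OU=LocalObjects,DC=aaddscontoso,DC=com"
+```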
+
+#### Create a file share for cross-forest access
+
+1. On the Windows Server VM joined to the managed domain, create a folder and provide a name such as *CrossForestShare*.
+1. Right-select the folder and choose **Properties**.
+1. Select the **Security** tab, then choose **Edit**.
+1. In the *Permissions for CrossForestShare* dialog box, select **Add**.
+1. Type *FileServerAccess* in **Enter the object names to select**, then select **OK**.
+1. Select *FileServerAccess* from the **Groups or user names** list. In the **Permissions for FileServerAccess** list, choose *Allow* for the **Modify** and **Write** permissions, then select **OK**.
+1. Select the **Sharing** tab, then choose **Advanced Sharing…**
+1. Choose **Share this folder**, then enter a memorable name for the file share in **Share name** such as *CrossForestShare*.
+1. Select **Permissions**. In the **Permissions for Everyone** list, choose **Allow** for the **Change** permission.
+1. Select **OK** two times and then **Close**.
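+
+These sharing steps can also be scripted. The following is a minimal sketch using built-in cmdlets and `icacls`; the local folder path is an assumption:
+
+```powershell
+# Create the folder, grant NTFS Modify rights to the domain-local group, and share it
+New-Item -Path "C:\CrossForestShare" -ItemType Directory
+icacls "C:\CrossForestShare" /grant "FileServerAccess:(OI)(CI)M"
+New-SmbShare -Name "CrossForestShare" -Path "C:\CrossForestShare" -ChangeAccess "Everyone"
+```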
+
+#### Validate cross-forest authentication to a resource
+
+1. Sign in to a Windows computer joined to your on-premises Active Directory using a user account from your on-premises Active Directory.
+1. Using **Windows Explorer**, connect to the share you created using the fully qualified host name and the share name, such as `\\fs1.aaddscontoso.com\CrossforestShare`.
+1. To validate the write permission, right-select in the folder, choose **New**, then select **Text Document**. Use the default name **New Text Document**.
+
+ If the write permissions are set correctly, a new text document is created. The following steps will then open, edit, and delete the file as appropriate.
+1. To validate the read permission, open **New Text Document**.
+1. To validate the modify permission, add text to the file and close **Notepad**. When prompted to save changes, choose **Save**.
+1. To validate the delete permission, right-select **New Text Document** and choose **Delete**. Choose **Yes** to confirm file deletion.
+
+## Update or remove outbound forest trust
+
+If you need to update an existing one-way outbound forest trust from the managed domain, you can use the `Get-AaddsResourceForestTrusts` and `Set-AaddsResourceForestTrust` scripts. These scripts help in scenarios where you want to update the forest trust friendly name or trust password. To remove a one-way outbound trust from the managed domain, you can use the `Remove-AaddsResourceForestTrust` script. You must manually remove the one-way inbound forest trust in the associated on-premises AD DS forest.
+
+### Update a forest trust
+
+In normal operation, the managed domain and on-premises forest negotiate a regular password update process between themselves. This is part of the normal AD DS trust relationship security process. You don't need to manually rotate the trust password unless the trust relationship has experienced an issue and you want to manually reset to a known password. For more information, see [trusted domain object password changes](concepts-forest-trust.md#tdo-password-changes).
+
+The following example steps show you how to update an existing trust relationship if you need to manually reset the outbound trust password:
+
+1. Install the `Get-AaddsResourceForestTrusts` and `Set-AaddsResourceForestTrust` scripts from the [PowerShell Gallery][powershell-gallery] using the [Install-Script][Install-Script] cmdlet:
+
+ ```powershell
+ Install-Script -Name Get-AaddsResourceForestTrusts,Set-AaddsResourceForestTrust
+ ```
+
+1. Before you can update an existing trust, first get the trust resource using the `Get-AaddsResourceForestTrusts` script. In the following example, the existing trust is assigned to an object named *existingTrust*. Specify your own managed domain forest name and on-premises forest name to update:
+
+ ```powershell
+   $existingTrust = Get-AaddsResourceForestTrusts `
+ -ManagedDomainFqdn "aaddscontoso.com" `
+ -TrustFqdn "onprem.contoso.com" `
+ -TrustFriendlyName "myAzureADDSTrust"
+ ```
+
+1. To update the existing trust password, use the `Set-AaddsResourceForestTrust` script. Specify the existing trust object from the previous step, then a new trust relationship password. No password complexity is enforced by PowerShell, so make sure you generate and use a secure password for your environment.
+
+ ```powershell
+ Set-AaddsResourceForestTrust `
+ -Trust $existingTrust `
+ -TrustPassword <newComplexPassword>
+ ```
+
+### Delete a forest trust
+
+If you no longer need the one-way outbound forest trust from the managed domain to an on-premises AD DS forest, you can remove it. Make sure that no applications or services need to authenticate against the on-premises AD DS forest before you remove the trust. You must manually remove the one-way inbound trust in the on-premises AD DS forest, too.
+
+1. Install the `Remove-AaddsResourceForestTrust` script from the [PowerShell Gallery][powershell-gallery] using the [Install-Script][Install-Script] cmdlet:
+
+ ```powershell
+ Install-Script -Name Remove-AaddsResourceForestTrust
+ ```
+
+1. Now remove the forest trust using the `Remove-AaddsResourceForestTrust` script. In the following example, the trust named *myAzureADDSTrust* between the managed domain forest named *aaddscontoso.com* and on-premises forest *onprem.contoso.com* is removed. Specify your own managed domain forest name and on-premises forest name to remove:
+
+ ```powershell
+ Remove-AaddsResourceForestTrust `
+ -ManagedDomainFqdn "aaddscontoso.com" `
+ -TrustFqdn "onprem.contoso.com" `
+ -TrustFriendlyName "myAzureADDSTrust"
+ ```
+
+To remove the one-way inbound trust from the on-premises AD DS forest, connect to a management computer with access to the on-premises AD DS forest and complete the following steps:
+
+1. Select **Start | Administrative Tools | Active Directory Domains and Trusts**.
+1. Right-select the domain, such as *onprem.contoso.com*, then select **Properties**.
+1. Choose the **Trusts** tab, then select the existing incoming trust from your managed domain forest.
+1. Select **Remove**, then confirm that you wish to remove the incoming trust.
+
+## Next steps
+
+In this article, you learned how to:
+
+> [!div class="checklist"]
+> * Create a managed domain using Azure PowerShell
+> * Create a one-way outbound forest trust in the managed domain using Azure PowerShell
+> * Configure DNS in an on-premises AD DS environment to support the managed domain connectivity
+> * Create a one-way inbound forest trust in an on-premises AD DS environment
+> * Test and validate the trust relationship for authentication and resource access
+
+For more conceptual information about forest types in Azure AD DS, see [How do forest trusts work in Azure AD DS?][concepts-trust]
+
+<!-- INTERNAL LINKS -->
+[concepts-trust]: concepts-forest-trust.md
+[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
+[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-ds-instance-advanced]: tutorial-create-instance-advanced.md
+[Connect-AzAccount]: /powershell/module/Az.Accounts/Connect-AzAccount
+[Connect-AzureAD]: /powershell/module/AzureAD/Connect-AzureAD
+[New-AzResourceGroup]: /powershell/module/Az.Resources/New-AzResourceGroup
+[network-peering]: ../virtual-network/virtual-network-peering-overview.md
+[New-AzureADServicePrincipal]: /powershell/module/AzureAD/New-AzureADServicePrincipal
+[Get-AzureRMSubscription]: /powershell/module/AzureRM.Profile/Get-AzureRmSubscription
+[Install-Script]: /powershell/module/powershellget/install-script
+
+<!-- EXTERNAL LINKS -->
+[powershell-gallery]: https://www.powershellgallery.com/
active-directory-domain-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/overview.md
Previously updated : 01/29/2023 Last updated : 03/23/2023
The following features of Azure AD DS simplify deployment and management operati
Some key aspects of a managed domain include the following: * The managed domain is a stand-alone domain. It isn't an extension of an on-premises domain.
- * If needed, you can create one-way outbound forest trusts from Azure AD DS to an on-premises AD DS environment. For more information, see [Resource forest concepts and features for Azure AD DS][ forest-trusts].
+ * If needed, you can create one-way outbound forest trusts from Azure AD DS to an on-premises AD DS environment. For more information, see [Forest concepts and features for Azure AD DS][forest-trusts].
* Your IT team doesn't need to manage, patch, or monitor domain controllers for this managed domain. For hybrid environments that run AD DS on-premises, you don't need to manage AD replication to the managed domain. User accounts, group memberships, and credentials from your on-premises directory are synchronized to Azure AD via [Azure AD Connect][azure-ad-connect]. These user accounts, group memberships, and credentials are automatically available within the managed domain.
active-directory-domain-services Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scoped-synchronization.md
Previously updated : 01/29/2023 Last updated : 03/22/2023 # Configure scoped synchronization from Azure AD to Azure Active Directory Domain Services using the Azure portal To provide authentication services, Azure Active Directory Domain Services (Azure AD DS) synchronizes users and groups from Azure AD. In a hybrid environment, users and groups from an on-premises Active Directory Domain Services (AD DS) environment can be first synchronized to Azure AD using Azure AD Connect, and then synchronized to an Azure AD DS managed domain.
-By default, all users and groups from an Azure AD directory are synchronized to a managed domain. If you have specific needs, you can instead choose to synchronize only a defined set of users.
+By default, all users and groups from an Azure AD directory are synchronized to a managed domain. If only some users need to use Azure AD DS, you can instead choose to synchronize only groups of users. You can filter the synchronized groups to those created on-premises, cloud-only groups, or both.
This article shows you how to configure scoped synchronization and then change or disable the set of scoped users using the Azure portal. You can also [complete these steps using PowerShell][scoped-sync-powershell]. + ## Before you begin To complete this article, you need the following resources and privileges:
To complete this article, you need the following resources and privileges:
## Scoped synchronization overview
-By default, all users and groups from an Azure AD directory are synchronized to a managed domain. If only a few users need to access the managed domain, you can synchronize only those user accounts. This scoped synchronization is group-based. When you configure group-based scoped synchronization, only the user accounts that belong to the groups you specify are synchronized to the managed domain. Nested groups aren't synchronized, only the specific groups you select.
+By default, all users and groups from an Azure AD directory are synchronized to a managed domain. You can scope synchronization to only user accounts that were created in Azure AD, or synchronize all users.
+
+If only a few groups of users need to access the managed domain, you can select **Filter by group entitlement** to synchronize only those groups. This scoped synchronization is group-based only. When you configure group-based scoped synchronization, only the user accounts that belong to the groups you specify are synchronized to the managed domain. Nested groups aren't synchronized; only the groups you select directly are included.
You can change the synchronization scope before or after you create the managed domain. The scope of synchronization is defined by a service principal with the application identifier 2565bd9d-da50-47d4-8b85-4c97f669dc36. To prevent scope loss, don't delete or change the service principal. If it is accidentally deleted, the synchronization scope can't be recovered.
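The synchronization scope depends on that service principal remaining intact. If you want to verify that it still exists in your tenant, a minimal sketch using the Microsoft Graph PowerShell SDK follows (the connection scope shown is an assumption; any permission that can read service principals works):

```powershell
# Connect with a permission that can read service principals (assumed scope).
Connect-MgGraph -Scopes "Application.Read.All"

# Look up the scoped-synchronization service principal by the app ID cited above.
$scopingAppId = "2565bd9d-da50-47d4-8b85-4c97f669dc36"
$sp = Get-MgServicePrincipal -Filter "appId eq '$scopingAppId'"

if ($sp) {
    Write-Output "Scoping service principal present: $($sp.DisplayName)"
} else {
    Write-Warning "Scoping service principal not found - the synchronization scope may be unrecoverable."
}
```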
To enable scoped synchronization in the Azure portal, complete the following ste
1. In the Azure portal, search for and select **Azure AD Domain Services**. Choose your managed domain, such as *aaddscontoso.com*. 1. Select **Synchronization** from the menu on the left-hand side.
-1. For the *Synchronization type*, select **Scoped**.
-1. Choose **Select groups**, then search for and choose the groups to add.
+1. For *Synchronization scope*, select **All** or **Cloud Only**.
+1. To filter synchronization to selected groups, click **Show selected groups**, and then choose whether to synchronize cloud-only groups, on-premises groups, or both. For example, the following screenshot shows how to synchronize only three groups that were created in Azure AD. Only users who belong to those groups have their accounts synchronized to Azure AD DS.
+
+ :::image type="content" source="media/scoped-synchronization/cloud-only-groups.png" alt-text="Screenshot that shows filter by cloud-only groups." :::
+
+1. To add groups, click **Add groups**, then search for and choose the groups to add.
1. When all changes are made, select **Save synchronization scope**. Changing the scope of synchronization causes the managed domain to resynchronize all data. Objects that are no longer required in the managed domain are deleted, and resynchronization may take some time to complete.
To modify the list of groups whose users should be synchronized to the managed d
1. In the Azure portal, search for and select **Azure AD Domain Services**. Choose your managed domain, such as *aaddscontoso.com*. 1. Select **Synchronization** from the menu on the left-hand side.
-1. To add a group, choose **+ Select groups** at the top, then choose the groups to add.
+1. To add a group, choose **+ Add groups** at the top, then choose the groups to add.
1. To remove a group from the synchronization scope, select it from the list of currently synchronized groups and choose **Remove groups**. 1. When all changes are made, select **Save synchronization scope**.
To disable group-based scoped synchronization for a managed domain, complete the
1. In the Azure portal, search for and select **Azure AD Domain Services**. Choose your managed domain, such as *aaddscontoso.com*. 1. Select **Synchronization** from the menu on the left-hand side.
-1. Change the *Synchronization type* from **Scoped** to **All**, then select **Save synchronization scope**.
+1. Clear the check box for **Show selected groups**, and click **Save synchronization scope**.
Changing the scope of synchronization causes the managed domain to resynchronize all data. Objects that are no longer required in the managed domain are deleted, and resynchronization may take some time to complete.
active-directory-domain-services Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md
Previously updated : 01/29/2023 Last updated : 04/03/2023
Objects and credentials in an Azure Active Directory Domain Services (Azure AD D
In a hybrid environment, objects and credentials from an on-premises AD DS domain can be synchronized to Azure AD using Azure AD Connect. Once those objects are successfully synchronized to Azure AD, the automatic background sync then makes those objects and credentials available to applications using the managed domain.
-If on-premises AD DS and Azure AD are configured for federated authentication using ADFS without password hash sync, or if third-party identity protection products and Azure AD are configured for federated authentication without password hash sync, no (current/valid) password hash is available in Azure DS. Azure AD user accounts created before fed auth was implemented might have an old password hash, but this likely doesn't match a hash of their on-premises password. Hence, Azure AD DS won't be able to validate a user's credentials.
- The following diagram illustrates how synchronization works between Azure AD DS, Azure AD, and an optional on-premises AD DS environment: ![Synchronization overview for an Azure AD Domain Services managed domain](./media/active-directory-domain-services-design-guide/sync-topology.png) ## Synchronization from Azure AD to Azure AD DS - User accounts, group memberships, and credential hashes are synchronized one way from Azure AD to Azure AD DS. This synchronization process is automatic. You don't need to configure, monitor, or manage this synchronization process. The initial synchronization may take a few hours to a couple of days, depending on the number of objects in the Azure AD directory. After the initial synchronization is complete, changes that are made in Azure AD, such as password or attribute changes, are then automatically synchronized to Azure AD DS. When a user is created in Azure AD, they're not synchronized to Azure AD DS until they change their password in Azure AD. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Azure AD. The password hashes are needed to successfully authenticate a user in Azure AD DS.
-The synchronization process is one way / unidirectional by design. There's no reverse synchronization of changes from Azure AD DS back to Azure AD. A managed domain is largely read-only except for custom OUs that you can create. You can't make changes to user attributes, user passwords, or group memberships within a managed domain.
+The synchronization process is one-way by design. There's no reverse synchronization of changes from Azure AD DS back to Azure AD. A managed domain is largely read-only except for custom OUs that you can create. You can't make changes to user attributes, user passwords, or group memberships within a managed domain.
+
+## Scoped synchronization and group filter
+
+You can scope synchronization to only user accounts that originated in the cloud. Within that synchronization scope, you can filter for specific groups of users. You can choose between cloud-only groups, on-premises groups, or both. For more information about how to configure scoped synchronization, see [Configure scoped synchronization](scoped-synchronization.md).
++ ## Attribute synchronization and mapping to Azure AD DS
Azure AD Connect is used to synchronize user accounts, group memberships, and cr
> [!IMPORTANT] > Azure AD Connect should only be installed and configured for synchronization with on-premises AD DS environments. It's not supported to install Azure AD Connect in a managed domain to synchronize objects back to Azure AD.
-If you configure write-back, changes from Azure AD are synchronized back to the on-premises AD DS environment. For example, if a user changes their password using Azure AD self-service password management, the password is updated back in the on-premises AD DS environment.
+If you configure writeback, changes from Azure AD are synchronized back to the on-premises AD DS environment. For example, if a user changes their password using Azure AD self-service password management, the password is updated back in the on-premises AD DS environment.
> [!NOTE] > Always use the latest version of Azure AD Connect to ensure you have fixes for all known bugs.
The following objects or attributes aren't synchronized from an on-premises AD D
## Password hash synchronization and security considerations
-When you enable Azure AD DS, legacy password hashes for NTLM + Kerberos authentication are required. Azure AD doesn't store clear-text passwords, so these hashes can't be automatically generated for existing user accounts. Once generated and stored, NTLM and Kerberos compatible password hashes are always stored in an encrypted manner in Azure AD.
+When you enable Azure AD DS, legacy password hashes for NTLM and Kerberos authentication are required. Azure AD doesn't store clear-text passwords, so these hashes can't be automatically generated for existing user accounts. NTLM and Kerberos compatible password hashes are always stored in an encrypted manner in Azure AD.
The encryption keys are unique to each Azure AD tenant. These hashes are encrypted such that only Azure AD DS has access to the decryption keys. No other service or component in Azure AD has access to the decryption keys.
active-directory-domain-services Tutorial Configure Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-password-hash-sync.md
Previously updated : 01/29/2023 Last updated : 04/03/2023 #Customer intent: As a server administrator, I want to learn how to enable password hash synchronization with Azure AD Connect to create a hybrid environment using an on-premises AD DS domain.
For hybrid environments, an Azure Active Directory (Azure AD) tenant can be conf
To use Azure AD DS with accounts synchronized from an on-premises AD DS environment, you need to configure Azure AD Connect to synchronize those password hashes required for NTLM and Kerberos authentication. After Azure AD Connect is configured, an on-premises account creation or password change event also then synchronizes the legacy password hashes to Azure AD.
-You don't need to perform these steps if you use cloud-only accounts with no on-premises AD DS environment, or if you use a *resource forest*. For managed domains that use a resource forest, on-premises password hashes are never synchronized. Authentication for on-premises accounts use the forest trust(s) back to your own AD DS domain controllers.
+You don't need to perform these steps if you use cloud-only accounts with no on-premises AD DS environment.
In this tutorial, you learn:
With Azure AD Connect installed and configured to synchronize with Azure AD, now
In this example screenshot, the following connectors are used:
- * The Azure AD connector is named *contoso.onmicrosoft.com - AAD*
+ * The Azure AD connector is named *contoso.onmicrosoft.com - Azure AD*
* The on-premises AD DS connector is named *onprem.contoso.com* 1. Copy and paste the following PowerShell script to the computer with Azure AD Connect installed. The script triggers a full password sync that includes legacy password hashes. Update the `$azureadConnector` and `$adConnector` variables with the connector names from the previous step.
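   The script itself is elided from this digest. As a hedged sketch of what such a trigger looks like, the following uses the documented ADSync cmdlets to force a full password sync; treat the connector names and module path as placeholders and prefer the script published in the tutorial:

```powershell
# Connector names from the previous step (placeholders; names are case-sensitive).
$adConnector = "onprem.contoso.com"
$azureadConnector = "contoso.onmicrosoft.com - Azure AD"

# Default install path for the ADSync module (adjust if installed elsewhere).
Import-Module "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1"

# Mark the on-premises connector for an unconditional full password sync.
$c = Get-ADSyncConnector -Name $adConnector
$p = New-Object Microsoft.IdentityManagement.PowerShell.ObjectModel.ConfigurationParameter "Microsoft.Synchronize.ForceFullPasswordSync", String, ConnectorGlobal, $null, $null, $null
$p.Value = 1
$c.GlobalParameters.Remove($p.Name)
$c.GlobalParameters.Add($p)
$c = Add-ADSyncConnector -Connector $c

# Toggle password hash sync off and on to trigger the full sync of legacy hashes.
Set-ADSyncAADPasswordSyncConfiguration -SourceConnector $adConnector -TargetConnector $azureadConnector -Enable $false
Set-ADSyncAADPasswordSyncConfiguration -SourceConnector $adConnector -TargetConnector $azureadConnector -Enable $true
```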
active-directory-domain-services Tutorial Create Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-forest-trust.md
Previously updated : 06/07/2022 Last updated : 03/02/2023 #Customer intent: As an identity administrator, I want to create a one-way outbound forest from an Azure Active Directory Domain Services forest to an on-premises Active Directory Domain Services forest to provide authentication and resource access between forests.
You can create a one-way outbound trust from Azure AD DS to one or more on-premi
- Environments where you can't synchronize password hashes, or where users exclusively sign in using smart cards and don't know their password. - Hybrid scenarios that still require access to on-premises domains.
-Trusts can be created in both resource forest and user forest domain types. The resource forest domain type will automatically block sync for any user accounts that were synchronized to Azure AD DS from an on-premises domain. This is the safest domain type to use for trusts as it ensures that there will be no UPN collisions when users are authenticating. Trusts created in a user forest are not inherently safe but allow you more flexibility in what gets synchronized from Azure AD.
+Trusts can be created in any domain. The domain automatically blocks synchronization from an on-premises domain for any user accounts that were synchronized to Azure AD DS. This prevents UPN collisions when users authenticate.
![Diagram of forest trust from Azure AD DS to on-premises AD DS](./media/tutorial-create-forest-trust/forest-trust-relationship.png)
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* An Azure Active Directory Domain Services managed domain created using a user or resource forest and configured in your Azure AD tenant.
+* An Azure Active Directory Domain Services managed domain.
* If needed, [create and configure an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance-advanced]. > [!IMPORTANT]
In this tutorial, you learned how to:
> * Create a one-way outbound forest trust in Azure AD DS > * Test and validate the trust relationship for authentication and resource access
-For more conceptual information about forest types in Azure AD DS, see [What are resource forests?][concepts-forest] and [How do forest trusts work in Azure AD DS?][concepts-trust].
+For more conceptual information about forests in Azure AD DS, see [How do forest trusts work in Azure AD DS?][concepts-trust].
<!-- INTERNAL LINKS -->
-[concepts-forest]: concepts-resource-forest.md
[concepts-trust]: concepts-forest-trust.md [create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md [associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
Previously updated : 01/29/2023 Last updated : 04/03/2023 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain and define advanced configuration options so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
Complete the fields in the *Basics* window of the Azure portal to create a manag
1. The **SKU** determines the performance and backup frequency. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see [Azure AD DS SKU concepts][concepts-sku]. For this tutorial, select the *Standard* SKU.
-1. A *forest* is a logical construct used by Active Directory Domain Services to group one or more domains. By default, a managed domain is created as a *User* forest. This type of forest synchronizes all objects from Azure AD, including any user accounts created in an on-premises AD DS environment.
-
- A *Resource* forest only synchronizes users and groups created directly in Azure AD. Password hashes for on-premises users are never synchronized into a managed domain when you create a resource forest. For more information on *Resource* forests, including why you may use one and how to create forest trusts with on-premises AD DS domains, see [Azure AD DS resource forests overview][resource-forests].
-
- For this tutorial, choose to create a *User* forest.
+1. A *forest* is a logical construct used by Active Directory Domain Services to group one or more domains.
![Configure basic settings for an Azure AD Domain Services managed domain](./media/tutorial-create-instance-advanced/basics-window.png)
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
Complete the fields in the *Basics* window of the Azure portal to create a manag
1. The **SKU** determines the performance and backup frequency. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see [Azure AD DS SKU concepts][concepts-sku]. For this tutorial, select the *Standard* SKU.
-1. A *forest* is a logical construct used by Active Directory Domain Services to group one or more domains. By default, a managed domain is created as a *User* forest. This type of forest synchronizes all objects from Azure AD, including any user accounts created in an on-premises AD DS environment.
-
- A *Resource* forest only synchronizes users and groups created directly in Azure AD. For more information on *Resource* forests, including why you may use one and how to create forest trusts with on-premises AD DS domains, see [Azure AD DS resource forests overview][resource-forests].
-
- For this tutorial, choose to create a *User* forest.
+1. A *forest* is a logical construct used by Active Directory Domain Services to group one or more domains.
![Configure basic settings for an Azure AD Domain Services managed domain](./media/tutorial-create-instance/basics-window.png)
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 03/31/2023 Last updated : 04/03/2023
The **Azure AD Provisioning Service** provisions users to SaaS apps and other sy
## Provisioning using SCIM 2.0
-The Azure AD provisioning service uses the [SCIM 2.0 protocol](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/bg-p/IdentityStandards) for automatic provisioning. The service connects to the SCIM endpoint for the application, and uses SCIM user object schema and REST APIs to automate the provisioning and de-provisioning of users and groups. A SCIM-based provisioning connector is provided for most applications in the Azure AD gallery. When building apps for Azure AD, developers can use the SCIM 2.0 user management API to build a SCIM endpoint that integrates Azure AD for provisioning. For details, see [Build a SCIM endpoint and configure user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md).
+The Azure AD provisioning service uses the [SCIM 2.0 protocol](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/bg-p/IdentityStandards) for automatic provisioning. The service connects to the SCIM endpoint for the application, and uses SCIM user object schema and REST APIs to automate the provisioning and de-provisioning of users and groups. A SCIM-based provisioning connector is provided for most applications in the Azure AD gallery. Developers use the SCIM 2.0 user management API in Azure AD to build endpoints for their apps that integrate with the provisioning service. For details, see [Build a SCIM endpoint and configure user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md).
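To make the wire format concrete, here's a minimal sketch of the kind of request the provisioning service sends to create a user on a SCIM endpoint. The endpoint URL and bearer token are hypothetical; the schema URN and attribute names come from the SCIM 2.0 core schema (RFC 7643):

```powershell
# Hypothetical SCIM endpoint and bearer token for illustration only.
$scimEndpoint = "https://app.contoso.com/scim/v2"
$token = "<bearer-token>"

# Minimal SCIM 2.0 core User object (RFC 7643).
$user = @{
    schemas  = @("urn:ietf:params:scim:schemas:core:2.0:User")
    userName = "jane.doe@contoso.com"
    active   = $true
    name     = @{ givenName = "Jane"; familyName = "Doe" }
} | ConvertTo-Json -Depth 5

# POST /Users creates the user on the SCIM endpoint.
Invoke-RestMethod -Method Post -Uri "$scimEndpoint/Users" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/scim+json" -Body $user
```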
To request an automatic Azure AD provisioning connector for an app that doesn't currently have one, see [Azure Active Directory Application Request](../manage-apps/v2-howto-app-gallery-listing.md).
Content-type: application/json
} ``` - A new initial cycle is triggered because of a change in attribute mappings or scoping filters. This action also clears any stored watermark and causes all source objects to be evaluated again.-- The provisioning process goes into quarantine (see example) because of a high error rate, and stays in quarantine for more than four weeks. In this event, the service will be automatically disabled.
+- The provisioning process goes into quarantine (see example) because of a high error rate, and stays in quarantine for more than four weeks. In this event, the service is automatically disabled.
### Errors and retries
The provisioning service supports both deleting and disabling (sometimes referre
**Configure your application to disable a user**
-Confirm the checkobx for updates is selected.
+Confirm the checkbox for updates is selected.
-Confirm the mapping for *active* for your application. If your using an application from the app gallery, the mapping may be slightly different. In this case, use the default mapping for gallery applications.
+Confirm the mapping for *active* for your application. If you're using an application from the app gallery, the mapping may be slightly different. In this case, use the default mapping for gallery applications.
:::image type="content" source="./media/how-provisioning-works/disable-user.png" alt-text="Disable a user" lightbox="./media/how-provisioning-works/disable-user.png"::: **Configure your application to delete a user**
-The scenarios will trigger a disable or a delete:
-* A user is soft deleted in Azure AD (sent to the recycle bin / AccountEnabled property set to false).
+The following scenarios trigger a disable or a delete:
+* A user is soft-deleted in Azure AD (sent to the recycle bin / AccountEnabled property set to false).
30 days after a user is deleted in Azure AD, they're permanently deleted from the tenant. At this point, the provisioning service sends a DELETE request to permanently delete the user in the application. At any time during the 30-day window, you can [manually delete a user permanently](../fundamentals/active-directory-users-restore.md), which sends a delete request to the application. * A user is permanently deleted / removed from the recycle bin in Azure AD. * A user is unassigned from an app.
The scenarios will trigger a disable or a delete:
:::image type="content" source="./media/how-provisioning-works/delete-user.png" alt-text="Delete a user" lightbox="./media/how-provisioning-works/delete-user.png":::
-By default, the Azure AD provisioning service soft deletes or disables users that go out of scope. If you want to override this default behavior, you can set a flag to [skip out-of-scope deletions.](skip-out-of-scope-deletions.md)
+By default, the Azure AD provisioning service soft-deletes or disables users that go out of scope. If you want to override this default behavior, you can set a flag to [skip out-of-scope deletions.](skip-out-of-scope-deletions.md)
-If one of the four events occurs and the target application doesn't support soft deletes, the provisioning service will send a DELETE request to permanently delete the user from the app.
+When one of the four events occurs and the target application doesn't support soft-deletes, the provisioning service sends a DELETE request to permanently delete the user from the app.
-If you see an attribute IsSoftDeleted in your attribute mappings, it's used to determine the state of the user and whether to send an update request with active = false to soft delete the user.
+If you see `IsSoftDeleted` in your attribute mappings, it's used to determine the state of the user and whether to send an update request with `active = false` to soft-delete the user.
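For a SCIM-based application, that soft-delete update is conceptually a PATCH that flips `active` to false. A minimal sketch follows; the endpoint, user ID, and token are hypothetical, and the PatchOp message shape comes from RFC 7644:

```powershell
# Hypothetical SCIM endpoint, user ID, and bearer token for illustration only.
$scimEndpoint = "https://app.contoso.com/scim/v2"
$userId = "<scim-user-id>"
$token = "<bearer-token>"

# SCIM 2.0 PatchOp (RFC 7644) that sets active = false, soft-deleting the user.
$patch = @{
    schemas    = @("urn:ietf:params:scim:api:messages:2.0:PatchOp")
    Operations = @(@{ op = "Replace"; path = "active"; value = $false })
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Patch -Uri "$scimEndpoint/Users/$userId" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/scim+json" -Body $patch
```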
**Deprovisioning events**
-The table describes how you can configure deprovisioning actions with the Azure AD provisioning service. These rules are written with the non-gallery / custom application in mind, but generally apply to applications in the gallery. However, the behavior for gallery applications can differ as they've been optimized to meet the needs of the application. For example, the Azure AD provisioning service may always sende a request to hard delete users in certain applications rather than soft deleting, if the target application doesn't support soft deleting users.
+The table describes how you can configure deprovisioning actions with the Azure AD provisioning service. These rules are written with the non-gallery / custom application in mind, but generally apply to applications in the gallery. However, the behavior for gallery applications can differ because they've been optimized to meet the needs of the application. For example, if the target application doesn't support soft-deleting, the Azure AD provisioning service might send a hard-delete request to delete users rather than a soft-delete.
|Scenario|How to configure in Azure AD| |--|--| |If a user is unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, do nothing.|Remove isSoftDeleted from the attribute mappings and / or set the [skip out of scope deletions](skip-out-of-scope-deletions.md) property to true.| |If a user is unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, set a specific attribute to true / false.|Map isSoftDeleted to the attribute that you would like to set to false.| |When a user is disabled in Azure AD, unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, send a DELETE request to the target application.|This is currently supported for a limited set of gallery applications where the functionality is required. It's not configurable by customers.|
-|When a user is deleted in Azure AD, do nothing in the target application.|Ensure that "Delete" isn't selected as one of the target object actions in the [attriubte configuration experience](skip-out-of-scope-deletions.md).|
+|When a user is deleted in Azure AD, do nothing in the target application.|Ensure that "Delete" isn't selected as one of the target object actions in the [attribute configuration experience](skip-out-of-scope-deletions.md).|
|When a user is deleted in Azure AD, set the value of an attribute in the target application.|Not supported.| |When a user is deleted in Azure AD, delete the user in the target application|This is supported. Ensure that Delete is selected as one of the target object actions in the [attribute configuration experience](skip-out-of-scope-deletions.md).|
The table describes how you can configure deprovisioning actions with the Azure
**Recommendation**
-When developing an application, always support both soft deletes and hard deletes. It allows customers to recover when a user is accidentally disabled.
+When developing an application, always support both soft-deletes and hard-deletes. Supporting both allows customers to recover when a user is accidentally disabled.
## Next Steps
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
Previously updated : 03/06/2023 Last updated : 04/02/2023
The following expiration requirements apply to other providers that use Azure AD
| Property | Requirements | | | |
-| Password expiry duration (Maximum password age) |<ul><li>Default value: **90** days.</li><li>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure Active Directory Module for Windows PowerShell.</li></ul> |
-| Password expiry notification (When users are notified of password expiration) |<ul><li>Default value: **14** days (before password expires).</li><li>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet.</li></ul> |
-| Password expiry (Let passwords never expire) |<ul><li>Default value: **false** (indicates that password's have an expiration date).</li><li>The value can be configured for individual user accounts by using the `Set-MsolUser` cmdlet.</li></ul> |
+| Password expiry duration (Maximum password age) |Default value: **90** days.<br>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure Active Directory Module for Windows PowerShell. |
+| Password expiry (Let passwords never expire) |Default value: **false** (indicates that passwords have an expiration date).<br>The value can be configured for individual user accounts by using the `Set-MsolUser` cmdlet.|
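As an illustration of the cmdlets named in the table, a short sketch follows; the domain name and user principal name are placeholders:

```powershell
# Requires the MSOnline module and a connected session.
Connect-MsolService

# Set the maximum password age for a domain to 90 days (placeholder domain).
Set-MsolPasswordPolicy -DomainName "contoso.com" -ValidityPeriod 90

# Exempt a single account from password expiration (placeholder UPN).
Set-MsolUser -UserPrincipalName "user@contoso.com" -PasswordNeverExpires $true
```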
## Next steps
active-directory Concept System Preferred Multifactor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-system-preferred-multifactor-authentication.md
description: Learn how to use system-preferred multifactor authentication
Previously updated : 03/31/2023 Last updated : 04/03/2023
After system-preferred MFA is enabled, the authentication system does all the wo
>[!NOTE] >System-preferred MFA is a key security upgrade to traditional second factor notifications. We highly recommend enabling system-preferred MFA in the near term for improved sign-in security.
-## Enable system-preferred MFA
+## Enable system-preferred MFA in the Azure portal
+
+By default, system-preferred MFA is Microsoft managed and disabled for all users.
+
+1. In the Azure portal, click **Security** > **Authentication methods** > **Settings**.
+1. For **System-preferred multifactor authentication**, choose whether to explicitly enable or disable the feature, and include or exclude any users. Excluded groups take precedence over included groups.
+
+ For example, the following screenshot shows how to explicitly enable system-preferred MFA for only the Engineering group.
+
+ :::image type="content" border="true" source="./media/concept-system-preferred-multifactor-authentication/enable.png" alt-text="Screenshot of how to enable system-preferred multifactor authentication.":::
+
+1. After you finish making any changes, click **Save**.
+
+## Enable system-preferred MFA using Graph APIs
To enable system-preferred MFA in advance, you need to choose a single target group for the schema configuration, as shown in the [Request](#request) example.
System-preferred MFA can be enabled only for a single group, which can be a dyna
| Property | Type | Description | |-||-|
-| id | String | ID of the entity targeted. |
+| `id` | String | ID of the entity targeted. |
| targetType | featureTargetType | The kind of entity targeted, such as group, role, or administrative unit. The possible values are: 'group', 'administrativeUnit', 'role', 'unknownFutureValue'. | Use the following API endpoint to enable **systemCredentialPreferences** and include or exclude groups:
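The request itself is elided from this digest. As a hedged sketch, system-preferred MFA is configured through the authentication methods policy in the Microsoft Graph beta endpoint; the group ID below is a placeholder, and the payload shape should be confirmed against the current Graph reference for **systemCredentialPreferences**:

```powershell
# Requires the Microsoft Graph PowerShell SDK and a policy-write permission (assumed scope).
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

# Enable system-preferred MFA for a single target group (placeholder ID).
$body = @{
    systemCredentialPreferences = @{
        state          = "enabled"
        includeTargets = @(@{ id = "<group-object-id>"; targetType = "group" })
        excludeTargets = @()
    }
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy" `
    -Body $body -ContentType "application/json"
```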
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
For more information about NIST standards for onboarding and recovery, see [NIST
Keep these limitations in mind: - When using a one-time Temporary Access Pass to register a Passwordless method such as FIDO2 or Phone sign-in, the user must complete the registration within 10 minutes of sign-in with the one-time Temporary Access Pass. This limitation doesn't apply to a Temporary Access Pass that can be used more than once.-- Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they've signed in with a Temporary Access Pass.
+- Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they've signed in with a Temporary Access Pass using a browser.
Users in scope for these policies will get redirected to the [Interrupt mode of the combined registration](concept-registration-mfa-sspr-combined.md#combined-registration-modes). This experience doesn't currently support FIDO2 and Phone Sign-in registration. - A Temporary Access Pass can't be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter. - It can take a few minutes for changes to replicate. Because of this, after a Temporary Access Pass is added to an account it can take a while for the prompt to appear. For the same reason, after a Temporary Access Pass expires, users may still see a prompt for Temporary Access Pass.
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
Previously updated : 01/05/2023 Last updated : 04/03/2023
Administrators can monitor and troubleshoot sign in events where [continuous acc
## Continuous access evaluation sign-in reporting
-Administrators will have the opportunity to monitor user sign-ins where CAE is applied. This pane can be located by via the following instructions:
+Administrators can monitor user sign-ins where continuous access evaluation (CAE) is applied. This information is found in the Azure AD sign-in logs:
1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Sign-ins**.
+1. Browse to **Azure Active Directory** > **Sign-in logs**.
1. Apply the **Is CAE Token** filter.
-[ ![Add a filter to the Sign-ins log to see where CAE is being applied or not](./media/howto-continuous-access-evaluation-troubleshoot/azure-ad-sign-ins-log-apply-filter.png) ](./media/howto-continuous-access-evaluation-troubleshoot/azure-ad-sign-ins-log-apply-filter.png#lightbox)
+[ ![Screenshot showing how to add a filter to the Sign-ins log to see where CAE is being applied or not.](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png) ](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png#lightbox)
-From here, admins will be presented with information about their userΓÇÖs sign-in events. Select any sign-in to see details about the session, like which Conditional Access policies were applied and is CAE enabled.
+From here, admins are presented with information about their users' sign-in events. Select any sign-in to see details about the session, like which Conditional Access policies applied and whether CAE was enabled.
-There are multiple sign-in requests for each authentication. Some will be shown on the interactive tab, while others will be shown on the non-interactive tab. CAE will only be displayed as true for one of the requests, and it can be on the interactive tab or non-interactive tab. Admins need to check both tabs to confirm whether the user's authentication is CAE enabled or not.
+There are multiple sign-in requests for each authentication. Some are on the interactive tab, while others are on the non-interactive tab. CAE is only marked true for one of the requests, and it can be on either the interactive or non-interactive tab. Admins must check both tabs to confirm whether the user's authentication is CAE enabled.
### Searching for specific sign-in attempts
-Sign in logs contain information on Success as well as failure events. Use filters to narrow your search. For example, if a user signed in to Teams, use the Application filter and set it to Teams. Admins may need to check the sign-ins from both interactive and non-interactive tabs to locate the specific sign-in. To further narrow the search, admins may apply multiple filters.
+Sign-in logs contain information on success and failure events. Use filters to narrow your search. For example, if a user signed in to Teams, use the Application filter and set it to Teams. Admins may need to check the sign-ins from both interactive and non-interactive tabs to locate the specific sign-in. To further narrow the search, admins may apply multiple filters.
## Continuous access evaluation workbooks
Log Analytics integration must be completed before workbooks are displayed. For
1. Browse to **Azure Active Directory** > **Workbooks**. 1. Under **Public Templates**, search for **Continuous access evaluation insights**.
-[ ![Find the CAE insights workbook in the gallery to continue monitoring](./media/howto-continuous-access-evaluation-troubleshoot/azure-ad-workbooks-continuous-access-evaluation.png) ](./media/howto-continuous-access-evaluation-troubleshoot/azure-ad-workbooks-continuous-access-evaluation.png#lightbox)
- The **Continuous access evaluation insights** workbook contains the following table: ### Potential IP address mismatch between Azure AD and resource provider
-![Workbook table 1 showing potential IP address mismatches](./media/howto-continuous-access-evaluation-troubleshoot/continuous-access-evaluation-insights-workbook-table-1.png)
- The potential IP address mismatch between Azure AD & resource provider table allows admins to investigate sessions where the IP address detected by Azure AD doesn't match with the IP address detected by the resource provider. This workbook table sheds light on these scenarios by displaying the respective IP addresses and whether a CAE token was issued during the session.
-#### Continuous access evaluation insights per sign-in
+### Continuous access evaluation insights per sign-in
The continuous access evaluation insights per sign-in page in the workbook connects multiple requests from the sign-in logs and displays a single request where a CAE token was issued. This workbook can come in handy, for example, when a user opens Outlook on their desktop and attempts to access resources inside of Exchange Online. This sign-in action may map to multiple interactive and non-interactive sign-in requests in the logs, making issues hard to diagnose.
-#### IP address configuration
+## IP address configuration
Your identity provider and resource providers may see different IP addresses. This mismatch may happen in scenarios like the following:
Your identity provider and resource providers may see different IP addresses. Th
- Your resource provider is using an IPv6 address and Azure AD is using an IPv4 address. - Because of network configurations, Azure AD sees one IP address from the client and your resource provider sees a different IP address from the client.
-If this scenario exists in your environment, to avoid infinite loops, Azure AD will issue a one-hour CAE token and won't enforce client location change during that one-hour period. Even in this case, security is improved compared to traditional one-hour tokens since we're still evaluating the other events besides client location change events.
+If this scenario exists in your environment, to avoid infinite loops, Azure AD issues a one-hour CAE token and doesn't enforce client location change during that one-hour period. Even in this case, security is improved compared to traditional one-hour tokens since we're still evaluating the other events besides client location change events.
Admins can view records filtered by time range and application. Admins can compare the number of mismatched IPs detected with the total number of sign-ins during a specified time period.
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
The Microsoft identity platform uses some claims to help secure tokens for reuse
### Payload claims
-| Claim | Format | Description |
-|-|--|-|
-| `aud` | String, an Application ID URI or GUID | Identifies the intended audience of the token. The API must validate this value and reject the token if the value doesn't match. In v2.0 tokens, this value is always the client ID of the API. In v1.0 tokens, it can be the client ID or the resource URI used in the request. The value can depend on how the client requested the token. |
-| `iss` | String, a security token service (STS) URI | Identifies the STS that constructs and returns the token, and the Azure AD tenant of the authenticated user. If the token issued is a v2.0 token (see the `ver` claim), the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. The application can use the GUID portion of the claim to restrict the set of tenants that can sign in to the application, if applicable. |
-|`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer, such as guests. Use the value of `iss` if the claim isn't present. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. |
-| `iat` | int, a Unix timestamp | Specifies when the authentication for this token occurred. |
-| `nbf` | int, a Unix timestamp | Specifies the time after which the JWT can be processed. |
-| `exp` | int, a Unix timestamp | Specifies the expiration time on or after which the JWT must not be accepted for processing. A resource may reject the token before this time as well. The rejection can occur for a required change in authentication or when a token is revoked. |
-| `aio` | Opaque String | An internal claim used by Azure AD to record data for token reuse. Resources shouldn't use this claim. |
-| `acr` | String, a `0` or `1`, only present in v1.0 tokens | A value of `0` for the "Authentication context class" claim indicates the end-user authentication didn't meet the requirements of ISO/IEC 29115. |
-| `amr` | JSON array of strings, only present in v1.0 tokens | Identifies the authentication method of the subject of the token. |
-| `appid` | String, a GUID, only present in v1.0 tokens | The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. |
-| `azp` | String, a GUID, only present in v2.0 tokens | A replacement for `appid`. The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. |
-| `appidacr` | String, a `0`, `1`, or `2`, only present in v1.0 tokens | Indicates authentication method of the client. For a public client, the value is `0`. When you use the client ID and client secret, the value is `1`. When you use a client certificate for authentication, the value is `2`. |
-| `azpacr` | String, a `0`, `1`, or `2`, only present in v2.0 tokens | A replacement for `appidacr`. Indicates the authentication method of the client. For a public client, the value is `0`. When you use the client ID and client secret, the value is `1`. When you use a client certificate for authentication, the value is `2`. |
-| `preferred_username` | String, only present in v2.0 tokens. | The primary username that represents the user. The value could be an email address, phone number, or a generic username without a specified format. The value is mutable and might change over time. Since the value is mutable, don't use it to make authorization decisions. Use the value for username hints and in human-readable UI as a username. To receive this claim, use the `profile` scope. |
-| `name` | String | Provides a human-readable value that identifies the subject of the token. The value can vary, it's mutable, and is for display purposes only. To receive this claim, use the `profile` scope. |
-| `scp` | String, a space separated list of scopes | The set of scopes exposed by the application for which the client application has requested (and received) consent. The application should verify that these scopes are valid ones exposed by the application, and make authorization decisions based on the value of these scopes. Only included for user tokens. |
-| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. The [client credential flow](v2-oauth2-client-creds-grant-flow.md) uses this set of permission in place of user scopes for application tokens. For user tokens, this set of values contains the assigned roles of the user on the target application. |
-| `wids` | Array of [RoleTemplateID](../roles/permissions-reference.md#all-roles) GUIDs | Denotes the tenant-wide roles assigned to this user, from the section of roles present in [Azure AD built-in roles](../roles/permissions-reference.md#all-roles). The `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md) configures this claim on a per-application basis. Set the claim to `All` or `DirectoryRole`. May not be present in tokens obtained through the implicit flow due to token length concerns. |
-| `groups` | JSON array of GUIDs | Provides object IDs that represent the group memberships of the subject. Safely use these unique values for managing access, such as enforcing authorization to access a resource. The `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md) configures the groups claim on a per-application basis. A value of `null` excludes all groups, a value of `SecurityGroup` includes only Active Directory Security Group memberships, and a value of `All` includes both Security Groups and Microsoft 365 Distribution Lists. <br><br>See the `hasgroups` claim for details on using the `groups` claim with the implicit grant. For other flows, if the number of groups the user is in goes over 150 for SAML and 200 for JWT, then Azure AD adds an overage claim to the claim sources. The claim sources point to the Microsoft Graph endpoint that contains the list of groups for the user. |
-| `hasgroups` | Boolean | If present, always `true`, indicates whether the user is in at least one group. Used in place of the `groups` claim for JWTs in implicit grant flows if the full groups claim would extend the URI fragment beyond the URL length limits (currently six or more groups). Indicates that the client should use the Microsoft Graph API to determine the groups (`https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects`) of the user. |
-| `groups:src1` | JSON object | Includes a link to the full groups list for the user when token requests are too large for the token. For JWTs as a distributed claim, for SAML as a new claim in place of the `groups` claim. <br><br>**Example JWT Value**: <br> `"groups":"src1"` <br> `"_claim_sources`: `"src1" : { "endpoint" : "https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects" }` |
-| `sub` | String | The principal associated with the token. For example, the user of an application. This value is immutable, don't reassign or reuse. Use it to perform authorization checks safely, such as when using the token to access a resource, and can be used as a key in database tables. Because the subject is always present in the tokens that Azure AD issues, use this value in a general-purpose authorization system. The subject is a pairwise identifier that's unique to a particular application ID. If a single user signs into two different applications using two different client IDs, those applications receive two different values for the subject claim. Using the two different values depends on architecture and privacy requirements. See also the `oid` claim, which does remain the same across applications within a tenant. |
-| `oid` | String, a GUID | The immutable identifier for the requestor, which is the verified identity of the user or service principal. Use this value to also perform authorization checks safely and as a key in database tables. This ID uniquely identifies the requestor across applications. Two different applications signing in the same user receive the same value in the `oid` claim. The `oid` can be used when making queries to Microsoft online services, such as the Microsoft Graph. The Microsoft Graph returns this ID as the `id` property for a given user account. Because the `oid` allows multiple applications to correlate principals, to receive this claim for users use the `profile` scope. If a single user exists in multiple tenants, the user contains a different object ID in each tenant. Even though the user logs into each account with the same credentials, the accounts are different. |
-|`tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, the application must request the `profile` scope. |
-| `unique_name` | String, only present in v1.0 tokens | Provides a human readable value that identifies the subject of the token. This value can be different within a tenant and use it only for display purposes. |
-| `uti` | String | Token identifier claim, equivalent to `jti` in the JWT specification. Unique, per-token identifier that is case-sensitive. |
-| `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. |
-| `ver` | String, either `1.0` or `2.0` | Indicates the version of the access token. |
+| Claim | Format | Description | Authorization considerations |
+|-|--|-||
+| `aud` | String, an Application ID URI or GUID | Identifies the intended audience of the token. In v2.0 tokens, this value is always the client ID of the API. In v1.0 tokens, it can be the client ID or the resource URI used in the request. The value can depend on how the client requested the token. | This value must be validated; reject the token if it doesn't match the intended audience. |
+| `iss` | String, a security token service (STS) URI | Identifies the STS that constructs and returns the token, and the Azure AD tenant of the authenticated user. If the token issued is a v2.0 token (see the `ver` claim), the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. | The application can use the GUID portion of the claim to restrict the set of tenants that can sign in to the application, if applicable. |
+|`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer, such as guests. Use the value of `iss` if the claim isn't present. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. | |
+| `iat` | int, a Unix timestamp | Specifies when the authentication for this token occurred. | |
+| `nbf` | int, a Unix timestamp | Specifies the time after which the JWT can be processed. | |
+| `exp` | int, a Unix timestamp | Specifies the expiration time before which the JWT can be accepted for processing. A resource may reject the token before this time as well. The rejection can occur for a required change in authentication or when a token is revoked. | |
+| `aio` | Opaque String | An internal claim used by Azure AD to record data for token reuse. Resources shouldn't use this claim. | |
+| `acr` | String, a `0` or `1`, only present in v1.0 tokens | A value of `0` for the "Authentication context class" claim indicates the end-user authentication didn't meet the requirements of ISO/IEC 29115. | |
+| `amr` | JSON array of strings, only present in v1.0 tokens | Identifies the authentication method of the subject of the token. | |
+| `appid` | String, a GUID, only present in v1.0 tokens | The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. | `appid` may be used in authorization decisions. |
+| `azp` | String, a GUID, only present in v2.0 tokens | A replacement for `appid`. The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. | `azp` may be used in authorization decisions. |
+| `appidacr` | String, a `0`, `1`, or `2`, only present in v1.0 tokens | Indicates authentication method of the client. For a public client, the value is `0`. When you use the client ID and client secret, the value is `1`. When you use a client certificate for authentication, the value is `2`. | |
+| `azpacr` | String, a `0`, `1`, or `2`, only present in v2.0 tokens | A replacement for `appidacr`. Indicates the authentication method of the client. For a public client, the value is `0`. When you use the client ID and client secret, the value is `1`. When you use a client certificate for authentication, the value is `2`. | |
+| `preferred_username` | String, only present in v2.0 tokens. | The primary username that represents the user. The value could be an email address, phone number, or a generic username without a specified format. Use the value for username hints and in human-readable UI as a username. To receive this claim, use the `profile` scope. | Since this value is mutable, don't use it to make authorization decisions. |
+| `name` | String | Provides a human-readable value that identifies the subject of the token. The value can vary, it's mutable, and is for display purposes only. To receive this claim, use the `profile` scope. | Don't use this value to make authorization decisions. |
+| `scp` | String, a space separated list of scopes | The set of scopes exposed by the application for which the client application has requested (and received) consent. Only included for user tokens. | The application should verify that these scopes are valid ones exposed by the application, and make authorization decisions based on the value of these scopes. |
+| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. The [client credential flow](v2-oauth2-client-creds-grant-flow.md) uses this set of permission in place of user scopes for application tokens. For user tokens, this set of values contains the assigned roles of the user on the target application. | These values can be used for managing access, such as enforcing authorization to access a resource. |
+| `wids` | Array of [RoleTemplateID](../roles/permissions-reference.md#all-roles) GUIDs | Denotes the tenant-wide roles assigned to this user, from the section of roles present in [Azure AD built-in roles](../roles/permissions-reference.md#all-roles). The `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md) configures this claim on a per-application basis. Set the claim to `All` or `DirectoryRole`. May not be present in tokens obtained through the implicit flow due to token length concerns. | These values can be used for managing access, such as enforcing authorization to access a resource. |
+| `groups` | JSON array of GUIDs | Provides object IDs that represent the group memberships of the subject. The `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md) configures the groups claim on a per-application basis. A value of `null` excludes all groups, a value of `SecurityGroup` includes only Active Directory Security Group memberships, and a value of `All` includes both Security Groups and Microsoft 365 Distribution Lists. <br><br>See the `hasgroups` claim for details on using the `groups` claim with the implicit grant. For other flows, if the number of groups the user is in goes over 150 for SAML and 200 for JWT, then Azure AD adds an overage claim to the claim sources. The claim sources point to the Microsoft Graph endpoint that contains the list of groups for the user. | These values can be used for managing access, such as enforcing authorization to access a resource. |
+| `hasgroups` | Boolean | If present, always `true`, indicates whether the user is in at least one group. Used in place of the `groups` claim for JWTs in implicit grant flows if the full groups claim would extend the URI fragment beyond the URL length limits (currently six or more groups). Indicates that the client should use the Microsoft Graph API to determine the groups (`https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects`) of the user. | |
+| `groups:src1` | JSON object | Includes a link to the full groups list for the user when token requests are too large for the token. For JWTs as a distributed claim, for SAML as a new claim in place of the `groups` claim. <br><br>**Example JWT Value**: <br> `"groups": "src1"` <br> `"_claim_sources": { "src1": { "endpoint": "https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects" } }` | |
+| `sub` | String | The principal associated with the token. For example, the user of an application. This value is immutable; don't reassign or reuse it. The subject is a pairwise identifier that's unique to a particular application ID. If a single user signs in to two different applications using two different client IDs, those applications receive two different values for the subject claim. Whether to rely on the two different values depends on architecture and privacy requirements. See also the `oid` claim, which does remain the same across applications within a tenant. | This value can be used to perform authorization checks, such as when the token is used to access a resource, and can be used as a key in database tables. |
+| `oid` | String, a GUID | The immutable identifier for the requestor, which is the verified identity of the user or service principal. This ID uniquely identifies the requestor across applications. Two different applications signing in the same user receive the same value in the `oid` claim. The `oid` can be used when making queries to Microsoft online services, such as the Microsoft Graph. The Microsoft Graph returns this ID as the `id` property for a given user account. Because the `oid` allows multiple applications to correlate principals, to receive this claim for users use the `profile` scope. If a single user exists in multiple tenants, the user has a different object ID in each tenant. Even though the user logs into each account with the same credentials, the accounts are different. | This value can be used to perform authorization checks, such as when the token is used to access a resource, and can be used as a key in database tables. |
+| `tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, the application must request the `profile` scope. | This value should be considered in combination with other claims in authorization decisions. |
+| `unique_name` | String, only present in v1.0 tokens | Provides a human-readable value that identifies the subject of the token. | This value can vary within a tenant; use it only for display purposes. |
+| `uti` | String | Token identifier claim, equivalent to `jti` in the JWT specification. Unique, per-token identifier that is case-sensitive. | |
+| `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. | |
+| `ver` | String, either `1.0` or `2.0` | Indicates the version of the access token. | |
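
To make the overage behavior described for the `groups` and `hasgroups` claims concrete, the following minimal Python sketch shows how a client might branch on those claims. It's illustrative only: the `access_token` value is a placeholder, and a real resource server must validate the token's signature before trusting any claim.

```python
import base64
import json

access_token = "<access token acquired earlier>"  # placeholder

def decode_payload(jwt: str) -> dict:
    """Decode a JWT payload WITHOUT validating its signature (demo only)."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

claims = decode_payload(access_token)

if "groups" in claims:
    group_ids = claims["groups"]  # the full list fits inside the token
elif "groups" in claims.get("_claim_names", {}):
    # Overage: the token carries a pointer to Microsoft Graph instead of
    # the group list itself. Call the referenced endpoint with a Graph
    # token to retrieve the user's group memberships.
    source = claims["_claim_names"]["groups"]
    graph_endpoint = claims["_claim_sources"][source]["endpoint"]
    group_ids = None  # fetch from graph_endpoint
elif claims.get("hasgroups"):
    # Implicit-flow JWT: no groups in the token; query getMemberObjects.
    group_ids = None
else:
    group_ids = []  # the user is in no groups
```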
#### Groups overage claim
active-directory Msal Authentication Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-authentication-flows.md
Previously updated : 03/22/2022 Last updated : 04/03/2023 + # Customer intent: As an application developer, I want to learn about the authentication flows supported by MSAL.
The Microsoft Authentication Library (MSAL) supports several authorization grant
| Authentication flow | Enables | Supported application types | |--|--|--|
-| [Authorization code](#authorization-code) | User sign-in and access to web APIs on behalf of the user. | * [Desktop](scenario-desktop-overview.md) <br /> * [Mobile](scenario-mobile-overview.md) <br /> * [Single-page app (SPA)](scenario-spa-overview.md) (requires PKCE) <br /> * [Web](scenario-web-app-call-api-overview.md) |
+| [Authorization code](#authorization-code) | User sign-in and access to web APIs on behalf of the user. | [Desktop](scenario-desktop-overview.md) <br /> [Mobile](scenario-mobile-overview.md) <br /> [Single-page app (SPA)](scenario-spa-overview.md) (requires PKCE) <br /> [Web](scenario-web-app-call-api-overview.md) |
| [Client credentials](#client-credentials) | Access to web APIs by using the identity of the application itself. Typically used for server-to-server communication and automated scripts requiring no user interaction. | [Daemon](scenario-daemon-overview.md) | | [Device code](#device-code) | User sign-in and access to web APIs on behalf of the user on input-constrained devices like smart TVs and IoT devices. Also used by command line interface (CLI) applications. | [Desktop, Mobile](scenario-desktop-acquire-token-device-code-flow.md) | | [Implicit grant](#implicit-grant) | User sign-in and access to web APIs on behalf of the user. _The implicit grant flow is no longer recommended - use authorization code with PKCE instead._ | * [Single-page app (SPA)](scenario-spa-overview.md) <br /> * [Web](scenario-web-app-call-api-overview.md) |
Your MSAL-based application should first try to acquire a token silently and fal
The [OAuth 2.0 authorization code grant](v2-oauth2-auth-code-flow.md) can be used by web apps, single-page apps (SPA), and native (mobile and desktop) apps to gain access to protected resources like web APIs.
-When users sign in to web applications, the application receives an authorization code that it can redeem for an access token to call web APIs.
+When users sign in to web applications, the application receives an authorization code that it can redeem for an access token to call web APIs.
-![Diagram of authorization code flow](media/msal-authentication-flows/authorization-code.png)
+In the following diagram, the application:
-In the preceding diagram, the application:
-
-1. Requests an authorization code which redeemed for an access token.
+1. Requests an authorization code, which it redeems for an access token.
2. Uses the access token to call a web API, Microsoft Graph.
+![Diagram of authorization code flow.](media/msal-authentication-flows/authorization-code.png)
+ ### Constraints for authorization code -- Single-page applications require Proof Key for Code Exchange (PKCE) when using the authorization code grant flow. PKCE is supported by MSAL.
+- Single-page applications require *Proof Key for Code Exchange* (PKCE) when using the authorization code grant flow. PKCE is supported by MSAL.
-- The OAuth 2.0 specification requires you use an authorization code to redeem an access token only _once_.
+- The OAuth 2.0 specification requires you to use an authorization code to redeem an access token only _once_.
- If you attempt to acquire access token multiple times with the same authorization code, an error similar to the following is returned by the Microsoft identity platform. Keep in mind that some libraries and frameworks request the authorization code for you automatically, and requesting a code manually in such cases will also result in this error.
+ If you attempt to acquire an access token multiple times with the same authorization code, an error similar to the following is returned by the Microsoft identity platform. Some libraries and frameworks request the authorization code for you automatically, and requesting a code manually in such cases will also result in this error.
`AADSTS70002: Error validating credentials. AADSTS54005: OAuth2 Authorization code was already redeemed, please retry with a new valid code or use an existing refresh token.`
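
To illustrate the single-use rule, here's a minimal MSAL Python sketch of redeeming an authorization code once; the client ID, secret, tenant, redirect URI, and `auth_code` values are placeholders for values your app would supply. A second call with the same code fails with `AADSTS54005`.

```python
import msal

app = msal.ConfidentialClientApplication(
    client_id="<app-client-id>",          # placeholder
    client_credential="<client-secret>",  # placeholder
    authority="https://login.microsoftonline.com/<tenant-id>",
)

auth_code = "<code returned to your redirect URI>"  # placeholder

# Redeem the code exactly once; MSAL caches the resulting tokens so you
# can call acquire_token_silent() later instead of redeeming again.
result = app.acquire_token_by_authorization_code(
    auth_code,
    scopes=["https://graph.microsoft.com/User.Read"],
    redirect_uri="http://localhost:5000/getAToken",  # placeholder
)
if "access_token" in result:
    access_token = result["access_token"]
```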
The client credentials grant flow permits a web service (a confidential client)
### Application secrets
-![Diagram of confidential client with password](media/msal-authentication-flows/confidential-client-password.png)
-
-In the preceding diagram, the application:
+In the following diagram, the application:
1. Acquires a token by using application secret or password credentials. 2. Uses the token to make requests of the resource.
-### Certificates
+![Diagram of confidential client with password.](media/msal-authentication-flows/confidential-client-password.png)
-![Diagram of confidential client with cert](media/msal-authentication-flows/confidential-client-certificate.png)
+### Certificates
-In the preceding diagram, the application:
+In the following diagram, the application:
1. Acquires a token by using certificate credentials. 2. Uses the token to make requests of the resource.
+![Diagram of confidential client with cert.](media/msal-authentication-flows/confidential-client-certificate.png)
+ These client credentials need to be: - Registered with Azure AD.
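
As a sketch of both credential types, the following MSAL Python snippet acquires an app-only token with a secret, with the certificate variant shown commented out. All identifiers and file names are placeholders.

```python
import msal

authority = "https://login.microsoftonline.com/<tenant-id>"  # placeholder

# Confidential client authenticating with an application secret.
app = msal.ConfidentialClientApplication(
    "<app-client-id>", client_credential="<client-secret>", authority=authority
)

# Certificate variant: pass the PEM private key and its thumbprint.
# app = msal.ConfidentialClientApplication(
#     "<app-client-id>",
#     client_credential={
#         "private_key": open("cert.key").read(),
#         "thumbprint": "<certificate-thumbprint>",
#     },
#     authority=authority,
# )

# "/.default" requests the application permissions granted to the app itself.
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)
```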
The [OAuth 2 device code flow](v2-oauth2-device-code.md) allows users to sign in
By using the device code flow, the application obtains tokens through a two-step process designed for these devices and operating systems. Examples of such applications include those running on IoT devices and command-line interface (CLI) tools.
-![Diagram of device code flow](media/msal-authentication-flows/device-code.png)
-
-In the preceding diagram:
+In the following diagram:
1. Whenever user authentication is required, the app provides a code and asks the user to use another device like an internet-connected smartphone to visit a URL (for example, `https://microsoft.com/devicelogin`). The user is then prompted to enter the code and proceeds through a normal authentication experience, including consent prompts and [multi-factor authentication](../authentication/concept-mfa-howitworks.md), if necessary. 1. Upon successful authentication, the command-line app receives the required tokens through a back channel, and uses them to perform the web API calls it needs.
+![Diagram of device code flow.](media/msal-authentication-flows/device-code.png)
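
The two-step process maps directly onto MSAL's API. A minimal Python sketch follows; the client ID is a placeholder, and `User.Read` is just an example scope.

```python
import msal

app = msal.PublicClientApplication(
    "<app-client-id>",  # placeholder
    authority="https://login.microsoftonline.com/organizations",
)

flow = app.initiate_device_flow(scopes=["User.Read"])
if "user_code" not in flow:
    raise ValueError(f"Failed to start device flow: {flow}")

# Step 1: show the user the code and the verification URL.
print(flow["message"])

# Step 2: block while MSAL polls the token endpoint on the back channel
# until the user finishes authenticating on the other device.
result = app.acquire_token_by_device_flow(flow)
```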
+ ### Constraints for device code - The device code flow is available only for public client applications. - When you initialize a public client application in MSAL, use one of these authority formats:
- - Tenanted: `https://login.microsoftonline.com/{tenant}/,` where `{tenant}` is either the GUID representing the tenant ID or a domain name associated with the tenant.
+ - Tenant: `https://login.microsoftonline.com/{tenant}/`, where `{tenant}` is either the GUID representing the tenant ID or a domain name associated with the tenant.
- Work and school accounts: `https://login.microsoftonline.com/organizations/`. ## Implicit grant
Tokens issued via the implicit flow mode have a **length limitation** because th
The [OAuth 2 on-behalf-of authentication flow](v2-oauth2-on-behalf-of-flow.md) is used when an application invokes a service or web API that in turn needs to call another service or web API. The idea is to propagate the delegated user identity and permissions through the request chain. For the middle-tier service to make authenticated requests to the downstream service, it needs to secure an access token from the Microsoft identity platform *on behalf of* the user.
-![Diagram of on-behalf-of flow](media/msal-authentication-flows/on-behalf-of.png)
-
-In the preceding diagram:
+In the following diagram:
1. The application acquires an access token for the web API. 2. A client (web, desktop, mobile, or single-page application) calls a protected web API, adding the access token as a bearer token in the authentication header of the HTTP request. The web API authenticates the user. 3. When the client calls the web API, the web API requests another token on-behalf-of the user. 4. The protected web API uses this token to call a downstream web API on-behalf-of the user. The web API can also later request tokens for other downstream APIs (but still on behalf of the same user).
+![Diagram of on-behalf-of flow.](media/msal-authentication-flows/on-behalf-of.png)
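
Step 3 in the diagram corresponds to a single MSAL call in the middle-tier API. A hedged Python sketch follows; identifiers are placeholders, and `incoming_token` stands for the already-validated bearer token the client sent.

```python
import msal

# The middle-tier web API is itself a confidential client.
app = msal.ConfidentialClientApplication(
    "<api-client-id>",                    # placeholder
    client_credential="<client-secret>",  # placeholder
    authority="https://login.microsoftonline.com/<tenant-id>",
)

incoming_token = "<validated bearer token from the client>"  # placeholder

# Exchange the incoming token for a downstream token on behalf of the
# same user (on-behalf-of grant).
result = app.acquire_token_on_behalf_of(
    user_assertion=incoming_token,
    scopes=["https://graph.microsoft.com/User.Read"],
)
```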
+ ## Username/password (ROPC) > [!WARNING]
The [OAuth 2 resource owner password credentials](v2-oauth-ropc.md) (ROPC) grant
Some application scenarios like DevOps might find ROPC useful, but you should avoid it in any application in which you provide an interactive UI for user sign-in.
-![Diagram of the username/password flow](media/msal-authentication-flows/username-password.png)
-
-In the preceding diagram, the application:
+In the following diagram, the application:
1. Acquires a token by sending the username and password to the identity provider. 2. Calls a web API by using the token.
+![Diagram of the username/password flow.](media/msal-authentication-flows/username-password.png)
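
For completeness, a minimal MSAL Python sketch of the ROPC exchange follows (placeholders throughout; heed the warning above before using this flow):

```python
import getpass
import msal

app = msal.PublicClientApplication(
    "<app-client-id>",  # placeholder
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# ROPC sends the password straight to the identity provider. It fails for
# accounts that require MFA and isn't supported for personal accounts.
result = app.acquire_token_by_username_password(
    username="user@contoso.com",             # placeholder
    password=getpass.getpass("Password: "),  # never hard-code credentials
    scopes=["User.Read"],
)
```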
+ To acquire a token silently on Windows domain-joined machines, we recommend [integrated Windows authentication (IWA)](#integrated-windows-authentication-iwa) instead of ROPC. For other scenarios, use the [device code flow](#device-code). ### Constraints for ROPC
The following constraints apply to the applications using the ROPC flow:
MSAL supports integrated Windows authentication (IWA) for desktop and mobile applications that run on domain-joined or Azure AD-joined Windows computers. By using IWA, these applications acquire a token silently without requiring UI interaction by the user.
-![Diagram of integrated Windows authentication](media/msal-authentication-flows/integrated-windows-authentication.png)
-
-In the preceding diagram, the application:
+In the following diagram, the application:
1. Acquires a token by using integrated Windows authentication. 2. Uses the token to make requests of the resource.
+![Diagram of integrated Windows authentication.](media/msal-authentication-flows/integrated-windows-authentication.png)
+ ### Constraints for IWA **Compatibility**
To satisfy either requirement, one of these operations must have been completed:
- You as the application developer have selected **Grant** in the Azure portal for yourself. - A tenant admin has selected **Grant/revoke admin consent for {tenant domain}** in the **API permissions** tab of the app registration in the Azure portal; see [Add permissions to access your web API](quickstart-configure-app-access-web-apis.md#add-permissions-to-access-your-web-api).-- You've provided a way for users to consent to the application; see [Requesting individual user consent](v2-permissions-and-consent.md#requesting-individual-user-consent).-- You've provided a way for the tenant admin to consent for the application; see [admin consent](v2-permissions-and-consent.md#requesting-consent-for-an-entire-tenant).
+- You've provided a way for users to consent to the application; see [User consent](../manage-apps/user-admin-consent-overview.md#user-consent).
+- You've provided a way for the tenant admin to consent for the application; see [Administrator consent](../manage-apps/user-admin-consent-overview.md#administrator-consent).
-For more information on consent, see [Permissions and consent](v2-permissions-and-consent.md).
+For more information on consent, see [Permissions and consent](v2-permissions-and-consent.md#consent).
## Next steps
active-directory Supported Accounts Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md
After the application has been registered, you can check or change the account t
| Accounts in this organizational directory only (Single tenant) | `AzureADMyOrg` | | Accounts in any organizational directory (Any Azure AD directory - Multitenant) | `AzureADMultipleOrgs` | | Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox) | `AzureADandPersonalMicrosoftAccount` |
+| Personal Microsoft accounts only | `PersonalMicrosoftAccount` |
If you change this property, you may need to change other properties first.
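
As a rule of thumb, the `signInAudience` value also constrains which sign-in authority a client can use when requesting tokens. The following mapping is illustrative only, not an exhaustive configuration reference; the tenant ID is a placeholder.

```python
# Typical MSAL authority per signInAudience value (illustrative only).
AUTHORITY_FOR_AUDIENCE = {
    "AzureADMyOrg": "https://login.microsoftonline.com/<tenant-id>",
    "AzureADMultipleOrgs": "https://login.microsoftonline.com/organizations",
    "AzureADandPersonalMicrosoftAccount": "https://login.microsoftonline.com/common",
    "PersonalMicrosoftAccount": "https://login.microsoftonline.com/consumers",
}
```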
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 03/01/2023 Last updated : 04/03/2023
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## March 2023
+
+### New articles
+
+- [Configure a SAML app to receive tokens with claims from an external store (preview)](custom-extension-configure-saml-app.md)
+- [Configure a custom claim provider token issuance event (preview)](custom-extension-get-started.md)
+- [Custom claims provider (preview)](custom-claims-provider-overview.md)
+- [Custom claims providers](custom-claims-provider-reference.md)
+- [Custom authentication extensions (preview)](custom-extension-overview.md)
+- [Troubleshoot your custom claims provider API (preview)](custom-extension-troubleshoot.md)
+- [Understanding application-only access](app-only-access-primer.md)
+
+### Updated articles
+
+- [ADAL to MSAL migration guide for Python](migrate-python-adal-msal.md)
+- [Handle errors and exceptions in MSAL for Python](msal-error-handling-python.md)
+- [How to migrate a JavaScript app from ADAL.js to MSAL.js](msal-compare-msal-js-and-adal-js.md)
+- [Microsoft identity platform access tokens](access-tokens.md)
+- [Microsoft Enterprise SSO plug-in for Apple devices (preview)](apple-sso-plugin.md)
+- [Restrict your Azure AD app to a set of users in an Azure AD tenant](howto-restrict-your-app-to-a-set-of-users.md)
+- [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
+- [Troubleshoot publisher verification](troubleshoot-publisher-verification.md)
+- [Tutorial: Call the Microsoft Graph API from a Universal Windows Platform (UWP) application](tutorial-v2-windows-uwp.md)
+ ## February 2023 ### New articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md) - [Enable cross-app SSO on Android using MSAL](msal-android-single-sign-on.md) - [Using redirect URIs with the Microsoft Authentication Library (MSAL) for iOS and macOS](redirect-uris-ios.md)-
-## December 2022
-
-### New articles
--- [Block workload identity federation on managed identities using a policy](workload-identity-federation-block-using-azure-policy.md)-- [Troubleshooting the configured permissions limits](troubleshoot-required-resource-access-limits.md)-
-### Updated articles
--- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)-- [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](quickstart-v2-netcore-daemon.md)-- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](quickstart-v2-aspnet-core-web-api.md)-- [Tutorial: Create a Blazor Server app that uses the Microsoft identity platform for authentication](tutorial-blazor-server.md)-- [Tutorial: Sign in users and call a protected API from a Blazor WebAssembly app](tutorial-blazor-webassembly.md)-- [Web app that signs in users: App registration](scenario-web-app-sign-user-app-registration.md)-- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
Previously updated : 7/5/2022- Last updated : 3/23/2023+
When a requirement exists to deploy IaaS workloads to Azure that require identit
**Azure AD DS managed domain** - Only one Azure AD DS managed domain can be deployed per Azure AD tenant and this is bound to a single VNet. It's recommended that this VNet forms the "hub" for Azure AD DS authentication. From this hub, "spokes" can be created and linked to allow legacy authentication for servers and applications. The spokes are additional VNets on which Azure AD DS joined servers are located and are linked to the hub using Azure network gateways or VNet peering.
-**User forest vs. resource forest** - Azure AD DS provides two options for forest configuration of the Azure AD DS managed domain. For the purposes of this section we focus on user forest, as the resource forest relies on a trust being configured with an AD DS forest and this goes against the isolation principle we're addressing here.
-
-* **User forest** - By default, an Azure AD DS managed domain is created as a user forest. This type of forest synchronizes all objects from Azure AD, including any user accounts synchronized from an on-premises AD DS environment.
-
-* **Resource forest** - Resource forests only synchronize users and groups created directly in Azure AD and requires a trust be configured with an AD DS forest for user authentication. For more information, see [Resource forest concepts and features for Azure Active Directory Domain Services](../../active-directory-domain-services/concepts-resource-forest.md).
- **Managed domain location** - A location must be set when deploying an Azure AD DS managed domain. The location is a physical region (data center) where the managed domain is deployed. It's recommended you: * Consider a location that is geographically close to the servers and applications that require Azure AD DS services.
active-directory Customize Workflow Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md
Emails tasks allow for the customization of the following aspects:
- Email language > [!NOTE]
-> To avoid additional security disclaimers, you should opt in to using customized domain and organizational branding.
+> When customizing the subject or message body, we recommend that you also enable the custom sender domain and organizational branding; otherwise, an additional security disclaimer is added to your email.
For more information on these customizable parameters, see: [Common email task parameters](lifecycle-workflow-tasks.md#common-email-task-parameters).
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 03/02/2023 Last updated : 04/03/2023
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## March 2023
+
+### Updated articles
+
+- [Move application authentication to Azure Active Directory](migrate-adfs-apps-to-azure.md)
+- [Quickstart: Create and assign a user account](add-application-portal-assign-users.md)
+- [Configure sign-in behavior using Home Realm Discovery](configure-authentication-for-federated-users-portal.md)
+- [Disable auto-acceleration sign-in](prevent-domain-hints-with-home-realm-discovery.md)
+- [Review permissions granted to enterprise applications](manage-application-permissions.md)
+- [Migrate application authentication to Azure Active Directory](migrate-application-authentication-to-azure-active-directory.md)
+- [Azure Active Directory application management: What's new](whats-new-docs.md)
+- [Configure permission classifications](configure-permission-classifications.md)
+- [Restrict access to a tenant](tenant-restrictions.md)
+- [Tutorial: Migrate Okta sign-on policies to Azure Active Directory Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md)
+- [Delete an enterprise application](delete-application-portal.md)
+- [Restore an enterprise application in Azure AD](restore-application.md)
+ ## February 2023 ### Updated articles
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
- [Create an enterprise application from a multi-tenant application in Azure Active Directory](create-service-principal-cross-tenant.md) - [Configure sign-in behavior using Home Realm Discovery](configure-authentication-for-federated-users-portal.md) - [Secure hybrid access with Azure Active Directory partner integrations](secure-hybrid-access-integrations.md)-
-## December 2022
-
-### Updated articles
--- [Grant consent on behalf of a single user by using PowerShell](grant-consent-single-user.md)-- [Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-aad-password-less-vpn.md)-- [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md)-- [Deploy F5 BIG-IP Virtual Edition VM in Azure](f5-bigip-deployment-guide.md)-- [End-user experiences for applications](end-user-experiences.md)-- [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta-to-azure-active-directory.md)-- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on](f5-big-ip-kerberos-easy-button.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP single sign-on](f5-big-ip-ldap-header-easybutton.md)
active-directory Convercent Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/convercent-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Convercent'
+ Title: 'Tutorial: Azure Active Directory integration with Convercent | Microsoft Docs'
description: Learn how to configure single sign-on between Azure Active Directory and Convercent.
Previously updated : 11/21/2022 Last updated : 03/29/2023 # Tutorial: Azure Active Directory integration with Convercent
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<INSTANCE_NAME>.convercent.com/` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Sign-On URL and Relay State. Contact [Convercent Client support team](http://support.convercent.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Sign-On URL and Relay State. Contact [Convercent Client support team](https://www.convercent.com/customers/services/customer-support) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Convercent SSO
-To configure single sign-on on **Convercent** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Convercent support team](http://support.convercent.com/). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Convercent** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from the Azure portal to the [Convercent support team](https://www.convercent.com/customers/services/customer-support). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Convercent test user
-In this section, you create a user called Britta Simon in Convercent. Work with [Convercent support team](http://support.convercent.com/) to add the users in the Convercent platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Convercent. Work with [Convercent support team](https://www.convercent.com/customers/services/customer-support) to add the users in the Convercent platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Maximo Application Suite Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maximo-application-suite-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Maximo Application Suite
+description: Learn how to configure single sign-on between Azure Active Directory and Maximo Application Suite.
++++++++ Last updated : 04/03/2023++++
+# Azure Active Directory SSO integration with Maximo Application Suite
+
+In this article, you learn how to integrate Maximo Application Suite with Azure Active Directory (Azure AD). Customer-managed IBM Maximo Application Suite is a computerized maintenance management system (CMMS) and enterprise asset management (EAM) platform that delivers intelligent asset management, monitoring, predictive maintenance, and reliability in a single platform. When you integrate Maximo Application Suite with Azure AD, you can:
+
+* Control in Azure AD who has access to Maximo Application Suite.
+* Enable your users to be automatically signed-in to Maximo Application Suite with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Maximo Application Suite in a test environment. Maximo Application Suite supports **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Maximo Application Suite, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Maximo Application Suite single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Maximo Application Suite application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Maximo Application Suite from the Azure AD gallery
+
+Add Maximo Application Suite from the Azure AD application gallery to configure single sign-on with Maximo Application Suite. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Maximo Application Suite** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file** then perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Screenshot shows to upload metadata file.](common/upload-metadata.png "File")
+
+ b. Click on **folder logo** to select the metadata file and click **Upload**.
+
+ ![Screenshot shows how to choose metadata file.](common/browse-upload-metadata.png "Browse")
+
+ c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in Basic SAML Configuration section.
+
+ d. If you wish to configure **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern without `</path>`:
+ `https://<workspace_id>.<mas_application>.<mas_domain>`
+
+ > [!Note]
+ > You will get the **Service Provider metadata file** from the **Configure Maximo Application Suite SSO** section, which is explained later in the tutorial. If the **Identifier** and **Reply URL** values do not get auto populated, then fill the values manually according to your requirement. Contact [Maximo Application Suite Client support](https://www.ibm.com/mysupport/) to get these values.
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Maximo Application Suite** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Maximo Application Suite SSO
+
+1. Log in to your Maximo Application Suite company site as an administrator.
+
+1. Go to the Suite administration and select **Configure SAML**.
+
+ ![Screenshot shows the Maximo administration portal.](media/maximo-application-suite-tutorial/configure.png "Portal")
+
+1. On the **SAML Authentication** page, perform the following steps:
+
+ ![Screenshot shows the Authentication page.](media/maximo-application-suite-tutorial/authenticate.png "Page")
+
+ 1. Select emailAddress as the [name-id format](../develop/single-sign-on-saml-protocol.md).
+
+ 1. Click **Generate file**, wait, and then click **Download file**. Store this metadata file; you upload it on the Azure AD side.
+
+1. Download the **Federation Metadata XML file** from the Azure portal and upload the Azure AD Federation Metadata XML document to Maximo's SAML configuration panel and save it.
+
+ ![Screenshot shows to upload Federation Metadata file.](media/maximo-application-suite-tutorial/file.png "Federation")
+
+### Create Maximo Application Suite test user
+
+1. In a different web browser window, sign into your Maximo Application Suite company site as an administrator.
+
+1. Create a new user in Suite Administration under **Users** and perform the following steps:
+
+ ![Screenshot shows the new user in Suite Administration.](media/maximo-application-suite-tutorial/users.png "New user")
+
+ 1. Select Authentication type as **SAML**.
+
+ 1. In the **Display Name** textbox, enter the UPN used in Azure AD, because the values must match.
+
+ 1. In the **Primary email** textbox, enter the UPN used in Azure AD.
+ > [!Note]
+ > The rest of the fields can be populated as you like with whatever permissions necessary.
+
+ 1. Select any **Entitlements** required for that user.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Maximo Application Suite sign-on URL, where you can initiate the login flow.
+
+* Go to Maximo Application Suite Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal to be taken to the Maximo login page, where you need to enter your SAML identity as a fully qualified email address. If the user has already authenticated with the IdP, they won't have to sign in to Maximo Application Suite again, and the browser will be redirected to the home page.
+
+* You can also use Microsoft My Apps to test the application in any mode. When you click the Maximo Application Suite tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Maximo Application Suite for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+> [!Note]
+> Screenshots are from MAS Continuous-delivery 8.9 and may differ in future versions.
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Maximo Application Suite you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with ServiceNow'
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with ServiceNow | Microsoft Docs'
description: Learn how to configure single sign-on between Azure Active Directory and ServiceNow.
Previously updated : 11/21/2022 Last updated : 03/29/2023
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. In the **Basic SAML Configuration** section, perform the following steps:
- a. For **Sign on URL**, enter one of the following URL pattern:
+ a. For **Sign on URL**, enter one of the following URL patterns:
| Sign on URL | |--|
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
b. For **Identifier (Entity ID)**, enter a URL that uses the following pattern: `https://<instance-name>.service-now.com`
- c. For **Reply URL**, enter one of the following URL pattern:
+ c. For **Reply URL**, enter one of the following URL patterns:
| Reply URL | |--|
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
The objective of this section is to create a user called B.Simon in ServiceNow. ServiceNow supports automatic user provisioning, which is enabled by default. > [!NOTE]
-> If you need to create a user manually, contact the [ServiceNow Client support team](https://www.servicenow.com/support/contact-support.html).
+> If you need to create a user manually, contact the [ServiceNow Client support team](https://support.servicenow.com/now).
### Configure ServiceNow Express SSO
active-directory Tanium Cloud Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tanium-cloud-sso-tutorial.md
Previously updated : 02/16/2023 Last updated : 03/29/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
`urn:amazon:cognito:sp:InstanceName` b. In the **Reply URL** textbox, type a URL using the following pattern:
- `https://InstanceName-tanium.auth.<SUBDOMAIN>.amazoncognito.com/saml2/idpresponse`
+ `https://<InstanceName>-tanium.auth.<SUBDOMAIN>.amazoncognito.com/saml2/idpresponse`
1. If you wish to configure the application in **SP** initiated mode, then perform the following step: In the **Sign on URL** textbox, type a URL using the following pattern:
- `https://InstanceName.cloud.tanium.com`
+ `https://<InstanceName>.cloud.tanium.com`
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Tanium Cloud SSO Client support team](mailto:integrations@tanium.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
active-directory Zendesk Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-tutorial.md
Previously updated : 11/21/2022 Last updated : 03/29/2023
You can set up one SAML configuration for team members and a second SAML configu
1. In the **Zendesk Admin Center**, go to **Account -> Security -> Single sign-on**, then click **Create SSO configuration** and select **SAML**.
- ![Screenshot shows the Zendesk Admin Center with Security settings selected.](https://zen-marketing-documentation.s3.amazonaws.com/docs/en/zendesk_create_sso_configuration.png "Security")
+ ![Screenshot shows the Zendesk Admin Center with Security settings selected.](./media/zendesk-tutorial/zendesk-create-sso-configuration.png "Security")
1. Perform the following steps in the **Single sign-on** page.
- ![Single sign-on](https://zen-marketing-documentation.s3.amazonaws.com/docs/en/zendesk_saml_configuration_settings.png "Single sign-on")
+ ![Single sign-on](./media/zendesk-tutorial/zendesk-saml-configuration-settings.png "Single sign-on")
a. In **Configuration name**, enter a name for your configuration. Up to two SAML and two JWT configurations are possible.
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation.md
Previously updated : 03/09/2023 Last updated : 03/29/2023
You can use workload identity federation in scenarios such as GitHub Actions, wo
Watch this video to learn why you would use workload identity federation. > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWXamJ]
-Typically, a software workload (such as an application, service, script, or container-based application) needs an identity in order to authenticate and access resources or communicate with other services. When these workloads run on Azure, you can use [managed identities](../managed-identities-azure-resources/overview.md) and the Azure platform manages the credentials for you. For a software workload running outside of Azure, you need to use application credentials (a secret or certificate) to access Azure AD protected resources (such as Azure, Microsoft Graph, Microsoft 365, or third-party resources). These credentials pose a security risk and have to be stored securely and rotated regularly. You also run the risk of service downtime if the credentials expire.
+Typically, a software workload (such as an application, service, script, or container-based application) needs an identity in order to authenticate and access resources or communicate with other services. When these workloads run on Azure, you can use [managed identities](../managed-identities-azure-resources/overview.md) and the Azure platform manages the credentials for you. You can only use managed identities, however, for software workloads running in Azure. For a software workload running outside of Azure, you need to use application credentials (a secret or certificate) to access Azure AD protected resources (such as Azure, Microsoft Graph, Microsoft 365, or third-party resources). These credentials pose a security risk and have to be stored securely and rotated regularly. You also run the risk of service downtime if the credentials expire.
-You use workload identity federation to configure an Azure AD app registration or [user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to trust tokens from an external identity provider (IdP), such as GitHub. Once that trust relationship is created, your software workload can exchange trusted tokens from the external IdP for access tokens from Microsoft identity platform. Your software workload then uses that access token to access the Azure AD protected resources to which the workload has been granted access. This eliminates the maintenance burden of manually managing credentials and eliminates the risk of leaking secrets or having certificates expire.
+You use workload identity federation to configure a [user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) or [app registration](../develop/app-objects-and-service-principals.md) in Azure AD to trust tokens from an external identity provider (IdP), such as GitHub or Google. The user-assigned managed identity or app registration in Azure AD becomes an identity for software workloads running, for example, in on-premises Kubernetes or GitHub Actions workflows. Once that trust relationship is created, your external software workload exchanges trusted tokens from the external IdP for access tokens from Microsoft identity platform. Your software workload uses that access token to access the Azure AD protected resources to which the workload has been granted access. You eliminate the maintenance burden of manually managing credentials and eliminate the risk of leaking secrets or having certificates expire.
## Supported scenarios
-> [!NOTE]
-> Azure AD issued tokens may not be used for federated identity flows. The federated identity credentials flow does not support tokens issued by Azure AD.
- The following scenarios are supported for accessing Azure AD protected resources using workload identity federation: -- GitHub Actions. First, [Configure a trust relationship](workload-identity-federation-create-trust.md) between your app in Azure AD and a GitHub repo in the Azure portal or using Microsoft Graph. Then [configure a GitHub Actions workflow](/azure/developer/github/connect-from-azure) to get an access token from Microsoft identity provider and access Azure resources.-- Google Cloud. First, configure a trust relationship between your app in Azure AD and an identity in Google Cloud. Then configure your software workload running in Google Cloud to get an access token from Microsoft identity provider and access Azure AD protected resources. See [Access Azure AD protected resources from an app in Google Cloud](workload-identity-federation-create-trust-gcp.md).-- Workloads running on Kubernetes. Establish a trust relationship between your app or user-assigned managed identity in Azure AD and a Kubernetes workload (described in the [workload identity overview](../../aks/workload-identity-overview.md)).-- Workloads running in compute platforms outside of Azure. [Configure a trust relationship](workload-identity-federation-create-trust.md) between your Azure AD application registration and the external IdP for your compute platform. You can use tokens issued by that platform to authenticate with Microsoft identity platform and call APIs in the Microsoft ecosystem. Use the [client credentials flow](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential) to get an access token from Microsoft identity platform, passing in the identity provider's JWT instead of creating one yourself using a stored certificate.
+- Workloads running on any Kubernetes cluster (Azure Kubernetes Service (AKS), Amazon Web Services EKS, Google Kubernetes Engine (GKE), or on-premises). Establish a trust relationship between your user-assigned managed identity or app in Azure AD and a Kubernetes workload (described in the [workload identity overview](../../aks/workload-identity-overview.md)).
+- GitHub Actions. First, configure a trust relationship between your [user-assigned managed identity](workload-identity-federation-create-trust-user-assigned-managed-identity.md) or [application](workload-identity-federation-create-trust.md) in Azure AD and a GitHub repo in the Azure portal or using Microsoft Graph. Then [configure a GitHub Actions workflow](/azure/developer/github/connect-from-azure) to get an access token from Microsoft identity provider and access Azure resources.
+- Google Cloud. First, configure a trust relationship between your user-assigned managed identity or app in Azure AD and an identity in Google Cloud. Then configure your software workload running in Google Cloud to get an access token from Microsoft identity provider and access Azure AD protected resources. See [Access Azure AD protected resources from an app in Google Cloud](https://blog.identitydigest.com/azuread-federate-gcp/).
+- Workloads running in Amazon Web Services (AWS). First, configure a trust relationship between your user-assigned managed identity or app in Azure AD and an identity in Amazon Cognito. Then configure your software workload running in AWS to get an access token from Microsoft identity provider and access Azure AD protected resources. See [Workload identity federation with AWS](https://blog.identitydigest.com/azuread-federate-aws/).
+- Other workloads running in compute platforms outside of Azure. Configure a trust relationship between your [user-assigned managed identity](workload-identity-federation-create-trust-user-assigned-managed-identity.md) or [application](workload-identity-federation-create-trust.md) in Azure AD and the external IdP for your compute platform. You can use tokens issued by that platform to authenticate with Microsoft identity platform and call APIs in the Microsoft ecosystem. Use the [client credentials flow](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential) to get an access token from Microsoft identity platform, passing in the identity provider's JWT instead of creating one yourself using a stored certificate.
+- SPIFFE and SPIRE are a set of platform agnostic, open-source standards for providing identities to your software workloads deployed across platforms and cloud vendors. First, configure a trust relationship between your user-assigned managed identity or app in Azure AD and a SPIFFE ID for an external workload. Then configure your external software workload to get an access token from Microsoft identity provider and access Azure AD protected resources. See [Workload identity federation with SPIFFE and SPIRE](https://blog.identitydigest.com/azuread-federate-spiffe/).
+
+> [!NOTE]
+> Azure AD issued tokens may not be used for federated identity flows. The federated identity credentials flow does not support tokens issued by Azure AD.
## How it works
-Create a trust relationship between the external IdP and an app registration or user-assigned managed identity in Azure AD. The federated identity credential is used to indicate which token from the external IdP should be trusted by your application or managed identity. You configure a federated identity either:
+Create a trust relationship between the external IdP and a [user-assigned managed identity](workload-identity-federation-create-trust-user-assigned-managed-identity.md) or [application](workload-identity-federation-create-trust.md) in Azure AD. The federated identity credential is used to indicate which token from the external IdP should be trusted by your application or managed identity. You configure a federated identity either:
+- On a user-assigned managed identity through the Azure portal, Azure CLI, Azure PowerShell, Azure SDK, and Azure Resource Manager (ARM) templates. The external workload uses the access token to access Azure AD protected resources without needing to manage secrets (in supported scenarios). The [steps for configuring the trust relationship](workload-identity-federation-create-trust-user-assigned-managed-identity.md) will differ, depending on the scenario and external IdP.
- On an Azure AD [App registration](/azure/active-directory/develop/quickstart-register-app) in the Azure portal or through Microsoft Graph. This configuration allows you to get an access token for your application without needing to manage secrets outside Azure. For more information, learn how to [configure an app to trust an external identity provider](workload-identity-federation-create-trust.md).-- On a user-assigned managed identity through the Azure portal, Azure CLI, Azure PowerShell, Azure SDK, and Azure Resource Manager (ARM) templates. The external workload uses the access token to access Azure AD protected resources without needing to manage secrets (in supported scenarios). The [steps for configuring the trust relationship](workload-identity-federation-create-trust-user-assigned-managed-identity.md) will differ, depending on the scenario and external IdP. The workflow for exchanging an external token for an access token is the same, however, for all scenarios. The following diagram shows the general workflow of a workload exchanging an external token for an access token and then accessing Azure AD protected resources.
The workflow for exchanging an external token for an access token is the same, h
1. The external workload (such as a GitHub Actions workflow) requests a token from the external IdP (such as GitHub). 1. The external IdP issues a token to the external workload. 1. The external workload (the login action in a GitHub workflow, for example) [sends the token to Microsoft identity platform](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential) and requests an access token.
-1. Microsoft identity platform checks the [trust relationship](workload-identity-federation-create-trust.md) on the app registration or user-assigned managed identity and validates the external token against the Open ID Connect (OIDC) issuer URL on the external IdP.
+1. Microsoft identity platform checks the trust relationship on the [user-assigned managed identity](workload-identity-federation-create-trust-user-assigned-managed-identity.md) or [app registration](workload-identity-federation-create-trust.md) and validates the external token against the Open ID Connect (OIDC) issuer URL on the external IdP.
1. When the checks are satisfied, Microsoft identity platform issues an access token to the external workload. 1. The external workload accesses Azure AD protected resources using the access token from Microsoft identity platform. A GitHub Actions workflow, for example, uses the access token to publish a web app to Azure App Service.
-The Microsoft identity platform stores only the first 100 signing keys when they're downloaded from the external IdP's OIDC endpoint. If the external IdP exposes more than 100 signing keys, you may experience errors when using Workload Identity Federation.
+The Microsoft identity platform stores only the first 100 signing keys when they're downloaded from the external IdP's OIDC endpoint. If the external IdP exposes more than 100 signing keys, you may experience errors when using workload identity federation.
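
Step 3 of the workflow, sending the external token in place of a stored credential, can be sketched with MSAL Python. This is an illustration under the assumption that a federated identity credential matching the external token's issuer and subject is already configured; all identifiers are placeholders.

```python
import msal

external_token = "<JWT issued by the external IdP>"  # placeholder

# The external token is passed as the client assertion in a standard
# client credentials request; no secret or certificate is stored.
app = msal.ConfidentialClientApplication(
    "<app-client-id>",  # placeholder
    client_credential={"client_assertion": external_token},
    authority="https://login.microsoftonline.com/<tenant-id>",
)

result = app.acquire_token_for_client(
    scopes=["https://management.azure.com/.default"]
)
```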
## Next steps Learn more about how workload identity federation works:-- How Azure AD uses the [OAuth 2.0 client credentials grant](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.-- How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust.md) on an app registration.+ - How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust-user-assigned-managed-identity.md) on a user-assigned managed identity.-- Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.
+- How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust.md) on an app registration.
+- Read the [workload identity overview](../../aks/workload-identity-overview.md) to learn how to configure a Kubernetes workload to get an access token from Microsoft identity provider and access Azure AD protected resources.
+- Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure AD protected resources.
+- How Azure AD uses the [OAuth 2.0 client credentials grant](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
- For information about the required format of JWTs created by external identity providers, read about the [assertion format](/azure/active-directory/develop/active-directory-certificate-credentials#assertion-format).
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage in an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/09/2023 Last updated : 03/29/2023
To have a storage volume persist for your workload, you can use a StatefulSet. T
# [NFS](#tab/NFS)
+### Prerequisites
+
+- Your AKS cluster's *Control plane* identity (that is, your AKS cluster name) must be added to the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role on the virtual network and network security group (see the sketch that follows).
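A minimal sketch of that role assignment with the Azure CLI, assuming a cluster that uses a managed identity; all resource names are placeholders:

```azurecli
# Sketch: look up the control plane identity and grant it Contributor on the
# virtual network. Resource names are illustrative.
PRINCIPAL_ID=$(az aks show --name myAKSCluster --resource-group myResourceGroup \
  --query identity.principalId -o tsv)
VNET_ID=$(az network vnet show --name myVnet --resource-group myResourceGroup \
  --query id -o tsv)
az role assignment create --assignee "$PRINCIPAL_ID" --role "Contributor" --scope "$VNET_ID"
```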
+ 1. Create a file named `azure-blob-nfs-ss.yaml` and copy in the following YAML. ```yml
aks Enable Host Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-host-encryption.md
If you want to create clusters without host-based encryption, you can do so by o
You can enable host-based encryption on existing clusters by adding a new node pool to your cluster. Configure a new node pool to use host-based encryption by using the `--enable-encryption-at-host` parameter. ```azurecli
-az aks nodepool add --name hostencrypt --cluster-name myAKSCluster --resource-group myResourceGroup -s Standard_DS2_v2 -l westus2 --enable-encryption-at-host
+az aks nodepool add --name hostencrypt --cluster-name myAKSCluster --resource-group myResourceGroup -s Standard_DS2_v2 --enable-encryption-at-host
``` If you want to create new node pools without the host-based encryption feature, you can do so by omitting the `--enable-encryption-at-host` parameter.
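If you're creating a cluster from scratch, the same flag applies at cluster creation time. A minimal sketch, assuming illustrative resource names and VM size:

```azurecli
# Sketch: create a new cluster whose initial node pool uses host-based
# encryption; names and VM size are placeholders.
az aks create --name myAKSCluster --resource-group myResourceGroup \
  --node-vm-size Standard_DS2_v2 --enable-encryption-at-host
```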
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://en.wikipedia.org/
With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster will run the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*.
-When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` won't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch.
+When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` won't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch. If you want to upgrade your patch version within the same minor version, use [auto-upgrade](https://learn.microsoft.com/azure/aks/auto-upgrade-cluster#using-cluster-auto-upgrade).
To see what patch you're on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The `currentKubernetesVersion` property shows the whole Kubernetes version.
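A short sketch of both operations, reusing the article's example names; the target version is illustrative:

```azurecli
# Sketch: read the exact patch the cluster runs, then upgrade by alias minor
# version, which moves to the latest GA patch of that minor version.
az aks show --resource-group myResourceGroup --name myAKSCluster \
  --query currentKubernetesVersion -o tsv
az aks upgrade --resource-group myResourceGroup --name myAKSCluster \
  --kubernetes-version 1.15
```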
aks Use Pod Sandboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md
To demonstrate the deployment of an untrusted application into the pod sandbox o
```output root@untrusted:/# uname -r
- 5.15.80.mshv2-hvl1.m2
+ 5.15.48.1-8.cm2
``` 3. Start a shell session to the container of the *trusted* pod to verify the kernel output:
To demonstrate the deployment of an untrusted application into the pod sandbox o
The following example resembles output from the VM that is running the *trusted* pod, which is a different kernel than the *untrusted* pod running within the pod sandbox: ```output
- 5.15.48.1-8.cm2
+ 5.15.80.mshv2-hvl1.m2
+ ```
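If you'd rather compare the two kernels without opening interactive shells, a minimal sketch using one-off `kubectl exec` calls, assuming the example pods are named `untrusted` and `trusted` as above:

```bash
# Sketch: print each pod's kernel release non-interactively.
# Pod names assume the article's untrusted/trusted example deployment.
kubectl exec untrusted -- uname -r
kubectl exec trusted -- uname -r
```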
## Cleanup
api-management Enable Cors Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/enable-cors-power-platform.md
Title: Enable CORS policies to test Azure API Management custom connector
-description: How to enable CORS policies in Azure API Management and Power Platform to test a custom connector from Power Platform applications.
+ Title: Enable CORS policies for Azure API Management custom connector
+description: How to enable CORS policies in Azure API Management and Power Platform to test and use a custom connector from Power Platform applications.
Last updated 03/24/2023
-# Enable CORS policies to test custom connector from Power Platform
+# Enable CORS policies for API Management custom connector
Cross-origin resource sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources. Customers can add a [CORS policy](cors-policy.md) to their web APIs in Azure API Management, which adds cross-origin resource sharing support to an operation or an API to allow cross-domain calls from browser-based clients.
-If you've exported an API from API Management as a [custom connector](export-api-power-platform.md) in the Power Platform and want to use the Power Apps or Power Automate test console to call the API, you need to configure your API to explicitly enable cross-origin requests from Power Platform applications. This article shows you how to configure the following two necessary policy settings:
+If you've exported an API from API Management as a [custom connector](export-api-power-platform.md) in the Power Platform and want to use browser-based clients, including Power Apps or Power Automate, to call the API, you need to configure your API to explicitly enable cross-origin requests from Power Platform applications. This article shows you how to configure the following two necessary policy settings:
* Add a CORS policy to your API
api-management Export Api Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/export-api-power-platform.md
You can manage your custom connector in your Power Apps or Power Platform enviro
1. Select the pencil (Edit) icon to edit and test the custom connector. > [!IMPORTANT]
-> To call the API from the Power Apps test console, you need to configure a CORS policy in your API Management instance and create a policy in the custom connector to set an Origin header in HTTP requests. For more information, see [Enable CORS policies to test custom connector from Power Platform](enable-cors-power-platform.md).
+> To call the API from the Power Apps test console, you need to configure a CORS policy in your API Management instance and create a policy in the custom connector to set an Origin header in HTTP requests. For more information, see [Enable CORS policies for custom connector](enable-cors-power-platform.md).
> ## Update a custom connector
api-management How To Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-event-grid.md
API Management event data includes the `resourceUri`, which identifies the API M
## Next steps
-* [Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus](../event-grid/compare-messaging-services.md)
* Learn more about [subscribing to events](../event-grid/subscribe-through-portal.md).
app-service Overview Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-disaster-recovery.md
For IT, business continuity plans are largely driven by two metrics:
- Recovery Time Objective (RTO) – the time duration in which your application must come back online after an outage. - Recovery Point Objective (RPO) – the acceptable amount of data loss in a disaster, expressed as a unit of time (for example, 1 minute of transactional database records).
-Normally, maintaining an SLA around RTO is impractical for regional disasters, and you would typically design your disaster recovery strategy around RPO alone (i.e. focus on recovering data and not on minimizing interruption). With Azure, however, it's not only practical but could even be straightforward to deploy App Service for automatic geo-failovers. This lets you disaster-proof your applications further by take care of both RTO and RPO.
+Normally, maintaining an SLA around RTO is impractical for regional disasters, and you would typically design your disaster recovery strategy around RPO alone (that is, focus on recovering data rather than on minimizing interruption). With Azure, however, it's not only practical but can even be straightforward to deploy App Service for automatic geo-failovers. This lets you disaster-proof your applications further by taking care of both RTO and RPO.
Depending on your desired RTO and RPO metrics, three disaster recovery architectures are commonly used, as shown in the following table:
Steps to create a passive-cold region without GRS and GZRS are summarized as fol
## Next steps
-[Tutorial: Create a highly available multi-region app in Azure App Service](tutorial-multi-region-app.md)
+[Tutorial: Create a highly available multi-region app in Azure App Service](tutorial-multi-region-app.md)
app-service Overview Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-name-resolution.md
Title: Name resolution in App Service
description: Overview of how name resolution (DNS) works for your app in Azure App Service. Previously updated : 03/01/2023 Last updated : 04/03/2023
Using the app setting `WEBSITE_DNS_ALT_SERVER`, you append a DNS server to end o
If you require fine-grained control over name resolution, App Service allows you to modify the default behavior. You can modify retry attempts, retry timeout and cache timeout. Changing behavior like disabling or lowering cache duration may influence performance.
-|Property name|Default value|Allowed values|Description|
-|-|-|-|
-|dnsRetryAttemptCount|1|1-5|Defines the number of attempts to resolve where one means no retries|
-|dnsMaxCacheTimeout|30|0-60|Cache timeout defined in seconds. Setting cache to zero means you've disabled caching|
-|dnsRetryAttemptTimeout|3|1-30|Timeout before retrying or failing. Timeout also defines the time to wait for secondary server results if the primary doesn't respond|
+|Property name|Windows default value|Linux default value|Allowed values|Description|
+|-|-|-|-|-|
+|dnsRetryAttemptCount|1|5|1-5|Defines the number of attempts to resolve where one means no retries.|
+|dnsMaxCacheTimeout|30|0|0-60|Cache timeout defined in seconds. Setting cache to zero means you've disabled caching.|
+|dnsRetryAttemptTimeout|3|1|1-30|Timeout before retrying or failing. Timeout also defines the time to wait for secondary server results if the primary doesn't respond.|
>[!NOTE]
-> * Changing name resolution behavior is not supported on Windows Container apps
-> * To enable DNS caching on Web App for Containers and Linux-based apps you must add the app setting `WEBSITE_ENABLE_DNS_CACHE`
+> * Changing name resolution behavior is not supported on Windows Container apps.
+> * To enable DNS caching on Web App for Containers and Linux-based apps, you must add the app setting `WEBSITE_ENABLE_DNS_CACHE`. This setting defaults to 30 seconds.
Configure the name resolution behavior by using these CLI commands:
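As a sketch of what such a command might look like, the following assumes these properties live under `properties.dnsConfiguration` on the site resource; verify the path with `az resource show` before relying on it:

```azurecli
# Sketch: disable the DNS cache by setting its timeout to zero; the property
# path is an assumption, not confirmed by this article.
az resource update --resource-group <group-name> --name <app-name> \
  --resource-type "Microsoft.Web/sites" \
  --set properties.dnsConfiguration.dnsMaxCacheTimeout=0
```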
az resource show --resource-group <group-name> --name <app-name> --query propert
- [Configure virtual network integration](./configure-vnet-integration-enable.md) - [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md)-- [General networking overview](./networking-features.md)
+- [General networking overview](./networking-features.md)
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
For more information on custom containers, see [Run a custom container in Azure]
| `WEBSITES_PORT` | For a custom container, the custom port number on the container for App Service to route requests to. By default, App Service attempts automatic port detection of ports 80 and 8080. This setting isn't injected into the container as an environment variable. || | `WEBSITE_CPU_CORES_LIMIT` | By default, a Windows container runs with all available cores for your chosen pricing tier. To reduce the number of cores, set it to the desired core limit. For more information, see [Customize the number of compute cores](configure-custom-container.md?pivots=container-windows#customize-the-number-of-compute-cores).|| | `WEBSITE_MEMORY_LIMIT_MB` | By default, all Windows containers deployed in Azure App Service are limited to 1 GB of RAM. Set to the desired memory limit in MB. The cumulative total of this setting across apps in the same plan must not exceed the amount allowed by the chosen pricing tier. For more information, see [Customize container memory](configure-custom-container.md?pivots=container-windows#customize-container-memory). ||
-| `CONTAINER_WINRM_ENABLED` | For a Windows containerized app, set to `1` to enable Windows Remote Management (WIN-RM). ||
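A hedged sketch of applying custom-container settings like these with the Azure CLI; the port and memory values are illustrative, not recommendations:

```azurecli
# Sketch: route requests to a custom container port and cap Windows container
# memory at 2 GB; values are examples only.
az webapp config appsettings set --resource-group <group-name> --name <app-name> \
  --settings WEBSITES_PORT=8000 WEBSITE_MEMORY_LIMIT_MB=2048
```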
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
In this step, you **grant the frontend app access to the backend app** on the us
1. In the **Authentication** page for the frontend app, select your frontend app name under **Identity provider**. This app registration was automatically generated for you. Select **API permissions** in the left menu.
-1. Select **Add a permission**, then select **My APIs** > **\<front-end-app-name>**.
+1. Select **Add a permission**, then select **My APIs** > **\<back-end-app-name>**.
1. In the **Request API permissions** page for the backend app, select **Delegated permissions** and **user_impersonation**, then select **Add permissions**.
In the Cloud Shell, run the following commands on the frontend app to add the `s
```azurecli-interactive authSettings=$(az webapp auth show -g myAuthResourceGroup -n <front-end-app-name>)
-authSettings=$(echo "$authSettings" | jq '.properties' | jq '.identityProviders.azureActiveDirectory.login += {"loginParameters":["scope==openid offline_access api://<back-end-client-id>/user_impersonation"]}')
+authSettings=$(echo "$authSettings" | jq '.properties' | jq '.identityProviders.azureActiveDirectory.login += {"loginParameters":["scope=openid offline_access api://<back-end-client-id>/user_impersonation"]}')
az webapp auth set --resource-group myAuthResourceGroup --name <front-end-app-name> --body "$authSettings" ```
What you learned:
Advance to the next tutorial to learn how to use this user's identity to access an Azure service. > [!div class="nextstepaction"]
-> [Create a secure n-tier app in Azure App Service](tutorial-secure-ntier-app.md)
+> [Create a secure n-tier app in Azure App Service](tutorial-secure-ntier-app.md)
automation Automation Managing Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managing-data.md
Title: Azure Automation data security
description: This article helps you learn how Azure Automation protects your privacy and secures your data. Previously updated : 12/11/2022 Last updated : 04/02/2023
This article contains several topics explaining how data is protected and secure
## TLS 1.2 for Azure Automation
-To insure the security of data in transit to Azure Automation, we strongly encourage you to configure the use of Transport Layer Security (TLS) 1.2. The following are a list of methods or clients that communicate over HTTPS to the Automation service:
+To ensure the security of data in transit to Azure Automation, we strongly encourage you to configure the use of Transport Layer Security (TLS) 1.2. The following is a list of methods or clients that communicate over HTTPS to the Automation service:
* Webhook calls
The following table summarizes the retention policy for different resources.
## Data backup
-When you delete an Automation account in Azure, all objects in the account are deleted. The objects include runbooks, modules, configurations, settings, jobs, and assets. They can't be recovered after the account is deleted. You can use the following information to back up the contents of your Automation account before deleting it.
+When you delete an Automation account in Azure, all objects in the account are deleted. The objects include runbooks, modules, configurations, settings, jobs, and assets. You can [recover](delete-account.md#restore-a-deleted-automation-account) a deleted Automation account within 30 days. You can also use the following information to back up the contents of your Automation account before deleting it:
### Runbooks
You can't retrieve the values for encrypted variables or the password fields of
You can export your DSC configurations to script files using either the Azure portal or the [Export-AzAutomationDscConfiguration](/powershell/module/az.automation/export-azautomationdscconfiguration) cmdlet in Windows PowerShell. You can import and use these configurations in another Automation account.
-## Geo-replication in Azure Automation
+## Data residency
-Geo-replication is standard in Azure Automation accounts. You choose a primary region when setting up your account. The internal Automation geo-replication service assigns a secondary region to the account automatically. The service then continuously backs up account data from the primary region to the secondary region. The full list of primary and secondary regions can be found at [Cross-region replication in Azure: Business continuity and disaster recovery](../availability-zones/cross-region-replication-azure.md).
+You specify a region during the creation of an Azure Automation account. Service data such as assets, configurations, and logs is stored in that region and may transit or be processed in other regions within the same geography. These global endpoints are necessary to provide end users with a high-performance, low-latency experience regardless of location. Only for the Brazil South (Sao Paulo State) region of the Brazil geography, and the Southeast Asia (Singapore) and East Asia (Hong Kong) regions of the Asia Pacific geography, do we store Azure Automation data in the same region to accommodate data-residency requirements for these regions.
-The backup created by the Automation geo-replication service is a complete copy of Automation assets, configurations, and the like. This backup can be used if the primary region goes down and loses data. In the unlikely event that data for a primary region is lost, Microsoft attempts to recover it.
-
-> [!NOTE]
-> Azure Automation stores customer data in the region selected by the customer. For the purpose of BCDR, for all regions except Brazil South and Southeast Asia, Azure Automation data is stored in a different region (Azure paired region). Only for the Brazil South (Sao Paulo State) region of Brazil geography and Southeast Asia region (Singapore) of the Asia Pacific geography, we store Azure Automation data in the same region to accommodate data-residency requirements for these regions.
-
-The Automation geo-replication service isn't accessible directly to external customers if there is a regional failure. If you want to maintain Automation configuration and runbooks during regional failures, set up disaster recovery of the Automation accounts and their dependent resources, such as Modules, Connections, Credentials, Certificates, Variables and Schedules. [Learn more](automation-disaster-recovery.md).
## Next steps
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
description: This article provides information about deploying the extension-bas
Previously updated : 03/21/2023 Last updated : 04/01/2023 #Customer intent: As a developer, I want to learn about extension so that I can efficiently deploy Hybrid Runbook Workers.
Azure Automation stores and manages runbooks and then delivers them to one or mo
### Supported operating systems
-| Windows | Linux |
+| Windows (x64) | Linux (x64) |
||| | &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709, and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro | &#9679; Debian GNU/Linux 8, 9, 10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, and 8</br> *Hybrid Worker extension follows the support timelines of the OS vendor.| ### Other Requirements
-| Windows | Linux |
+| Windows (x64) | Linux (x64) |
||| | Windows PowerShell 5.1 (download WMF 5.1). PowerShell Core isn't supported.| Linux Hardening must not be enabled. | | .NET Framework 4.6.2 or later. | |
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
Title: Migrate an existing agent-based hybrid workers to extension-based-workers
description: This article provides information on how to migrate an existing agent-based hybrid worker to extension based workers. Previously updated : 03/30/2023 Last updated : 04/01/2023 #Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers.
The purpose of the Extension-based approach is to simplify the installation and
### Supported operating systems
-| Windows | Linux |
+| Windows (x64) | Linux (x64) |
||| | &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro| &#9679; Debian GNU/Linux 8, 9, 10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, and 8 </br> *Hybrid Worker extension follows the support timelines of the OS vendor.| ### Other Requirements
-| Windows | Linux |
+| Windows (x64) | Linux (x64) |
||| | Windows PowerShell 5.1 (download WMF 5.1). PowerShell Core isn't supported.| Linux Hardening must not be enabled. | | .NET Framework 4.6.2 or later. | |
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
See the [full list](./update-management/operating-system-requirements.md) of sup
**Type:** New feature
-In all regions except Brazil South and Southeast Asia, Azure Automation data is stored in a different region (Azure paired region) for providing Business Continuity and Disaster Recovery (BCDR). For the Brazil and Southeast Asia regions only, we now store Azure Automation data in the same region to accommodate data-residency requirements for these regions. For more information, see [Geo-replication in Azure Automation](./automation-managing-data.md#geo-replication-in-azure-automation).
+In all regions except Brazil South and Southeast Asia, Azure Automation data is stored in a different region (Azure paired region) for providing Business Continuity and Disaster Recovery (BCDR). For the Brazil and Southeast Asia regions only, we now store Azure Automation data in the same region to accommodate data-residency requirements for these regions. For more information, see [Data residency](./automation-managing-data.md#data-residency).
## February 2021
azure-cache-for-redis Cache Best Practices Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-development.md
Some Redis operations, like the [KEYS](https://redis.io/commands/keys) command,
## Choose an appropriate tier
-Use Standard or Premium tier for production systems. Don't use the Basic tier in production. The Basic tier is a single node system with no data replication and no SLA. Also, use at least a C1 cache. C0 caches are only meant for simple dev/test scenarios because:
+Use Standard, Premium, Enterprise, or Enterprise Flash tiers for production systems. Don't use the Basic tier in production. The Basic tier is a single node system with no data replication and no SLA. Also, use at least a C1 cache. C0 caches are only meant for simple dev/test scenarios because:
- they share a CPU core - use little memory
The public IP address assigned to your cache can change as a result of a scale o
The default version of Redis that is used when creating a cache can change over time. Azure Cache for Redis might adopt a new version when a new version of open-source Redis is released. If you need a specific version of Redis for your application, we recommend choosing the Redis version explicitly when you create the cache.
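A minimal sketch combining both recommendations, creating a Standard C1 cache with an explicitly pinned Redis version; the name, region, and version are placeholders:

```azurecli
# Sketch: pin tier, size, and Redis version at creation time.
az redis create --name myProdCache --resource-group myResourceGroup \
  --location eastus --sku Standard --vm-size c1 --redis-version 6
```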
+## Specific guidance for the Enterprise tiers
+
+Because the _Enterprise_ and _Enterprise Flash_ tiers are built on Redis Enterprise rather than open-source Redis, there are some differences in development best practices. For more information, see [Best practices for the Enterprise and Enterprise Flash tiers](cache-best-practices-enterprise-tiers.md).
+ ## Use TLS encryption Azure Cache for Redis requires TLS-encrypted communications by default. TLS versions 1.0, 1.1, and 1.2 are currently supported. However, TLS 1.0 and 1.1 are on a path to deprecation industry-wide, so use TLS 1.2 if at all possible.
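A sketch of enforcing TLS 1.2 on an existing cache through the generic `--set` mechanism; treat the `minimumTlsVersion` property path as an assumption to verify against your resource's JSON:

```azurecli
# Sketch: raise the minimum accepted TLS version to 1.2.
az redis update --name myProdCache --resource-group myResourceGroup \
  --set minimumTlsVersion=1.2
```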
If your application validates certificate in code, you need to modify it to reco
For more information, see [Client libraries](cache-best-practices-client-libraries.md#client-libraries).
-## Next steps
+## Next steps
- [Performance testing](cache-best-practices-performance.md) - [Failover and patching for Azure Cache for Redis](cache-failover.md)
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
Title: Azure Event Grid trigger for Azure Functions description: Learn to run code when Event Grid events in Azure Functions are dispatched. Previously updated : 03/04/2022 Last updated : 04/02/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
The type of the input parameter used with an Event Grid trigger depends on these
# [In-process](#tab/in-process)
-The following example shows a Functions version 3.x function that uses a `CloudEvent` binding parameter:
+The following example shows a Functions version 4.x function that uses a `CloudEvent` binding parameter:
```cs using Azure.Messaging;
namespace Company.Function
} ```
-The following example shows a Functions version 3.x function that uses an `EventGridEvent` binding parameter:
+The following example shows a Functions version 4.x function that uses an `EventGridEvent` binding parameter:
```cs using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.EventGrid.Models;
+using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs.Extensions.EventGrid; using Microsoft.Extensions.Logging;
The following example shows a function that uses a `JObject` binding parameter
```cs using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Extensions.EventGrid;
-using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json; using Newtonsoft.Json.Linq; using Microsoft.Extensions.Logging;
azure-functions Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-functions.md
The following table lists common and recommended alert rules for Functions.
| Metric | Average connections| When number of connections exceed a set value| | Metric | HTTP 404| When HTTP 404 responses exceed a set value| | Metric | HTTP Server Errors| When HTTP 5xx errors exceed a set value|
-| Activity Log | Create or Update Web App | When app is created or updated|
-| Activity Log | Delete Web App | When app is deleted|
-| Activity Log | Restart Web App| When app is restarted|
-| Activity Log | Stop Web App| When app is stopped|
+| Activity Log | Create or update function app | When app is created or updated|
+| Activity Log | Delete function app | When app is deleted|
+| Activity Log | Restart function app| When app is restarted|
+| Activity Log | Stop function app| When app is stopped|
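As an illustration of the table above, a hedged sketch creating one such metric alert for HTTP 5xx errors; the threshold and action group are placeholders:

```azurecli
# Sketch: alert when total HTTP 5xx responses exceed 10 over the evaluation
# window; the scope lookup and action group are illustrative.
APP_ID=$(az functionapp show --name <app-name> --resource-group myResourceGroup \
  --query id -o tsv)
az monitor metrics alert create --name http5xx-alert \
  --resource-group myResourceGroup --scopes "$APP_ID" \
  --condition "total Http5xx > 10" --action <action-group-id>
```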
## Next steps
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 02/23/2023 Last updated : 04/02/2023 # Compare Azure Government and global Azure
This section outlines variations and considerations when using Identity services
For feature variations and limitations, see [Cloud feature availability](../active-directory/authentication/feature-availability.md).
+For information on how to use Power BI capabilities for collaboration between Azure and Azure Government, see [Cross-cloud B2B](/power-bi/enterprise/service-admin-azure-ad-b2b#cross-cloud-b2b).
+ The following features have known limitations in Azure Government: - Limitations with B2B Collaboration in supported Azure US Government tenants: - For more information about B2B collaboration limitations in Azure Government and to find out if B2B collaboration is available in your Azure Government tenant, see [Azure AD B2B in government and national clouds](../active-directory/external-identities/b2b-government-national-clouds.md).
- - B2B collaboration via Power BI isn't supported. When you invite a guest user from within Power BI, the B2B flow isn't used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error.
- Limitations with multi-factor authentication: - Trusted IPs isn't supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multi-factor authentication should and shouldn't be required based off the user's current IP address.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 03/21/2023 Last updated : 04/02/2023 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
> [!NOTE] > > - Some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md).**
-> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](../documentation-government-overview-dod.md#azure-government-dod-regions-il5-audit-scope).**
+> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](../documentation-government-overview-dod.md#us-dod-regions-il5-audit-scope).**
**Azure Government Secret** maintains:
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
> [!NOTE] >
-> - Some services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md).**
-> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](../documentation-government-overview-dod.md#azure-government-dod-regions-il5-audit-scope).**
+> - Some services deployed in Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (US Gov regions) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md).**
+> - For DoD IL5 PA compliance scope in Azure Government DoD regions US DoD Central and US DoD East (US DoD regions), see **[Azure Government DoD regions IL5 audit scope](../documentation-government-overview-dod.md#us-dod-regions-il5-audit-scope).**
| Service | FedRAMP High | DoD IL2 | DoD IL4 | DoD IL5 | DoD IL6 | | - |::|:-:|:-:|:-:|:-:|
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
recommendations: false Previously updated : 02/09/2023 Last updated : 04/02/2023 # Isolation guidelines for Impact Level 5 workloads
-Azure Government supports applications that use Impact Level 5 (IL5) data in all available regions. IL5 requirements are defined in the [US Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG)](https://public.cyber.mil/dccs/dccs-documents/). IL5 workloads have a higher degree of impact to the DoD and must be secured to a higher standard. When you deploy these workloads on Azure Government, you can meet their isolation requirements in various ways. The guidance in this document addresses configurations and settings needed to meet the IL5 isolation requirements. We'll update this document as we enable new isolation options and the Defense Information Systems Agency (DISA) authorizes new services for IL5 data.
+Azure Government supports applications that use Impact Level 5 (IL5) data in all available regions. IL5 requirements are defined in the [US Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG)](https://public.cyber.mil/dccs/dccs-documents/). IL5 workloads have a higher degree of impact to the DoD and must be secured to a higher standard. When you deploy these workloads on Azure Government, you can meet their isolation requirements in various ways. The guidance in this document addresses configurations and settings needed to meet the IL5 isolation requirements. We'll update this article as we enable new isolation options and the Defense Information Systems Agency (DISA) authorizes new services for IL5 data.
## Background
-In January 2017, DISA awarded the [IL5 Provisional Authorization](/azure/compliance/offerings/offering-dod-il5) (PA) to [Azure Government](https://azure.microsoft.com/global-infrastructure/government/get-started/), making it the first IL5 PA awarded to a hyperscale cloud provider. The PA covered two Azure Government regions (US DoD Central and US DoD East) that are [dedicated to the DoD](https://azure.microsoft.com/global-infrastructure/government/dod/). Based on DoD mission owner feedback and evolving security capabilities, Microsoft has partnered with DISA to expand the IL5 PA boundary in December 2018 to cover the remaining Azure Government regions: US Gov Arizona, US Gov Texas, and US Gov Virginia. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
+In January 2017, DISA awarded the [IL5 Provisional Authorization](/azure/compliance/offerings/offering-dod-il5) (PA) to [Azure Government](https://azure.microsoft.com/global-infrastructure/government/get-started/), making it the first IL5 PA awarded to a hyperscale cloud provider. The PA covered two Azure Government regions US DoD Central and US DoD East (US DoD regions) that are [dedicated to the DoD](https://azure.microsoft.com/global-infrastructure/government/dod/). Based on DoD mission owner feedback and evolving security capabilities, Microsoft has partnered with DISA to expand the IL5 PA boundary in December 2018 to cover the remaining Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (US Gov regions). For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia&rar=true).
+
+- For a list of services in scope for DoD IL5 PA in US Gov regions, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
+- For a list of services in scope for DoD IL5 PA in US DoD regions, see [Azure Government DoD regions IL5 audit scope](./documentation-government-overview-dod.md#us-dod-regions-il5-audit-scope).
Azure Government is available to US federal, state, local, and tribal governments and their partners. The IL5 expansion to Azure Government honors the isolation requirements mandated by the DoD. Azure Government continues to provide more PaaS services suitable for DoD IL5 workloads than any other cloud services environment. ## Principles and approach
-You need to address two key areas for Azure services in IL5 scope: compute isolation and storage isolation. We'll focus in this article on how Azure services can help isolate the compute and storage of IL5 data. The SRG allows for a shared management and network infrastructure. **This article is focused on Azure Government compute and storage isolation approaches for US Gov Arizona, US Gov Texas, and US Gov Virginia regions.** If an Azure service is available in Azure Government DoD regions and authorized at IL5, then it is by default suitable for IL5 workloads with no extra isolation configuration required. Azure Government DoD regions are reserved for DoD agencies and their partners, enabling physical separation from non-DoD tenants by design. For more information, see [DoD in Azure Government](./documentation-government-overview-dod.md).
+You need to address two key areas for Azure services in IL5 scope: compute isolation and storage isolation. We'll focus in this article on how Azure services can help you isolate the compute and storage services for IL5 data. The SRG allows for a shared management and network infrastructure. **This article is focused on Azure Government compute and storage isolation approaches for US Gov Arizona, US Gov Texas, and US Gov Virginia regions (US Gov regions).** If an Azure service is available in Azure Government DoD regions US DoD Central and US DoD East (US DoD regions) and authorized at IL5, then it is by default suitable for IL5 workloads with no extra isolation configuration required. Azure Government DoD regions are reserved for DoD agencies and their partners, enabling physical separation from non-DoD tenants by design. For more information, see [DoD in Azure Government](./documentation-government-overview-dod.md).
> [!IMPORTANT] > You are responsible for designing and deploying your applications to meet DoD IL5 compliance requirements. In doing so, you should not include sensitive or restricted information in Azure resource names, as explained in **[Considerations for naming Azure resources](./documentation-government-concept-naming-resources.md).**
For services where the compute processes are obfuscated from access by the owner
The DoD requirements for encrypting data at rest are provided in Section 5.11 (Page 122) of the [Cloud Computing SRG](https://public.cyber.mil/dccs/dccs-documents/). DoD emphasizes encrypting all data at rest stored in virtual machine virtual hard drives, mass storage facilities at the block or file level, and database records where the mission owner doesn't have sole control over the database service. For cloud applications where encrypting data at rest with DoD key control isn't possible, mission owners must perform a risk analysis with relevant data owners before transmitting data into a cloud service offering.
-In a recent PA for Azure Government, DISA approved logical separation of IL5 from other data via cryptographic means. In Azure, this approach involves data encryption via keys that are maintained in Azure Key Vault and stored in [FIPS 140 validated](/azure/compliance/offerings/offering-fips-140-2) Hardware Security Modules (HSMs). The keys are owned and managed by the IL5 system owner (also known as customer-managed keys).
+In a recent PA for Azure Government, DISA approved logical separation of IL5 from other data via cryptographic means. In Azure, this approach involves data encryption via keys that are maintained in Azure Key Vault and stored in [FIPS 140 validated](/azure/compliance/offerings/offering-fips-140-2) Hardware Security Modules (HSMs). The keys are owned and managed by the IL5 system owner, also known as customer-managed keys (CMK).
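A brief sketch of provisioning such a customer-managed key; the vault and key names are placeholders, and the sketch assumes a Key Vault SKU that supports HSM-backed keys (Premium):

```azurecli
# Sketch: create an HSM-protected RSA key for customer-managed encryption.
az keyvault key create --vault-name myIL5Vault --name myCMK \
  --kty RSA-HSM --size 2048
```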
Here's how this approach applies to
This approach ensures all key material for decrypting data is stored separately
IL5 guidelines require workloads to be deployed with a high degree of security, isolation, and control. The following configurations are required *in addition* to any other configurations or controls needed to meet IL5 requirements. Network isolation, access controls, and other necessary security measures aren't necessarily addressed in this article. > [!NOTE]
-> This article tracks Azure services that have received DoD IL5 PA and that require extra configuration options to meet IL5 isolation requirements. Services with IL5 PA that do not require any extra configuration options are not mentioned in this article. For a list of services in scope for DoD IL5 PA, see **[Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).**
+> This article tracks Azure services that have received DoD IL5 PA and that require extra configuration options to meet IL5 isolation requirements. Services with IL5 PA that do not require any extra configuration options are not mentioned in this article. For a list of services in scope for DoD IL5 PA in US Gov regions, see **[Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).**
Be sure to review the entry for each service you're using and ensure that all isolation requirements are implemented.
azure-government Documentation Government Overview Dod https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-dod.md
recommendations: false Previously updated : 03/07/2022 Last updated : 04/02/2023 # Department of Defense (DoD) in Azure Government
Azure Government offers the following regions to DoD mission owners and their pa
|US Gov Arizona </br> US Gov Texas </br> US Gov Virginia|FedRAMP High, DoD IL4, DoD IL5|145| |US DoD Central </br> US DoD East|DoD IL5|60|
-**Azure Government regions** (US Gov Arizona, US Gov Texas, and US Gov Virginia) are intended for US federal (including DoD), state, and local government agencies, and their partners. **Azure Government DoD regions** (US DoD Central and US DoD East) are reserved for exclusive DoD use. Separate DoD IL5 PAs are in place for Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) vs. Azure Government DoD regions (US DoD Central and US DoD East). For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+**Azure Government regions** US Gov Arizona, US Gov Texas, and US Gov Virginia (**US Gov regions**) are intended for US federal (including DoD), state, and local government agencies, and their partners. **Azure Government DoD regions** US DoD Central and US DoD East (**US DoD regions**) are reserved for exclusive DoD use. Separate DoD IL5 PAs are in place for US Gov regions vs. US DoD regions. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
-The primary differences between DoD IL5 PAs that are in place for Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) vs. Azure Government DoD regions (US DoD Central and US DoD East) are:
+The primary differences between DoD IL5 PAs that are in place for US Gov regions vs. US DoD regions are:
-- **IL5 compliance scope:** Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) have many more services authorized provisionally at DoD IL5, which in turn enables DoD mission owners and their partners to deploy more realistic applications in these regions.
- - For a complete list of services in scope for DoD IL5 PA in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
- - For a complete list of services in scope for DoD IL5 in Azure Government DoD regions (US DoD Central and US DoD East), see [Azure Government DoD regions IL5 audit scope](#azure-government-dod-regions-il5-audit-scope) in this article.
-- **IL5 configuration:** Azure Government DoD regions (US DoD Central and US DoD East) are physically isolated from the rest of Azure Government and reserved for exclusive DoD use. Therefore, no extra configuration is needed in DoD regions when deploying Azure services intended for IL5 workloads. In contrast, some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
+- **IL5 compliance scope:** US Gov regions have many more services authorized provisionally at DoD IL5, which in turn enables DoD mission owners and their partners to deploy more realistic applications in these regions.
+ - For a complete list of services in scope for DoD IL5 PA in US Gov regions, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
+ - For a complete list of services in scope for DoD IL5 in US DoD regions, see [Azure Government DoD regions IL5 audit scope](#us-dod-regions-il5-audit-scope) in this article.
+- **IL5 configuration:** US DoD regions are reserved for exclusive DoD use. Therefore, no extra configuration is needed in US DoD regions when deploying Azure services intended for IL5 workloads. In contrast, some Azure services deployed in US Gov regions require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
> [!NOTE]
-> If you are subject to DoD IL5 requirements, we recommend that you prioritize Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) for your workloads, as follows:
+> If you are subject to DoD IL5 requirements, we recommend that you prioritize US Gov regions for your workloads, as follows:
>
-> - **New deployments:** Choose Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) for your new deployments. Doing so will allow you to benefit from the latest cloud innovations while meeting your DoD IL5 isolation requirements.
-> - **Existing deployments:** If you have existing deployments in Azure Government DoD regions (US DoD Central and US DoD East), we encourage you to migrate these workloads to Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) to take advantage of additional services.
+> - **New deployments:** Choose US Gov regions for your new deployments. Doing so will allow you to benefit from the latest cloud innovations while meeting your DoD IL5 isolation requirements.
+> - **Existing deployments:** If you have existing deployments in US DoD regions, we encourage you to migrate these workloads to US Gov regions to take advantage of additional services.
Azure provides [extensive support for tenant isolation](./azure-secure-isolation-guidance.md) across compute, storage, and networking services to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications. Hyperscale cloud also offers a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, intelligent edge, and many more to help DoD mission owners implement their mission objectives. Using Azure Government cloud capabilities, you benefit from rapid feature growth, resiliency, and the cost-effective operation of the hyperscale cloud while still obtaining the levels of isolation, security, and confidence required to handle workloads subject to FedRAMP High, DoD IL4, and DoD IL5 requirements.
-## Azure Government regions IL5 audit scope
+## US Gov regions IL5 audit scope
-For a complete list of services in scope for DoD IL5 PA in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
+For a complete list of services in scope for DoD IL5 PA in US Gov regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
-## Azure Government DoD regions IL5 audit scope
+## US DoD regions IL5 audit scope
-The following services are in scope for DoD IL5 PA in Azure Government DoD regions (US DoD Central and US DoD East):
+The following services are in scope for DoD IL5 PA in US DoD regions (US DoD Central and US DoD East):
- [API Management](https://azure.microsoft.com/services/api-management/) - [Application Gateway](https://azure.microsoft.com/services/application-gateway/)
The following services are in scope for DoD IL5 PA in Azure Government DoD regio
## Frequently asked questions ### What are the Azure Government DoD regions?
-Azure Government DoD regions (US DoD Central and US DoD East) are physically separated Azure Government regions reserved for exclusive use by the DoD.
+Azure Government DoD regions US DoD Central and US DoD East (US DoD regions) are physically separated Azure Government regions reserved for exclusive use by the DoD. They reside on the same isolated network as Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (US Gov regions) and use the same identity model. Both the network and identity model are separate from Azure commercial.
### What is the difference between Azure Government and Azure Government DoD regions?
-Azure Government is a US government community cloud providing services for federal, state and local government customers, tribal entities, and other entities subject to various US government regulations such as CJIS, ITAR, and others. All Azure Government regions are designed to meet the security requirements for DoD IL5 workloads. Azure Government DoD regions (US DoD Central and US DoD East) achieve DoD IL5 tenant separation requirements by being dedicated exclusively to DoD. In Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), some services require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
+Azure Government is a US government community cloud providing services for federal, state, and local government customers, tribal entities, and other entities subject to various US government regulations such as CJIS, ITAR, and others. All Azure Government regions are designed to meet the security requirements for DoD IL5 workloads. They are deployed on a separate and isolated network and use a separate identity model from Azure commercial regions. US DoD regions achieve DoD IL5 tenant separation requirements by being dedicated exclusively to DoD. In US Gov regions, some services require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
-### How do Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) support IL5 data?
-Azure provides [extensive support for tenant isolation](./azure-secure-isolation-guidance.md) across compute, storage, and networking services to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications. Some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
+### How do US Gov regions support IL5 data?
+Azure provides [extensive support for tenant isolation](./azure-secure-isolation-guidance.md) across compute, storage, and networking services to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications. Some Azure services deployed in US Gov regions require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).
### What is IL5 data? IL5 accommodates controlled unclassified information (CUI) that requires a higher level of protection than is afforded by IL4 as deemed necessary by the information owner, public law, or other government regulations. IL5 also supports unclassified National Security Systems (NSS). This impact level accommodates NSS and CUI categorizations based on CNSSI 1253 up to moderate confidentiality and moderate integrity (M-M-x). For more information on IL5 data, see [DoD IL5 overview](/azure/compliance/offerings/offering-dod-il5#dod-il5-overview).
All Azure Government regions are built to support DoD customers, including:
For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). ### What services are part of your IL5 authorization scope?
-For a complete list of services in scope for DoD IL5 PA in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). For a complete list of services in scope for DoD IL5 PA in Azure Government DoD regions (US DoD Central and US DoD East), see [Azure Government DoD regions IL5 audit scope](#azure-government-dod-regions-il5-audit-scope) in this article.
+For a complete list of services in scope for DoD IL5 PA in US Gov regions, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). For a complete list of services in scope for DoD IL5 PA in US DoD regions, see [Azure Government DoD regions IL5 audit scope](#us-dod-regions-il5-audit-scope) in this article.
## Next steps
azure-government Documentation Government Plan Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-plan-compliance.md
recommendations: false Previously updated : 12/05/2022 Last updated : 04/02/2023 # Azure Government compliance
For current Azure Government regions and available services, see [Products avail
> [!NOTE] > > - Some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).**
-> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](./documentation-government-overview-dod.md#azure-government-dod-regions-il5-audit-scope).**
+> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](./documentation-government-overview-dod.md#us-dod-regions-il5-audit-scope).**
## Services in audit scope
azure-maps Authentication Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/authentication-best-practices.md
The single most important part of your application is its security. No matter how good the user experience might be, if your application isn't secure, a hacker can ruin it.
-The following are some tips to keep your Azure Maps application secure. When using Azure, be sure to familiarize yourself with the security tools available to you. For more information, See the [introduction to Azure security](../security/fundamentals/overview.md).
+The following are some tips to keep your Azure Maps application secure. When using Azure, be sure to familiarize yourself with the security tools available to you. For more information, see the [introduction to Azure security].
## Understanding security threats
-If a hacker gains access to your Azure Maps account, they can potentially use it to make an unlimited number of unauthorized requests, which could result in decreased performance due to QPS limits and significant billable transactions to your account.
-
-When considering best practices for securing your Azure Maps applications, you'll need to understand the different authentication options available.
---
+Hackers gaining access to your account could potentially make unlimited billable transactions, resulting in unexpected costs and decreased performance due to QPS limits.
+When considering best practices for securing your Azure Maps applications, you need to understand the different authentication options available.
## Authentication best practices in Azure Maps
-When creating a publicly facing client application with Azure Maps using any of the available SDKs whether it be Android, iOS or the Web SDK, you must ensure that your authentication secrets aren't publicly accessible.
+When creating publicly facing client applications with Azure Maps, you must ensure that your authentication secrets aren't publicly accessible.
-Subscription key-based authentication (Shared Key) can be used in either client side applications or web services, however it is the least secure approach to securing your application or web service. This is because the key grants access to all Azure Maps REST API that are available in the SKU (Pricing Tier) selected when creating the Azure Maps account and the key can be easily obtained from an HTTP request. If you do use subscription keys, be sure to [rotate them regularly](how-to-manage-authentication.md#manage-and-rotate-shared-keys) and keep in mind that Shared Key doesn't allow for configurable lifetime, it must be done manually. You should also consider using [Shared Key authentication with Azure Key Vault](how-to-secure-daemon-app.md#scenario-shared-key-authentication-with-azure-key-vault), which enables you to securely store your secret in Azure.
+Subscription key-based authentication (Shared Key) can be used in either client-side applications or web services; however, it's the least secure approach to securing your application or web service. The reason is that the key is easily obtained from an HTTP request and grants access to all Azure Maps REST APIs available in the SKU (Pricing Tier). If you do use subscription keys, be sure to [rotate them regularly] and keep in mind that Shared Key doesn't allow for a configurable lifetime; rotation must be done manually. You should also consider using [Shared Key authentication with Azure Key Vault], which enables you to securely store your secret in Azure.
-If using [Azure Active Directory (Azure AD) authentication](../active-directory/fundamentals/active-directory-whatis.md) or [Shared Access Signature (SAS) Token authentication](azure-maps-authentication.md#shared-access-signature-token-authentication) (preview), access to Azure Maps REST APIs is authorized using [role-based access control (RBAC)](azure-maps-authentication.md#authorization-with-role-based-access-control). RBAC enables you to control what access is given to the issued tokens. You should consider how long access should be granted for the tokens. Unlike Shared Key authentication, the lifetime of these tokens is configurable.
+If using [Azure Active Directory (Azure AD) authentication] or [Shared Access Signature (SAS) Token authentication] (preview), access to Azure Maps REST APIs is authorized using [role-based access control (RBAC)]. RBAC enables you to control what access is given to the issued tokens. You should consider how long access should be granted for the tokens. Unlike Shared Key authentication, the lifetime of these tokens is configurable.
> [!TIP] >
-> For more information on configuring token lifetimes see:
-> - [Configurable token lifetimes in the Microsoft identity platform (preview)](../active-directory/develop/active-directory-configurable-token-lifetimes.md)
-> - [Create SAS tokens](azure-maps-authentication.md#create-sas-tokens)
+> For more information on configuring token lifetimes, see:
+>
+> - [Configurable token lifetimes in the Microsoft identity platform (preview)]
+> - [Create SAS tokens]
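
As a rough illustration of the difference between the two approaches, the sketch below calls the Azure Maps Search API both ways; the angle-bracket values are placeholders, not real credentials:

```bash
# Shared Key: the key rides on the request itself and grants broad access.
curl "https://atlas.microsoft.com/search/address/json?api-version=1.0&query=15127%20NE%2024th%20Street%2C%20Redmond%2C%20WA&subscription-key=<SUBSCRIPTION_KEY>"

# Azure AD: a short-lived, RBAC-scoped bearer token plus the Maps account's client ID.
curl "https://atlas.microsoft.com/search/address/json?api-version=1.0&query=15127%20NE%2024th%20Street%2C%20Redmond%2C%20WA" \
  -H "Authorization: Bearer <AZURE_AD_ACCESS_TOKEN>" \
  -H "x-ms-client-id: <MAPS_ACCOUNT_CLIENT_ID>"
```

If the shared-key request leaks (for example, in browser developer tools), the key is immediately reusable by anyone; a leaked bearer token expires on the schedule you configure.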
### Public client and confidential client applications
-There are different security concerns between public and confidential client applications. See [Public client and confidential client applications](../active-directory/develop/msal-client-applications.md) in the Microsoft identity platform documentation for more information about what is considered a *public* versus *confidential* client application.
+There are different security concerns between public and confidential client applications. For more information about what is considered a *public* versus *confidential* client application, see [Public client and confidential client applications] in the Microsoft identity platform documentation.
### Public client applications
-For apps that run on devices or desktop computers or in a web browser, you should consider defining which domains have access to your Azure Map account using [Cross origin resource sharing (CORS)](azure-maps-authentication.md#cross-origin-resource-sharing-cors). CORS instructs the clients' browser on which origins such as "https://microsoft.com" are allowed to request resources for the Azure Map account.
+For apps that run on devices or desktop computers or in a web browser, you should consider defining which domains have access to your Azure Maps account using [Cross origin resource sharing (CORS)]. CORS instructs the client's browser which origins, such as "https://microsoft.com", are allowed to request resources from the Azure Maps account.
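
As an illustrative sketch only, a CORS rule on the Maps account resource might look like the following fragment of the account's ARM properties; the property names follow the Azure Maps management API and can vary by API version:

```json
{
  "properties": {
    "cors": {
      "corsRules": [
        {
          "allowedOrigins": [
            "https://www.contoso.com"
          ]
        }
      ]
    }
  }
}
```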
> [!NOTE] > If you're developing a web server or service, your Azure Maps account does not need to be configured with CORS. If you have JavaScript code in the client side web application, CORS does apply. ### Confidential client applications
-For apps that run on servers (such as web services and service/daemon apps), if you prefer to avoid the overhead and complexity of managing secrets, consider [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md). Managed identities can provide an identity for your web service to use when connecting to Azure Maps using Azure Active Directory (Azure AD) authentication. In this case, your web service will use that identity to obtain the required Azure AD tokens. You should use Azure RBAC to configure what access the web service is given, using the [Least privileged roles](../active-directory/roles/delegate-by-task.md) possible.
+For apps that run on servers (such as web services and service/daemon apps), if you prefer to avoid the overhead and complexity of managing secrets, consider [Managed Identities]. Managed identities can provide an identity for your web service to use when connecting to Azure Maps using Azure Active Directory (Azure AD) authentication. In that case, your web service uses that identity to obtain the required Azure AD tokens. You should use Azure RBAC to configure what access the web service is given, using the [Least privileged roles] possible.
## Next steps
For apps that run on servers (such as web services and service/daemon apps), if
> [Manage authentication in Azure Maps](how-to-manage-authentication.md) > [!div class="nextstepaction"]
-> [Tutorial: Add app authentication to your web app running on Azure App Service](../app-service/scenario-secure-app-authentication-app-service.md)
+> [Tutorial: Add app authentication to your web app running on Azure App Service](../app-service/scenario-secure-app-authentication-app-service.md)
+
+[introduction to Azure security]: ../security/fundamentals/overview.md
+[rotate them regularly]: how-to-manage-authentication.md#manage-and-rotate-shared-keys
+[Shared Key authentication with Azure Key Vault]: how-to-secure-daemon-app.md#scenario-shared-key-authentication-with-azure-key-vault
+[Azure Active Directory (Azure AD) authentication]: ../active-directory/fundamentals/active-directory-whatis.md
+[Shared Access Signature (SAS) Token authentication]: azure-maps-authentication.md#shared-access-signature-token-authentication
+[role-based access control (RBAC)]: azure-maps-authentication.md#authorization-with-role-based-access-control
+[Configurable token lifetimes in the Microsoft identity platform (preview)]: ../active-directory/develop/active-directory-configurable-token-lifetimes.md
+[Create SAS tokens]: azure-maps-authentication.md#create-sas-tokens
+[Public client and confidential client applications]: ../active-directory/develop/msal-client-applications.md
+[Cross origin resource sharing (CORS)]: azure-maps-authentication.md#cross-origin-resource-sharing-cors
+[Managed Identities]: ../active-directory/managed-identities-azure-resources/overview.md
+[Least privileged roles]: ../active-directory/roles/delegate-by-task.md
azure-monitor Alerts Common Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md
The common schema includes information about the affected resource and the cause
If you want to route alert instances to specific teams based on criteria such as a resource group, you can use the fields in the **Essentials** section to provide routing logic for all alert types. The teams that receive the alert notification can then use the context fields for their investigation. - **Alert context**: Fields that vary depending on the type of the alert. The alert context fields describe the cause of the alert. For example, a metric alert would have fields like the metric name and metric value in the alert context. An activity log alert would have information about the event that generated the alert.
+- **Custom Properties**: A "key: value" object, defined in the alert rule and added to the webhook notifications.
+If the custom properties aren't set in the alert rule, this field is null. Note: today this is only supported for metric alerts; other alert types will contain null in this field.
## Sample alert payload
The common schema includes information about the affected resource and the cause
} ] },
+ "customProperties":{
+ "Key1": "Value1",
+ "Key2": "Value2"
+ }
} } }
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
Title: Get started with autoscale in Azure
-description: "Learn how to scale your resource web app, cloud service, virtual machine, or virtual machine scale set in Azure."
+description: "Learn how to scale your resource web app, cloud service, virtual machine, or Virtual Machine Scale Set in Azure."
Previously updated : 04/05/2022 Last updated : 04/10/2023 # Get started with autoscale in Azure
-This article describes how to set up your autoscale settings for your resource in the Azure portal.
+Autoscale allows you to automatically scale your applications or resources based on demand. Use Autoscale to provision enough resources to support the demand on your application without overprovisioning and incurring unnecessary costs.
-Azure Monitor autoscale applies only to [Azure Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/), [Azure App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [Azure API Management](../../api-management/api-management-key-concepts.md).
+This article describes how to configure the autoscale settings for your resources in the Azure portal.
-## Discover the autoscale settings in your subscription
+Azure autoscale supports many resource types. For more information about supported resources, see [autoscale supported resources](./autoscale-overview.md#supported-services-for-autoscale).
+## Discover the autoscale settings in your subscription
+
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4u7ts]
-To discover all the resources for which autoscale is applicable in Azure Monitor, follow these steps.
+To discover the resources that you can autoscale, follow these steps.
-1. Open the [Azure portal.][1]
-1. Select the Azure Monitor icon at the top of the page.
+1. Open the [Azure portal.](https://portal.azure.com)
- [![Screenshot that shows how to open Azure Monitor.](./media/autoscale-get-started/click-on-monitor-1.png)](./media/autoscale-get-started/click-on-monitor-1.png#lightbox)
+1. Using the search bar at the top of the page, search for and select *Azure Monitor*.
1. Select **Autoscale** to view all the resources for which autoscale is applicable, along with their current autoscale status.
- [![Screenshot that shows autoscale in Azure Monitor.](./media/autoscale-get-started/click-on-autoscale-2.png)](./media/autoscale-get-started/click-on-autoscale-2.png#lightbox)
-
-1. Use the filter pane at the top to scope down the list to select resources in a specific resource group, specific resource types, or a specific resource.
-
- [![Screenshot that shows viewing resource status.](./media/autoscale-get-started/view-all-resources-3.png)](./media/autoscale-get-started/view-all-resources-3.png#lightbox)
+1. Use the filter pane at the top to scope the list to resources in a specific resource group, specific resource types, or a specific resource.
- For each resource, you'll find the current instance count and the autoscale status. The autoscale status can be:
+ :::image type="content" source="./media/autoscale-get-started/view-resources.png" lightbox="./media/autoscale-get-started/view-resources.png" alt-text="A screenshot showing resources that can use autoscale and their statuses.":::
+ The page shows the instance count and the autoscale status for each resource. Autoscale statuses are:
- **Not configured**: You haven't enabled autoscale yet for this resource. - **Enabled**: You've enabled autoscale for this resource. - **Disabled**: You've disabled autoscale for this resource.
- You can also reach the scaling page by selecting **All Resources** on the home page and filter to the resource you're interested in scaling.
+ You can also reach the scaling page by selecting **Scaling** from the **Settings** menu for each resource.
- [![Screenshot that shows all resources.](./media/autoscale-get-started/choose-all-resources.png)](./media/autoscale-get-started/choose-all-resources.png#lightbox)
+ :::image type="content" source="./media/autoscale-get-started/scaling-page.png" lightbox="./media/autoscale-get-started/scaling-page.png" alt-text="A screenshot showing a resource overview page with the scaling menu item.":::
-1. After you've selected the resource that you're interested in, select the **Scaling** tab to configure autoscaling rules.
+## Create your first autoscale setting
- [![Screenshot that shows the scaling button.](./media/autoscale-get-started/scaling-page.png)](./media/autoscale-get-started/scaling-page.png#lightbox)
+Follow the steps below to create your first autoscale setting.
-## Create your first autoscale setting
+1. Open the **Autoscale** pane in Azure Monitor and select a resource that you want to scale. The following steps use an App Service plan associated with a web app. You can [create your first ASP.NET web app in Azure in 5 minutes.](../../app-service/quickstart-dotnetcore.md)
+1. The current instance count is 1. Select **Custom autoscale**.
-Let's now go through a step-by-step walkthrough to create your first autoscale setting.
+1. Enter a **Name** and **Resource group** or use the default.
-1. Open the **Autoscale** pane in Azure Monitor and select a resource that you want to scale. The following steps use an App Service plan associated with a web app. You can [create your first ASP.NET web app in Azure in 5 minutes.][5]
-1. The current instance count is 1. Select **Custom autoscale**.
+1. Select **Scale based on a metric**.
+1. Select **Add a rule** to open a context pane on the right side.
- [![Screenshot that shows scale setting for a new web app.](./media/autoscale-get-started/manual-scale-04.png)](./media/autoscale-get-started/manual-scale-04.png#lightbox)
+ :::image type="content" source="./media/autoscale-get-started/custom-scale.png" lightbox="./media/autoscale-get-started/custom-scale.png" alt-text="A screenshot showing the Configure tab of the Autoscale Settings page.":::
-1. Provide a name for the scale setting. Select **Add a rule** to open a context pane on the right side. By default, this action sets the option to scale your instance count by 1 if the CPU percentage of the resource exceeds 70 percent. Leave it at its default values and select **Add**.
+1. The default rule scales your resource by one instance if the CPU percentage is greater than 70 percent. Keep the default values and select **Add**.
- [![Screenshot that shows creating a scale setting for a web app.](./media/autoscale-get-started/custom-scale-add-rule-05.png)](./media/autoscale-get-started/custom-scale-add-rule-05.png#lightbox)
+1. You've now created your first scale-out rule. Best practice is to have at least one scale-in rule. To add another rule, select **Add a rule**.
-1. You've now created your first scale rule. The UX recommends best practices and states that "It is recommended to have at least one scale in rule." To do so:
+1. Set **Operator** to *Less than*.
+1. Set **Metric threshold to trigger scale action** to *20*.
+1. Set **Operation** to *Decrease count by*.
+1. Select **Add**.
- 1. Select **Add a rule**.
- 1. Set **Operator** to **Less than**.
- 1. Set **Threshold** to **20**.
- 1. Set **Operation** to **Decrease count by**.
+ :::image type="content" source="./media/autoscale-get-started/scale-rule.png" lightbox="./media/autoscale-get-started/scale-rule.png" alt-text="A screenshot showing a scale rule.":::
- You should now have a scale setting that scales out and scales in based on CPU usage.
+ You now have a scale setting that scales out and scales in based on CPU usage, but you're still limited to a maximum of one instance.
- [![Screenshot that shows scale based on CPU.](./media/autoscale-get-started/custom-scale-results-06.png)](./media/autoscale-get-started/custom-scale-results-06.png#lightbox)
+1. Under **Instance limits**, set **Maximum** to *3*.
1. Select **Save**.
-Congratulations! You've now successfully created your first scale setting to autoscale your web app based on CPU usage.
+ :::image type="content" source="./media/autoscale-get-started/instance-limits.png" lightbox="./media/autoscale-get-started/instance-limits.png" alt-text="A screenshot showing the configure tab of the autoscale setting page with configured rules.":::
-> [!NOTE]
-> The same steps are applicable to get started with a Virtual Machine Scale Sets or cloud service role.
+You have successfully created your first scale setting to autoscale your web app based on CPU usage. When CPU usage is greater than 70%, an additional instance is added, up to a maximum of 3 instances. When CPU usage is below 20%, an instance is removed, down to a minimum of 1 instance. By default, there is 1 instance.
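
If you prefer scripting, the same setup can be sketched with the Azure CLI; the resource names below (`myResourceGroup`, `myAppServicePlan`, `myAutoscaleSetting`) are placeholders:

```bash
# Create the autoscale setting: default 1 instance, allowed range 1-3.
az monitor autoscale create \
  --resource-group myResourceGroup \
  --resource myAppServicePlan \
  --resource-type Microsoft.Web/serverfarms \
  --name myAutoscaleSetting \
  --min-count 1 --max-count 3 --count 1

# Scale out by 1 when average CPU exceeds 70 percent over 10 minutes.
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --condition "CpuPercentage > 70 avg 10m" \
  --scale out 1

# Scale in by 1 when average CPU drops below 20 percent over 10 minutes.
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --condition "CpuPercentage < 20 avg 10m" \
  --scale in 1
```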
-## Other considerations
+## Scheduled scale conditions
-The following sections introduce other considerations for autoscaling.
+The default scale condition defines the scale rules that are active when no other scale condition is in effect. You can add scale conditions that are active on a given date and time, or that recur on a weekly basis.
-### Scale based on a schedule
+### Scale based on a repeating schedule
-You can set your scale differently for specific days of the week.
+Set your resource to scale to a single instance every Sunday.
1. Select **Add a scale condition**.
-1. Setting the scale mode and the rules is the same as the default condition.
-1. Select **Repeat specific days** for the schedule.
-1. Select the days and the start/end time for when the scale condition should be applied.
-[![Screenshot that shows the scale condition based on schedule.](./media/autoscale-get-started/scale-same-based-on-condition-07.png)](./media/autoscale-get-started/scale-same-based-on-condition-07.png#lightbox)
+1. Enter a description for the scale condition.
+
+1. Select **Scale to a specific instance count**. You can also scale based on metrics and thresholds that are specific to this scale condition.
+1. Enter *1* in the **Instance count** field.
+
+1. Select **Sunday**.
+1. Set the **Start time** and **End time** for when the scale condition should be applied. Outside of this time range, the default scale condition applies.
+1. Select **Save**.
++
+You have now defined a scale condition that reduces the number of instances of your resource to 1 every Sunday.
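
The equivalent recurring profile can be sketched with the Azure CLI (names are placeholders; the time zone is an example):

```bash
# A recurring profile that pins the resource to one instance on Sundays.
az monitor autoscale profile create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --name sunday-profile \
  --recurrence week sun \
  --start 00:00 --end 23:59 \
  --timezone "Pacific Standard Time" \
  --count 1
```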
### Scale differently on specific dates
-You can set your scale differently for specific dates.
+Set Autoscale to scale differently for specific dates, when you know that there will be an unusual level of demand for the service.
1. Select **Add a scale condition**.
-1. Setting the scale mode and the rules is the same as the default condition.
-1. Select **Specify start/end dates** for the schedule.
-1. Select the start/end dates and the start/end time for when the scale condition should be applied.
-[![Screenshot that shows the scale condition based on dates.](./media/autoscale-get-started/scale-different-based-on-time-08.png)](./media/autoscale-get-started/scale-different-based-on-time-08.png#lightbox)
+1. Select **Scale based on a metric**.
+1. Select **Add a rule** to define your scale-out and scale-in rules. Set the rules to be the same as the default condition.
+1. Set the **Maximum** instance limit to *10*.
+1. Set the **Default** instance limit to *3*.
+1. Enter the **Start date** and **End date** for when the scale condition should be applied.
+1. Select **Save**.
+
-### View the scale history of your resource
+You have now defined a scale condition for a specific day. When CPU usage is greater than 70%, an additional instance is added, up to a maximum of 10 instances, to handle anticipated load. When CPU usage is below 20%, an instance is removed, down to a minimum of 1 instance. By default, autoscale will scale to 3 instances when this scale condition becomes active.
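
A fixed-date profile can likewise be sketched with the Azure CLI; the dates, names, and time zone below are placeholders, and `--copy-rules default` copies the metric rules from the default profile:

```bash
# A fixed-date profile for an anticipated high-demand period.
az monitor autoscale profile create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --name event-day \
  --start 2023-12-24 --end 2023-12-26 \
  --timezone "Pacific Standard Time" \
  --min-count 1 --count 3 --max-count 10 \
  --copy-rules default
```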
-Whenever your resource is scaled up or down, an event is logged in the activity log. You can view the scale history of your resource for the past 24 hours by switching to the **Run history** tab.
+## Additional settings
-![Screenshot that shows a Run history screen.][12]
+### View the history of your resource's scale events
-To view the complete scale history for up to 90 days, select **Click here to see more details**. The activity log opens, with autoscale preselected for your resource and category.
+Whenever your resource is scaled up or down, an event is logged in the activity log. You can view the history of the scale events in the **Run history** tab.
-### View the scale definition of your resource
-Autoscale is an Azure Resource Manager resource. To view the scale definition in JSON, switch to the **JSON** tab.
+### View the scale settings for your resource
-[![Screenshot that shows scale definition.](./media/autoscale-get-started/view-scale-definition-09.png)](./media/autoscale-get-started/view-scale-definition-09.png#lightbox)
+Autoscale is an Azure Resource Manager resource. Like other resources, you can see the resource definition in JSON format. To view the autoscale settings in JSON, select the **JSON** tab.
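
As an abridged sketch, the JSON for a setting like the one built earlier has this general shape; the subscription ID is a placeholder, and some required fields (such as the rule's `metricResourceUri`) are trimmed for brevity:

```json
{
  "properties": {
    "enabled": true,
    "targetResourceUri": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan",
    "profiles": [
      {
        "name": "default",
        "capacity": { "minimum": "1", "maximum": "3", "default": "1" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "CpuPercentage",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 70
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          }
        ]
      }
    ]
  }
}
```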
+ You can make changes in JSON directly, if necessary. These changes will be reflected after you save them. ### Cool-down period effects
-Autoscale uses a cool-down period to prevent "flapping," which is the rapid, repetitive up-and-down scaling of instances. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). For other valuable information on flapping and understanding how to monitor the autoscale engine, see [Flapping in Autoscale](autoscale-flapping.md) and [Troubleshooting autoscale](autoscale-troubleshoot.md), respectively.
-
-## Route traffic to healthy instances (App Service)
+Autoscale uses a cool-down period, which is the amount of time to wait after a scale operation before scaling again. For example, if the cooldown is 10 minutes, autoscale won't attempt to scale again until 10 minutes after the previous scale action. The cooldown period allows the metrics to stabilize and avoids scaling more than once for the same condition. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation).
-<a id="health-check-path"></a>
+### Flapping
-When your Azure web app is scaled out to multiple instances, App Service can perform health checks on your instances to route traffic to the healthy instances. To learn more, see [Monitor App Service instances using Health check](../../app-service/monitor-instances-health-check.md).
+Flapping refers to a loop condition that causes a series of opposing scale events. Flapping happens when one scale event triggers an opposite scale event. For example, scaling in reduces the number of instances, causing CPU usage to rise in the remaining instances. This in turn triggers a scale-out event, which causes CPU usage to drop, repeating the process. For more information, see [Flapping in Autoscale](autoscale-flapping.md) and [Troubleshooting autoscale](autoscale-troubleshoot.md).
## Move autoscale to a different region
To learn more about moving resources between regions and disaster recovery in Az
- [Create an activity log alert to monitor all autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) - [Create an activity log alert to monitor all failed autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)--
-<!--Reference-->
-[1]:https://portal.azure.com
-[2]: ./media/autoscale-get-started/click-on-monitor-1.png
-[3]: ./media/autoscale-get-started/click-on-autoscale-2.png
-[4]: ./media/autoscale-get-started/view-all-resources-3.png
-[5]: ../../app-service/quickstart-dotnetcore.md
-[6]: ./media/autoscale-get-started/manual-scale-04.png
-[7]: ./media/autoscale-get-started/custom-scale-add-rule-05.png
-[8]: ./media/autoscale-get-started/scale-in-recommendation.png
-[9]: ./media/autoscale-get-started/custom-scale-results-06.png
-[10]: ./media/autoscale-get-started/scale-same-based-on-condition-07.png
-[11]: ./media/autoscale-get-started/scale-different-based-on-time-08.png
-[12]: ./media/autoscale-get-started/scale-history.png
-[13]: ./media/autoscale-get-started/view-scale-definition-09.png
-[14]: ./media/autoscale-get-started/disable-autoscale.png
-[15]: ./media/autoscale-get-started/set-manualscale.png
-[16]: ./media/autoscale-get-started/choose-all-resources.png
-[17]: ./media/autoscale-get-started/scaling-page.png
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 12/21/2022 Last updated : 04/03/2023
Azure NetApp Files backup expands the data protection capabilities of Azure NetA
Azure NetApp Files backup is supported for the following regions:
+* Australia Central
* Australia East
+* Brazil South
* Canada East
+* East Asia
* East US * East US 2 * France Central * Germany West Central * Japan East * North Europe
+* South Africa North
* South Central US * Southeast Asia * UK South
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
* Azure Managed Disk as an alternate storage back end * [Active Directory connection enhancement: Reset Active Directory computer account password](create-active-directory-connections.md#reset-active-directory) (Preview)
->>>>>>> 15252d24ac8fc6f9c2853c1a0deeb10d3393f104
## June 2022
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | automationaccounts | **Yes** | **Yes** | **Yes** (using template) <br/><br/> [Using geo-replication](../../automation/automation-managing-data.md#geo-replication-in-azure-automation) |
+> | automationaccounts | **Yes** | **Yes** | **Yes** [PowerShell script](../../automation/automation-disaster-recovery.md) |
> | automationaccounts / configurations | **Yes** | **Yes** | No | > | automationaccounts / runbooks | **Yes** | **Yes** | No |
azure-signalr Signalr Concept Messages And Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-messages-and-connections.md
Messages larger than 2 KB are counted as multiple messages of 2 KB each. The mes
For example, imagine you have one application server, and three clients:
-* When the application server broadcasts a 1-KB message to all connected clients, the message from the application server to the service is considered a free inbound message.
+* When the application server broadcasts a 1-KB message to all connected clients, the message from the application server to the service is considered a free inbound message. The three messages sent from service to each of the clients are outbound messages and are billed.
* When *client A* sends a 1 KB inbound message to *client B*, without going through app server, the message is a free inbound message. The message routed from service to *client B* is billed as an outbound message.
azure-signalr Signalr Data Plane Rest V20220601 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/swagger/signalr-data-plane-rest-v20220601.md
Remove a user from all groups.
### Models
-#### CodeLevel
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| CodeLevel | integer | | |
- #### ErrorDetail The error object.
The error object.
| details | [ [ErrorDetail](#errordetail) ] | An array of details about specific errors that led to this reported error. | No | | inner | [InnerError](#innererror) | | No |
-#### ErrorKind
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| ErrorKind | integer | | |
-
-#### ErrorScope
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| ErrorScope | integer | | |
- #### InnerError | Name | Type | Description | Required |
The error object.
| Name | Type | Description | Required | | - | - | -- | -- | | code | string | | No |
-| level | [CodeLevel](#codelevel) | | No |
-| scope | [ErrorScope](#errorscope) | | No |
-| errorKind | [ErrorKind](#errorkind) | | No |
+| level | string | _Enum:_ `"Info"`, `"Warning"`, `"Error"` | No |
+| scope | string | _Enum:_ `"Unknown"`, `"Request"`, `"Connection"`, `"User"`, `"Group"` | No |
+| errorKind | string | _Enum:_ `"Unknown"`, `"NotExisted"`, `"NotInGroup"`, `"Invalid"` | No |
| message | string | | No | | jsonObject | | | No | | isSuccess | boolean | | No |
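
As an illustration of how those string enums combine, an `InnerError` might be serialized like this; the values are examples, not captured service output:

```json
{
  "code": "Warning.Group.NotExisted",
  "level": "Warning",
  "scope": "Group",
  "errorKind": "NotExisted",
  "message": "Group my-group does not exist.",
  "isSuccess": false
}
```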
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md
NC2 on Azure supports the following regions using AN36P:
* East US 2 * Southeast Asia * Australia East
+* UK South
## Next steps
cognitive-services Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/image-retrieval.md
The `retrieval:vectorizeImage` API lets you convert an image's data to a vector.
```bash curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2023-02-01-preview&modelVersion=latest" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii " {
-'url':'https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Atomist_quote_from_Democritus.png/338px-Atomist_quote_from_Democritus.png'
+'url':'https://learn.microsoft.com/azure/cognitive-services/computer-vision/media/quickstarts/presentation.png'
}" ```
cognitive-services Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/model-customization.md
The `imageanalysis:analyze` API does ordinary Image Analysis operations. By spec
```bash curl.exe -v -X POST "https://<endpoint>/computervision/imageanalysis:analyze?model-version=<model-name>&api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
-{'url':'https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Atomist_quote_from_Democritus.png/338px-Atomist_quote_from_Democritus.png'
+{'url':'https://learn.microsoft.com/azure/cognitive-services/computer-vision/media/quickstarts/presentation.png'
}" ```
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/tag-utterances.md
See the [project development lifecycle](../overview.md#project-development-lifec
## Data labeling guidelines
-After [building your schema](build-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entities you need to extract. You will want to spend time labeling your utterances - introducing and refining the data that will be used to in training your models.
--
-<!-- Composition guidance where does this live -->
-
-<!--
- > [!NOTE]
- > An entity's learned components is only defined when you label utterances for that entity. You can also have entities that include _only_ list or prebuilt components without labelling learned components. see the [entity components](../concepts/entity-components.md) article for more information.
- -->
+After [building your schema](build-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which words and sentences will be associated with the intents and entities in your project. You will want to spend time labeling your utterances - introducing and refining the data that will be used in training your models.
As you add utterances and label them, keep in mind:
As you add utterances and label them, keep in mind:
* The precision, consistency, and completeness of your labeled data are key factors in determining model performance.
- * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels.
+ * **Label precisely**: Always label each intent and entity with its correct type. Only include what you want classified and extracted; avoid unnecessary data in your labels.
* **Label consistently**: The same entity should have the same label across all the utterances.
- * **Label completely**: Label all the instances of the entity in all your utterances.
+ * **Label completely**: Provide varied utterances for every intent. Label all the instances of the entity in all your utterances.
* For [Multilingual projects](../language-support.md#multi-lingual-option), adding utterances in other languages increases the model's performance in these languages, but avoid duplicating your data across all the languages you would like to support. For example, to improve a calendar bot's performance with users, a developer might add examples mostly in English, and a few in Spanish or French as well. They might add utterances such as:
Use the following steps to label your utterances:
> [!NOTE]
- > list and prebuilt components are not shown in the data labeling page, and all labels here only apply to the **learned component**.
+ > List and prebuilt components are not shown in the data labeling page, and all labels here only apply to the **learned component**.
To remove a label: 1. From within your utterance, select the entity you want to remove a label from. 3. Scroll through the menu that appears, and select **Remove label**.
-To delete or rename an entity:
+To delete an entity:
1. Select the entity you want to delete in the right side pane. 2. Click on the three dots next to the entity, and select the option you want from the drop-down menu.
+## Suggest utterances with Azure OpenAI
+
+In CLU, use Azure OpenAI to suggest utterances to add to your project using GPT models. You first need to get access and create a resource in Azure OpenAI. You'll then need to create a deployment for the GPT models. Follow the prerequisite steps [here](../../../openai/how-to/create-resource.md).
+
+In the Data Labeling page:
+
+1. Click on the **Suggest utterances** button. A pane will open up on the right side prompting you to select your Azure OpenAI resource and deployment.
+2. After you select an Azure OpenAI resource, click **Connect**, which allows your Language resource to have direct access to your Azure OpenAI resource. It assigns your Language resource the `Cognitive Services User` role on your Azure OpenAI resource, which allows your current Language resource to access the Azure OpenAI service.
+3. Once the resource is connected, select the deployment. The recommended model for the Azure OpenAI deployment is `text-davinci-002`.
+4. Select the intent you'd like to get suggestions for. Make sure the intent you have selected has at least 5 saved utterances to be enabled for utterance suggestions. The suggestions provided by Azure OpenAI are based on the **most recent utterances** you've added for that intent.
+5. Click on **Generate utterances**. Once complete, the suggested utterances will show up with a dotted line around them, with the note *Generated by AI*. Those suggestions need to be accepted or rejected. Accepting a suggestion simply adds it to your project, as if you had added it yourself. Rejecting it deletes the suggestion entirely. Only accepted utterances will be part of your project and used for training or testing. You can accept or reject by clicking on the green check or red cancel buttons beside each utterance. You can also use the `Accept all` and `Reject all` buttons in the toolbar.
++
+Using this feature entails a charge to your Azure OpenAI resource for approximately the number of tokens in the suggested utterances it generates. Details for Azure OpenAI's pricing can be found [here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/).
+ ## Next Steps * [Train Model](./train-model.md)
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/capabilities.md
+
+ Title: Get local user capabilities
+
+description: Use Azure Communication Services SDKs to get capabilities of the local user in a call.
+++++ Last updated : 03/24/2023++
+# Observe user's capabilities
+Do I have permission to turn on video? To turn on my mic? To share my screen? These are some examples of participant capabilities that you can learn from the capabilities API. Learning the capabilities can help you build a user interface that only shows the buttons for actions the local user has permission to perform.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md).
+- Optional: Complete the quick start to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
+
+## Supported Platform - Web
+
+## Next steps
+- [Learn how to manage video](./manage-video.md)
+- [Learn how to manage calls](./manage-calls.md)
+- [Learn how to record calls](./record-calls.md)
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
Features include:
## Configuration - The following code is an example of the `containers` array in the [`properties.template`](azure-resource-manager-api-spec.md#propertiestemplate) section of a container app resource template. The excerpt shows the available configuration options when setting up a container. ```json
The following code is an example of the `containers` array in the [`properties.t
| `command` | The container's startup command. | Equivalent to Docker's [entrypoint](https://docs.docker.com/engine/reference/builder/) field. | | `args` | Start up command arguments. | Entries in the array are joined together to create a parameter list to pass to the startup command. | | `env` | An array of key/value pairs that define environment variables. | Use `secretRef` instead of the `value` field to refer to a secret. |
-| `resources.cpu` | The number of CPUs allocated to the container. | With the Consumption plan, values must adhere to the following rules:<br><br>• greater than zero<br>• less than or equal to 2<br>• can be any decimal number (with a max of two decimal places)<br><br> For example, `1.25` is valid, but `1.555` is invalid.<br> The default is 0.25 CPU per container.<br><br>When using the Consumption workload profile in the Consumption + Dedicated plan structure, the same rules apply except CPU must be less than or equal to 4.<br><br>When using a Dedicated workload profile in the Consumption + Dedicated plan structure, the maximum CPU must be less than or equal to the number of cores available in the profile. |
-| `resources.memory` | The amount of RAM allocated to the container. | With the Consumption plan, values must adhere to the following rules:<br><br>• greater than zero<br>• less than or equal to `4Gi`<br>• can be any decimal number (with a max of two decimal places)<br><br>For example, `1.25Gi` is valid, but `1.555Gi` is invalid.<br>The default is `0.5Gi` per container.<br><br>When using the Consumption workload profile in the Consumption + Dedicated plan structure, the same rules apply except memory must be less than or equal to `8Gi`.<br><br>When using a dedicated workload profile in the Consumption + Dedicated plan structure, the maximum memory must be less than or equal to the amount of memory available in the profile. |
+| `resources.cpu` | The number of CPUs allocated to the container. | With the Consumption plan, values must adhere to the following rules:<br><br>• greater than zero<br>• less than or equal to 2<br>• can be any decimal number (with a max of two decimal places)<br><br> For example, `1.25` is valid, but `1.555` is invalid.<br> The default is 0.25 CPU per container.<br><br>When you use the Consumption workload profile in the Consumption + Dedicated plan structure, the same rules apply, except CPU must be less than or equal to 4.<br><br>When you use a Dedicated workload profile in the Consumption + Dedicated plan structure, the maximum CPU must be less than or equal to the number of cores available in the profile. |
+| `resources.memory` | The amount of RAM allocated to the container. | With the Consumption plan, values must adhere to the following rules:<br><br>• greater than zero<br>• less than or equal to `4Gi`<br>• can be any decimal number (with a max of two decimal places)<br><br>For example, `1.25Gi` is valid, but `1.555Gi` is invalid.<br>The default is `0.5Gi` per container.<br><br>When you use the Consumption workload profile in the Consumption + Dedicated plan structure, the same rules apply except memory must be less than or equal to `8Gi`.<br><br>When you use a dedicated workload profile in the Consumption + Dedicated plan structure, the maximum memory must be less than or equal to the amount of memory available in the profile. |
| `volumeMounts` | An array of volume mount definitions. | You can define a temporary volume or multiple permanent storage volumes for your container. For more information about storage volumes, see [Use storage mounts in Azure Container Apps](storage-mounts.md).| | `probes`| An array of health probes enabled in the container. | This feature is based on Kubernetes health probes. For more information about probes settings, see [Health probes in Azure Container Apps](health-probes.md).|
Alternatively, the Consumption workload profile in the Consumption + Dedicated p
| `3.75` | `7.5Gi` | | `4.0` | `8.0Gi` | -- The total of the CPU requests in all of your containers must match one of the values in the vCPUs column.
+- The total of the CPU requests in all of your containers must match one of the values in the *vCPUs* column.
- The total of the memory requests in all your containers must match the memory value in the *Memory* column in the same row as the CPU value. When you use a Dedicated workload profile in the Consumption + Dedicated plan structure, the total CPU and memory allocations requested for all the containers in a container app must be less than or equal to the cores and memory available in the profile.
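
For example, a single container that requests one of the valid Consumption-plan pairings (0.5 vCPU with 1Gi of memory) might be declared like this in the template; the container name and image are illustrative:

```json
{
  "properties": {
    "template": {
      "containers": [
        {
          "name": "main",
          "image": "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest",
          "resources": {
            "cpu": 0.5,
            "memory": "1Gi"
          }
        }
      ]
    }
  }
}
```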
When assigning a managed identity to a registry, use the managed identity resour
For more information about configuring user-assigned identities, see [Add a user-assigned identity](managed-identity.md#add-a-user-assigned-identity). - ## Limitations Azure Container Apps has the following limitations:
container-apps User Defined Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md
Last updated 03/29/2023
This article shows you how to use user defined routes (UDR) with [Azure Firewall](../firewall/overview.md) to lock down outbound traffic from your Container Apps to back-end Azure resources or other network resources.
-Azure creates a default route table for your virtual networks on create. By implementing a user-defined route table, you can control how traffic is routed within your virtual network. In this guide, you'll setup UDR on the Container Apps virtual network to restrict outbound traffic with Azure Firewall.
+Azure creates a default route table for your virtual networks when you create them. By implementing a user-defined route table, you can control how traffic is routed within your virtual network. In this guide, you set up UDR on the Container Apps virtual network to restrict outbound traffic with Azure Firewall.
-You can also use a NAT gateway or any other 3rd party appliances instead of Azure Firewall.
+You can also use a NAT gateway or any other third-party appliance instead of Azure Firewall.
For more information on networking concepts in Container Apps, see [Networking Architecture in Azure Container Apps](./networking.md). ## Prerequisites
-* An **internal** container app environment on the workload profiles architecture that's integrated with a custom virtual network. When you create an internal container app environment, your container app environment has no public IP addresses, and all traffic is routed through the virtual network. For more information, see the [guide for how to create a container app environment on the workload profiles architecture](./workload-profiles-manage-cli.md). Ensure that you're creating an **internal** environment.
+* **Internal environment**: An internal container app environment on the workload profiles architecture that's integrated with a custom virtual network. When you create an internal container app environment, your container app environment has no public IP addresses, and all traffic is routed through the virtual network. For more information, see the [guide for how to create a container app environment on the workload profiles architecture](./workload-profiles-manage-cli.md).
-* In your container app, have a container that supports `curl` commands. You can use `curl` to verify the container app is deployed correctly. The *helloworld* container from the sample container image already supports `curl` commands.
+* **`curl` support**: Your container app must have a container that supports `curl` commands. You use `curl` to verify the container app is deployed correctly. The *helloworld* container from the sample container image already supports `curl` commands.
## Create the firewall subnet A subnet called **AzureFirewallSubnet** is required in order to deploy a firewall into the integrated virtual network.
-1. In the [Azure portal](https://portal.azure.com), navigate to the virtual network that's integrated with your app.
+1. Open the virtual network that's integrated with your app in the [Azure portal](https://portal.azure.com).
1. From the menu on the left, select **Subnets**, then select **+ Subnet**.
A subnet called **AzureFirewallSubnet** is required in order to deploy a firewal
| Setting | Action | | | - |
- | **Name** | Enter **AzureFirewallSubnet**. |
+ | **Name** | Enter **AzureFirewallSubnet**. |
| **Subnet address range** | Use the default or specify a [subnet range /26 or larger](../firewall/firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).+ 1. Select **Save** ## Deploy the firewall
A subnet called **AzureFirewallSubnet** is required in order to deploy a firewal
## Route all traffic to the firewall
-Your virtual networks in Azure have default route tables in place upon create. By implementing a user-defined route table, you can control how traffic is routed within your virtual network. In the following steps, you create a UDR to route all traffic to your Azure Firewall.
+Your virtual networks in Azure have default route tables in place when you create the network. By implementing a user-defined route table, you can control how traffic is routed within your virtual network. In the following steps, you create a UDR to route all traffic to your Azure Firewall.
1. On the Azure portal menu or the *Home* page, select **Create a resource**.
Your virtual networks in Azure have default route tables in place upon create. B
1. Select **Add** to create the route.
-1. From the menu on the left, select **Subnets**, then select **Associate** to associate your route table with the subnet your Container App is integrated with.
+1. From the menu on the left, select **Subnets**, then select **Associate** to associate your route table with the container app's subnet.
1. Configure the *Associate subnet* with the following values: | Setting | Action | |--|--|
- | **Address prefix** | Select the virtual network your container app is integrated with |
- | **Next hop type** | Select the subnet your container app is integrated with |
+ | **Address prefix** | Select the virtual network for your container app. |
+ | **Next hop type** | Select the subnet for your container app. |
1. Select **OK**.
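
The same routing setup can be scripted. Here's a sketch with the Azure CLI; the resource names and the firewall's private IP (10.0.1.4) are placeholders:

```bash
# Create the route table and a default route that sends all traffic
# to the firewall's private IP address.
az network route-table create \
  --resource-group myResourceGroup \
  --name my-route-table

az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name my-route-table \
  --name default-to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with the container app's subnet.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name my-vnet \
  --name my-containerapp-subnet \
  --route-table my-route-table
```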
Now, all outbound traffic from your container app is routed to the firewall. Cur
| **Action** | Select *Allow* | >[!Note]
- > If you are using [Docker Hub registry](https://docs.docker.com/desktop/allow-list/) and want to access it through your firewall, you will need to add the following FQDNs to your rules destination list above: *hub.docker.com*, *registry-1.docker.io*, and *production.cloudflare.docker.com*.
+ > If you are using [Docker Hub registry](https://docs.docker.com/desktop/allow-list/) and want to access it through your firewall, you will need to add the following FQDNs to your rules destination list: *hub.docker.com*, *registry-1.docker.io*, and *production.cloudflare.docker.com*.
1. Select **Add**.
To verify your firewall configuration is set up correctly, you can use the `curl
1. Navigate to your Container App that is configured with Azure Firewall.
-1. From the menu on the left, select **Console**, then select your container that supports the `curl` command. If you're using the helloworld container from the sample container image quickstart, you can run the `curl` command.
+1. From the menu on the left, select **Console**, then select your container that supports the `curl` command. If you're using the *helloworld* container from the sample container image quickstart, you can run the `curl` command.
1. In the **Choose start up command** menu, select **/bin/sh**, and select **Connect**. 1. In the console, run `curl -s https://mcr.microsoft.com`. You should see a successful response as you added `mcr.microsoft.com` to the allowlist for your firewall policies.
-1. Run `curl -s https://<fqdn-address>` for a URL that doesn't match any of your destination rules such as `example.com`. The example command would be `curl -s https://example.com`. You should get no response, which indicates that your firewall has blocked the request.
+1. Run `curl -s https://<FQDN_ADDRESS>` for a URL that doesn't match any of your destination rules, such as `example.com`. The example command would be `curl -s https://example.com`. You should get no response, which indicates that your firewall has blocked the request.
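
For quick reference, the two checks from the container console side by side:

```bash
curl -s https://mcr.microsoft.com   # allowed: matches the firewall's FQDN rule
curl -s https://example.com         # blocked: expect no response
```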
## Next steps
container-apps Waf App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/waf-app-gateway.md
zone_pivot_groups: azure-cli-or-portal
-# Protect Azure Container Apps with Web Application Firewall on Application Gateway
+# Protect Azure Container Apps with Web Application Firewall on Application Gateway
When you host your apps or microservices in Azure Container Apps, you may not always want to publish them directly to the internet. Instead, you may want to expose them through a reverse proxy.
Reverse proxies allow you to place services in front of your apps that supports
- Routing - Caching - Rate limiting-- Security layers - Load balancing
+- Security layers
- Request filtering This article demonstrates how to protect your container apps using a [Web Application Firewall (WAF) on Azure Application Gateway](../web-application-firewall/ag/ag-overview.md) with an internal Container Apps environment.
For more information on networking concepts in Container Apps, see [Networking A
## Prerequisites -- Have a container app that is on an internal environment and integrated with a custom virtual network. For more information on how to create a custom virtual network integrated app, see [provide a virtual network to an internal Azure Container Apps environment](./vnet-custom-internal.md).-- If you must use TLS/SSL encryption to the application gateway, a valid public certificate that's used to bind to your application gateway is required.
+- **Internal environment with custom VNet**: Have a container app that is on an internal environment and integrated with a custom virtual network. For more information on how to create a custom virtual network integrated app, see [provide a virtual network to an internal Azure Container Apps environment](./vnet-custom-internal.md).
+
+- **Security certificates**: If you must use TLS/SSL encryption to the application gateway, a valid public certificate that's used to bind to your application gateway is required.
## Retrieve your container app's domain
-In the following steps, you retrieve the values of the **default domain** and the **static IP** which you use to set up your Private DNS Zone.
+Use the following steps to retrieve the values of the **default domain** and the **static IP** to set up your Private DNS Zone.
1. From the resource group's *Overview* window in the portal, select your container app.
+
1. On the *Overview* window for your container app resource, select the link for **Container Apps Environment**.
-1. On the *Overview* window for your container app environment resource, select **JSON View** in the upper right-hand corner of the page to view the JSON representation of the container apps environment.
+1. On the *Overview* window for your container app environment resource, select **JSON View** in the upper right-hand corner of the page to view the JSON representation of the container apps environment.
+
1. Copy the values for the **defaultDomain** and **staticIp** properties and paste them into a text editor. You'll create a private DNS zone using these values for the default domain in the next section.

## Create and configure an Azure Private DNS zone
-1. On the Azure portal menu or the **Home** page, select **Create a resource**.
+1. On the Azure portal menu or the *Home* page, select **Create a resource**.
+
1. Search for *Private DNS Zone*, and select **Private DNS Zone** from the search results.
+
1. Select the **Create** button.
+
1. Enter the following values:

    | Setting | Action |
In the following steps, you retrieve the values of the **default domain** and th
| Resource group location | Leave as the default. A value isn't needed as Private DNS Zones are global. |

1. Select **Review + create**. After validation finishes, select **Create**.
+
1. After the private DNS zone is created, select **Go to resource**.
+
1. In the *Overview* window, select **+Record set**, to add a new record set.
+
1. In the *Add record set* window, enter the following values:

    | Setting | Action |
In the following steps, you retrieve the values of the **default domain** and th
| IP address | Enter the **staticIp** property of the Container Apps Environment from the previous section. |

1. Select **OK** to create the record set.
+
1. Select **+Record set** again, to add a second record set.
+
1. In the *Add record set* window, enter the following values:

    | Setting | Action |
In the following steps, you retrieve the values of the **default domain** and th
| IP address | Enter the **staticIp** property of the Container Apps Environment from the previous section. |

1. Select **OK** to create the record set.
-1. Select the **Virtual network links** window from the menu on the left side of the page.
+
+1. Select the **Virtual network links** window from the menu on the left side of the page.
+
1. Select **+Add** to create a new link with the following values:

    | Setting | Action |
In the following steps, you retrieve the values of the **default domain** and th
| WAF Policy | Select **Create new** and enter **my-waf-policy** for the WAF Policy. Select **OK**. If you chose **Standard V2** for the tier, skip this step. |
| Virtual network | Select the virtual network that your container app is integrated with. |
| Subnet | Select **Manage subnet configuration**. If you already have a subnet you wish to use, use that instead, and skip to [the Frontends section](#frontends-tab). |
-
+
1. From within the *Subnets* window of *my-custom-vnet*, select **+Subnet** and enter the following values:

    | Setting | Action |
In the following steps, you retrieve the values of the **default domain** and th
| Subnet address range | Keep the default values. |

1. For the remainder of the settings, keep the default values.
+
1. Select **Save** to create the new subnet.

1. Close the *Subnets* window to return to the *Create application gateway* window.
+
1. Select the following values:

    | Setting | Action |
    | -- | -- |
    | Subnet | Select the **appgateway-subnet** you created. |
-
+
1. Select **Next: Frontends**, to proceed.

### Frontends tab
In the following steps, you retrieve the values of the **default domain** and th
The backend pool is used to route requests to the appropriate backend servers. Backend pools can be composed of any combination of the following resources:

- NICs
-- Virtual Machine Scale Sets
- Public IP addresses
- Internal IP addresses
+- Virtual Machine Scale Sets
- Fully qualified domain names (FQDN)
- Multi-tenant back-ends like Azure App Service and Container Apps

In this example, you create a backend pool that targets your container app.

1. Select **Add a backend pool**.
-1. Open a new tab and navigate to your container app.
+
+1. Open a new tab and navigate to your container app.
+
1. In the *Overview* window of the Container App, find the **Application Url** and copy it.
+
1. Return to the *Backends* tab, and enter the following values in the **Add a backend pool** window:

    | Setting | Action |
In this example, you create a backend pool that targets your container app.
| Target | Enter the **Container App Application Url** you copied and remove the *https://* prefix. This location is the FQDN of your container app. |

1. Select **Add**.
+
1. On the *Backends* tab, select **Next: Configuration**.

### Configuration tab
On the *Configuration* tab, you connect the frontend and backend pool you create
1. In the *Add a routing rule* window, select **Add** again.

1. Select **Next: Tags**.
+
1. Select **Next: Review + create**, and then select **Create**.

## Add private link to your Application Gateway
On the *Configuration* tab, you connect the frontend and backend pool you create
This step is required for internal-only container app environments as it allows your Application Gateway to communicate with your Container App on the backend through the virtual network.

1. Once the Application Gateway is created, select **Go to resource**.
+
1. From the menu on the left, select **Private link**, then select **Add**.
+
1. Enter the following values:

    | Setting | Action |
This step is required for internal only container app environments as it allows
| Frontend IP Configuration | Select the frontend IP for your Application Gateway. |

1. Under **Private IP address settings**, select **Add**.
+
1. Select **Add** at the bottom of the window.

## Verify the container app
This step is required for internal only container app environments as it allows
# [Default domain](#tab/default-domain)

1. Find the public IP address for the application gateway on its *Overview* page, or you can search for the address. To search, select *All resources* and enter **my-container-apps-agw-pip** in the search box. Then, select the IP in the search results.
+
1. Navigate to the public IP address of the application gateway.
+
1. Your request is automatically routed to the container app, which verifies the application gateway was successfully created.

# [Custom domain](#tab/custom-domain)
When you no longer need the resources that you created, delete the resource grou
To delete the resource group:

1. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups*.
+
1. On the *Resource groups* page, search for and select **my-container-apps**.
+
1. On the *Resource group page*, select **Delete resource group**.
+
1. Enter **my-container-apps** under *TYPE THE RESOURCE GROUP NAME* and then select **Delete**.

## Next steps
container-apps Workload Profiles Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-overview.md
You can constrain the memory and CPU usage of each app inside a workload profile
When demand for new apps or more replicas of an existing app exceeds the profile's current resources, profile instances may be added. Conversely, if the number of apps or replicas goes down, profile instances may be removed. You have control over the constraints on the minimum and maximum number of profile instances. Azure calculates [billing](billing.md#consumption-dedicated) largely based on the number of running profile instances.
+## Networking
+
+When you use workload profiles in the Consumption + Dedicated plan structure, additional networking features, such as user defined routes, are available to fully secure your ingress and egress traffic. To learn more about what networking features are supported, see [networking concepts](./networking.md), and for steps on how to secure your network with Container Apps, see the [lock down your Container App environment section](./networking.md#lock-down-your-container-app-environment).
+
## Next steps

> [!div class="nextstepaction"]
cosmos-db Manage Data Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-python.md
ms.devlang: python Previously updated : 08/13/2020 Last updated : 04/03/2023 # Quickstart: Build a Cassandra app with Python SDK and Azure Cosmos DB
In this quickstart, you create an Azure Cosmos DB for Apache Cassandra account,
## Prerequisites

- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](../try-free.md) without an Azure subscription.
-- [Python 2.7 or 3.6+](https://www.python.org/downloads/).
+- [Python 3.7+](https://www.python.org/downloads/).
- [Git](https://git-scm.com/downloads).
- [Python Driver for Apache Cassandra](https://github.com/datastax/python-driver).
Before you can create a document database, you need to create a Cassandra accoun
## Clone the sample application
-Now let's clone a API for Cassandra app from GitHub, set the connection string, and run it. You see how easy it is to work with data programmatically.
+Now let's clone an API for Cassandra app from GitHub, set the connection string, and run it. You see how easy it is to work with data programmatically.
1. Open a command prompt. Create a new folder named `git-samples`. Then, close the command prompt.
Now go back to the Azure portal to get your connection string information and co
Line 10 should now look similar to
- `'contactPoint': 'cosmos-db-quickstarts.cassandra.cosmosdb.azure.com:10350'`
+ `'contactPoint': 'cosmos-db-quickstarts.cassandra.cosmosdb.azure.com'`
+
+1. Paste the PORT value from the portal over `<FILLME>` on line 12.
+
+ Line 12 should now look similar to
+
+ `'port': 10350,`
1. Copy the USERNAME value from the portal and paste it over `<FILLME>` on line 6.
Now go back to the Azure portal to get your connection string information and co
`'password': '2Ggkr662ifxz2Mg=='`

1. Save the *config.py* file.
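For reference, here's a hedged sketch of how the completed *config.py* might look once every `<FILLME>` placeholder is replaced. The key names match the snippets above, but the exact variable layout in the sample repository may differ, and every value shown is a placeholder for your own account's settings.

```python
# config.py -- illustrative sketch only; substitute your own portal values.
settings = {
    'username': 'cosmos-db-quickstarts',                                   # line 6: USERNAME
    'password': '2Ggkr662ifxz2Mg==',                                       # PASSWORD
    'contactPoint': 'cosmos-db-quickstarts.cassandra.cosmosdb.azure.com',  # line 10: CONTACT POINT
    'port': 10350,                                                         # line 12: PORT
}
```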
-
-## Use the X509 certificate
-
-1. Copy the Baltimore CyberTrust Root certificate details from [https://www.digicert.com/kb/digicert-root-certificates.htm](https://www.digicert.com/kb/digicert-root-certificates.htm) into a text file. Save the file using the file extension *.cer*.
-
- The certificate has serial number `02:00:00:b9` and SHA1 fingerprint `d4:de:20:d0:5e:66:fc:53:fe:1a:50:88:2c:78:db:28:52:ca:e4:74`.
-
-2. Open *pyquickstart.py* and change the `path\to\cert` to point to your new certificate.
-
-3. Save *pyquickstart.py*.
## Run the Python app
Now go back to the Azure portal to get your connection string information and co
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB account with API for Cassandra, and run a Cassandra Python app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
+In this quickstart, you learned how to create an Azure Cosmos DB account with API for Cassandra, and run a Cassandra Python app that creates a Cassandra database and container. You can now import other data into your Azure Cosmos DB account.
> [!div class="nextstepaction"] > [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Docker Emulator Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/docker-emulator-linux.md
Use the following steps to run the emulator on Linux:
For Java-based applications, the certificate must be imported to the [Java trusted store](local-emulator-export-ssl-certificates.md).

```bash
- keytool -keystore ~/cacerts -importcert -alias emulator_cert -file ~/emulatorcert.crt
+ keytool -import -alias emulator_cert -keystore ~/cacerts -file ~/emulatorcert.crt -storepass changeit -noprompt
java -ea -Djavax.net.ssl.trustStore=~/cacerts -Djavax.net.ssl.trustStorePassword="changeit" $APPLICATION_ARGUMENTS
```
cosmos-db Emulator Command Line Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/emulator-command-line-parameters.md
Previously updated : 03/16/2023 Last updated : 04/03/2023
To view the list of parameters, type `Microsoft.Azure.Cosmos.Emulator.exe /?` at
The emulator comes with a PowerShell module to start, stop, uninstall, and retrieve the status of the service. Run the following cmdlet to use the PowerShell module:

```powershell
-Import-Module "$env:ProgramFiles\emulator\PSModules\Microsoft.Azure.CosmosDB.Emulator"
+Import-Module "$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules\Microsoft.Azure.CosmosDB.Emulator"
```

or place the `PSModules` directory on your `PSModulePath` and import it as shown in the following command:

```powershell
-$env:PSModulePath += ";$env:ProgramFiles\emulator\PSModules"
+$env:PSModulePath += ";$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules"
Import-Module Microsoft.Azure.CosmosDB.Emulator
```
cosmos-db Index Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-overview.md
Title: Indexing in Azure Cosmos DB
-description: Understand how indexing works in Azure Cosmos DB, different types of indexes such as Range, Spatial, composite indexes supported.
+ Title: Overview of indexing
+
+description: Understand how indexing works in Azure Cosmos DB. Also explore how different types of indexes such as range, spatial, and composite are supported.
++ - Previously updated : 08/26/2021-- Last updated : 04/03/2023+
-# Indexing in Azure Cosmos DB - Overview
+# Overview of indexing in Azure Cosmos DB
+ Azure Cosmos DB is a schema-agnostic database that allows you to iterate on your application without having to deal with schema or index management. By default, Azure Cosmos DB automatically indexes every property for all items in your [container](resource-model.md#azure-cosmos-db-containers) without having to define any schema or configure secondary indexes.
-The goal of this article is to explain how Azure Cosmos DB indexes data and how it uses indexes to improve query performance. It is recommended to go through this section before exploring how to customize [indexing policies](index-policy.md).
+The goal of this article is to explain how Azure Cosmos DB indexes data and how it uses indexes to improve query performance. It's recommended to go through this section before exploring how to customize [indexing policies](index-policy.md).
## From items to trees
-Every time an item is stored in a container, its content is projected as a JSON document, then converted into a tree representation. This means that every property of that item gets represented as a node in a tree. A pseudo root node is created as a parent to all the first-level properties of the item. The leaf nodes contain the actual scalar values carried by an item.
+Every time an item is stored in a container, its content is projected as a JSON document, then converted into a tree representation. This conversion means that every property of that item gets represented as a node in a tree. A pseudo root node is created as a parent to all the first-level properties of the item. The leaf nodes contain the actual scalar values carried by an item.
As an example, consider this item:

```json
- {
- "locations": [
- { "country": "Germany", "city": "Berlin" },
- { "country": "France", "city": "Paris" }
- ],
- "headquarters": { "country": "Belgium", "employees": 250 },
- "exports": [
- { "city": "Moscow" },
- { "city": "Athens" }
- ]
- }
+{
+ "locations": [
+ { "country": "Germany", "city": "Berlin" },
+ { "country": "France", "city": "Paris" }
+ ],
+ "headquarters": { "country": "Belgium", "employees": 250 },
+ "exports": [
+ { "city": "Moscow" },
+ { "city": "Athens" }
+ ]
+}
```
-It would be represented by the following tree:
+This tree represents the example JSON item:
Note how arrays are encoded in the tree: every entry in an array gets an intermediate node labeled with the index of that entry within the array (0, 1, etc.).

## From trees to property paths
-The reason why Azure Cosmos DB transforms items into trees is because it allows properties to be referenced by their paths within those trees. To get the path for a property, we can traverse the tree from the root node to that property, and concatenate the labels of each traversed node.
+The reason why Azure Cosmos DB transforms items into trees is because it allows the system to reference properties using their paths within those trees. To get the path for a property, we can traverse the tree from the root node to that property, and concatenate the labels of each traversed node.
-Here are the paths for each property from the example item described above:
+Here are the paths for each property from the example item described previously:
-- /locations/0/country: "Germany"-- /locations/0/city: "Berlin"-- /locations/1/country: "France"-- /locations/1/city: "Paris"-- /headquarters/country: "Belgium"-- /headquarters/employees: 250-- /exports/0/city: "Moscow"-- /exports/1/city: "Athens"
+- `/locations/0/country`: "Germany"
+- `/locations/0/city`: "Berlin"
+- `/locations/1/country`: "France"
+- `/locations/1/city`: "Paris"
+- `/headquarters/country`: "Belgium"
+- `/headquarters/employees`: 250
+- `/exports/0/city`: "Moscow"
+- `/exports/1/city`: "Athens"
-When an item is written, Azure Cosmos DB effectively indexes each property's path and its corresponding value.
+Azure Cosmos DB effectively indexes each property's path and its corresponding value when an item is written.
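To make the mapping concrete, here's a minimal, illustrative sketch (not the engine's actual implementation) that flattens the example item into the path/value pairs listed above:

```python
# Illustrative only: flatten an item into (path, scalar value) pairs.
def paths(node, prefix=""):
    if isinstance(node, dict):
        for key, value in node.items():
            yield from paths(value, f"{prefix}/{key}")
    elif isinstance(node, list):
        for index, value in enumerate(node):
            yield from paths(value, f"{prefix}/{index}")
    else:
        yield prefix, node  # leaf node carrying a scalar value

item = {
    "locations": [
        {"country": "Germany", "city": "Berlin"},
        {"country": "France", "city": "Paris"},
    ],
    "headquarters": {"country": "Belgium", "employees": 250},
    "exports": [{"city": "Moscow"}, {"city": "Athens"}],
}

for path, value in paths(item):
    print(path, value)  # e.g. /locations/0/country Germany
```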
-## <a id="index-types"></a>Types of indexes
+## Types of indexes
Azure Cosmos DB currently supports three types of indexes. You can configure these index types when defining the indexing policy.
Azure Cosmos DB currently supports three types of indexes. You can configure the
- Equality queries: ```sql
- SELECT * FROM container c WHERE c.property = 'value'
+ SELECT * FROM container c WHERE c.property = 'value'
```
+ ```sql
+ SELECT * FROM c WHERE c.property IN ("value1", "value2", "value3")
+ ```
- ```sql
- SELECT * FROM c WHERE c.property IN ("value1", "value2", "value3")
- ```
+- Equality match on an array element
- Equality match on an array element
- ```sql
+ ```sql
SELECT * FROM c WHERE ARRAY_CONTAINS(c.tags, "tag1")
- ```
+ ```
- Range queries:
- ```sql
- SELECT * FROM container c WHERE c.property > 'value'
- ```
- (works for `>`, `<`, `>=`, `<=`, `!=`)
+ ```sql
+ SELECT * FROM container c WHERE c.property > 'value'
+ ```
+
+ > [!NOTE]
+ > This query pattern applies to the comparison operators `>`, `<`, `>=`, `<=`, and `!=`.
- Checking for the presence of a property:
- ```sql
- SELECT * FROM c WHERE IS_DEFINED(c.property)
- ```
+ ```sql
+ SELECT * FROM c WHERE IS_DEFINED(c.property)
+ ```
- String system functions:
- ```sql
- SELECT * FROM c WHERE CONTAINS(c.property, "value")
- ```
+ ```sql
+ SELECT * FROM c WHERE CONTAINS(c.property, "value")
+ ```
- ```sql
- SELECT * FROM c WHERE STRINGEQUALS(c.property, "value")
- ```
+ ```sql
+ SELECT * FROM c WHERE STRINGEQUALS(c.property, "value")
+ ```
- `ORDER BY` queries:
- ```sql
- SELECT * FROM container c ORDER BY c.property
- ```
+ ```sql
+ SELECT * FROM container c ORDER BY c.property
+ ```
- `JOIN` queries:
- ```sql
- SELECT child FROM container c JOIN child IN c.properties WHERE child = 'value'
- ```
+ ```sql
+ SELECT child FROM container c JOIN child IN c.properties WHERE child = 'value'
+ ```
Range indexes can be used on scalar values (string or number). The default indexing policy for newly created containers enforces range indexes for any string or number. To learn how to configure range indexes, see [Range indexing policy examples](how-to-manage-indexing-policy.md#range-index)
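If you want to confirm what the policy looks like on one of your own containers, a quick way, sketched here with the Python SDK and assuming `container` is an existing `ContainerProxy`, is to read the container's properties:

```python
# Read the container's properties and print its indexing policy.
# For a newly created container you should see consistent indexing with
# "/*" included, which gives every string and number a range index.
properties = container.read()
print(properties["indexingPolicy"])
```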
Range indexes can be used on scalar values (string or number). The default index
- Geospatial distance queries:
- ```sql
- SELECT * FROM container c WHERE ST_DISTANCE(c.property, { "type": "Point", "coordinates": [0.0, 10.0] }) < 40
- ```
+ ```sql
+ SELECT * FROM container c WHERE ST_DISTANCE(c.property, { "type": "Point", "coordinates": [0.0, 10.0] }) < 40
+ ```
- Geospatial within queries:
- ```sql
- SELECT * FROM container c WHERE ST_WITHIN(c.property, {"type": "Point", "coordinates": [0.0, 10.0] })
- ```
+ ```sql
+ SELECT * FROM container c WHERE ST_WITHIN(c.property, {"type": "Point", "coordinates": [0.0, 10.0] })
+ ```
- Geospatial intersect queries:
- ```sql
- SELECT * FROM c WHERE ST_INTERSECTS(c.property, { 'type':'Polygon', 'coordinates': [[ [31.8, -5], [32, -5], [31.8, -5] ]] })
- ```
+ ```sql
+ SELECT * FROM c WHERE ST_INTERSECTS(c.property, { 'type':'Polygon', 'coordinates': [[ [31.8, -5], [32, -5], [31.8, -5] ]] })
+ ```
Spatial indexes can be used on correctly formatted [GeoJSON](./sql-query-geospatial-intro.md) objects. Points, LineStrings, Polygons, and MultiPolygons are currently supported. To learn how to configure spatial indexes, see [Spatial indexing policy examples](how-to-manage-indexing-policy.md#spatial-index)

### Composite indexes
-**Composite** indexes increase the efficiency when you are performing operations on multiple fields. The composite index type is used for:
+**Composite** indexes increase the efficiency when you're performing operations on multiple fields. The composite index type is used for:
- `ORDER BY` queries on multiple properties:
-```sql
- SELECT * FROM container c ORDER BY c.property1, c.property2
-```
+ ```sql
+ SELECT * FROM container c ORDER BY c.property1, c.property2
+ ```
- Queries with a filter and `ORDER BY`. These queries can utilize a composite index if the filter property is added to the `ORDER BY` clause.
-```sql
- SELECT * FROM container c WHERE c.property1 = 'value' ORDER BY c.property1, c.property2
-```
+ ```sql
+ SELECT * FROM container c WHERE c.property1 = 'value' ORDER BY c.property1, c.property2
+ ```
-- Queries with a filter on two or more properties where at least one property is an equality filter
+- Queries with a filter on two or more properties where at least one property is an equality filter
-```sql
- SELECT * FROM container c WHERE c.property1 = 'value' AND c.property2 > 'value'
-```
+ ```sql
+ SELECT * FROM container c WHERE c.property1 = 'value' AND c.property2 > 'value'
+ ```
-As long as one filter predicate uses one of the index type, the query engine will evaluate that first before scanning the rest. For example, if you have a SQL query such as `SELECT * FROM c WHERE c.firstName = "Andrew" and CONTAINS(c.lastName, "Liu")`
+As long as one filter predicate uses one of the index types, the query engine evaluates that first before scanning the rest. For example, if you have a SQL query such as `SELECT * FROM c WHERE c.firstName = "Andrew" and CONTAINS(c.lastName, "Liu")`
-* The above query will first filter for entries where firstName = "Andrew" by using the index. It then pass all of the firstName = "Andrew" entries through a subsequent pipeline to evaluate the CONTAINS filter predicate.
+- The above query first filters for entries where firstName = "Andrew" by using the index. It then passes all of the firstName = "Andrew" entries through a subsequent pipeline to evaluate the CONTAINS filter predicate.
-* You can speed up queries and avoid full container scans when using functions that don't use the index (e.g. CONTAINS) by adding additional filter predicates that do use the index. The order of filter clauses isn't important. The query engine will figure out which predicates are more selective and run the query accordingly.
+- You can speed up queries and avoid full container scans when using functions that perform a full scan like CONTAINS. You can add more filter predicates that use the index to speed up these queries. The order of filter clauses isn't important. The query engine figures out which predicates are more selective and runs the query accordingly.
To learn how to configure composite indexes, see [Composite indexing policy examples](how-to-manage-indexing-policy.md#composite-index)
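As a hedged illustration, a composite index supporting the `ORDER BY c.property1, c.property2` pattern above could be declared when the container is created. This sketch uses the Python SDK; the endpoint, key, database, container, and property names are placeholders, not values from this article:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(endpoint, credential=key)  # endpoint/key assumed defined
database = client.get_database_client("mydatabase")

indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    # One composite index per ORDER BY combination the container must serve.
    "compositeIndexes": [
        [
            {"path": "/property1", "order": "ascending"},
            {"path": "/property2", "order": "ascending"},
        ]
    ],
}

container = database.create_container_if_not_exists(
    id="mycontainer",
    partition_key=PartitionKey(path="/id"),
    indexing_policy=indexing_policy,
)
```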
There are five ways that the query engine can evaluate query filters, sorted by
- Full index scan
- Full scan
-When you index property paths, the query engine will automatically use the index as efficiently as possible. Aside from indexing new property paths, you don't need to configure anything to optimize how queries use the index. A query's RU charge is a combination of both the RU charge from index usage and the RU charge from loading items.
+When you index property paths, the query engine automatically uses the index as efficiently as possible. Aside from indexing new property paths, you don't need to configure anything to optimize how queries use the index. A query's RU charge is a combination of both the RU charge from index usage and the RU charge from loading items.
-Here is a table that summarizes the different ways indexes are used in Azure Cosmos DB:
+Here's a table that summarizes the different ways indexes are used in Azure Cosmos DB:
-| Index lookup type | Description | Common Examples | RU charge from index usage | RU charge from loading items from transactional data store |
+| Index lookup type | Description | Common Examples | RU charge from index usage | RU charges from loading items from transactional data store |
| --- | --- | --- | --- | --- |
| Index seek | Read only required indexed values and load only matching items from the transactional data store | Equality filters, IN | Constant per equality filter | Increases based on number of items in query results |
| Precise index scan | Binary search of indexed values and load only matching items from the transactional data store | Range comparisons (>, <, <=, or >=), StartsWith | Comparable to index seek, increases slightly based on the cardinality of indexed properties | Increases based on number of items in query results |
Here is a table that summarizes the different ways indexes are used in Azure Cos
| Full index scan | Read distinct set of indexed values and load only matching items from the transactional data store | Contains, EndsWith, RegexMatch, LIKE | Increases linearly based on the cardinality of indexed properties | Increases based on number of items in query results |
| Full scan | Load all items from the transactional data store | Upper, Lower | N/A | Increases based on number of items in container |
-When writing queries, you should use filter predicate that uses the index as efficiently as possible. For example, if either `StartsWith` or `Contains` would work for your use case, you should opt for `StartsWith` since it will do a precise index scan instead of a full index scan.
+When writing queries, you should use filter predicates that use the index as efficiently as possible. For example, if either `StartsWith` or `Contains` would work for your use case, you should opt for `StartsWith` since it does a precise index scan instead of a full index scan.
## Index usage details
-In this section, we'll cover more details about how queries use indexes. This isn't necessary to learn to get started with Azure Cosmos DB but is documented in detail for curious users. We'll reference the example item shared earlier in this document:
+In this section, we cover more details about how queries use indexes. This level of detail isn't necessary to learn to get started with Azure Cosmos DB but is documented in detail for curious users. We reference the example item shared earlier in this document:
Example items:

```json
- {
- "id": 1,
- "locations": [
- { "country": "Germany", "city": "Berlin" },
- { "country": "France", "city": "Paris" }
- ],
- "headquarters": { "country": "Belgium", "employees": 250 },
- "exports": [
- { "city": "Moscow" },
- { "city": "Athens" }
- ]
- }
+{
+ "id": 1,
+ "locations": [
+ { "country": "Germany", "city": "Berlin" },
+ { "country": "France", "city": "Paris" }
+ ],
+ "headquarters": { "country": "Belgium", "employees": 250 },
+ "exports": [
+ { "city": "Moscow" },
+ { "city": "Athens" }
+ ]
+}
```

```json
- {
- "id": 2,
- "locations": [
- { "country": "Ireland", "city": "Dublin" }
- ],
- "headquarters": { "country": "Belgium", "employees": 200 },
- "exports": [
- { "city": "Moscow" },
- { "city": "Athens" },
- { "city": "London" }
- ]
- }
+{
+ "id": 2,
+ "locations": [
+ { "country": "Ireland", "city": "Dublin" }
+ ],
+ "headquarters": { "country": "Belgium", "employees": 200 },
+ "exports": [
+ { "city": "Moscow" },
+ { "city": "Athens" },
+ { "city": "London" }
+ ]
+}
```
-Azure Cosmos DB uses an inverted index. The index works by mapping each JSON path to the set of items that contain that value. The item ID mapping is represented across many different index pages for the container. Here is a sample diagram of an inverted index for a container that includes the two example items:
+Azure Cosmos DB uses an inverted index. The index works by mapping each JSON path to the set of items that contain that value. The item ID mapping is represented across many different index pages for the container. Here's a sample diagram of an inverted index for a container that includes the two example items:
| Path | Value | List of item IDs |
| -- | - | - |
Azure Cosmos DB uses an inverted index. The index works by mapping each JSON pat
| /locations/0/city | Dublin | 2 |
| /locations/1/country | France | 1 |
| /locations/1/city | Paris | 1 |
-| /headquarters/country | Belgium | 1,2 |
+| /headquarters/country | Belgium | 1, 2 |
| /headquarters/employees | 200 | 2 |
| /headquarters/employees | 250 | 1 |

The inverted index has two important attributes:
+
- For a given path, values are sorted in ascending order. Therefore, the query engine can easily serve `ORDER BY` from the index.
- For a given path, the query engine can scan through the distinct set of possible values to identify the index pages where there are results.
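To build intuition, here's a toy sketch of an inverted index over the two example items. It's illustrative only and not how the engine is actually implemented:

```python
from collections import defaultdict

# Each (path, value) pair maps to the set of IDs of items containing it,
# mirroring the table above.
index = defaultdict(set)
entries = [
    ("/locations/0/country", "Germany", 1),
    ("/locations/0/country", "Ireland", 2),
    ("/headquarters/country", "Belgium", 1),
    ("/headquarters/country", "Belgium", 2),
    ("/headquarters/employees", 200, 2),
    ("/headquarters/employees", 250, 1),
]
for path, value, item_id in entries:
    index[(path, value)].add(item_id)

# An equality filter becomes a direct lookup (an index seek):
print(sorted(index[("/headquarters/country", "Belgium")]))  # [1, 2]

# ORDER BY can be served because values under a path are kept sorted:
print(sorted(v for (p, v) in index if p == "/headquarters/employees"))  # [200, 250]
```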
The query engine can utilize the inverted index in four different ways:
### Index seek
-Consider the following query:
+Consider the following query:
```sql
SELECT location
FROM location IN company.locations
WHERE location.country = 'France'
```
-The query predicate (filtering on items where any location has "France" as its country/region) would match the path highlighted in red below:
+The query predicate (filtering on items where any location has "France" as its country/region) would match the path highlighted in this diagram:
-Since this query has an equality filter, after traversing this tree, we can quickly identify the index pages that contain the query results. In this case, the query engine would read index pages that contain Item 1. An index seek is the most efficient way to use the index. With an index seek, we only read the necessary index pages and load only the items in the query results. Therefore, the index lookup time and RU charge from index lookup are incredibly low, regardless of the total data volume.
+Since this query has an equality filter, after traversing this tree, we can quickly identify the index pages that contain the query results. In this case, the query engine would read index pages that contain Item 1. An index seek is the most efficient way to use the index. With an index seek, we only read the necessary index pages and load only the items in the query results. Therefore, the index lookup time and RU charge from index lookup are incredibly low, regardless of the total data volume.
### Precise index scan
-Consider the following query:
+Consider the following query:
```sql
SELECT *
Because the query engine can do a binary search to avoid scanning unnecessary in
### Expanded index scan
-Consider the following query:
+Consider the following query:
```sql
SELECT *
FROM company
WHERE STARTSWITH(company.headquarters.country, "United", true)
```
-The query predicate (filtering on items that have headquarters in a country that start with case-insensitive "United") can be evaluated with an expanded index scan of the `headquarters/country` path. Operations that do an expanded index scan have optimizations that can help avoid needs to scan every index page but are slightly more expensive than a precise index scan's binary search.
+The query predicate (filtering on items that have headquarters in a location that starts with the case-insensitive string "United") can be evaluated with an expanded index scan of the `headquarters/country` path. Operations that do an expanded index scan have optimizations that can help avoid scanning every index page but are slightly more expensive than a precise index scan's binary search.
-For example, when evaluating case-insensitive `StartsWith`, the query engine will check the index for different possible combinations of uppercase and lowercase values. This optimization allows the query engine to avoid reading the majority of index pages. Different system functions have different optimizations that they can use to avoid reading every index page, so we'll broadly categorize these as expanded index scan.
+For example, when evaluating case-insensitive `StartsWith`, the query engine checks the index for different possible combinations of uppercase and lowercase values. This optimization allows the query engine to avoid reading most index pages. Different system functions have different optimizations that they can use to avoid reading every index page, so they're broadly categorized as expanded index scan.
### Full index scan
-Consider the following query:
+Consider the following query:
```sql
SELECT *
FROM company
WHERE CONTAINS(company.headquarters.country, "United")
```
-The query predicate (filtering on items that have headquarters in a country that contains "United") can be evaluated with an index scan of the `headquarters/country` path. Unlike a precise index scan, a full index scan will always scan through the distinct set of possible values to identify the index pages where there are results. In this case, `Contains` is run on the index. The index lookup time and RU charge for index scans increases as the cardinality of the path increases. In other words, the more possible distinct values that the query engine needs to scan, the higher the latency and RU charge involved in doing a full index scan.
+The query predicate (filtering on items that have headquarters in a location that contains "United") can be evaluated with an index scan of the `headquarters/country` path. Unlike a precise index scan, a full index scan always scans through the distinct set of possible values to identify the index pages where there are results. In this case, `Contains` is run on the index. The index lookup time and RU charge for index scans increases as the cardinality of the path increases. In other words, the more possible distinct values that the query engine needs to scan, the higher the latency and RU charge involved in doing a full index scan.
-For example, consider two properties: town and country. The cardinality of town is 5,000 and the cardinality of country is 200. Here are two example queries that each have a [Contains](sql-query-contains.md) system function that does a full index scan on the `town` property. The first query will use more RUs than the second query because the cardinality of town is higher than country.
+For example, consider two properties: `town` and `country`. The cardinality of `town` is 5,000 and the cardinality of `country` is 200. Here are two example queries that each have a [Contains](sql-query-contains.md) system function that does a full index scan, the first on the `town` property and the second on the `country` property. The first query uses more RUs than the second query because the cardinality of `town` is higher than the cardinality of `country`.
```sql
- SELECT *
- FROM c
- WHERE CONTAINS(c.town, "Red", false)
+SELECT *
+FROM c
+WHERE CONTAINS(c.town, "Red", false)
```

```sql
- SELECT *
- FROM c
- WHERE CONTAINS(c.country, "States", false)
+SELECT *
+FROM c
+WHERE CONTAINS(c.country, "States", false)
```

### Full scan
-In some cases, the query engine may not be able to evaluate a query filter using the index. In this case, the query engine will need to load all items from the transactional store in order to evaluate the query filter. Full scans do not use the index and have an RU charge that increases linearly with the total data size. Luckily, operations that require full scans are rare.
+In some cases, the query engine may not be able to evaluate a query filter using the index. In this case, the query engine needs to load all items from the transactional store in order to evaluate the query filter. Full scans don't use the index and have an RU charge that increases linearly with the total data size. Luckily, operations that require full scans are rare.
### Queries with complex filter expressions
To execute this query, the query engine must do an index seek on `headquarters/e
## Index utilization for scalar aggregate functions
-Queries with aggregate functions must rely exclusively on the index in order to use it.
+Queries with aggregate functions must rely exclusively on the index in order to use it.
-In some cases, the index can return false positives. For example, when evaluating `Contains` on the index, the number of matches in the index may exceed the number of query results. The query engine will load all index matches, evaluate the filter on the loaded items, and return only the correct results.
+In some cases, the index can return false positives. For example, when evaluating `Contains` on the index, the number of matches in the index may exceed the number of query results. The query engine loads all index matches, evaluates the filter on the loaded items, and returns only the correct results.
-For most queries, loading false positive index matches will not have any noticeable impact on index utilization.
+For most queries, loading false positive index matches doesn't have any noticeable effect on index utilization.
For example, consider the following query:
FROM company
WHERE CONTAINS(company.headquarters.country, "United")
```
-The `Contains` system function may return some false positive matches, so the query engine will need to verify whether each loaded item matches the filter expression. In this example, the query engine may only need to load an extra few items, so the impact on index utilization and RU charge is minimal.
+The `Contains` system function may return some false positive matches, so the query engine needs to verify whether each loaded item matches the filter expression. In this example, the query engine may only need to load a few extra items, so the effect on index utilization and RU charge is minimal.
However, queries with aggregate functions must rely exclusively on the index in order to use it. For example, consider the following query with a `Count` aggregate:
FROM company
WHERE CONTAINS(company.headquarters.country, "United")
```
-Like in the first example, the `Contains` system function may return some false positive matches. Unlike the `SELECT *` query, however, the `Count` query can't evaluate the filter expression on the loaded items to verify all index matches. The `Count` query must rely exclusively on the index, so if there's a chance a filter expression will return false positive matches, the query engine will resort to a full scan.
+Like in the first example, the `Contains` system function may return some false positive matches. Unlike the `SELECT *` query, however, the `Count` query can't evaluate the filter expression on the loaded items to verify all index matches. The `Count` query must rely exclusively on the index, so if there's a chance a filter expression returns false positive matches, the query engine resorts to a full scan.
Queries with the following aggregate functions must rely exclusively on the index, so evaluating some system functions requires a full scan.
Read more about indexing in the following articles:
- [Indexing policy](index-policy.md)
- [How to manage indexing policy](how-to-manage-indexing-policy.md)
-- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache.md
Item cache is used for point reads (key/value look ups based on the Item ID and
- New writes, updates, and deletes are automatically populated in the item cache of the node that the request is routed through
- Items from point read requests where the item isn't already in the cache (cache miss) of the node the request is routed through are added to the item cache
-- Requests that are part of a [transactional batch](./nosql/transactional-batch.md) or written in [bulk mode](./nosql/how-to-migrate-from-bulk-executor-library.md#enable-bulk-support) don't populate the item cache
+- Requests that are part of a [transactional batch](./nosql/transactional-batch.md) or in [bulk mode](./nosql/how-to-migrate-from-bulk-executor-library.md#enable-bulk-support) don't populate the item cache
### Item cache invalidation and eviction
The easiest way to configure either session or eventual consistency for all read
### Session consistency
-[Session consistency](consistency-levels.md#session-consistency) is the most widely used consistency level for both single region and globally distributed Azure Cosmos DB accounts. With session consistency, single client sessions can read their own writes. Clients outside of the session performing writes will see eventual consistency when they are using the integrated cache.
+[Session consistency](consistency-levels.md#session-consistency) is the most widely used consistency level for both single region and globally distributed Azure Cosmos DB accounts. With session consistency, single client sessions can read their own writes. Any reads with session consistency that don't have a matching session token incur RU charges, including the first request for a given item or query after the client application starts or restarts, unless you explicitly pass a valid session token. Clients outside of the session performing writes see eventual consistency when they use the integrated cache.
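As a hedged sketch of what explicit session token handling can look like with the Python SDK (assuming `container` is an existing `ContainerProxy`; the IDs and property names are placeholders, and capturing the token from response headers is one possible approach, not the only one):

```python
# Capture the session token returned by a write...
container.upsert_item({"id": "item-1", "categoryId": "category-1"})
token = container.client_connection.last_response_headers.get("x-ms-session-token")

# ...and replay it on a later read so the request carries a matching
# session token instead of starting a new session.
item = container.read_item(
    item="item-1",
    partition_key="category-1",
    session_token=token,
)
```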
## MaxIntegratedCacheStaleness
cosmos-db Tutorial Nodejs Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/tutorial-nodejs-web-app.md
For the most straightforward dev environment, we use GitHub Codespaces so that y
1. Create a new GitHub Codespace on the `main` branch of the [`azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app`](https://github.com/azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app) GitHub repository.

    > [!div class="nextstepaction"]
- > [Open this project in GitHub Codespaces](https://github.com/azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app/codespaces)
+ > [Open this project in GitHub Codespaces](https://github.com/codespaces/new?hide_repo_select=true&ref=main&repo=611024069)
1. Wait for the Codespace to start. This startup process can take two to three minutes.
Start by running the sample application's API with the local MongoDB container t
> [!NOTE]
> The object ids (`_id`) are randomly generated and will differ from this truncated example output.
-1. In the **client/** directory, create a new **.env** file.
+1. In the **server/** directory, create a new **.env** file.
-1. In the **client/.env** file, add an environment variable for this value:
+1. In the **server/.env** file, add an environment variable for this value:
| Environment Variable | Value |
| --- | --- |
- | `CONNECTION_STRING` | The connection string to the Azure Cosmos DB for MongoDB vCore cluster. For now, use `mongodb://localhost`. |
+ | `CONNECTION_STRING` | The connection string to the Azure Cosmos DB for MongoDB vCore cluster. For now, use `mongodb://localhost:27017?directConnection=true`. |
```env
CONNECTION_STRING=mongodb://localhost:27017?directConnection=true
Now, let's validate that the application works seamlessly with Azure Cosmos DB f
```shell
exit
```
+1. In the **client/** directory, create a new **.env** file.
-1. Open the **client/.env** file again. Then, update the value of the `CONNECTION_STRING` environment variables with the connection string you used with the mongo shell:
+1. In the **client/.env** file, add an environment variable for this value:
+
+ | Environment Variable | Value |
+ | --- | --- |
+ | `CONNECTION_STRING` | The connection string to the Azure Cosmos DB for MongoDB vCore cluster. Use the same connection string you used with the mongo shell. |
```output
CONNECTION_STRING=<your-connection-string>
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
- Previously updated : 11/08/2022 Last updated : 04/23/2023+ # Monitor Azure Cosmos DB data by using diagnostic settings in Azure [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Diagnostic settings in Azure are used to collect resource logs. Azure resource Logs are emitted by a resource and provide rich, frequent data about the operation of that resource. These logs are captured per request and they're also referred to as "data plane logs". Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
+Diagnostic settings in Azure are used to collect resource logs. Resources emit Azure resource logs, which provide rich, frequent data about the operation of that resource. These logs are captured per request and they're also referred to as "data plane logs". Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
Platform metrics and the Activity logs are collected automatically, whereas you must create a diagnostic setting to collect resource logs or forward them outside of Azure Monitor. You can turn on diagnostic setting for Azure Cosmos DB accounts and send resource logs to the following sources:
Platform metrics and the Activity logs are collected automatically, whereas you
| --- | --- | --- | --- |
| **DataPlaneRequests** | All APIs | Logs back-end requests as data plane operations, which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
| **MongoRequests** | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
- | **CassandraRequests** | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+ | **CassandraRequests** | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
| **GremlinRequests** | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
| **QueryRuntimeStatistics** | NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
- | **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have same logical partition key. 2. Out of all the keys in the physical partition, the top three keys with largest storage size are captured by the PartitionKeyStatistics log. </li></ul> If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
+ | **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have same logical partition key. 2. Out of all the keys in the physical partition, the PartitionKeyStatistics log captures the top three keys with largest storage size. If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
| **PartitionKeyRUConsumption** | API for NoSQL | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
| **ControlPlaneRequests** | All APIs | Logs details on control plane operations, which include, creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
| **TableApiRequests** | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorup
> [!NOTE]
> Enabling this feature may result in additional logging costs. For pricing details, visit [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). It's recommended to disable this feature after troubleshooting.
-Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabling full-text query, youΓÇÖll be able to view the deobfuscated query for all requests within your Azure Cosmos DB account. YouΓÇÖll also give permission for Azure Cosmos DB to access and surface this data in your logs.
+Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabling full-text query, you're able to view the deobfuscated query for all requests within your Azure Cosmos DB account. You also give permission for Azure Cosmos DB to access and surface this data in your logs.
### [Azure portal](#tab/azure-portal)
Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabl
:::image type="content" source="media/monitor/full-text-query-features.png" lightbox="media/monitor/full-text-query-features.png" alt-text="Screenshot of navigation to the Features page.":::
-2. Select `Enable`, this setting will then be applied within the next few minutes. All newly ingested logs will have the full-text or PIICommand text for each request.
+2. Select `Enable`. This setting is applied within a few minutes. All newly ingested logs have the full-text or PIICommand text for each request.
:::image type="content" source="media/monitor/select-enable-full-text.png" alt-text="Screenshot of full-text being enabled.":::
Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabl
) ```
-1. Check if full-text query is already enabled by querying the resource using the REST API and [`az rest`](/cli/azure/reference-index#az-rest) with an HTTP `GET` verb.
+1. Query the resource using the REST API and [`az rest`](/cli/azure/reference-index#az-rest) with an HTTP `GET` verb to check if full-text query is already enabled.
```azurecli
az rest \
    --method GET \
    --uri "https://management.azure.com/$uri/?api-version=2021-05-01-preview" \
- --query "{AccountName:name, FullTextQueryEnabled:properties.diagnosticLogSettings.enableFullTextQuery}"
+ --query "{accountName:name,fullTextQuery:{state:properties.diagnosticLogSettings.enableFullTextQuery}}"
+ ```
+
+ If full-text query isn't enabled, the output would be similar to this example.
+
+ ```json
+ {
+ "accountName": "<account-name>",
+ "fullTextQuery": {
+ "state": "None"
+ }
+ }
``` 1. If full-text query isn't already enabled, enable it using `az rest` again with an HTTP `PATCH` verb and a JSON payload.
Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabl
--body '{"properties": {"diagnosticLogSettings": {"enableFullTextQuery": "True"}}}' ```
+ > [!NOTE]
+ > If you are using Azure CLI within a PowerShell prompt, you will need to escape the double-quotes using a backslash (`\`) character.
1. Wait a few minutes for the operation to complete. Check the status of full-text query by using `az rest` again.

    ```azurecli
    az rest \
        --method GET \
        --uri "https://management.azure.com/$uri/?api-version=2021-05-01-preview" \
- --query "{AccountName:name, FullTextQueryEnabled:properties.diagnosticLogSettings.enableFullTextQuery}"
+ --query "{accountName:name,fullTextQuery:{state:properties.diagnosticLogSettings.enableFullTextQuery}}"
```

The output should be similar to this example.

```json
{
- "AccountName": "<account-name>",
- "FullTextQueryEnabled": "True"
+ "accountName": "<account-name>",
+ "fullTextQuery": {
+ "state": "True"
+ }
}
```
cosmos-db How To Manage Conflicts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-conflicts.md
const { container: lwwContainer } = await database.containers.createIfNotExists(
### <a id="create-custom-conflict-resolution-policy-lww-python"></a>Python SDK ```python
-udp_collection = {
- 'id': self.udp_collection_name,
- 'conflictResolutionPolicy': {
- 'mode': 'LastWriterWins',
- 'conflictResolutionPath': '/myCustomId'
- }
-}
-udp_collection = self.try_create_document_collection(
- create_client, database, udp_collection)
+database = client.get_database_client(database=database_id)
+lww_conflict_resolution_policy = {'mode': 'LastWriterWins', 'conflictResolutionPath': '/regionId'}
+lww_container = database.create_container(id=lww_container_id, partition_key=PartitionKey(path="/id"),
+ conflict_resolution_policy=lww_conflict_resolution_policy)
``` ## Create a custom conflict resolution policy using a stored procedure
After your container is created, you must create the `resolver` stored procedure
### <a id="create-custom-conflict-resolution-policy-stored-proc-python"></a>Python SDK ```python
-udp_collection = {
- 'id': self.udp_collection_name,
- 'conflictResolutionPolicy': {
- 'mode': 'Custom',
- 'conflictResolutionProcedure': 'dbs/' + self.database_name + "/colls/" + self.udp_collection_name + '/sprocs/resolver'
- }
-}
-udp_collection = self.try_create_document_collection(
- create_client, database, udp_collection)
+database = client.get_database_client(database=database_id)
+udp_custom_resolution_policy = {'mode': 'Custom' }
+udp_container = database.create_container(id=udp_container_id, partition_key=PartitionKey(path="/id"),
+ conflict_resolution_policy=udp_custom_resolution_policy)
```

After your container is created, you must create the `resolver` stored procedure.
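As a minimal sketch (not the article's own sample), the `resolver` stored procedure might be registered with the Python SDK as follows; the procedure body is a trivial placeholder, and `udp_container` is the container created above:

```python
# A sketch only: register a stored procedure named 'resolver' on the container
# created above. The body below is a placeholder; a real resolver inspects the
# conflicting documents and commits a winner.
resolver_body = """
function resolver(incomingItem, existingItem, isTombstone, conflictingItems) {
    var collection = getContext().getCollection();
    // Placeholder logic: accept the incoming item as the winner.
    if (incomingItem) {
        collection.upsertDocument(collection.getSelfLink(), incomingItem);
    }
}
"""
udp_container.scripts.create_stored_procedure(body={'id': 'resolver', 'body': resolver_body})
```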
const {
### <a id="create-custom-conflict-resolution-policy-python"></a>Python SDK

```python
-database = client.ReadDatabase("dbs/" + self.database_name)
-manual_collection = {
- 'id': self.manual_collection_name,
- 'conflictResolutionPolicy': {
- 'mode': 'Custom'
- }
-}
-manual_collection = client.CreateContainer(database['_self'], collection)
+database = client.get_database_client(database=database_id)
+manual_resolution_policy = {'mode': 'Custom'}
+manual_container = database.create_container(id=manual_container_id, partition_key=PartitionKey(path="/id"),
+ conflict_resolution_policy=manual_resolution_policy)
```

## Read from conflict feed
const { result: conflicts } = await container.conflicts.readAll().toArray();
### <a id="read-from-conflict-feed-python"></a>Python

```python
-conflicts_iterator = iter(client.ReadConflicts(self.manual_collection_link))
+conflicts_iterator = iter(container.list_conflicts())
conflict = next(conflicts_iterator, None)
while conflict:
    # Do something with conflict
    conflict = next(conflicts_iterator, None)
```
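Once a conflict has been handled, it can be removed from the feed. A minimal sketch, assuming the conflicting item's partition key value is known (`pk_value` and `resolve` are placeholders, not part of this article):

```python
# A sketch only: resolve each conflict with application-specific logic, then
# delete it from the conflict feed. 'pk_value' stands in for the conflicting
# item's partition key value; 'resolve' is hypothetical application logic.
for conflict in container.list_conflicts():
    resolve(conflict)
    container.delete_conflict(conflict, partition_key=pk_value)
```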
cosmos-db How To Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-time-to-live.md
Use the following steps to enable time to live on an item:
### [.NET SDK v3](#tab/dotnet-sdk-v3)

```csharp
-public record SalesOrder(string id, string customerId, int? ttl);
+public record SalesOrder(string id, string customerId, int ttl);
```

```csharp
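// A hedged sketch, not the article's own sample: with the record above, time
// to live is set per item by assigning its ttl property when the item is
// written. The id, customerId, and partition key values are assumptions.
SalesOrder salesOrder = new ("SO-1", "CU-1", ttl: 60 * 60 * 24 * 30); // expires after 30 days
ItemResponse<SalesOrder> response = await container.CreateItemAsync<SalesOrder>(
    salesOrder,
    new PartitionKey("CU-1") // assumes the container is partitioned on /customerId
);
```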
cosmos-db Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getting-started.md
Previously updated : 02/28/2023 Last updated : 04/03/2023
In Azure Cosmos DB for NoSQL accounts, there are two ways to read data:
- [Python SDK](../samples-python.md#item-examples)
- [Go SDK](../samples-go.md#item-examples)
+> [!IMPORTANT]
+> The query language is only used to query items in Azure Cosmos DB for NoSQL. You cannot use the query language to perform operations (Create, Update, Delete, etc.) on items.
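For instance, here's a minimal sketch of a read-only query issued through the Python SDK; the `container` object and the `categoryId` property are illustrative assumptions, not part of this article:

```python
# A sketch only: run a parameterized, read-only query through the Python SDK.
# The container object and the categoryId property are assumptions.
results = container.query_items(
    query="SELECT c.id, c.name FROM c WHERE c.categoryId = @categoryId",
    parameters=[{"name": "@categoryId", "value": "road-bikes"}],
    enable_cross_partition_query=True,
)
for item in results:
    print(item)
```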
+
The remainder of this article shows how to get started writing queries in Azure Cosmos DB. Queries can be run through either the SDK or the Azure portal.

## Upload sample data
cosmos-db Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-java.md
The Analytical storage Collection CRUD Samples files for [sync](https://github.c
| --- | --- |
| Create a collection | [CosmosDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/sync/AnalyticalContainerCRUDQuickstart.java#L91-L106) <br> [CosmosAsyncDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/async/AnalyticalContainerCRUDQuickstartAsync.java#L91-L106) |
-## Document examples
+## Item examples
The Document CRUD Samples files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) conceptual article.
+> [!NOTE]
+> You must specify a partition key when performing operations against a specific item.
+
| Task | API reference |
| --- | --- |
| Create a document | [CosmosContainer.createItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L132-L146) <br> [CosmosAsyncContainer.createItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L188-L212) |
The Document CRUD Samples files for [sync](https://github.com/Azure-Samples/azur
| Transactional batch | [batch samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/batch/async/SampleBatchQuickStartAsync.java) |

## Indexing examples
-The [Collection CRUD Samples](https://github.com/Azure/azure-documentdb-jav#include-exclude-paths) conceptual articles.
+The [Collection CRUD Samples](https://github.com/Azure/azure-documentdb-jav#include-exclude-paths) conceptual articles.
| Task | API reference |
| --- | --- |
cosmos-db Samples Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-nodejs.md
The [ItemManagement](https://github.com/Azure/azure-cosmos-js/blob/master/sample
## Indexing examples
-The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts) file shows how to manage indexing. To learn about indexing in Azure Cosmos DB before running the following samples, see [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
+The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts) file shows how to manage indexing. To learn about indexing in Azure Cosmos DB before running the following samples, see [indexing policies](../index-policy.md), [indexing types](../index-overview.md#types-of-indexes), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
| Task | API reference |
| --- | --- |
cosmos-db Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-python.md
The [document_management.py](https://github.com/Azure/azure-sdk-for-python/blob/
## Indexing examples
-The [index_management.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py) Python sample shows how to do the following tasks. To learn about indexing in Azure Cosmos DB before running the following samples, see [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
+The [index_management.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py) Python sample shows how to do the following tasks. To learn about indexing in Azure Cosmos DB before running the following samples, see [indexing policies](../index-policy.md), [indexing types](../index-overview.md#types-of-indexes), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
| Task | API reference |
| --- | --- |
cosmos-db Troubleshoot Service Unavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-service-unavailable.md
Title: Troubleshoot Azure Cosmos DB service unavailable exceptions description: Learn how to diagnose and fix Azure Cosmos DB service unavailable exceptions. ++ Previously updated : 08/31/2022- - Last updated : 04/03/2023 # Diagnose and troubleshoot Azure Cosmos DB service unavailable exceptions+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] The SDK wasn't able to connect to Azure Cosmos DB. This scenario can be transient or permanent depending on the network conditions.
-It is important to make sure the application design is following our [guide for designing resilient applications with Azure Cosmos DB SDKs](conceptual-resilient-sdk-applications.md) to make sure it correctly reacts to different network conditions. Your application should have retries in place for service unavailable errors.
+It's important to make sure the application design is following our [guide for designing resilient applications with Azure Cosmos DB SDKs](conceptual-resilient-sdk-applications.md) to make sure it correctly reacts to different network conditions. Your application should have retries in place for service unavailable errors.
When evaluating the case for service unavailable errors:
-* What is the impact measured in volume of operations affected compared to the operations succeeding? Is it within the service SLAs?
+* What is the effect measured in volume of operations affected compared to the operations succeeding? Is it within the service SLAs?
* Is the P99 latency / availability affected?
* Are the failures affecting all your application instances or only a subset? When the issue is reduced to a subset of instances, it's commonly a problem related to those instances.
The following list contains known causes and solutions for service unavailable e
### Verify the substatus code
-In certain conditions, the HTTP 503 Service Unavailable error will include a substatus code that helps to identify the cause.
+In certain conditions, the HTTP 503 Service Unavailable error includes a substatus code that helps to identify the cause.
-| SubStatus Code | Description |
+| Substatus Code | Description |
|-|-|
| 20001 | The service unavailable error happened because there are client side [connectivity issues](#client-side-transient-connectivity-issues) (failures attempting to connect). The client attempted to recover by [retrying](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) but all retries failed. |
| 20002 | The service unavailable error happened because there are client side [timeouts](troubleshoot-dotnet-sdk-request-timeout.md#troubleshooting-steps). The client attempted to recover by [retrying](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) but all retries failed. |
| 20003 | The service unavailable error happened because there are underlying I/O errors related to the operating system. See the exception details for the related I/O error. |
| 20004 | The service unavailable error happened because [client machine's CPU is overloaded](troubleshoot-dotnet-sdk-request-timeout.md#high-cpu-utilization). |
-| 20005 | The service unavailable error happened because client machine's threadpool is starved. Verify any potential [blocking async calls in your code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait). |
+| 20005 | The service unavailable error happened because client machine's thread pool is starved. Verify any potential [blocking async calls in your code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait). |
+| 20006 | The connection between the service and client was interrupted or terminated in an unexpected manner. |
| >= 21001 | This service unavailable error happened due to a transient service condition. Verify the conditions in the above section, confirm if you have retry policies in place. If the volume of these errors is high compared with successes, reach out to Azure Support. |

### The required ports are being blocked
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
Title: Get started with Azure Cosmos DB Partial Document Update
-description: Learn how to use Partial Document Update with .NET, Java, and Node SDKs for Azure Cosmos DB with these examples.
+ Title: Get started with partial document update
+
+description: Learn how to use the partial document update feature with the .NET, Java, and Node SDKs for Azure Cosmos DB for NoSQL.
+ Previously updated : 03/06/2023- Last updated : 04/03/2023 # Get started with Azure Cosmos DB Partial Document Update+ [!INCLUDE[NoSQL](includes/appliesto-nosql.md)] This article provides examples that illustrate how to use Partial Document Update with .NET, Java, and Node SDKs. It also describes common errors that you might encounter.
This article links to code samples for the following scenarios:
- Use conditional patch syntax based on filter predicate
- Run patch operation as part of a transaction
+## Prerequisites
+
+- An existing Azure Cosmos DB account.
+ - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal).
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit.
## [.NET](#tab/dotnet)

Support for Partial Document Update (Patch API) in the [Azure Cosmos DB .NET v3 SDK](nosql/sdk-dotnet-v3.md) is available starting with version *3.23.0*. You can download it from the [NuGet Gallery](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.23.0).
Support for Partial Document Update (Patch API) in the [Azure Cosmos DB .NET v3
```csharp
List<PatchOperation> operations = new ()
{
- PatchOperation.Add($"/color", "silver"),
+ PatchOperation.Add("/color", "silver"),
PatchOperation.Remove("/used"),
- PatchOperation.Increment("/price", 50.00)
+ PatchOperation.Increment("/price", 50.00),
+ PatchOperation.Add("/tags/-", "featured-bikes")
};

ItemResponse<Product> response = await container.PatchItemAsync<Product>(
Here are some common errors that you might encounter while using this feature:
## Next steps

-- [Partial Document Update in Azure Cosmos DB](partial-document-update.md)
- [Frequently asked questions about Partial Document Update in Azure Cosmos DB](partial-document-update-faq.yml)
cosmos-db Partial Document Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update.md
Title: Partial document update-
-description: Learn how to conditionally modify a document using the partial document update feature in Azure Cosmos DB.
+
+description: Learn how to conditionally modify a document using the partial document update feature in Azure Cosmos DB for NoSQL.
+ Previously updated : 04/29/2022- Last updated : 04/03/2023
An example target JSON document:
```json
{
- "id": "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
- "name": "R-410 Road Bicycle",
- "price": 455.95,
- "inventory": {
- "quantity": 15
- },
- "used": false,
- "categoryId": "road-bikes"
+ "id": "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ "name": "R-410 Road Bicycle",
+ "price": 455.95,
+ "inventory": {
+ "quantity": 15
+ },
+ "used": false,
+ "categoryId": "road-bikes",
+ "tags": [
+ "r-series"
+ ]
}
```
A JSON Patch document:
```json
[
- { "op": "add", "path": "/color", "value": "silver" },
- { "op": "remove", "path": "/used" },
- { "op": "set", "path": "/price", "value": 355.45 }
- { "op": "incr", "path": "/inventory/quantity", "value": 10 }
+ { "op": "add", "path": "/color", "value": "silver" },
+ { "op": "remove", "path": "/used" },
+ { "op": "set", "path": "/price", "value": 355.45 }
+ { "op": "incr", "path": "/inventory/quantity", "value": 10 },
+ { "op": "add", "path": "/tags/-", "value": "featured-bikes" }
]
```
The resulting JSON document:
```json
{
- "id": "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
- "name": "R-410 Road Bicycle",
- "price": 355.45,
- "inventory": {
- "quantity": 25
- },
- "categoryId": "road-bikes",
- "color": "silver"
+ "id": "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ "name": "R-410 Road Bicycle",
+ "price": 355.45,
+ "inventory": {
+ "quantity": 25
+ },
+ "categoryId": "road-bikes",
+ "color": "silver",
+ "tags": [
+ "r-series",
+ "featured-bikes"
+ ]
}
```
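As a hedged sketch (not the article's own sample), the same operations might be issued through the Python SDK's `patch_item`; the `container` object and the partition key value are assumptions:

```python
# A sketch only: apply the patch operations shown above with the Python SDK
# (azure-cosmos 4.3.0 or later). The container object and the partition key
# value ('road-bikes', matching categoryId) are illustrative assumptions.
operations = [
    {"op": "add", "path": "/color", "value": "silver"},
    {"op": "remove", "path": "/used"},
    {"op": "set", "path": "/price", "value": 355.45},
    {"op": "incr", "path": "/inventory/quantity", "value": 10},
    {"op": "add", "path": "/tags/-", "value": "featured-bikes"},
]
updated = container.patch_item(
    item="e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
    partition_key="road-bikes",
    patch_operations=operations,
)
```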
Partial document update feature supports the following modes of operation. Refer
- **Multi-document patch**: Multiple documents within the same partition key can be patched as a [part of a transaction](transactional-batch.md). This multi-document transaction is committed only if all the operations succeed in the order they're described. If any operation fails, the entire transaction is rolled back.

-- **Conditional Update**: For the aforementioned modes, it's also possible to add a SQL-like filter predicate (for example, `from c where c.taskNum = 3`) such that the operation fails if the pre-condition specified in the predicate isn't satisfied.
+- **Conditional Update**: For the aforementioned modes, it's also possible to add a SQL-like filter predicate (for example, `from c where c.taskNum = 3`) such that the operation fails if the precondition specified in the predicate isn't satisfied.
- You can also use the bulk APIs of supported SDKs to execute one or more patch operations on multiple documents.
Different clients issue Patch operations concurrently across different regions:
:::image type="content" source="./media/partial-document-update/patch-multi-region-conflict-resolution.png" alt-text="An image that shows conflict resolution in concurrent multi-region partial update operations." border="false" lightbox="./media/partial-document-update/patch-multi-region-conflict-resolution.png":::
-Since Patch requests were made to non-conflicting paths within the document, these requests are conflict resolved automatically and transparently (as opposed to Last Writer Wins at a document level).
+Since Patch requests were made to nonconflicting paths within the document, these requests are conflict resolved automatically and transparently (as opposed to Last Writer Wins at a document level).
The client will see the following document after conflict resolution:
data-manager-for-agri Concepts Ingest Satellite Imagery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-satellite-imagery.md
# Using satellite imagery in Azure Data Manager for Agriculture
-Our data manager supports geospatial and temporal data. Remote sensing satellite imagery (which is geospatial and temporal) has huge applications in the field of agriculture. Farmers, agronomists and data scientists use of satellite imagery extensively to generate insights. Using satellite data in Data Manager for agriculture involves following steps.
+Satellite imagery makes up a foundational pillar of agriculture data. To support scalable ingestion of geometry-clipped imagery, we've partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. This BYOL experience allows you to manage your own costs while keeping the convenience of storing your field-clipped historical and up-to-date imagery in the linked context of the relevant fields.
+
+## Prerequisites
+* To search and ingest imagery, you need a user account that has suitable subscription entitlement with Sentinel Hub: https://www.sentinel-hub.com/pricing/
+* Read the Sinergise Sentinel Hub terms of service and privacy policy: https://www.sentinel-hub.com/tos/
+* Have your providerClientId and providerClientSecret ready
+
+## Ingesting boundary-clipped imagery
+Using satellite data in Data Manager for Agriculture involves the following steps:
+
> [!NOTE]
> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see the [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
> Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
->:::image type="content" source="./media/satellite-flow.png" alt-text="Diagram showing satellite data ingestion flow..":::
-
## Satellite sources supported by Azure Data Manager for Agriculture

In our public preview, we support ingesting data from the Sentinel-2 constellation.
-## Sentinel-2
-[Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) is a satellite constellation launched by 'European Space Agency' (ESA) under the Copernicus mission. This constellation has a pair of satellites and carries a Multi-Spectral Instrument (MSI) payload that samples 13 spectral bands: four bands at 10 m, six bands at 20 m and three bands at 60 m spatial resolution.
+### Sentinel-2
+[Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) is a satellite constellation launched by 'European Space Agency' (ESA) under the Copernicus mission. This constellation has a pair of satellites and carries a Multi-Spectral Instrument (MSI) payload that samples 13 spectral bands: four bands at 10 m, six bands at 20 m and three bands at 60-m spatial resolution.
> [!Tip]
-> Sentinel-2 has two products: Level 1 (top of the atmosphere) data and its atmospherically corrected variant Level 2 (bottom of the atmosphere) data. We support ingesting and retrieving Level 1 and Level 2 data from Sentinel 2.
+> Sentinel-2 has two products: Level 1 (top of the atmosphere) data and its atmospherically corrected variant Level 2 (bottom of the atmosphere) data. We support ingesting and retrieving Sentinel_2_L2A and Sentinel_2_L1C data from Sentinel 2.
-## Image names and resolutions
-The image names and resolutions that are supported by APIs used to ingest and read satellite data (for Sentinel-2) in our service:
+### Image names and resolutions
+The image names and resolutions supported by APIs used to ingest and read satellite data (for Sentinel-2) in our service:
| Category | Image Name | Description | Native resolution |
|:--:|:-:|:-:|:-:|
The image names and resolutions that are supported by APIs used to ingest and re
* A maximum of five satellite jobs can be run concurrently, per tenant.
* A satellite job can ingest data for a maximum of one year in a single API call.
* Only TIFs are supported.
- * Only 10 m, 20 m and 60 m images are supported.
+ * Only 10 m, 20 m and 60-m images are supported.
## Next steps
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
The following table displays roles and allowed actions in Defender for Cloud.
| Dismiss alerts | - | ✔ | - | ✔ | ✔ |
| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md#fix-button)) | - | - | ✔ | ✔ | ✔ |
| View alerts and recommendations | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Exempt security recommendations | - | - | ✔ | ✔ | ✔ |
The specific role required to deploy monitoring components depends on the extension you're deploying. Learn more about [monitoring components](monitoring-components.md).
This article explained how Defender for Cloud uses Azure RBAC to assign permissi
- [Set security policies in Defender for Cloud](tutorial-security-policy.md)
- [Manage security recommendations in Defender for Cloud](review-security-recommendations.md)
- [Manage and respond to security alerts in Defender for Cloud](managing-and-responding-alerts.md)
-- [Monitor partner security solutions](./partner-integration.md)
+- [Monitor partner security solutions](./partner-integration.md)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP]
> If you're looking for items older than six months, you can find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## April 2023
+
+Updates in April include:
+
+- [Changes in the recommendation "Machines should be configured securely"](#changes-in-the-recommendation-machines-should-be-configured-securely)
+
+### Changes in the recommendation "Machines should be configured securely"
+
+The recommendation `Machines should be configured securely` was updated. The update improves the performance and stability of the recommendation and aligns its experience with the generic behavior of Defender for Cloud's recommendations.
+
+As part of this update, the recommendation's ID was changed from `181ac480-f7c4-544b-9865-11b8ffe87f47` to `c476dc48-8110-4139-91af-c8d940896b98`.
+
+No action is required on the customer side, and there's no expected impact on the secure score.
++
## March 2023

Updates in March include:
defender-for-iot Configure Sensor Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md
For a bandwidth cap, define the maximum bandwidth you want the sensor to use for
### Subnet
-To define your sensor's subnets, do any of the following:
+To focus the Azure device inventory on devices that are in your IoT/OT scope, you will need to manually edit the subnet list to include only the locally monitored subnets that are in your IoT/OT scope. Once the subnets have been configured, the network location of the devices is shown in the *Network location* (Public preview) column in the Azure device inventory. All of the devices associated with the listed subnets will be displayed as *local*, while devices associated with detected subnets not included in the list will be displayed as *routed*.
-- Select **Import subnets** to import a comma-separated list of subnet IP addresses and masks. Select **Export subnets** to export a list of currently configured data, or **Clear all** to start from scratch.
+**To configure your subnets in the Azure portal**:
-- Enter values in the **IP Address**, **Mask**, and **Name** fields to add subnet details manually. Select **Add subnet** to add more subnets as needed.
+1. In the Azure portal, go to **Sites and sensors** > **Sensor settings**.
+
+1. Under **Subnets**, review the detected subnets. To focus the device inventory and view local devices in the inventory, delete any subnets that are not in your IoT/OT scope by selecting the options menu (...) on any subnet you want to delete.
+
+1. To modify additional settings, select any subnet and then select **Edit** for the following options:
+
+ - Select **Import subnets** to import a comma-separated list of subnet IP addresses and masks. Select **Export subnets** to export a list of currently configured data, or **Clear all** to start from scratch.
+
+ - Enter values in the **IP Address**, **Mask**, and **Name** fields to add subnet details manually. Select **Add subnet** to add additional subnets as needed.
### VLAN naming
defender-for-iot Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/device-inventory.md
Mark OT devices as *important* to highlight them for extra tracking. On an OT se
The following table lists the columns available in the Defender for IoT device inventory on the Azure portal. Starred items **(*)** are also available from the OT sensor.
+> [!NOTE]
+> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ |Name |Description ||| |**Authorization** * |Editable. Determines whether or not the device is marked as *authorized*. This value may need to change as the device security changes. |
The following table lists the columns available in the Defender for IoT device i
| **MAC Address** * | The device's MAC address. | |**Model** *| Editable The device's hardware model. | |**Name** * | Mandatory, and editable. The device's name as the sensor discovered it, or as entered by the user. |
+|**Network location** (Public preview) | The device's network location. Displays whether the device is defined as *local* or *routed*, according to the configured subnets. |
|**OS architecture** | Editable. The device's operating system architecture. | |**OS distribution** | Editable. The device's operating system distribution, such as Android, Linux, and Haiku. | |**OS platform** * | Editable. The device's operating system, if detected. On the OT sensor, shown as **Operating System**. |
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-control-what-traffic-is-monitored.md
This step is performed by your deployment teams.
## Define OT and IoT subnets
-Subnet configurations affect how devices are displayed in the sensor's [device maps](how-to-work-with-the-sensor-device-map.md). In the device maps, IT devices are automatically aggregated by subnet, where you can expand and collapse each subnet view to drill down as needed.
+After [onboarding](onboard-sensors.md) a new OT network sensor to Microsoft Defender for IoT, define the sensor's subnet settings directly on the OT sensor console to determine how devices are displayed in the sensor's [device map](how-to-work-with-the-sensor-device-map.md) and the [Azure device inventory](device-inventory.md).
-While the OT network sensor automatically learns the subnets in your network, we recommend confirming the learned settings and updating them as needed to optimize your map views.
+- **In the device map**, IT devices are automatically aggregated by subnet, where you can expand and collapse each subnet view to drill down as needed.
+- **In the Azure device inventory**, once the subnets have been configured, use the *Network location* (Public preview) filter to view *local* or *routed* devices as defined in your subnets list. All of the devices associated with the listed subnets will be displayed as *local*, while devices associated with detected subnets not included in the list will be displayed as *routed*.
-Any subnets not listed as subnets are treated as external networks.
+> [!TIP]
+> When you're ready to start managing your OT sensor settings at scale, define subnets from the Azure portal. Once you apply settings from the Azure portal, settings on the sensor console are read-only. For more information, see [Configure OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md).
+
+While the OT network sensor automatically learns the subnets in your network, we recommend confirming the learned settings and updating them as needed to optimize your map views and device inventory. Any subnets not listed as subnets are treated as external networks.
> [!NOTE]
-> For cloud-connected sensors, you may eventually start configuring OT sensor settings from the Azure portal. Once you start configuring settings from the Azure portal, the **Subnets** pane on the OT sensor is read-only. For more information, see [Configure OT sensor settings from the Azure portal](configure-sensor-settings-portal.md).
+> If sensor settings have already been applied from the Azure portal, the subnet settings on the individual sensor are read-only and are managed from the Azure portal.
+
+**To configure your subnets on a locally managed sensor**:
-**To define subnets**:
+1. Sign in to your OT sensor as an Admin user and select **System settings** > **Basic** > **Subnets**.
-1. Sign into your OT sensor as an **Admin** user and select **System settings > Basic > Subnets**.
+1. Disable the **Auto subnet learning** setting to manually edit the subnets.
-1. Confirm the current subnets listed and modify settings as needed.
+1. Review the discovered subnets list and delete any subnets unrelated to your IoT/OT network scope. We recommend giving each subnet a meaningful name to specify the network role. Subnet names can have up to 60 characters.
- We recommend giving each subnet a meaningful name to differentiate between IT and OT networks. Subnet names can have up to 60 characters.
+ Once the **Auto subnet learning** setting is disabled and the subnet list has been edited to include only the locally monitored subnets that are in your IoT/OT scope, you can filter the Azure device inventory by *Network location* to view only the devices defined as *local*.
1. Use any of the following options to help you optimize your subnet settings:

    |Name |Description |
    |---|---|
- |**Import subnets** | Import a .CSV file of subnet definitions |
+ |**Import subnets** | Import a .CSV file of subnet definitions. The subnet information is updated with the information that you imported. If you import an empty field, you'll lose the data in that field. |
|**Export subnets** | Export the currently listed subnets to a .CSV file. |
- |**Clear all** | Clear all currently defined subnets |
- |**Auto subnet learning** | Selected by default. Clear this option to define your subnets manually instead of having them be automatically detected by your OT sensor as new devices are detected. |
+ |**Clear all** | Clear all currently defined subnets. |
+ |**Auto subnet learning** | Selected by default. Clear this option to define your subnets manually instead of having them automatically detected by your OT sensor as new devices are detected. |
|**Resolve all Internet traffic as internal/private** | Select to consider all public IP addresses as private, local addresses. If selected, public IP addresses are treated as local addresses, and alerts aren't sent about unauthorized internet activity. <br><br>This option reduces notifications and alerts received about external addresses. |
- |**ICS Subnet** | Select to define a specific subnet as a separate OT subnet. Selecting this option helps you collapse device maps to a minimum of IT network elements. |
+ |**ICS subnet** | Read-only. ICS/OT subnets are marked automatically when the system recognizes OT activity or protocols. |
|**Segregated** | Select to show this subnet separately when displaying the device map according to Purdue level. |

1. When you're done, select **Save** to save your updates.
defender-for-iot How To Troubleshoot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-sensor.md
To connect a sensor controlled by the management console to NTP:
Sometimes ICS devices are configured with external IP addresses. These ICS devices aren't shown on the map. Instead of the devices, an internet cloud appears on the map. The IP addresses of these devices are included in the cloud image. Another indication of the same problem is when multiple internet-related alerts appear. Fix the issue as follows:

1. Right-click the cloud icon on the device map and select **Export IP Addresses**.
-1. Copy the public ranges that are private, and add them to the subnet list. For more information, see [Define ICS or IoT and segregated subnets](how-to-control-what-traffic-is-monitored.md#define-ot-and-iot-subnets).
+1. Copy the public ranges that are private, and add them to the subnet list. For more information, see [Define OT and IoT subnets](how-to-control-what-traffic-is-monitored.md#define-ot-and-iot-subnets).
1. Generate a new data-mining report for internet connections.
1. In the data-mining report, enter the administrator mode and delete the IP addresses of your ICS devices.
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
The following table lists available responses for each notification, and when we
| Type | Description | Available responses | Auto-resolve|
|--|--|--|--|
| **New IP detected** | A new IP address is associated with the device. This may occur in the following scenarios: <br><br>- A new or additional IP address was associated with a device already detected, with an existing MAC address.<br><br> - A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> - An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> - A new IP address was detected for a device that's using a virtual IP address. | - **Set Additional IP to Device**: Merge the devices <br />- **Replace Existing IP**: Replaces any existing IP address with the new address <br /> - **Dismiss**: Remove the notification. |**Dismiss** |
-| **No subnets configured** | No subnets are currently configured in your network. <br /><br /> We recommend configuring subnets for the ability to differentiate between OT and IT devices on the map. | - **Open Subnets Configuration** and [configure subnets](how-to-control-what-traffic-is-monitored.md#define-ot-and-iot-subnets). <br />- **Dismiss**: Remove the notification. |**Dismiss** |
+| **No subnets configured** | No subnets are currently configured in your network. <br /><br /> We recommend configuring subnets for the ability to differentiate between OT and IT devices on the map. | - **Open Subnet Configuration** and [configure subnets](how-to-control-what-traffic-is-monitored.md#define-ot-and-iot-subnets). <br />- **Dismiss**: Remove the notification. |**Dismiss** |
| **Operating system changes** | One or more new operating systems have been associated with the device. | - Select the name of the new OS that you want to associate with the device.<br /> - **Dismiss**: Remove the notification. |No automatic handling|
-| **New subnets** | New subnets were discovered. |- **Learn**: Automatically add the subnet.<br />- **Open Subnet Configuration**: Add all missing subnet information.<br />- **Dismiss**<br />Remove the notification. |**Dismiss** |
+| **New subnets** | New subnets were discovered. |- **Learn**: Automatically add the subnet.<br />- **Open Subnet Configuration**: Add all missing subnet information.<br />- **Dismiss**: Remove the notification. |**Dismiss** |
| **Device type changes** | A new device type has been associated with the device. | - **Set as {…}**: Associate the new type with the device.<br />- **Dismiss**: Remove the notification. |No automatic handling|

## View a device map for a specific zone
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
For more information, see [Tutorial: Investigate and detect threats for IoT devi
|Service area |Updates |
|---|---|
-| **OT networks** | **Cloud features**: <br>- [Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2](#microsoft-sentinel-microsoft-defender-for-iot-solution-version-202) <br>- [Download updates from the Sites and sensors page (Public preview)](#download-updates-from-the-sites-and-sensors-page-public-preview) <br>- [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) <br>- [Device inventory GA in the Azure portal](#device-inventory-ga-in-the-azure-portal) <br>- [Device inventory grouping enhancements (Public preview)](#device-inventory-grouping-enhancements-public-preview) <br><br> **Sensor version 22.2.3**: [Configure OT sensor settings from the Azure portal (Public preview)](#configure-ot-sensor-settings-from-the-azure-portal-public-preview) |
+| **OT networks** | **Cloud features**: <br>- [Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2](#microsoft-sentinel-microsoft-defender-for-iot-solution-version-202) <br>- [Download updates from the Sites and sensors page (Public preview)](#download-updates-from-the-sites-and-sensors-page-public-preview) <br>- [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) <br>- [Device inventory GA in the Azure portal](#device-inventory-ga-in-the-azure-portal) <br>- [Device inventory grouping enhancements (Public preview)](#device-inventory-grouping-enhancements-public-preview) <br>- [Focused inventory in the Azure device inventory (Public preview)](#focused-inventory-in-the-azure-device-inventory-public-preview) <br><br> **Sensor version 22.2.3**: [Configure OT sensor settings from the Azure portal (Public preview)](#configure-ot-sensor-settings-from-the-azure-portal-public-preview) |
| **Enterprise IoT networks** | **Cloud features**: [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) |

### Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2
Rich security, governance and admin controls also provide the ability to assign
The **Device inventory** page on the Azure portal supports new grouping categories. Now you can group your device inventory by *class*, *data source*, *location*, *Purdue level*, *site*, *type*, *vendor*, and *zone*. For more information, see [View full device details](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
+### Focused inventory in the Azure device inventory (Public preview)
+
+The **Device inventory** page on the Azure portal now includes a network location indication for your devices, to help focus your device inventory on the devices within your IoT/OT scope. See and filter which devices are defined as *local* or *routed*, according to your configured subnets. The *Network location* filter is on by default, and the *Network location* column can be added by editing the columns in the device inventory. For more information, see [Subnet](configure-sensor-settings-portal.md#subnet).
+
### Configure OT sensor settings from the Azure portal (Public preview)

For sensor versions 22.2.3 and higher, you can now configure selected settings for cloud-connected sensors using the new **Sensor settings (Preview)** page, accessed via the Azure portal's **Sites and sensors** page. For example:
dev-box How To Configure Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md
To learn more about Azure Compute Gallery and how to create galleries, see:
## Prerequisites

-- A dev center. If you don't have one available, follow the steps in [Create a dev center](./quickstart-configure-dev-box-service.md#create-a-dev-center).
+- A dev center. If you don't have one available, follow the steps in [1. Create a dev center](quickstart-configure-dev-box-service.md#1-create-a-dev-center).
- A compute gallery. For you to use a gallery to configure dev box definitions, it must have at least [one image definition and one image version](../virtual-machines/image-version.md):
  - The image version must meet the [Windows 365 image requirements](/windows-365/enterprise/device-images#image-requirements):
    - Generation 2.
dev-box How To Customize Devbox Azure Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-devbox-azure-image-builder.md
To provision a custom image that you created by using VM Image Builder, you need
- Owner or Contributor permissions on an Azure subscription or on a specific resource group.
- A resource group.
-- A dev center with an attached network connection. If you don't have a one, follow the steps in [Create a network connection](./quickstart-configure-dev-box-service.md#create-a-network-connection).
+- A dev center with an attached network connection. If you don't have one, follow the steps in [2. Configure a network connection](quickstart-configure-dev-box-service.md#2-configure-a-network-connection).
## Create a Windows image and distribute it to Azure Compute Gallery
After the gallery images are available in the dev center, you can use the custom
## Next steps

-- [Create dev box definitions](./quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [3. Create a dev box definition](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition)
dev-box How To Manage Dev Box Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md
You can delete a dev box pool when you're no longer using it.
## Next steps

- [Provide access to projects for project admins](./how-to-project-admin.md)
-- [Create dev box definitions](./quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [3. Create a dev box definition](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition)
- [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
dev-box How To Manage Dev Box Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-projects.md
To assign administrative access to a project, select the DevCenter Project Admin
## Next steps

- [Manage dev box pools](./how-to-manage-dev-box-pools.md)
-- [Create dev box definitions](./quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [3. Create a dev box definition](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition)
- [Configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
dev-box How To Manage Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md
To make role assignments:
## Next steps

- [Provide access to projects for project admins](./how-to-project-admin.md)
-- [Create dev box definitions](./quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [3. Create a dev box definition](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition)
- [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
Last updated 01/24/2023- #Customer intent: As an enterprise admin, I want to understand how to create and configure dev box components so that I can provide dev box projects for my users. # Quickstart: Configure Microsoft Dev Box Preview
-This quickstart describes how to configure Microsoft Dev Box Preview by using the Azure portal to enable development teams to self-serve their dev boxes.
+This quickstart describes how to set up Microsoft Dev Box Preview to enable development teams to self-serve their dev boxes. The setup process involves two distinct phases. In the first phase, dev infra admins configure the necessary Microsoft Dev Box resources through the Azure portal. After this phase is complete, users can proceed to the next phase, creating and managing their dev boxes through the developer portal. This quickstart shows you how to complete the first phase.
-This quickstart takes you through the process of setting up your Dev Box environment. You create a dev center to organize your dev box resources, configure network components to enable dev boxes to connect to your organizational resources, and create a dev box definition that will form the basis of your dev boxes. You then create a project and a dev box pool, which work together to help you give access to users who will manage or use the dev boxes.
+The following graphic shows the steps required to configure Microsoft Dev Box in the Azure portal.
-After you complete this quickstart, you'll have a Dev Box configuration ready for users to create and connect to dev boxes.
+
+First, you create a dev center to organize your dev box resources. Next, you configure network components to enable dev boxes to connect to your organizational resources. Then, you create a dev box definition that is used to create dev boxes. After that, you create a project and a dev box pool. Users who have access to a project can create dev boxes from the pools associated with that project.
+
+After you complete this quickstart, you'll have Microsoft Dev Box set up ready for users to create and connect to dev boxes.
+
+If you already have a Microsoft Dev Box configured and you want to learn how to create and connect to dev boxes, refer to: [Quickstart: Create a dev box by using the developer portal](quickstart-create-dev-box.md).
## Prerequisites To complete this quickstart, you need: - An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Owner or Contributor role on an Azure subscription or a specific resource group.-
+- Owner or Contributor role on an Azure subscription or resource group.
- User licenses. To use Dev Box Preview, each user must be licensed for Windows 11 Enterprise or Windows 10 Enterprise, Microsoft Intune, and Azure Active Directory (Azure AD) P1. These licenses are available independently and are included in the following subscriptions:
  - Microsoft 365 F3
  - Microsoft 365 E3, Microsoft 365 E5
  - Microsoft 365 A3, Microsoft 365 A5
  - Microsoft 365 Business Premium
  - Microsoft 365 Education Student Use Benefit
  - [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/), which allows you to use your Windows licenses on Azure with Dev Box.
-- Certain ports to be open so that the Dev Box service can function if your organization routes egress traffic through a firewall. For more information, see [Network requirements](/windows-365/enterprise/requirements-network).
-## Create a dev center
+- If your organization routes egress traffic through a firewall, open the appropriate ports. For more information, see [Network requirements](/windows-365/enterprise/requirements-network).
+## 1. Create a dev center
Use the following steps to create a dev center so that you can manage your dev box resources:
Use the following steps to create a dev center so that you can manage your dev b
1. When the deployment is complete, select **Go to resource**. Confirm that the dev center page appears.
-## Create a network connection
+## 2. Configure a network connection
Network connections determine the region in which dev boxes are deployed. They also allow dev boxes to be connected to your existing virtual networks. The following steps show you how to create and configure a network connection in Microsoft Dev Box Preview.
To create the network connection, complete the steps on the relevant tab.
#### [Azure AD join](#tab/AzureADJoin/)
-1. 1. 1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search box, enter **network connections**. In the list of results, select **Network connections**.
To create the network connection, complete the steps on the relevant tab.
1. When the deployment is complete, select **Go to resource**. The network connection appears on the **Network connections** page.
-## Attach a network connection to a dev center
+### Attach a network connection to a dev center
To provide network configuration information for dev boxes, associate a network connection with a dev center:
After you attach a network connection, the Azure portal runs several health chec
To resolve any errors, see [Troubleshoot Azure network connections](/windows-365/enterprise/troubleshoot-azure-network-connection).
-## Create a dev box definition
+## 3. Create a dev box definition
-Dev box definitions define the image and SKU (compute + storage) that will be used in creation of the dev boxes. To create and configure a dev box definition:
+Dev box definitions define the image and SKU (compute + storage) that's used in the creation of the dev boxes. To create and configure a dev box definition:
1. Open the dev center in which you want to create the dev box definition.
Dev box definitions define the image and SKU (compute + storage) that will be us
|-|-|-|
|**Name**|Enter a descriptive name for your dev box definition.|
|**Image**|Select the base operating system for the dev box. You can select an image from Azure Marketplace or from Azure Compute Gallery. </br> If you're creating a dev box definition for testing purposes, consider using the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. |To access custom images when you create a dev box definition, you can use Azure Compute Gallery. For more information, see [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).|
- |**Image version**|Select a specific, numbered version to ensure that all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure that new dev boxes use the latest image available.|Selecting the **Latest** image version enables the dev box pool to use the most recent version of your chosen image from the gallery. This way, the created dev boxes will stay up to date with the latest tools and code for your image. Existing dev boxes aren't modified when an image version is updated.|
+ |**Image version**|Select a specific, numbered version to ensure that all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure that new dev boxes use the latest image available.|Selecting the **Latest** image version enables the dev box pool to use the most recent version of your chosen image from the gallery. This way, the created dev boxes stay up to date with the latest tools and code for your image. Existing dev boxes aren't modified when an image version is updated.|
|**Compute**|Select the compute combination for your dev box definition.||
|**Storage**|Select the amount of storage for your dev box definition.||
Dev box definitions define the image and SKU (compute + storage) that will be us
1. Select **Create**.
-## Create a project
+## 4. Create a project
Dev box projects enable you to manage team-level settings. These settings include providing access to development teams so that developers can create dev boxes.
To create and configure a project in a dev box:
|-|-|
|**Subscription**|Select the subscription in which you want to create the project.|
|**Resource group**|Select an existing resource group, or select **Create new** and then enter a name for the new resource group.|
- |**Dev center**|Select the dev center that you want to associate with this project. All the settings at the dev center level will be applied to the project.|
+ |**Dev center**|Select the dev center that you want to associate with this project. All the settings at the dev center level apply to the project.|
|**Name**|Enter a name for the project. |
|**Description**|Enter a brief description of the project. |
To create and configure a project in a dev box:
1. Verify that the project appears on the **Projects** page.
-## Create a dev box pool
+## 5. Create a dev box pool
-A dev box pool is a collection of dev boxes that have similar settings. Dev box pools specify the dev box definitions and network connections that dev boxes will use. You must associate at least one pool with your project before users can create a dev box.
+A dev box pool is a collection of dev boxes that have similar settings. Dev box pools specify the dev box definitions and network connections that dev boxes use. You must associate at least one pool with your project before users can create a dev box.
To create a dev box pool that's associated with a project:
To create a dev box pool that's associated with a project:
|**Network connection**|Select an existing network connection. The network connection determines the region of the dev boxes that are created in this pool.|
|**Dev box Creator Privileges**|Select **Local Administrator** or **Standard User**.|
|**Enable Auto-stop**|**Yes** is the default. Select **No** to disable an auto-stop schedule. You can configure an auto-stop schedule after the pool is created.|
- |**Stop time**| Select a time to shut down all the dev boxes in the pool. All dev boxes in this pool will be shut down at this time every day.|
+ |**Stop time**| Select a time to shut down all the dev boxes in the pool. All dev boxes in this pool will shut down at this time every day.|
|**Time zone**| Select the time zone that the stop time is in.|
|**Licensing**| Select this checkbox to confirm that your organization has Azure Hybrid Benefit licenses that you want to apply to the dev boxes in this pool. |
The Azure portal deploys the dev box pool and runs health checks to ensure that
:::image type="content" source="./media/quickstart-configure-dev-box-service/dev-box-pool-grid-populated.png" alt-text="Screenshot that shows a list of dev box pools and status information.":::
-## Provide access to a dev box project
+## 6. Provide access to a dev box project
Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage, and delete their own dev boxes. You must have sufficient permissions to a project before you can add users to it.
To assign roles:
Microsoft Dev Box Preview makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their teams, like creating and managing dev box pools. To give users permissions to manage projects, assign the DevCenter Project Admin role to them.
-You can assign the DevCenter Project Admin role by using the steps described earlier in [Provide access to a dev box ](#provide-access-to-a-dev-box-project)project and select the Project Admin role instead of the Dev Box User role. For more information, see [Provide access to projects for project admins](how-to-project-admin.md).
+You can assign the DevCenter Project Admin role by using the steps described earlier in [6. Provide access to a dev box project](#6-provide-access-to-a-dev-box-project) and select the Project Admin role instead of the Dev Box User role. For more information, see [Provide access to projects for project admins](how-to-project-admin.md).
[!INCLUDE [permissions note](./includes/note-permission-to-create-dev-box.md)]

## Next steps
-In this quickstart, you created a dev box project and the resources that are necessary to support it. To learn how to create and connect to a dev box, advance to the next quickstart:
+In this quickstart, you configured the Microsoft Dev Box resources that are required to enable users to create their own dev boxes. To learn how to create and connect to a dev box, advance to the next quickstart:
> [!div class="nextstepaction"]
> [Create a dev box](./quickstart-create-dev-box.md)
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
You can create and manage multiple dev boxes as a dev box user. Create a dev box
To complete this quickstart, you need:

-- Permissions as a [Dev Box User](./quickstart-configure-dev-box-service.md#provide-access-to-a-dev-box-project) for a project that has an available dev box pool. If you don't have permissions to a project, contact your administrator.
+- Permissions as a [Dev Box User](quickstart-configure-dev-box-service.md#6-provide-access-to-a-dev-box-project) for a project that has an available dev box pool. If you don't have permissions to a project, contact your administrator.
## Create a dev box
digital-twins How To Create Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-endpoints.md
These services are the supported types of endpoints that you can create for your
* [Event Hubs](../event-hubs/event-hubs-about.md) hub * [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) topic
->[!TIP]
-> For more information on the different endpoint types, see [Choose between Azure messaging services](../event-grid/compare-messaging-services.md).
To link an endpoint to Azure Digital Twins, the Event Grid topic, event hub, or Service Bus topic that you're using for the endpoint needs to exist already.
event-grid Compare Messaging Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/compare-messaging-services.md
- Title: Compare Azure messaging services
-description: Describes the three Azure messaging services - Azure Event Grid, Event Hubs, and Service Bus. Recommends which service to use for different scenarios.
- Previously updated : 11/01/2022--
-# Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus
-
-Azure offers three services that assist with delivering events or messages throughout a solution. These services are:
-
-- Azure Event Grid
-- Azure Event Hubs
-- Azure Service Bus
-
-Although they have some similarities, each service is designed for particular scenarios. This article describes the differences between these services, and helps you understand which one to choose for your application. In many cases, the messaging services are complementary and can be used together.
-
-## Event vs. message services
-There's an important distinction between services that deliver an event and services that deliver a message.
-
-### Event
-An event is a lightweight notification of a condition or a state change. The publisher of the event has no expectation about how the event is handled. The consumer of the event decides what to do with the notification. Events can be discrete units or part of a series.
-
-Discrete events report state change and are actionable. To take the next step, the consumer only needs to know that something happened. The event data has information about what happened but doesn't have the data that triggered the event. For example, an event notifies consumers that a file was created. It may have general information about the file, but it doesn't have the file itself. Discrete events are ideal for serverless solutions that need to scale.
-
-A series of events reports a condition and are analyzable. The events are time-ordered and interrelated. The consumer needs the sequenced series of events to analyze what happened.
-
-### Message
-A message is raw data produced by a service to be consumed or stored elsewhere. The message contains the data that triggered the message pipeline. The publisher of the message has an expectation about how the consumer handles the message. A contract exists between the two sides. For example, the publisher sends a message with the raw data, and expects the consumer to create a file from that data and send a response when the work is done.
--
-## Azure Event Grid
-Event Grid is an eventing backplane that enables event-driven, reactive programming. It uses the publish-subscribe model. Publishers emit events, but have no expectation about how the events are handled. Subscribers decide on which events they want to handle.
-
-Event Grid is deeply integrated with Azure services and can be integrated with third-party services. It simplifies event consumption and lowers costs by eliminating the need for constant polling. Event Grid efficiently and reliably routes events from Azure and non-Azure resources. It distributes the events to registered subscriber endpoints. The event message has the information you need to react to changes in services and applications. Event Grid isn't a data pipeline, and doesn't deliver the actual object that was updated.
-
-It has the following characteristics:
-- Dynamically scalable
-- Low cost
-- Serverless
-- At least once delivery of an event
-
-Event Grid is offered in two editions: **Azure Event Grid**, a fully managed PaaS service on Azure, and **Event Grid on Kubernetes with Azure Arc**, which lets you use Event Grid on your Kubernetes cluster wherever that is deployed, on-premises or on the cloud. For more information, see [Azure Event Grid overview](overview.md) and [Event Grid on Kubernetes with Azure Arc overview](./kubernetes/overview.md).
-
-## Azure Event Hubs
-Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. It facilitates the capture, retention, and replay of telemetry and event stream data. The data can come from many concurrent sources. Event Hubs allows telemetry and event data to be made available to various stream-processing infrastructures and analytics services. It's available either as data streams or bundled event batches. This service provides a single solution that enables rapid data retrieval for real-time processing, and repeated replay of stored raw data. It can capture the streaming data into a file for processing and analysis.
-
-It has the following characteristics:
-
-- Low latency
-- Can receive and process millions of events per second
-- At least once delivery of an event
-
-For more information, see [Event Hubs overview](../event-hubs/event-hubs-about.md).
-
-## Azure Service Bus
-Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics. The service is intended for enterprise applications that require transactions, ordering, duplicate detection, and instantaneous consistency. Service Bus enables cloud-native applications to provide reliable state transition management for business processes. When handling high-value messages that can't be lost or duplicated, use Azure Service Bus. This service also facilitates highly secure communication across hybrid cloud solutions and can connect existing on-premises systems to cloud solutions.
-
-Service Bus is a brokered messaging system. It stores messages in a "broker" (for example, a queue) until the consuming party is ready to receive the messages. It has the following characteristics:
-
-- Reliable asynchronous message delivery (enterprise messaging as a service) that requires polling
-- Advanced messaging features like first-in and first-out (FIFO), batching/sessions, transactions, dead-lettering, temporal control, routing and filtering, and duplicate detection
-- At least once delivery of a message
-- Optional ordered delivery of messages
-
-For more information, see [Service Bus overview](../service-bus-messaging/service-bus-messaging-overview.md).
-
-## Comparison of services
-
-| Service | Purpose | Type | When to use |
-| - | - | - | -- |
-| Event Grid | Reactive programming | Event distribution (discrete) | React to status changes |
-| Event Hubs | Big data pipeline | Event streaming (series) | Telemetry and distributed data streaming |
-| Service Bus | High-value enterprise messaging | Message | Order processing and financial transactions |
-
-## Use the services together
-In some cases, you use the services side by side to fulfill distinct roles. For example, an e-commerce site can use Service Bus to process the order, Event Hubs to capture site telemetry, and Event Grid to respond to events like an item was shipped.
-
-In other cases, you link them together to form an event and data pipeline. You use Event Grid to respond to events in the other services. For an example of using Event Grid with Event Hubs to migrate data to Azure Synapse Analytics, see [Stream big data into a Azure Synapse Analytics](event-hubs-integration.md). The following image shows the workflow for streaming the data.
--
-## Next steps
-See the following articles:
-- [Asynchronous messaging options in Azure](/azure/architecture/guide/technology-choices/messaging)
-- [Events, Data Points, and Messages - Choosing the right Azure messaging service for your data](https://azure.microsoft.com/blog/events-data-points-and-messages-choosing-the-right-azure-messaging-service-for-your-data/).
-
event-grid Event Hubs Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-hubs-integration.md
In the browser tab where you have the query window open, query the table in your
* For more information about setting up and running the sample, see [Event Hubs Capture and Event Grid sample](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo). * In this tutorial, you created an event subscription for the `CaptureFileCreated` event. For more information about this event and all the events supported by Azure Blob Storage, see [Azure Event Hubs as an Event Grid source](event-schema-event-hubs.md). * To learn more about the Event Hubs Capture feature, see [Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage](../event-hubs/event-hubs-capture-overview.md).
-* To learn about differences in the Azure messaging services, see [Choose between Azure services that deliver messages](compare-messaging-services.md).
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
Here are some of the key features of Azure Event Grid:
* **Built-in Events** - Get up and running quickly with resource-defined built-in events.
* **Custom Events** - Use Event Grid to route, filter, and reliably deliver custom events in your app.
-For a comparison of Event Grid, Event Hubs, and Service Bus, see [Choose between Azure services that deliver messages](compare-messaging-services.md).
## What can I do with Event Grid?
firewall Enable Top Ten And Flow Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/enable-top-ten-and-flow-trace.md
Title: Enable Top 10 flows and Flow trace logs in Azure Firewall
-description: Learn how to enable the Top 10 flows and Flow trace logs in Azure Firewall
+ Title: Enable Top flows and Flow trace logs in Azure Firewall
+description: Learn how to enable the Top flows and Flow trace logs in Azure Firewall
Last updated 03/27/2023
-# Enable Top 10 flows (preview) and Flow trace logs (preview) in Azure Firewall
+# Enable Top flows (preview) and Flow trace logs (preview) in Azure Firewall
Azure Firewall has two new diagnostics logs you can use to help monitor your firewall:

-- Top 10 flows
+- Top flows
- Flow trace
-## Top 10 flows
+## Top flows
-The Top 10 flows log (known in the industry as Fat Flows), shows the top connections that are contributing to the highest throughput through the firewall.
+The Top flows log (known in the industry as Fat Flows) shows the top connections that are contributing to the highest throughput through the firewall.
### Prerequisites
There are a few ways to verify the update was successful, but you can navigate t
2. Select **Queries**, then load **Azure Firewall Top Flow Logs** by hovering over the option and selecting **Load to editor**.
3. When the query loads, select **Run**.
- :::image type="content" source="media/enable-top-ten-and-flow-trace/top-ten-flow-log.png" alt-text="Screenshot showing the Top 10 flow log." lightbox="media/enable-top-ten-and-flow-trace/top-ten-flow-log.png":::
+ :::image type="content" source="media/enable-top-ten-and-flow-trace/top-ten-flow-log.png" alt-text="Screenshot showing the Top flow log." lightbox="media/enable-top-ten-and-flow-trace/top-ten-flow-log.png":::
## Flow trace
firewall Firewall Structured Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-structured-logs.md
New resource specific tables are now available in Diagnostic setting that allows
- [Application rule aggregation log](/azure/azure-monitor/reference/tables/azfwapplicationruleaggregation) - Contains aggregated Application rule log data for Policy Analytics.
- [Network rule aggregation log](/azure/azure-monitor/reference/tables/azfwnetworkruleaggregation) - Contains aggregated Network rule log data for Policy Analytics.
- [NAT rule aggregation log](/azure/azure-monitor/reference/tables/azfwnatruleaggregation) - Contains aggregated NAT rule log data for Policy Analytics.
-- [Top 10 flows log (preview)](/azure/azure-monitor/reference/tables/azfwfatflow) - The Top 10 Flows (Fat Flows) log shows the top connections that are contributing to the highest throughput through the firewall.
+- [Top flow log (preview)](/azure/azure-monitor/reference/tables/azfwfatflow) - The Top Flows (Fat Flows) log shows the top connections that are contributing to the highest throughput through the firewall.
- [Flow trace (preview)](/azure/azure-monitor/reference/tables/azfwflowtrace) - Contains flow information, flags, and the time period when the flows were recorded. You'll be able to see full flow information such as SYN, SYN-ACK, FIN, FIN-ACK, RST, INVALID (flows).

## Enable/disable structured logs
Additional KQL log queries were added to query structured firewall logs.
- For more information, see [Exploring the New Resource Specific Structured Logging in Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/exploring-the-new-resource-specific-structured-logging-in-azure/ba-p/3620530). -- To learn more about Azure Firewall logs and metrics, see [Azure Firewall logs and metrics](logs-and-metrics.md)
+- To learn more about Azure Firewall logs and metrics, see [Azure Firewall logs and metrics](logs-and-metrics.md)
firewall Logs And Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/logs-and-metrics.md
The following metrics are available for Azure Firewall:
- Monitor and alert if there are any latency or performance issues, so IT teams can proactively engage.
- There may be various reasons that can cause high latency in Azure Firewall.
+ - Various issues can cause high latency in Azure Firewall, for example, high CPU utilization, high throughput, or a possible networking issue.
- This metric does not measure end-to-end latency of a given network path. In other words, this latency health probe does not measure how much latency Azure Firewall adds.
+ This metric does not measure end-to-end latency of a given network path. In other words, this latency health probe does not measure how much latency Azure Firewall adds.
+ - When the latency metric is functioning as expected, a value of 0 appears in the metrics dashboard.
+ - As a reference, the average expected latency for a firewall is approximately 1 ms. This may vary depending on deployment size and environment.
## Next steps
hdinsight Hdinsight Overview Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-versioning.md
Title: Versioning introduction - Azure HDInsight
description: Learn how versioning works in Azure HDInsight. Previously updated : 09/26/2022 Last updated : 04/03/2023 # How versioning works in HDInsight
Resources in Azure are made available by a Resource provider. HDInsight Resource
HDInsight uses images to put together open-source software (OSS) components that can be deployed on a cluster. These images contain the base Ubuntu operating system and core components such as Spark, Hadoop, Kafka, HBase or Hive.
+For more information, see [How to check the image version](./view-hindsight-cluster-image-version.md).
+
## Versioning in HDInsight

Microsoft periodically upgrades the images and the HDInsight Resource provider to include new improvements and features.
key-vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/network-security.md
By default, when you create a new key vault, the Azure Key Vault firewall is dis
### Key Vault Firewall Enabled (Trusted Services Only)
-When you enable the Key Vault Firewall, you'll be given an option to 'Allow Trusted Microsoft Services to bypass this firewall.' The trusted services list does not cover every single Azure service. For example, Azure DevOps isn't on the trusted services list. **This does not imply that services that do not appear on the trusted services list not trusted or insecure.** The trusted services list encompasses services where Microsoft controls all of the code that runs on the service. Since users can write custom code in Azure services such as Azure DevOps, Microsoft does not provide the option to create a blanket approval for the service. Furthermore, just because a service appears on the trusted service list, doesn't mean it is allowed for all scenarios.
+When you enable the Key Vault Firewall, you'll be given an option to 'Allow Trusted Microsoft Services to bypass this firewall.' The trusted services list does not cover every single Azure service. For example, Azure DevOps isn't on the trusted services list. **This does not imply that services that do not appear on the trusted services list are not trusted or insecure.** The trusted services list encompasses services where Microsoft controls all of the code that runs on the service. Since users can write custom code in Azure services such as Azure DevOps, Microsoft does not provide the option to create a blanket approval for the service. Furthermore, just because a service appears on the trusted service list, doesn't mean it is allowed for all scenarios.
To determine if a service you're trying to use is on the trusted service list, see [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md#trusted-services). For a how-to guide, follow the instructions for the [Azure portal, Azure CLI, and PowerShell](how-to-azure-key-vault-network-security.md).
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
Previously updated : 02/28/2023 Last updated : 04/03/2023 tags: connectors
To send IDocs from SAP to your logic app workflow, you need the following minimu
> [!IMPORTANT] > Use these steps only when you test your SAP configuration with your logic app workflow. Production environments require additional configuration.
-1. [Configure an RFC destination in SAP.](#create-rfc-destination)
+1. [Create an RFC destination.](#create-rfc-destination)
-1. [Create an ABAP connection to your RFC destination.](#create-abap-connection)
+1. [Create an ABAP connection.](#create-abap-connection)
1. [Create a receiver port.](#create-receiver-port)
To send IDocs from SAP to your logic app workflow, you need the following minimu
#### Create RFC destination
+This destination will identify your logic app workflow for the receiver port.
+
1. To open the **Configuration of RFC Connections** settings, in your SAP interface, use the **sm59** transaction code (T-Code) with the **/n** prefix.

1. Select **TCP/IP Connections** > **Create**.
To send IDocs from SAP to your logic app workflow, you need the following minimu
#### Create ABAP connection
+This destination will identify your SAP system for the sender port.
+
1. To open the **Configuration of RFC Connections** settings, in your SAP interface, use the **sm59** transaction code (T-Code) with the **/n** prefix.

1. Select **ABAP Connections** > **Create**.
-1. For **RFC Destination**, enter the identifier for [your test SAP system](#create-rfc-destination).
+1. For **RFC Destination**, enter the identifier for your test SAP system.
+
+1. In the **Technical Settings**, leave the target host empty to create a local connection to the SAP system itself.
1. Save your changes.
For more information about the SAP connector, review the [connector reference](/
* [Connect to on-premises systems](logic-apps-gateway-connection.md) from Azure Logic Apps * Learn how to validate, transform, and use other message operations with the [Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md) * [Managed connectors for Azure Logic Apps](../connectors/managed.md)
-* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
+* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
Learn how to use an online endpoint to deploy your model, so you don't have to create and manage the underlying infrastructure. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure.
-You'll also learn how to view the logs and monitor the service-level agreement (SLA). You start with a model and end up with a scalable HTTPS/REST endpoint that you can use for online and real-time scoring.
+You'll also learn how to view the logs and monitor the service-level agreement (SLA). You start with a model and end up with a scalable HTTPS/REST endpoint that you can use for real-time scoring.
-Online endpoints are endpoints that are used for online (real-time) inferencing. There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. For more information on endpoints, and differences between managed online endpoints and Kubernetes online endpoints, see [What are Azure Machine Learning endpoints?](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
+Online endpoints are endpoints that are used for real-time inferencing. There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. For more information on endpoints and differences between managed online endpoints and Kubernetes online endpoints, see [What are Azure Machine Learning endpoints?](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
-Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure.
+Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure.
-The main example in this doc uses managed online endpoints for deployment. To use Kubernetes instead, see the notes in this document inline with the managed online endpoint discussion.
-
-> [!TIP]
-> To create managed online endpoints in the Azure Machine Learning studio, see [Use managed online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md).
+The main example in this doc uses managed online endpoints for deployment. To use Kubernetes instead, see the notes in this document that are inline with the managed online endpoint discussion.
## Prerequisites

# [Azure CLI](#tab/azure-cli)

+
[!INCLUDE [basic prereqs cli](../../includes/machine-learning-cli-prereqs.md)]

* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
-* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
-
- ```azurecli
- az account set --subscription <subscription ID>
- az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
- ```
- * (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option, so it's easier to debug issues.
-> [!IMPORTANT]
-> The examples in this document assume that you are using the Bash shell. For example, from a Linux system or [Windows Subsystem for Linux](/windows/wsl/about).
-
# [Python](#tab/python)

[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
The main example in this doc uses managed online endpoints for deployment. To us
* (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option, so it's easier to debug issues.
+# [Studio](#tab/azure-studio)
+
+Before following the steps in this article, make sure you have the following prerequisites:
+
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.
+
+* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+
# [ARM template](#tab/arm)

> [!NOTE]
The main example in this doc uses managed online endpoints for deployment. To us
* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
-* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
+
- ```azurecli
- az account set --subscription <subscription ID>
- az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
- ```
+### Virtual machine quota allocation for deployment
-> [!IMPORTANT]
-> The examples in this document assume that you are using the Bash shell. For example, from a Linux system or [Windows Subsystem for Linux](/windows/wsl/about).
+For managed online endpoints, Azure Machine Learning reserves 20% of your compute resources for performing upgrades. Therefore, if you request a given number of instances in a deployment, you must have a quota for `ceil(1.2*number of instances requested for deployment)* number of cores for the VM SKU` available to avoid getting an error. For example, if you request 10 instances of a [Standard_DS2_v2](/azure/virtual-machines/dv2-dsv2-series) VM (that comes with 2 cores) in a deployment, you should have a quota for 24 cores (`12 instances*2 cores`) available. To view your usage and request quota increases, see [View your usage and quotas in the Azure portal](how-to-manage-quotas.md#view-your-usage-and-quotas-in-the-azure-portal).
+<!-- In this tutorial, you'll request one instance of a Standard_DS2_v2 VM SKU (that comes with 2 cores) in your deployment; therefore, you should have a minimum quota for 4 cores (`2 instances*2 cores`) available. -->
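To make the quota arithmetic concrete, here's a minimal Python sketch of the rule (the function name is illustrative, not part of any SDK):

```python
import math

def required_quota_cores(instances: int, cores_per_instance: int) -> int:
    # Quota needed = ceil(1.2 * requested instances) * cores per VM SKU
    return math.ceil(1.2 * instances) * cores_per_instance

# 10 Standard_DS2_v2 instances (2 cores each) require 24 cores of quota
print(required_quota_cores(10, 2))  # 24
```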
## Prepare your system

# [Azure CLI](#tab/azure-cli)
-### Clone the sample repository
+### Set environment variables
+
+If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
+
+ ```azurecli
+ az account set --subscription <subscription ID>
+ az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
+ ```
+
+### Clone the examples repository
-To follow along with this article, first clone the [samples repository (azureml-examples)](https://github.com/azure/azureml-examples). Then, run the following code to go to the samples directory:
+To follow along with this article, first clone the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples). Then, run the following code to go to the repository's `cli/` directory:
```azurecli
git clone --depth 1 https://github.com/Azure/azureml-examples
cd cli
```
> [!TIP]
> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
-### Set an endpoint name
-
-To set your endpoint name, run the following command (replace `YOUR_ENDPOINT_NAME` with a unique name).
-
-For Unix, run this command:
-
+The commands in this tutorial are in the files `deploy-local-endpoint.sh` and `deploy-managed-online-endpoint.sh` in the `cli` directory, and the YAML configuration files are in the `endpoints/online/managed/sample/` subdirectory.
> [!NOTE]
-> Endpoint names must be unique within an Azure region. For example, in the Azure `westus2` region, there can be only one endpoint with the name `my-endpoint`.
+> The YAML configuration files for Kubernetes online endpoints are in the `endpoints/online/kubernetes/` subdirectory.
# [Python](#tab/python)
-### Clone the sample repository
+### Clone the examples repository
To run the training examples, first clone the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples) and change into the `azureml-examples/sdk/python/endpoints/online/managed` directory:
The information in this article is based on the [online-endpoints-simple-deploym
### Connect to Azure Machine Learning workspace
-The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks. To follow along, open your `online-endpoints-simple-deployment.ipynb` notebook.
1. Import the required libraries:
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
    ```python
    from azure.ai.ml import MLClient
    from azure.ai.ml.entities import (
        ManagedOnlineEndpoint,
        ManagedOnlineDeployment,
        Model,
        Environment,
        CodeConfiguration,
    )
    from azure.identity import DefaultAzureCredential
    ```
+ > [!NOTE]
+ > If you're using the Kubernetes online endpoint, import the `KubernetesOnlineEndpoint` and `KubernetesOnlineDeployment` classes from the `azure.ai.ml.entities` library.
+ 1. Configure workspace details and get a handle to the workspace: To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
) ```
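In case the full snippet isn't visible above, a minimal sketch of getting the workspace handle might look like this; the placeholder values are assumptions that you replace with your own details:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Get a handle to the workspace; replace the placeholders with your values
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)
```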
-# [ARM template](#tab/arm)
+# [Studio](#tab/azure-studio)
-### Clone the sample repository
+If you have Git installed on your local machine, you can follow the instructions to clone the examples repository. Otherwise, follow the instructions to download files from the examples repository.
-To follow along with this article, first clone the [samples repository (azureml-examples)](https://github.com/azure/azureml-examples). Then, run the following code to go to the samples directory:
+### Clone the examples repository
-```azurecli
+To follow along with this article, first clone the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples) and then change into the `azureml-examples/cli/endpoints/online/model-1` directory.
+
+```bash
git clone --depth 1 https://github.com/Azure/azureml-examples
-cd azureml-examples
+cd azureml-examples/cli/endpoints/online/model-1
```

> [!TIP]
> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
-### Set an endpoint name
+### Download files from the examples repository
-To set your endpoint name, run the following command (replace `YOUR_ENDPOINT_NAME` with a unique name).
+If you cloned the examples repo, your local machine already has copies of the files for this example, and you can skip to the next section. If you didn't clone the repo, you can download it to your local machine.
-For Unix, run this command:
+1. Go to [https://github.com/Azure/azureml-examples/](https://github.com/Azure/azureml-examples/).
+1. Select the **<> Code** button on the page, and then select **Download ZIP** from the **Local** tab.
+1. Locate the folder `/cli/endpoints/online/model-1/model` and the file `/cli/endpoints/online/model-1/onlinescoring/score.py`.
+# [ARM template](#tab/arm)
-> [!NOTE]
-> Endpoint names must be unique within an Azure region. For example, in the Azure `westus2` region, there can be only one endpoint with the name `my-endpoint`.
+### Set environment variables
-Also set the following environment variables, as they are used in the examples in this article. Replace the values with your Azure subscription ID, the Azure region where your workspace is located, the resource group that contains the workspace, and the workspace name:
+Set the following environment variables, as they're used in the examples in this article. Replace the values with your Azure subscription ID, the Azure region where your workspace is located, the resource group that contains the workspace, and the workspace name:
```bash
export SUBSCRIPTION_ID="your Azure subscription ID"
export RESOURCE_GROUP="Azure resource group that contains your workspace"
export WORKSPACE="Azure Machine Learning workspace name"
```
-A couple of the template examples require you to upload files to the Azure Blob store for your workspace. The following steps will query the workspace and store this information in environment variables used in the examples:
+A couple of the template examples require you to upload files to the Azure Blob store for your workspace. The following steps query the workspace and store this information in environment variables used in the examples:
1. Get an access token:
A couple of the template examples require you to upload files to the Azure Blob
:::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="get_storage_details":::
+### Clone the examples repository
+
+To follow along with this article, first clone the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples). Then, run the following code to go to the examples directory:
+
+```azurecli
+git clone --depth 1 https://github.com/Azure/azureml-examples
+cd azureml-examples
+```
+
+> [!TIP]
+> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
+
-## Define the endpoint and deployment
+## Define the endpoint
+
+To define an endpoint, you need to specify:
+
+* Endpoint name: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+* Authentication mode: The authentication method for the endpoint. Choose between key-based authentication and Azure Machine Learning token-based authentication. A key doesn't expire, but a token does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
+* Optionally, you can add a description and tags to your endpoint.
# [Azure CLI](#tab/azure-cli)
-The following snippet shows the *endpoints/online/managed/sample/endpoint.yml* file:
+### Set an endpoint name
+To set your endpoint name, run the following command (replace `YOUR_ENDPOINT_NAME` with a unique name).
-> [!NOTE]
-> For a full description of the YAML, see [Online endpoint YAML reference](reference-yaml-endpoint-online.md).
+For Linux, run this command:
-The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the YAML example in [Prepare your system](#prepare-your-system) or the [online endpoint YAML reference](reference-yaml-endpoint-online.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+
+### Configure the endpoint
+
+The following snippet shows the *endpoints/online/managed/sample/endpoint.yml* file:
++
+The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
| Key | Description |
| -- | -- |
-| `$schema` | (Optional) The YAML schema. To see all available options in the YAML file, you can view the schema in the preceding example in a browser. |
-| `name` | The name of the endpoint. It must be unique in the Azure region.<br>Naming rules are defined under [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
-| `auth_mode` | Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. `key` doesn't expire, but `aml_token` does expire. (Get the most recent token by using the `az ml online-endpoint get-credentials` command.) |
+| `$schema` | (Optional) The YAML schema. To see all available options in the YAML file, you can view the schema in the preceding code snippet in a browser. |
+| `name` | The name of the endpoint. |
+| `auth_mode` | Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. To get the most recent token, use the `az ml online-endpoint get-credentials` command. |
+
+# [Python](#tab/python)
+
+### Configure an endpoint
+
+In this article, we first define the name of the online endpoint.
+
+```python
+# Define an endpoint name
+endpoint_name = "my-endpoint"
+
+# Example way to define a random name
+import datetime
+
+endpoint_name = "endpt-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+
+# create an online endpoint
+endpoint = ManagedOnlineEndpoint(
+    name=endpoint_name,
+    description="this is a sample endpoint",
+    auth_mode="key",
+)
+```
+
+For the authentication mode, we've used `key` for key-based authentication. To use Azure Machine Learning token-based authentication, use `aml_token`.
+
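When you're ready to create the endpoint in your workspace, the calls might look like the following sketch; it assumes the `ml_client` handle from the earlier step and key-based authentication:

```python
# Create or update the endpoint and wait for the operation to finish
endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# With key-based authentication, you can retrieve the keys afterwards
keys = ml_client.online_endpoints.get_keys(name=endpoint_name)
print(keys.primary_key)
```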
+# [Studio](#tab/azure-studio)
-The example contains all the files needed to deploy a model on an online endpoint. To deploy a model, you must have:
+### Configure an endpoint
+
+When you deploy to Azure, you'll create an endpoint and a deployment to add to it. At that time, you'll be prompted to provide names for the endpoint and deployment.
+
+# [ARM template](#tab/arm)
+
+### Set an endpoint name
+
+To set your endpoint name, run the following command (replace `YOUR_ENDPOINT_NAME` with a unique name).
+
+For Linux, run this command:
++
+### Configure the endpoint
+
+To define the endpoint and deployment, this article uses the Azure Resource Manager templates [online-endpoint.json](https://github.com/Azure/azureml-examples/tree/main/arm-templates/online-endpoint.json) and [online-endpoint-deployment.json](https://github.com/Azure/azureml-examples/tree/main/arm-templates/online-endpoint-deployment.json). To use the templates for defining an online endpoint and deployment, see the [Deploy to Azure](#deploy-to-azure) section.
+++
+## Define the deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. To deploy a model, you must have:
- Model files (or the name and version of a model that's already registered in your workspace). In the example, we have a scikit-learn model that does regression.
-- The code that's required to score the model. In this case, we have a *score.py* file.
-- An environment in which your model runs. As you'll see, the environment might be a Docker image with Conda dependencies, or it might be a Dockerfile.
+- A scoring script, that is, code that executes the model on a given input request. The scoring script receives data submitted to a deployed web service and passes it to the model. The script then executes the model and returns its response to the client. The scoring script is specific to your model and must understand the data that the model expects as input and returns as output. In this example, we have a *score.py* file.
+- An environment in which your model runs. The environment can be a Docker image with Conda dependencies or a Dockerfile.
- Settings to specify the instance type and scaling capacity.
-The following snippet shows the *endpoints/online/managed/sample/blue-deployment.yml* file, with all the required inputs:
+The following table describes the key attributes of a deployment:
+| Attribute | Description |
+|--|-|
+| Name | The name of the deployment. |
+| Endpoint name | The name of the endpoint to create the deployment under. |
+| Model | The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. |
+| Code path | The path to the directory on the local development environment that contains all the Python source code for scoring the model. You can use nested directories and packages. |
+| Scoring script | The relative path to the scoring file in the source code directory. This Python code must have an `init()` function and a `run()` function. The `init()` function will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. A minimal sketch of this contract follows the table. |
+| Environment | The environment to host the model and code. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. |
+| Instance type | The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). |
+| Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
+
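As an illustration of the `init()`/`run()` contract described in the table, a minimal scoring script might look like the following sketch; the model file name, folder layout, and JSON payload shape are assumptions based on this article's scikit-learn example:

```python
# score.py - minimal sketch of the init()/run() contract
import json
import os

import joblib

model = None

def init():
    # Called once when the deployment starts; cache the model in memory
    global model
    model_dir = os.getenv("AZUREML_MODEL_DIR", ".")
    model = joblib.load(
        os.path.join(model_dir, "model", "sklearn_regression_model.pkl")
    )

def run(raw_data):
    # Called on every invocation of the endpoint
    data = json.loads(raw_data)["data"]
    return model.predict(data).tolist()
```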
+# [Azure CLI](#tab/azure-cli)
-The table describes the attributes of a `deployment`:
+### Configure a deployment
-| Key | Description |
-| -- | -- |
-| `name` | The name of the deployment. |
-| `model` | In this example, we specify the model properties inline: `path`. Model files are automatically uploaded and registered with an autogenerated name. For related best practices, see the tip in the next section. |
-| `code_configuration.code.path` | The directory on the local development environment that contains all the Python source code for scoring the model. You can use nested directories and packages. |
-| `code_configuration.scoring_script` | The Python file that's in the `code_configuration.code.path` scoring directory on the local development environment. This Python code must have an `init()` function and a `run()` function. The function `init()` will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. |
-| `environment` | Contains the details of the environment to host the model and code. In this example, we have inline definitions that include the`path`. We'll use `environment.docker.image` for the image. The `conda_file` dependencies will be installed on top of the image. For more information, see the tip in the next section. |
-| `instance_type` | The VM SKU that will host your deployment instances. For more information, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). |
-| `instance_count` | The number of instances in the deployment. Base the value on the workload you expect. For high availability, we recommend that you set `instance_count` to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
+The following snippet shows the *endpoints/online/managed/sample/blue-deployment.yml* file, with all the required inputs to configure a deployment:
++
+> [!NOTE]
+> In the _blue-deployment.yml_ file, we've specified the following deployment attributes:
+> * `model` - In this example, we specify the model properties inline using the `path`. Model files are automatically uploaded and registered with an autogenerated name.
+> * `environment` - In this example, we have inline definitions that include the `path`. We'll use `environment.docker.image` for the image. The `conda_file` dependencies will be installed on top of the image.
During deployment, the local files, such as the Python source for the scoring model, are uploaded from the development environment.
For more information about the YAML schema, see the [online endpoint YAML refere
# [Python](#tab/python)
-In this article, we first define names of online endpoint and deployment for debug locally.
-
-1. Define endpoint (with name for local endpoint):
- ```python
- # Creating a local endpoint
- import datetime
+### Configure a deployment
- local_endpoint_name = "local-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+To configure a deployment:
- # create an online endpoint
- endpoint = ManagedOnlineEndpoint(
- name=local_endpoint_name, description="this is a sample local endpoint"
- )
- ```
-
-1. Define deployment (with name for local deployment)
-
- The example contains all the files needed to deploy a model on an online endpoint. To deploy a model, you must have:
+```python
+model = Model(path="../model-1/model/sklearn_regression_model.pkl")
+env = Environment(
+ conda_file="../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
+)
- * Model files (or the name and version of a model that's already registered in your workspace). In the example, we have a scikit-learn model that does regression.
- * The code that's required to score the model. In this case, we have a score.py file.
- * An environment in which your model runs. As you'll see, the environment might be a Docker image with Conda dependencies, or it might be a Dockerfile.
- * Settings to specify the instance type and scaling capacity.
+blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=endpoint_name,
+ model=model,
+ environment=env,
+ code_configuration=CodeConfiguration(
+ code="../model-1/onlinescoring", scoring_script="score.py"
+ ),
+ instance_type="Standard_DS3_v2",
+ instance_count=1,
+)
+```
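When you later deploy to Azure, creating the deployment from this configuration might look like the following sketch; it assumes the `ml_client` handle and the `endpoint` object defined earlier:

```python
# Create the deployment and wait for it to complete
ml_client.online_deployments.begin_create_or_update(blue_deployment).result()

# Route all traffic to the new deployment
endpoint.traffic = {"blue": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```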
- **Key aspects of deployment**
- * `name` - Name of the deployment.
- * `endpoint_name` - Name of the endpoint to create the deployment under.
- * `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
- * `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
- * `code_configuration` - the configuration for the source code and scoring script
- * `path`- Path to the source code directory for scoring the model
- * `scoring_script` - Relative path to the scoring file in the source code directory
- * `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
- * `instance_count` - The number of instances to use for the deployment
+# [Studio](#tab/azure-studio)
- ```python
- model = Model(path="../model-1/model/sklearn_regression_model.pkl")
- env = Environment(
- conda_file="../model-1/environment/conda.yml",
- image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
- )
+### Configure a deployment
- blue_deployment = ManagedOnlineDeployment(
- name="blue",
- endpoint_name=local_endpoint_name,
- model=model,
- environment=env,
- code_configuration=CodeConfiguration(
- code="../model-1/onlinescoring", scoring_script="score.py"
- ),
- instance_type="Standard_DS2_v2",
- instance_count=1,
- )
- ```
+When you deploy to Azure, you'll create an endpoint and a deployment to add to it. At that time, you'll be prompted to provide names for the endpoint and deployment.
# [ARM template](#tab/arm)
-The Azure Resource Manager templates [online-endpoint.json](https://github.com/Azure/azureml-examples/tree/main/arm-templates/online-endpoint.json) and [online-endpoint-deployment.json](https://github.com/Azure/azureml-examples/tree/main/arm-templates/online-endpoint-deployment.json) are used by the steps in this article.
+### Configure the deployment
+
+To define the endpoint and deployment, this article uses the Azure Resource Manager templates [online-endpoint.json](https://github.com/Azure/azureml-examples/tree/main/arm-templates/online-endpoint.json) and [online-endpoint-deployment.json](https://github.com/Azure/azureml-examples/tree/main/arm-templates/online-endpoint-deployment.json). To use the templates for defining an online endpoint and deployment, see the [Deploy to Azure](#deploy-to-azure) section.
In this example, we specify the `path` (where to upload files from) inline. The
For registration, you can extract the YAML definitions of `model` and `environment` into separate YAML files and use the commands `az ml model create` and `az ml environment create`. To learn more about these commands, run `az ml model create -h` and `az ml environment create -h`.
+For more information on registering your model as an asset, see [Register your model as an asset in Machine Learning by using the CLI](how-to-manage-models.md#register-your-model-as-an-asset-in-machine-learning-by-using-the-cli). For more information on creating an environment, see [Manage Azure Machine Learning environments with the CLI & SDK (v2)](how-to-manage-environments-v2.md#create-an-environment).
+ # [Python](#tab/python)
-In this example, we specify the `path` (where to upload files from) inline. The SDK automatically uploads the files and registers the model and environment. As a best practice for production, you should register the model and environment and specify the registered name and version separately in the codes.
+In this example, we specify the `path` (where to upload files from) inline. The SDK automatically uploads the files and registers the model and environment. As a best practice for production, you should register the model and environment and specify the registered name and version separately in your code.
+
+For more information on registering your model as an asset, see [Register your model as an asset in Machine Learning by using the SDK](how-to-manage-models.md#register-your-model-as-an-asset-in-machine-learning-by-using-the-sdk).
+
+For more information on creating an environment, see [Manage Azure Machine Learning environments with the CLI & SDK (v2)](how-to-manage-environments-v2.md#create-an-environment).
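A sketch of that best practice with the SDK might look like the following; the asset names `my-model` and `my-env` are placeholders:

```python
from azure.ai.ml.entities import Environment, Model

# Register the model and environment once...
registered_model = ml_client.models.create_or_update(
    Model(path="../model-1/model/sklearn_regression_model.pkl", name="my-model")
)
registered_env = ml_client.environments.create_or_update(
    Environment(
        name="my-env",
        conda_file="../model-1/environment/conda.yml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    )
)

# ...then reference them by name:version in later deployments
print(f"{registered_model.name}:{registered_model.version}")
```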
+
+# [Studio](#tab/azure-studio)
+
+### Register the model
+
+A model registration is a logical entity in the workspace that may contain a single model file or a directory of multiple files. As a best practice for production, you should register the model and environment. When creating the endpoint and deployment in this article, we'll assume that you've registered the [model folder](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) that contains the model.
+
+To register the example model, follow these steps:
-For more information on registering your model as an asset, see [Register your model as an asset in Machine Learning by using the SDK](how-to-manage-models.md#register-your-model-as-an-asset-in-machine-learning-by-using-the-sdk)
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Models** page.
+1. Select **Register**, and then choose **From local files**.
+1. Select __Unspecified type__ for the __Model type__.
+1. Select __Browse__, and choose __Browse folder__.
-For more information on creating an environment, see
-[Manage Azure Machine Learning environments with the CLI & SDK (v2)](how-to-manage-environments-v2.md#create-an-environment)
+ :::image type="content" source="media/how-to-deploy-online-endpoints/register-model-folder.png" alt-text="A screenshot of the browse folder option." lightbox="media/how-to-deploy-online-endpoints/register-model-folder.png":::
+
+1. Select the `\azureml-examples\cli\endpoints\online\model-1\model` folder from the local copy of the repo you cloned or downloaded earlier. When prompted, select __Upload__ and wait for the upload to complete.
+1. Select __Next__ after the folder upload is completed.
+1. Enter a friendly __Name__ for the model. The steps in this article assume the model is named `model-1`.
+1. Select __Next__, and then __Register__ to complete registration.
+
+For more information on working with registered models, see [Register and work with models](how-to-manage-models.md).
+
+For information on creating an environment in the studio, see [Create an environment](how-to-manage-environments-in-studio.md#create-an-environment).
# [ARM template](#tab/arm)
For more information on creating an environment, see
-### Use different CPU and GPU instance types
+### Use different CPU and GPU instance types and images
+
+# [Azure CLI](#tab/azure-cli)
+
+The preceding definition in the _blue-deployment.yml_ file uses a general-purpose instance type (`Standard_DS2_v2`) and a non-GPU Docker image (`mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest`). For GPU compute, choose a GPU compute type SKU and a GPU Docker image.
+
+For supported general-purpose and GPU instance types, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). For a list of Azure Machine Learning CPU and GPU base images, see [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers).
+
+> [!NOTE]
+> To use Kubernetes instead of managed endpoints as a compute target, see [Introduction to Kubernetes compute target](./how-to-attach-kubernetes-anywhere.md).
+
+# [Python](#tab/python)
-The preceding YAML uses a general-purpose type (`Standard_DS2_v2`) and a non-GPU Docker image (in the YAML, see the `image` attribute). For GPU compute, choose a GPU compute type SKU and a GPU Docker image.
+The preceding definition of the `blue_deployment` uses a general-purpose instance type (`Standard_DS2_v2`) and a non-GPU Docker image (`mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest`). For GPU compute, choose a GPU compute type SKU and a GPU Docker image.
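
For illustration, here's a minimal sketch of a GPU variant of this deployment, assuming the `model` object and code paths defined earlier. The SKU and image are illustrative examples drawn from the supported lists linked below; verify their availability in your region before using them:

```python
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
)

# A GPU variant of the deployment. The SKU and image are illustrative choices.
gpu_deployment = ManagedOnlineDeployment(
    name="blue-gpu",
    endpoint_name=endpoint_name,
    model=model,
    environment=Environment(
        conda_file="../model-1/environment/conda.yml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.1-cudnn8-ubuntu20.04:latest",
    ),
    code_configuration=CodeConfiguration(
        code="../model-1/onlinescoring", scoring_script="score.py"
    ),
    instance_type="Standard_NC6s_v3",
    instance_count=1,
)
```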
For supported general-purpose and GPU instance types, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). For a list of Azure Machine Learning CPU and GPU base images, see [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers). > [!NOTE]
-> To use Kubernetes instead of managed endpoints as a compute target, see [Introduction to Kubernetes compute target](./how-to-attach-kubernetes-anywhere.md)
+> To use Kubernetes instead of managed endpoints as a compute target, see [Introduction to Kubernetes compute target](./how-to-attach-kubernetes-anywhere.md).
+
+# [Studio](#tab/azure-studio)
+
+When using the studio to deploy to Azure, you'll be prompted to specify the compute properties (instance type and instance count) and environment to use for your deployment.
+
+For supported general-purpose and GPU instance types, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). For more information on environments, see [Manage software environments in Azure Machine Learning studio](how-to-manage-environments-in-studio.md).
+
+# [ARM template](#tab/arm)
+
+The preceding registration of the environment specifies a non-GPU Docker image `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1` by passing the value to the `environment-version.json` template using the `dockerImage` parameter. For GPU compute, provide a value for a GPU Docker image to the template (using the `dockerImage` parameter) and provide a GPU compute type SKU to the `online-endpoint-deployment.json` template (using the `skuName` parameter).
+
+For supported general-purpose and GPU instance types, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). For a list of Azure Machine Learning CPU and GPU base images, see [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers).
++
-### Use more than one model
+### Use more than one model in a deployment
-Currently, you can specify only one model per deployment in the YAML. If you have more than one model, when you register the model, copy all the models as files or subdirectories into a folder that you use for registration. In your scoring script, use the environment variable `AZUREML_MODEL_DIR` to get the path to the model root folder. The underlying directory structure is retained. For an example of deploying multiple models to one deployment, see [Deploy multiple models to one deployment](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/multimodel).
+Currently, you can specify only one model per deployment in the deployment definition when you use the Azure CLI, Python SDK, or any of the other client tools.
+
+To use more than one model in a deployment, register a model folder that contains all the models as files or subdirectories. In your scoring script, use the environment variable `AZUREML_MODEL_DIR` to get the path to the model root folder. The underlying directory structure will be retained. For examples of deploying multiple models to one deployment, see [Deploy multiple models to one deployment (CLI example)](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/multimodel) and [Deploy multiple models to one deployment (SDK example)](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/custom-container/online-endpoints-custom-container-multimodel.ipynb).
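
To illustrate, the following sketch shows an `init()` function that loads two models from the registered folder. The file names `model_a.pkl` and `model_b.pkl` are hypothetical placeholders:

```python
import os
import pickle


def init():
    global model_a, model_b
    # AZUREML_MODEL_DIR points to the root of the registered model folder;
    # the directory structure you registered is retained beneath it.
    model_dir = os.environ["AZUREML_MODEL_DIR"]
    with open(os.path.join(model_dir, "model_a.pkl"), "rb") as f:
        model_a = pickle.load(f)
    with open(os.path.join(model_dir, "model_b.pkl"), "rb") as f:
        model_b = pickle.load(f)
```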
> [!TIP]
> If you have more than 1500 files to register, you may consider compressing the files or subdirectories as .tar.gz when registering the model. To consume the models, you can uncompress the files or subdirectories in the `init()` function of the scoring script. Alternatively, when you register the model, set the `azureml.unpack` property to `True`, which will allow automatic uncompression. In either case, uncompression happens once in the initialization stage.

++

## Understand the scoring script

> [!TIP]
> The format of the scoring script for online endpoints is the same format that's used in the preceding version of the CLI and in the Python SDK.

# [Azure CLI](#tab/azure-cli)
-As noted earlier, the script specified in `code_configuration.scoring_script` must have an `init()` function and a `run()` function.
+As noted earlier, the scoring script specified in `code_configuration.scoring_script` must have an `init()` function and a `run()` function.
# [Python](#tab/python)
-As noted earlier, the script specified in `CodeConfiguration(scoring_script="score.py")` must have an `init()` function and a `run()` function.
+The scoring script must have an `init()` function and a `run()` function.
+
+# [Studio](#tab/azure-studio)
+The scoring script must have an `init()` function and a `run()` function.
# [ARM template](#tab/arm)
-As noted earlier, the script specified in `code_configuration.scoring_script` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py).
+The scoring script must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py).
When using a template for deployment, you must first upload the scoring file(s) to an Azure Blob store, and then register it:
This example uses the [score.py file](https://github.com/Azure/azureml-examples/
__score.py__ :::code language="python" source="~/azureml-examples-main/cli/endpoints/online/model-1/onlinescoring/score.py" :::
-The `init()` function is called when the container is initialized or started. Initialization typically occurs shortly after the deployment is created or updated. Write logic here for global initialization operations like caching the model in memory (as we do in this example). The `run()` function is called for every invocation of the endpoint and should do the actual scoring and prediction. In the example, we extract the data from the JSON input, call the scikit-learn model's `predict()` method, and then return the result.
+The `init()` function is called when the container is initialized or started. Initialization typically occurs shortly after the deployment is created or updated. The `init` function is the place to write logic for global initialization operations like caching the model in memory (as we do in this example).
-## Deploy and debug locally by using local endpoints
+The `run()` function is called for every invocation of the endpoint, and it does the actual scoring and prediction. In this example, we'll extract data from a JSON input, call the scikit-learn model's `predict()` method, and then return the result.
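
For orientation, here's a condensed sketch of that pattern, assuming the scikit-learn model file from this article's example (registered under a `model` subfolder); adjust the path to match your own registration:

```python
import json
import os

import joblib
import numpy as np


def init():
    global model
    # AZUREML_MODEL_DIR points to the root of the registered model folder.
    model_path = os.path.join(
        os.environ["AZUREML_MODEL_DIR"], "model", "sklearn_regression_model.pkl"
    )
    # Cache the model in memory once, when the container starts.
    model = joblib.load(model_path)


def run(raw_data):
    # Extract the input matrix from the JSON payload, score it, and return
    # a JSON-serializable result.
    data = np.array(json.loads(raw_data)["data"])
    return model.predict(data).tolist()
```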
-To save time debugging, we *highly recommend* that you test-run your endpoint locally. For more, see [Debug online endpoints locally in Visual Studio Code](how-to-debug-managed-online-endpoints-visual-studio-code.md).
+## Deploy and debug locally by using local endpoints
-> [!NOTE]
-> * To deploy locally, [Docker Engine](https://docs.docker.com/engine/install/) must be installed.
-> * Docker Engine must be running. Docker Engine typically starts when the computer starts. If it doesn't, you can [troubleshoot Docker Engine](https://docs.docker.com/config/daemon/#start-the-daemon-manually).
+We *highly recommend* that you test-run your endpoint locally to validate and debug your code and configuration before you deploy to Azure. The Azure CLI and Python SDK support local endpoints and deployments, while Azure Machine Learning studio and ARM templates don't.
-> [!IMPORTANT]
-> The goal of a local endpoint deployment is to validate and debug your code and configuration before you deploy to Azure. Local deployment has the following limitations:
-> - Local endpoints do *not* support traffic rules, authentication, or probe settings.
-> - Local endpoints support only one deployment per endpoint.
-> - Local endpoints do *not* support registered models. To use models already registered, you can download them using [CLI](/cli/azure/ml/model#az-ml-model-download) or [SDK](/python/api/azure-ai-ml/azure.ai.ml.operations.modeloperations#azure-ai-ml-operations-modeloperations-download) and refer to them in the deployment definition.
+To deploy locally, [Docker Engine](https://docs.docker.com/engine/install/) must be installed and running. Docker Engine typically starts when the computer starts. If it doesn't, you can [troubleshoot Docker Engine](https://docs.docker.com/config/daemon/#start-the-daemon-manually).
> [!TIP] > You can use [Azure Machine Learning inference HTTP server Python package](how-to-inference-server-http.md) to debug your scoring script locally **without Docker Engine**. Debugging with the inference server helps you to debug the scoring script before deploying to local endpoints so that you can debug without being affected by the deployment container configurations.
+Local endpoints have the following limitations:
+- They do *not* support traffic rules, authentication, or probe settings.
+- They support only one deployment per endpoint.
+- They support local model files only. If you want to test registered models, first download them using [CLI](/cli/azure/ml/model#az-ml-model-download) or [SDK](/python/api/azure-ai-ml/azure.ai.ml.operations.modeloperations#azure-ai-ml-operations-modeloperations-download), then use `path` in the deployment definition to refer to the parent folder.
+
+For more information on debugging online endpoints locally before deploying to Azure, see [Debug online endpoints locally in Visual Studio Code](how-to-debug-managed-online-endpoints-visual-studio-code.md).
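
For instance, here's a sketch of downloading a registered model so a local deployment can reference it by path, as mentioned in the limitations above. It assumes the `ml_client` object from earlier in this article; the model name, version, and download location are illustrative:

```python
# Download the registered model to a local folder.
download_path = "./local_models"
ml_client.models.download(name="model-1", version="1", download_path=download_path)

# In the local deployment definition, point `path` at the downloaded parent folder:
# model = Model(path="./local_models/model-1")
```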
### Deploy the model locally

First, create an endpoint. Optionally, for a local endpoint, you can skip this step and directly create the deployment (next step), which will, in turn, create the required metadata. Deploying models locally is useful for development and testing purposes.
First create an endpoint. Optionally, for a local endpoint, you can skip this st
ml_client.online_endpoints.begin_create_or_update(endpoint, local=True) ```
+# [Studio](#tab/azure-studio)
+
+The studio doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
+ # [ARM template](#tab/arm) The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
ml_client.online_deployments.begin_create_or_update(
The `local=True` flag directs the SDK to deploy the endpoint in the Docker environment.
+# [Studio](#tab/azure-studio)
+
+The studio doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
+ # [ARM template](#tab/arm) The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
The output should appear similar to the following JSON. The `provisioning_state`
# [Python](#tab/python) ```python
-ml_client.online_endpoints.get(name=local_endpoint_name, local=True)
+ml_client.online_endpoints.get(name=endpoint_name, local=True)
``` The method returns [`ManagedOnlineEndpoint` entity](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint). The `provisioning_state` is `Succeeded`. ```python
-ManagedOnlineEndpoint({'public_network_access': None, 'provisioning_state': 'Succeeded', 'scoring_uri': 'http://localhost:49158/score', 'swagger_uri': None, 'name': 'local-10061534497697', 'description': 'this is a sample local endpoint', 'tags': {}, 'properties': {}, 'id': None, 'Resource__source_path': None, 'base_path': '/path/to/your/working/directory', 'creation_context': None, 'serialize': <msrest.serialization.Serializer object at 0x7ffb781bccd0>, 'auth_mode': 'key', 'location': 'local', 'identity': None, 'traffic': {}, 'mirror_traffic': {}, 'kind': None})
+ManagedOnlineEndpoint({'public_network_access': None, 'provisioning_state': 'Succeeded', 'scoring_uri': 'http://localhost:49158/score', 'swagger_uri': None, 'name': 'endpt-10061534497697', 'description': 'this is a sample endpoint', 'tags': {}, 'properties': {}, 'id': None, 'Resource__source_path': None, 'base_path': '/path/to/your/working/directory', 'creation_context': None, 'serialize': <msrest.serialization.Serializer object at 0x7ffb781bccd0>, 'auth_mode': 'key', 'location': 'local', 'identity': None, 'traffic': {}, 'mirror_traffic': {}, 'kind': None})
```
+# [Studio](#tab/azure-studio)
+
+The studio doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
+ # [ARM template](#tab/arm) The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
Invoke the endpoint to score the model by using the convenience command invoke a
```python ml_client.online_endpoints.invoke(
- endpoint_name=local_endpoint_name,
+ endpoint_name=endpoint_name,
request_file="../model-1/sample-request.json", local=True, )
endpoint = ml_client.online_endpoints.get(endpoint_name)
scoring_uri = endpoint.scoring_uri ```
+# [Studio](#tab/azure-studio)
+
+The studio doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
+ # [ARM template](#tab/arm) The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
You can view this output by using the `get_logs` method:
```python ml_client.online_deployments.get_logs(
- name="blue", endpoint_name=local_endpoint_name, local=True, lines=50
+ name="blue", endpoint_name=endpoint_name, local=True, lines=50
) ```
+# [Studio](#tab/azure-studio)
+
+The studio doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
+ # [ARM template](#tab/arm) The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
This deployment might take up to 15 minutes, depending on whether the underlying
> * If you prefer not to block your CLI console, you may add the flag `--no-wait` to the command. However, this will stop the interactive display of the deployment status. > [!IMPORTANT]
-> The `--all-traffic` flag in the above `az ml online-deployment create` allocates 100% of the traffic to the endpoint to the newly created deployment. Though this is helpful for development and testing purposes, for production, you might want to open traffic to the new deployment through an explicit command. For example,
-> `az ml online-endpoint update -n $ENDPOINT_NAME --traffic "blue=100"`
+> The `--all-traffic` flag in the above `az ml online-deployment create` allocates 100% of the endpoint traffic to the newly created blue deployment. Though this is helpful for development and testing purposes, for production, you might want to open traffic to the new deployment through an explicit command. For example, `az ml online-endpoint update -n $ENDPOINT_NAME --traffic "blue=100"`.
# [Python](#tab/python)
-1. Configure online endpoint:
-
- > [!TIP]
- > * `endpoint_name`: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
- > * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
- > * Optionally, you can add description, tags to your endpoint.
-
- ```python
- # Creating a unique endpoint name with current datetime to avoid conflicts
- import datetime
-
- online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")
-
- # create an online endpoint
- endpoint = ManagedOnlineEndpoint(
- name=online_endpoint_name,
- description="this is a sample online endpoint",
- auth_mode="key",
- tags={"foo": "bar"},
- )
- ```
- 1. Create the endpoint:
- Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
+ Using the `endpoint` we defined earlier and the `MLClient` created earlier, we'll now create the endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
```python ml_client.online_endpoints.begin_create_or_update(endpoint) ```
-2. Configure online deployment:
+1. Create the deployment:
- A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class.
-
- ```python
- model = Model(path="../model-1/model/sklearn_regression_model.pkl")
- env = Environment(
- conda_file="../model-1/environment/conda.yml",
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
- )
-
- blue_deployment = ManagedOnlineDeployment(
- name="blue",
- endpoint_name=online_endpoint_name,
- model=model,
- environment=env,
- code_configuration=CodeConfiguration(
- code="../model-1/onlinescoring", scoring_script="score.py"
- ),
- instance_type="Standard_DS2_v2",
- instance_count=1,
- )
- ```
-
-3. Create the deployment:
-
- Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+ Using the `blue_deployment` that we defined earlier and the `MLClient` we created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
```python ml_client.online_deployments.begin_create_or_update(blue_deployment)
This deployment might take up to 15 minutes, depending on whether the underlying
ml_client.online_endpoints.begin_create_or_update(endpoint) ```
+# [Studio](#tab/azure-studio)
+
+### Create a managed online endpoint and deployment
+
+Use the studio to create a managed online endpoint directly in your browser. When you create a managed online endpoint in the studio, you must define an initial deployment. You can't create an empty managed online endpoint.
+
+One way to create a managed online endpoint in the studio is from the **Models** page. This method also provides an easy way to add a model to an existing managed online deployment. To deploy the model named `model-1` that you registered previously in the [Register the model](#register-the-model) section:
+
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Models** page.
+1. Select the model named `model-1` by checking the circle next to its name.
+1. Select **Deploy** > **Deploy to real-time endpoint**.
+
+ :::image type="content" source="media/how-to-deploy-online-endpoints/deploy-from-models-page.png" lightbox="media/how-to-deploy-online-endpoints/deploy-from-models-page.png" alt-text="A screenshot of creating a managed online endpoint from the Models UI.":::
+
+ This action opens up a window where you can specify details about your endpoint.
+
+ :::image type="content" source="media/how-to-deploy-online-endpoints/online-endpoint-wizard.png" lightbox="media/how-to-deploy-online-endpoints/online-endpoint-wizard.png" alt-text="A screenshot of a managed online endpoint create wizard.":::
+
+1. Enter an __Endpoint name__.
+
+ > [!NOTE]
+ > * Endpoint name: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+ > * Authentication type: The authentication method for the endpoint. Choose between key-based authentication and Azure Machine Learning token-based authentication. A `key` doesn't expire, but a token does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
+ > * Optionally, you can add a description and tags to your endpoint.
+
+1. Keep the default selections: __Managed__ for the compute type and __key-based authentication__ for the authentication type.
+1. Select __Next__, until you get to the "Deployment" page. Here, check the box for __Enable Application Insights diagnostics and data collection__ so that you can view graphs of your endpoint's activities in the studio later.
+1. Select __Next__ to go to the "Environment" page. Here, select the following options:
+
+ * __Select scoring file and dependencies__: Browse and select the `\azureml-examples\cli\endpoints\online\model-1\onlinescoring\score.py` file from the repo you cloned or downloaded earlier.
+ * __Choose an environment__ section: Select the **Scikit-learn 0.24.1** curated environment.
+
+1. Select __Next__, accepting defaults, until you're prompted to create the deployment.
+1. Review your deployment settings and select the __Create__ button.
+
+Alternatively, you can create a managed online endpoint from the **Endpoints** page in the studio.
+
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Endpoints** page.
+1. Select **+ Create**.
+
+ :::image type="content" source="media/how-to-deploy-online-endpoints/endpoint-create-managed-online-endpoint.png" lightbox="media/how-to-deploy-online-endpoints/endpoint-create-managed-online-endpoint.png" alt-text="A screenshot for creating managed online endpoint from the Endpoints tab.":::
+
+This action opens up a window for you to specify details about your endpoint and deployment. Enter settings for your endpoint and deployment as described in the previous steps 5-10, accepting defaults until you're prompted to __Create__ the deployment.
+ # [ARM template](#tab/arm) 1. The following example demonstrates using the template to create an online endpoint:
This deployment might take up to 15 minutes, depending on whether the underlying
# [Azure CLI](#tab/azure-cli)
-The `show` command contains information in `provisioning_status` for endpoint and deployment:
+The output of the `show` command includes the `provisioning_state` for the endpoint and deployment:
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_status" :::
az ml online-endpoint list --output table
Check the status to see whether the model was deployed without error: ```python
-ml_client.online_endpoints.get(name=online_endpoint_name)
+ml_client.online_endpoints.get(name=endpoint_name)
``` You can list all the endpoints in the workspace in a table format by using the `list` method:
for endpoint in ml_client.online_endpoints.list():
print(endpoint.name) ```
-The method returns list (iterator) of `ManagedOnlineEndpoint` entities. You can get other information by specifying [parameters](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint#parameters).
+The method returns a list (iterator) of `ManagedOnlineEndpoint` entities. You can get other information by specifying [parameters](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint#parameters).
For example, output the list of endpoints like a table:
for endpoint in ml_client.online_endpoints.list():
print(f"{endpoint.kind}\t{endpoint.location}\t{endpoint.name}") ```
+# [Studio](#tab/azure-studio)
+
+### View managed online endpoints
+
+You can view all your managed online endpoints in the **Endpoints** page. Go to the endpoint's **Details** page to find critical information including the endpoint URI, status, testing tools, activity monitors, deployment logs, and sample consumption code:
+
+1. In the left navigation bar, select **Endpoints**. Here, you can see a list of all the endpoints in the workspace.
+1. (Optional) Create a **Filter** on **Compute type** to show only **Managed** compute types.
+1. Select an endpoint name to view the endpoint's __Details__ page.
++ # [ARM template](#tab/arm) > [!TIP]
-> While templates are useful for deploying resources, they can't be used to list, show, or invoke resources.
+> While templates are useful for deploying resources, they can't be used to list, show, or invoke resources. Use the Azure CLI, Python SDK, or the studio to perform these operations. The following code uses the Azure CLI.
-The `show` command contains information in `provisioning_status` for endpoint and deployment:
+The output of the `show` command includes the `provisioning_state` for the endpoint and deployment:
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_status" :::
az ml online-endpoint list --output table
### Check the status of the online deployment
-Check the logs to see whether the model was deployed without error:
+Check the logs to see whether the model was deployed without error.
# [Azure CLI](#tab/azure-cli)
+To see log output from a container, use the following CLI command:
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_logs" :::
-By default, logs are pulled from inference-server. To see the logs from storage-initializer (it mounts assets like model and code to the container), add the `--container storage-initializer` flag.
+By default, logs are pulled from the inference server container. To see logs from the storage initializer container, add the `--container storage-initializer` flag. For more information on deployment logs, see [Get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs).
# [Python](#tab/python)
You can view this output by using the `get_logs` method:
```python ml_client.online_deployments.get_logs(
- name="blue", endpoint_name=online_endpoint_name, lines=50
+ name="blue", endpoint_name=endpoint_name, lines=50
) ```
-By default, logs are pulled from inference-server. To see the logs from storage-initializer (it mounts assets like model and code to the container), add the `container_type="storage-initializer"` option.
+By default, logs are pulled from the inference server container. To see logs from the storage initializer container, add the `container_type="storage-initializer"` option. For more information on deployment logs, see [Get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs).
```python ml_client.online_deployments.get_logs(
- name="blue", endpoint_name=online_endpoint_name, lines=50, container_type="storage-initializer"
+ name="blue", endpoint_name=endpoint_name, lines=50, container_type="storage-initializer"
) ```
+# [Studio](#tab/azure-studio)
+
+To view log output, select the **Deployment logs** tab in the endpoint's **Details** page. If you have multiple deployments in your endpoint, use the dropdown to select the deployment whose log you want to see.
++
+By default, logs are pulled from the inference server. To see logs from the storage initializer container, use the Azure CLI or Python SDK (see each tab for details). For more information on deployment logs, see [Get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs).
+ # [ARM template](#tab/arm) > [!TIP]
-> While templates are useful for deploying resources, they can't be used to list, show, or invoke resources.
+> While templates are useful for deploying resources, they can't be used to list, show, or invoke resources. Use the Azure CLI, Python SDK, or the studio to perform these operations. The following code uses the Azure CLI.
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_logs" :::
-By default, logs are pulled from inference-server. To see the logs from storage-initializer (it mounts assets like model and code to the container), add the `--container storage-initializer` flag.
+By default, logs are pulled from the inference server container. To see logs from the storage initializer container, add the `--container storage-initializer` flag. For more information on deployment logs, see [Get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs).
-For more information on deployment logs, see [Get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs).
- ### Invoke the endpoint to score data by using your model # [Azure CLI](#tab/azure-cli)
To see the invocation logs, run `get-logs` again.
For information on authenticating using a token, see [Authenticate to online endpoints](how-to-authenticate-online-endpoint.md). - # [Python](#tab/python) Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
We'll send a sample request using a [json](https://github.com/Azure/azureml-exam
```python # test the blue deployment with some sample data ml_client.online_endpoints.invoke(
- endpoint_name=online_endpoint_name,
+ endpoint_name=endpoint_name,
deployment_name="blue", request_file="../model-1/sample-request.json", ) ```
+# [Studio](#tab/azure-studio)
+
+Use the **Test** tab in the endpoint's details page to test your managed online deployment. Enter sample input and view the results.
+
+1. Select the **Test** tab in the endpoint's detail page.
+1. Use the dropdown to select the deployment you want to test.
+1. Enter sample input.
+1. Select **Test**.
++ # [ARM template](#tab/arm) > [!TIP]
-> While templates are useful for deploying resources, they can't be used to list, show, or invoke resources.
+> While templates are useful for deploying resources, they can't be used to list, show, or invoke resources. Use the Azure CLI, Python SDK, or the studio to perform these operations. The following code uses the Azure CLI.
You can use either the `invoke` command or a REST client of your choice to invoke the endpoint and score some data:
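
For instance, here's a sketch that uses a generic Python REST client. It assumes you've already retrieved the scoring URI and a key for the endpoint (for example, with `az ml online-endpoint show` and `az ml online-endpoint get-credentials`), and that the payload matches this article's sample request:

```python
import requests

# Placeholders: retrieve these values for your endpoint first.
scoring_uri = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"
key = "<your-endpoint-key>"

headers = {
    "Authorization": f"Bearer {key}",
    "Content-Type": "application/json",
}
payload = {"data": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]}

response = requests.post(scoring_uri, json=payload, headers=headers)
print(response.json())
```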
az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file cli/endpoints/
# [Azure CLI](#tab/azure-cli)
-If you want to update the code, model, or environment, update the YAML file, and then run the `az ml online-endpoint update` command.
+If you want to update the code, model, or environment, update the YAML file, and then run the `az ml online-endpoint update` command.
> [!NOTE]
-> If you update instance count and along with other model settings (code, model, or environment) in a single `update` command: first the scaling operation will be performed, then the other updates will be applied. In production environment is a good practice to perform these operations separately.
+> If you update instance count (to scale your deployment) along with other model settings (such as code, model, or environment) in a single `update` command, the scaling operation will be performed first, then the other updates will be applied. It's a good practice to perform these operations separately in a production environment.
To understand how `update` works:
To understand how `update` works:
> Updating by using YAML is declarative. That is, changes in the YAML are reflected in the underlying Azure Resource Manager resources (endpoints and deployments). A declarative approach facilitates [GitOps](https://www.atlassian.com/git/tutorials/gitops): *All* changes to endpoints and deployments (even `instance_count`) go through the YAML. > [!TIP]
- > With the `update` command, you can use the [`--set` parameter in the Azure CLI](/cli/azure/use-cli-effectively#generic-update-parameters) to override attributes in your YAML *or* to set specific attributes without passing the YAML file. Using `--set` for single attributes is especially valuable in development and test scenarios. For example, to scale up the `instance_count` value for the first deployment, you could use the `--set instance_count=2` flag. However, because the YAML isn't updated, this technique doesn't facilitate [GitOps](https://www.atlassian.com/git/tutorials/gitops).
+ > You can use [generic update parameters](/cli/azure/use-cli-effectively#generic-update-parameters), such as the `--set` parameter, with the CLI `update` command to override attributes in your YAML *or* to set specific attributes without passing them in the YAML file. Using `--set` for single attributes is especially valuable in development and test scenarios. For example, to scale up the `instance_count` value for the first deployment, you could use the `--set instance_count=2` flag. However, because the YAML isn't updated, this technique doesn't facilitate [GitOps](https://www.atlassian.com/git/tutorials/gitops).
-1. Because you modified the `init()` function (`init()` runs when the endpoint is created or updated), the message `Updated successfully` will be in the logs. Retrieve the logs by running:
+1. Because you modified the `init()` function, which runs when the endpoint is created or updated, the message `Updated successfully` will be in the logs. Retrieve the logs by running:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_logs" :::
The `update` command also works with local deployments. Use the same `az ml onli
# [Python](#tab/python)
-If you want to update the code, model, or environment, update the configuration, and then run the `MLClient`'s [`online_deployments.begin_create_or_update`](/python/api/azure-ai-ml/azure.ai.ml.operations.onlinedeploymentoperations#azure-ai-ml-operations-onlinedeploymentoperations-begin-create-or-update) module/method.
+If you want to update the code, model, or environment, update the configuration, and then run the `MLClient`'s `online_deployments.begin_create_or_update` method to [create or update a deployment](/python/api/azure-ai-ml/azure.ai.ml.operations.onlinedeploymentoperations#azure-ai-ml-operations-onlinedeploymentoperations-begin-create-or-update).
> [!NOTE]
-> If you update instance count and along with other model settings (code, model, or environment) in a single `begin_create_or_update` method: first the scaling operation will be performed, then the other updates will be applied. In production environment is a good practice to perform these operations separately.
+> If you update instance count (to scale your deployment) along with other model settings (such as code, model, or environment) in a single `begin_create_or_update` method, the scaling operation will be performed first, then the other updates will be applied. It's a good practice to perform these operations separately in a production environment.
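
For example, a minimal sketch of scaling separately from other changes, assuming the `blue_deployment` and `ml_client` objects from earlier in this article:

```python
# Scale first, in its own operation...
blue_deployment.instance_count = 2
ml_client.online_deployments.begin_create_or_update(blue_deployment).result()

# ...then apply code, model, or environment changes in a separate call.
```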
To understand how `begin_create_or_update` works:
To understand how `begin_create_or_update` works:
ml_client.online_deployments.begin_create_or_update(blue_deployment) ```
-5. Because you modified the `init()` function (`init()` runs when the endpoint is created or updated), the message `Updated successfully` will be in the logs. Retrieve the logs by running:
+5. Because you modified the `init()` function, which runs when the endpoint is created or updated, the message `Updated successfully` will be in the logs. Retrieve the logs by running:
```python ml_client.online_deployments.get_logs(
- name="blue", endpoint_name=online_endpoint_name, lines=50
+ name="blue", endpoint_name=endpoint_name, lines=50
) ``` The `begin_create_or_update` method also works with local deployments. Use the same method with the `local=True` flag.
+# [Studio](#tab/azure-studio)
+
+Currently, the studio allows you to make updates only to the instance count of a deployment. Use the following instructions to scale an individual deployment up or down by adjusting the number of instances:
+
+1. Open the endpoint's **Details** page and find the card for the deployment you want to update.
+1. Select the edit icon (pencil icon) next to the deployment's name.
+1. Update the instance count associated with the deployment. You can choose between **Default** and **Target Utilization** for the "Deployment scale type".
+ - If you select **Default**, you can also specify a numerical value for the **Instance count**.
+ - If you select **Target Utilization**, you can specify values to use for parameters when autoscaling the deployment.
+1. Select **Update** to finish updating the instance counts for your deployment.
++ # [ARM template](#tab/arm)
-There currently is not an option to update the deployment using an ARM template.
+There currently isn't an option to update the deployment using an ARM template.
> [!Note]
-> The above is an example of inplace rolling update.
-> * For managed online endpoint, the same deployment is updated with the new configuration, with 20% nodes at a time, i.e. if the deployment has 10 nodes, 2 nodes at a time will be updated.
-> * For Kubernetes online endpoint, the system will iterately create a new deployment instance with the new configuration and delete the old one.
-> * For production usage, you might want to consider [blue-green deployment](how-to-safely-rollout-online-endpoints.md), which offers a safer alternative.
+> The previous update to the deployment is an example of an in-place rolling update.
+> * For a managed online endpoint, the deployment is updated to the new configuration with 20% nodes at a time. That is, if the deployment has 10 nodes, 2 nodes at a time will be updated.
+> * For a Kubernetes online endpoint, the system will iteratively create a new deployment instance with the new configuration and delete the old one.
+> * For production usage, you should consider [blue-green deployment](how-to-safely-rollout-online-endpoints.md), which offers a safer alternative for updating a web service.
### (Optional) Configure autoscaling
To view metrics and set alerts based on your SLA, complete the steps that are de
### (Optional) Integrate with Log Analytics
-The `get-logs` command for CLI or the `get_logs` method for SDK provides only the last few hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to durably store and analyze logs. For more information on using logging, see [Monitor online endpoints](how-to-monitor-online-endpoints.md#logs)
+The `get-logs` command for the CLI or the `get_logs` method for the SDK provides only the last few hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to durably store and analyze logs. For more information on using logging, see [Monitor online endpoints](how-to-monitor-online-endpoints.md#logs).
+<!-- [!INCLUDE [Email Notification Include](../../includes/machine-learning-email-notifications.md)] -->
## Delete the endpoint and the deployment
-If you aren't going use the deployment, you should delete it by running the following code (it deletes the endpoint and all the underlying deployments):
- # [Azure CLI](#tab/azure-cli)
+If you aren't going to use the deployment, you should delete it by running the following code (it deletes the endpoint and all the underlying deployments):
+ ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="delete_endpoint" ::: # [Python](#tab/python)
+If you aren't going to use the deployment, you should delete it by running the following code (it deletes the endpoint and all the underlying deployments):
+ ```python
-ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
+ml_client.online_endpoints.begin_delete(name=endpoint_name)
```
+# [Studio](#tab/azure-studio)
+
+If you aren't going to use the endpoint and deployment, you should delete them. By deleting the endpoint, you'll also delete all its underlying deployments.
+
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Endpoints** page.
+1. Select an endpoint by checking the circle next to the endpoint name.
+1. Select **Delete**.
+
+Alternatively, you can delete a managed online endpoint directly by selecting the **Delete** icon in the [endpoint details page](#view-managed-online-endpoints).
+ # [ARM template](#tab/arm)
+If you aren't going to use the deployment, you should delete it by running the following code (it deletes the endpoint and all the underlying deployments):
+ ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="delete_endpoint" ::: ## Next steps
-Try safe rollout of your models as a next step:
- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
-
-To learn more, review these articles:
- [Deploy models with REST](how-to-deploy-with-rest.md)
-- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)
-- [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md)
+- [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md)
- [Access Azure resources from an online endpoint with a managed identity](how-to-access-resources-from-endpoints-managed-identities.md) - [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md) - [Enable network isolation with managed online endpoints](how-to-secure-online-endpoint.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)
+- [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md)
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-image-processing-batch.md
Batch Endpoints can be used for processing tabular data, but also any other file
## About this sample
-The model we are going to work with was built using TensorFlow along with the RestNet architecture ([Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)). A sample of this model can be downloaded from `https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip`. The model has the following constrains that are important to keep in mind for deployment:
+The model we are going to work with was built using TensorFlow along with the ResNet architecture ([Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)). A sample of this model can be downloaded from [here](https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip). The model has the following constraints that are important to keep in mind for deployment:
* It works with images of size 244x244 (tensors of `(244, 244, 3)`).
* It requires inputs to be scaled to the range `[0,1]`.
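
For illustration, here's a minimal sketch of the preprocessing these constraints imply. Using PIL and NumPy here is an assumption for the sketch; the example's actual scoring script does the equivalent work with TensorFlow:

```python
import numpy as np
from PIL import Image


def preprocess(image_path: str) -> np.ndarray:
    # Resize to the expected 244x244 input size, then scale pixels to [0, 1].
    data = Image.open(image_path).resize((244, 244))
    return np.array(data) / 255.0
```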
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to the `cli/endpoints/batch` if you are using the Azure CLI or `sdk/endpoints/batch` if you are using our SDK for Python.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to the `cli/endpoints/batch/deploy-models/imagenet-classifier` if you are using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/imagenet-classifier` if you are using our SDK for Python.
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
-cd azureml-examples/cli/endpoints/batch
+cd azureml-examples/cli/endpoints/batch/deploy-models/imagenet-classifier
``` ### Follow along in Jupyter Notebooks
-You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [imagenet-classifier-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/imagenet-classifier-batch.ipynb).
+You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [imagenet-classifier-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/imagenet-classifier/imagenet-classifier-batch.ipynb).
## Prerequisites
You can follow along this sample in a Jupyter Notebook. In the cloned repository
In this example, we are going to learn how to deploy a deep learning model that can classify a given image according to the [taxonomy of ImageNet](https://image-net.org/).
+### Create the endpoint
+
+First, let's create the endpoint that will host the model:
+
+# [Azure CLI](#tab/azure-cli)
+
+Decide on the name of the endpoint:
+
+```azurecli
+ENDPOINT_NAME="imagenet-classifier-batch"
+```
+
+The following YAML file defines a batch endpoint:
+
+__endpoint.yml__
++
+Run the following code to create the endpoint.
++
+# [Python](#tab/python)
+
+Decide on the name of the endpoint:
+
+```python
+endpoint_name="imagenet-classifier-batch"
+```
+
+Configure the endpoint:
+
+```python
+endpoint = BatchEndpoint(
+ name=endpoint_name,
+ description="An batch service to perform ImageNet image classification",
+)
+```
+
+Run the following code to create the endpoint:
+
+```python
+ml_client.batch_endpoints.begin_create_or_update(endpoint)
+```
+++ ### Registering the model Batch Endpoint can only deploy registered models so we need to register it. You can skip this step if the model you are trying to deploy is already registered.
Batch Endpoint can only deploy registered models so we need to register it. You
# [Azure CLI](#tab/cli)
- ```azurecli
- wget https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip
- mkdir -p imagenet-classifier
- unzip model.zip -d imagenet-classifier
- ```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="download_model" :::
# [Python](#tab/sdk)
Batch Endpoint can only deploy registered models so we need to register it. You
```azurecli MODEL_NAME='imagenet-classifier'
- az ml model create --name $MODEL_NAME --type "custom_model" --path "imagenet-classifier/model"
+ az ml model create --name $MODEL_NAME --path "model"
``` # [Python](#tab/sdk)
We need to create a scoring script that can read the images provided by the batc
> * The `run` method rescales the images to the range `[0,1]`, which is what the model expects.
> * It returns the classes and the probabilities associated with the predictions.
-__imagenet_scorer.py__
-
-```python
-import os
-import numpy as np
-import pandas as pd
-import tensorflow as tf
-from os.path import basename
-from PIL import Image
-from tensorflow.keras.models import load_model
--
-def init():
- global model
- global input_width
- global input_height
-
- # AZUREML_MODEL_DIR is an environment variable created during deployment
- model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
-
- # load the model
- model = load_model(model_path)
- input_width = 244
- input_height = 244
-
-def run(mini_batch):
- results = []
-
- for image in mini_batch:
- data = Image.open(image).resize((input_width, input_height)) # Read and resize the image
- data = np.array(data)/255.0 # Normalize
- data_batch = tf.expand_dims(data, axis=0) # create a batch of size (1, 244, 244, 3)
-
- # perform inference
- pred = model.predict(data_batch)
-
- # Compute probabilities, classes and labels
- pred_prob = tf.math.reduce_max(tf.math.softmax(pred, axis=-1)).numpy()
- pred_class = tf.math.argmax(pred, axis=-1).numpy()
+__code/score-by-file/batch_driver.py__
- results.append([basename(image), pred_class[0], pred_prob])
-
- return pd.DataFrame(results)
-```
> [!TIP]
> Although images are provided in mini-batches by the deployment, this scoring script processes one image at a time. This is a common pattern, as trying to load the entire batch and send it to the model at once may result in high memory pressure on the batch executor (OOM exceptions). However, there are certain cases where doing so enables high throughput in the scoring task. This is the case, for instance, for batch deployments on GPU hardware where we want to achieve high GPU utilization. See [High throughput deployments](#high-throughput-deployments) for an example of a scoring script that takes advantage of it.
def run(mini_batch):
Once the scoring script is created, it's time to create a batch deployment for it. Follow these steps to create it:
+1. Ensure that you have a compute cluster where we can create the deployment. In this example, we are going to use a compute cluster named `gpu-cluster`. Although it's not required, we use GPUs to speed up the processing.
1. We need to indicate the environment in which we are going to run the deployment. In our case, our model runs on `TensorFlow`. Azure Machine Learning already has an environment with the required software installed, so we can reuse this environment. We are just going to add a couple of dependencies in a `conda.yml` file.

# [Azure CLI](#tab/cli)
- No extra step is required for the Azure Machine Learning CLI. The environment definition will be included in the deployment file.
+ The environment definition will be included in the deployment file.
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deployment-by-file.yml" range="7-10":::
# [Python](#tab/sdk)
Once the scoring script is created, it's time to create a batch deployment for it
```python environment = Environment(
- conda_file="./imagenet-classifier/environment/conda.yml",
- image="mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest",
+ name="tensorflow27-cuda11-gpu",
+ conda_file="environment/conda.yml",
+ image="mcr.microsoft.com/azureml/curated/tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu:latest",
) ``` 1. Now, let's create the deployment.
- > [!NOTE]
- > This example assumes you have an endpoint created with the name `imagenet-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
- # [Azure CLI](#tab/cli) To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
- ```yaml
- $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
- endpoint_name: imagenet-classifier-batch
- name: imagenet-classifier-resnetv2
- description: A ResNetV2 model architecture for performing ImageNet classification in batch
- model: azureml:imagenet-classifier@latest
- compute: azureml:cpu-cluster
- environment:
- image: mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest
- conda_file: ./imagenet-classifier/environment/conda.yml
- code_configuration:
- code: ./imagenet-classifier/code/
- scoring_script: imagenet_scorer.py
- resources:
- instance_count: 2
- max_concurrency_per_instance: 1
- mini_batch_size: 5
- output_action: append_row
- output_file_name: predictions.csv
- retry_settings:
- max_retries: 3
- timeout: 300
- error_threshold: -1
- logging_level: info
- ```
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deployment-by-file.yml":::
Then, create the deployment with the following command:
- ```azurecli
- DEPLOYMENT_NAME="imagenet-classifier-resnetv2"
- az ml batch-deployment create -f deployment.yml
- ```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_batch_deployment_set_default" :::
# [Python](#tab/sdk)
Once the scoring script is created, it's time to create a batch deployment for it
model=model, environment=environment, code_configuration=CodeConfiguration(
- code="./imagenet-classifier/code/",
- scoring_script="imagenet_scorer.py",
+ code="code/score-by-file",
+ scoring_script="batch_driver.py",
), compute=compute_name, instance_count=2,
For testing our endpoint, we are going to use a sample of 1000 images from the o
# [Azure CLI](#tab/cli)
- ```bash
- wget https://azuremlexampledata.blob.core.windows.net/data/imagenet-1000.zip
- unzip imagenet-1000.zip -d /tmp/imagenet-1000
- ```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="download_sample_data" :::
# [Python](#tab/sdk) ```python !wget https://azuremlexampledata.blob.core.windows.net/data/imagenet-1000.zip
- !unzip imagenet-1000.zip -d /tmp/imagenet-1000
+ !unzip imagenet-1000.zip -d data
``` 2. Now, let's create the data asset from the data just downloaded
For testing our endpoint, we are going to use a sample of 1000 images from the o
__imagenet-sample-unlabeled.yml__
- ```yaml
- $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
- name: imagenet-sample-unlabeled
- description: A sample of 1000 images from the original ImageNet dataset.
- type: uri_folder
- path: /tmp/imagenet-1000
- ```
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/imagenet-sample-unlabeled.yml":::
Then, create the data asset:
- ```azurecli
- az ml data create -f imagenet-sample-unlabeled.yml
- ```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_sample_data_asset" :::
# [Python](#tab/sdk) ```python
- data_path = "/tmp/imagenet-1000"
+ data_path = "data"
dataset_name = "imagenet-sample-unlabeled" imagenet_sample = Data(
For testing our endpoint, we are going to use a sample of 1000 images from the o
# [Azure CLI](#tab/cli)
- ```azurecli
- JOB_NAME = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:imagenet-sample-unlabeled@latest | jq -r '.name')
- ```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="start_batch_scoring_job" :::
> [!NOTE] > The utility `jq` may not be installed on every installation. You can get instructions in [this link](https://stedolan.github.io/jq/download/).
For testing our endpoint, we are going to use a sample of 1000 images from the o
# [Azure CLI](#tab/cli)
- ```azurecli
- az ml job show --name $JOB_NAME
- ```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="show_job_in_studio" :::
# [Python](#tab/sdk)
For testing our endpoint, we are going to use a sample of 1000 images from the o
To download the predictions, use the following command:
- ```azurecli
- az ml job download --name $JOB_NAME --output-name score --download-path ./
- ```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="download_scores" :::
# [Python](#tab/sdk)
In those cases, we may want to perform inference on the entire batch of data. Th
> [!WARNING]
> Some models have a non-linear relationship between the size of their inputs and memory consumption. Batch again (as done in this example) or decrease the size of the batches created by the batch deployment to avoid out-of-memory exceptions.
-__imagenet_scorer_batch.py__
-
-```python
-import os
-import numpy as np
-import pandas as pd
-import tensorflow as tf
-from tensorflow.keras.models import load_model
-
-def init():
- global model
- global input_width
- global input_height
-
- # AZUREML_MODEL_DIR is an environment variable created during deployment
- model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+1. Create the scoring script:
- # load the model
- model = load_model(model_path)
- input_width = 244
- input_height = 244
-
-def decode_img(file_path):
- file = tf.io.read_file(file_path)
- img = tf.io.decode_jpeg(file, channels=3)
- img = tf.image.resize(img, [input_width, input_height])
- return img/255.
-
-def run(mini_batch):
- images_ds = tf.data.Dataset.from_tensor_slices(mini_batch)
- images_ds = images_ds.map(decode_img).batch(64)
-
- # perform inference
- pred = model.predict(images_ds)
+ __code/score-by-batch/batch_driver.py__
+
+ :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/code/score-by-batch/batch_driver.py" :::
- # Compute probabilities, classes and labels
- pred_prob = tf.math.reduce_max(tf.math.softmax(pred, axis=-1)).numpy()
- pred_class = tf.math.argmax(pred, axis=-1).numpy()
+ > [!TIP]
+ > * Notice that this script is constructing a tensor dataset from the mini-batch sent by the batch deployment. This dataset is preprocessed to obtain the expected tensors for the model using the `map` operation with the function `decode_img`.
+ > * The dataset is batched again (16) to send the data to the model. Use this parameter to control how much information you can load into memory and send to the model at once. If running on a GPU, you'll need to carefully tune this parameter to achieve the maximum utilization of the GPU just before getting an OOM exception.
+ > * Once predictions are computed, the tensors are converted to `numpy.ndarray`.
- return pd.DataFrame([mini_batch, pred_prob, pred_class], columns=['file', 'probability', 'class'])
-```
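    For reference, here's a minimal sketch of what that batched driver can look like, based on the earlier inline version of this script (the file in the samples repository may differ in details):

    ```python
    import os
    import pandas as pd
    import tensorflow as tf
    from tensorflow.keras.models import load_model

    def init():
        global model, input_width, input_height

        # AZUREML_MODEL_DIR is an environment variable created during deployment
        model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
        model = load_model(model_path)
        input_width = 244
        input_height = 244

    def decode_img(file_path):
        # Read, decode, resize, and normalize a single image
        file = tf.io.read_file(file_path)
        img = tf.io.decode_jpeg(file, channels=3)
        img = tf.image.resize(img, [input_width, input_height])
        return img / 255.0

    def run(mini_batch):
        # Build a tensor dataset from the file paths in the mini-batch, then
        # batch again (16) to control how much data goes to the model at once
        images_ds = tf.data.Dataset.from_tensor_slices(mini_batch)
        images_ds = images_ds.map(decode_img).batch(16)

        # Perform inference and convert the resulting tensors to numpy arrays
        pred = model.predict(images_ds)
        pred_prob = tf.math.reduce_max(tf.math.softmax(pred, axis=-1), axis=-1).numpy()
        pred_class = tf.math.argmax(pred, axis=-1).numpy()

        return pd.DataFrame(
            {"file": mini_batch, "probability": pred_prob, "class": pred_class}
        )
    ```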
+1. Now, let's create the deployment.
-Remarks:
-* Notice that this script is constructing a tensor dataset from the mini-batch sent by the batch deployment. This dataset is preprocessed to obtain the expected tensors for the model using the `map` operation with the function `decode_img`.
-* The dataset is batched again (16) send the data to the model. Use this parameter to control how much information you can load into memory and send to the model at once. If running on a GPU, you will need to carefully tune this parameter to achieve the maximum utilization of the GPU just before getting an OOM exception.
-* Once predictions are computed, the tensors are converted to `numpy.ndarray`.
+ # [Azure CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deployment-by-batch.yml":::
+
+ Then, create the deployment with the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_batch_deployment_ht" :::
+
+ # [Python](#tab/sdk)
+
+ To create a new deployment with the indicated environment and scoring script, use the following code:
+
+ ```python
+ deployment = BatchDeployment(
+ name="imagenet-classifier-resnetv2",
+ description="A ResNetV2 model architecture for performing ImageNet classification in batch",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="code/score-by-batch",
+ scoring_script="batch_driver.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ tags={ "device_acceleration": "CUDA", "device_batching": "16" },
+ max_concurrency_per_instance=1,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+1. You can use this new deployment with the sample data shown before. Remember that to invoke this deployment, you should either indicate the name of the deployment in the invocation method or set it as the default one, as sketched below.
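For instance, here's a sketch of both options with the 2023-era `azure-ai-ml` SDK (`Input` comes from `azure.ai.ml`; `defaults.deployment_name` is where the endpoint records its default deployment):

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Option 1: target this deployment explicitly when invoking
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    deployment_name=deployment.name,
    input=Input(type=AssetTypes.URI_FOLDER, path="azureml:imagenet-sample-unlabeled@latest"),
)

# Option 2: make this deployment the endpoint's default
endpoint = ml_client.batch_endpoints.get(endpoint.name)
endpoint.defaults.deployment_name = deployment.name
ml_client.batch_endpoints.begin_create_or_update(endpoint)
```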
## Considerations for MLflow models processing images
-MLflow models in Batch Endpoints support reading images as input data. Remember that MLflow models don't require a scoring script. Have the following considerations when using them:
+MLflow models in Batch Endpoints support reading images as input data. Since MLflow deployments don't require a scoring script, keep the following considerations in mind when using them:
> [!div class="checklist"]
> * Supported image file types include: `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, and `.gif`.
> * MLflow models should expect to receive a `np.ndarray` as input that matches the dimensions of the input image. In order to support multiple image sizes on each batch, the batch executor invokes the MLflow model once per image file.
> * MLflow models are highly encouraged to include a signature, and if they do it must be of type `TensorSpec`. Inputs are reshaped to match the tensor's shape if available. If no signature is available, tensors of type `np.uint8` are inferred.
-> * For models that include a signature and are expected to handle variable size of images, then include a signature that can guarantee it. For instance, the following signature will allow batches of 3 channeled images. Specify the signature when you register the model with `mlflow.<flavor>.log_model(..., signature=signature)`.
+> * For models that include a signature and are expected to handle variable image sizes, include a signature that can guarantee it. For instance, the following signature example allows batches of 3-channel images.
```python
import numpy as np
import mlflow
from mlflow.types import Schema, TensorSpec
from mlflow.models import ModelSignature

# Accept batches of variable-height, variable-width images with 3 channels
input_schema = Schema([
    TensorSpec(np.dtype(np.uint8), (-1, -1, -1, 3)),
])
signature = ModelSignature(inputs=input_schema)
+(...)
+
+mlflow.<flavor>.log_model(..., signature=signature)
```
-For more information about how to use MLflow models in batch deployments read [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+You can find a working example in the Jupyter notebook [imagenet-classifier-mlflow.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/imagenet-classifier/imagenet-classifier-mlflow.ipynb). For more information about how to use MLflow models in batch deployments, see [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
## Next steps
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-batch-endpoint.md
Consider the following limitations when working on batch endpoints deployed rega
- If you change the networking configuration of the workspace from public to private, or from private to public, this doesn't affect the existing batch endpoints' networking configuration. Batch endpoints rely on the configuration of the workspace at the time of creation. You can recreate your endpoints if you want them to reflect changes you made in the workspace.

-- When working on a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Use the Azure Machine Learning CLI v2 instead for job creation. For more details about how to use it see [Run batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#run-endpoint-and-configure-inputs-and-outputs).
+- When working on a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Use the Azure Machine Learning CLI v2 instead for job creation. For more details about how to use it see [Run batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#run-batch-endpoints-and-access-results).
## Recommended read
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
In this article, you'll learn how to use batch endpoints to do batch scoring.
In this example, we're going to deploy a model to solve the classic MNIST ("Modified National Institute of Standards and Technology") digit recognition problem to perform batch inferencing over large amounts of data (image files). In the first section of this tutorial, we're going to create a batch deployment with a model created using Torch. That deployment will become the endpoint's default. In the second half, [we're going to see how we can create a second deployment](#adding-deployments-to-an-endpoint) using a model created with TensorFlow (Keras), test it out, and then switch the endpoint to start using the new deployment as default.
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, first clone the repo. Then, change directories to either `cli/endpoints/batch` if you're using the Azure CLI or `sdk/endpoints/batch` if you're using the Python SDK.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, first clone the repo. Then, change directories to either `cli/endpoints/batch/deploy-models/mnist-classifier` if you're using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/mnist-classifier` if you're using the Python SDK.
```azurecli
git clone https://github.com/Azure/azureml-examples --depth 1
-cd azureml-examples/cli/endpoints/batch
+cd azureml-examples/cli/endpoints/batch/deploy-models/mnist-classifier
```

### Follow along in Jupyter Notebooks
ml_client.begin_create_or_update(compute_cluster)
> You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](./how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
-### Registering the model
-
-Batch Deployments can only deploy models registered in the workspace. You can skip this step if the model you're trying to deploy is already registered. In this case, we're registering a Torch model for the popular digit recognition problem (MNIST).
-
-> [!TIP]
-> Models are associated with the deployment rather than with the endpoint. This means that a single endpoint can serve different models or different model versions under the same endpoint as long as they are deployed in different deployments.
-
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-MODEL_NAME='mnist'
-az ml model create --name $MODEL_NAME --type "custom_model" --path "./mnist/model/"
-```
-
-# [Python](#tab/python)
-
-```python
-model_name = 'mnist'
-model = ml_client.models.create_or_update(
- Model(name=model_name, path='./mnist/model/', type=AssetTypes.CUSTOM_MODEL)
-)
-```
-
-# [Studio](#tab/azure-studio)
-
-1. Navigate to the __Models__ tab on the side menu.
-1. Select __Register__ > __From local files__.
-1. In the wizard, leave the option *Model type* as __Unspecified type__.
-1. Select __Browse__ > __Browse folder__ > Select the folder `./mnist/model/` > __Next__.
-1. Configure the name of the model: `mnist`. You can leave the rest of the fields as they are.
-1. Select __Register__.
---

## Create a batch endpoint

A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs (for more, see [What are batch endpoints?](./concept-endpoints.md#what-are-batch-endpoints)). A batch deployment is a set of compute resources hosting the model that does the actual batch scoring. One batch endpoint can have multiple batch deployments.
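As a quick sketch of the SDK version of that step (the endpoint name and description here are illustrative; the article's tabs cover the full CLI, SDK, and studio flows):

```python
from azure.ai.ml.entities import BatchEndpoint

# A batch endpoint only needs a name; the description is optional
endpoint = BatchEndpoint(
    name="mnist-batch",
    description="A batch endpoint for scoring images from the MNIST dataset.",
)
ml_client.batch_endpoints.begin_create_or_update(endpoint)
```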
A deployment is a set of resources required for hosting the model that does the
* The environment in which the model runs.
* The pre-created compute and resource settings.
-1. Batch deployments require a scoring script that indicates how a given model should be executed and how input data must be processed. Batch Endpoints support scripts created in Python. In this case, we're deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
+1. Let's start by registering the model we want to deploy. Batch Deployments can only deploy models registered in the workspace. You can skip this step if the model you're trying to deploy is already registered. In this case, we're registering a Torch model for the popular digit recognition problem (MNIST).
+
+ > [!TIP]
+ > Models are associated with the deployment rather than with the endpoint. This means that a single endpoint can serve different models or different model versions under the same endpoint as long as they are deployed in different deployments.
+
+
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ MODEL_NAME='mnist-classifier-torch'
+ az ml model create --name $MODEL_NAME --type "custom_model" --path "deployment-torch/model"
+ ```
+
+ # [Python](#tab/python)
+
+ ```python
+ model_name = 'mnist-classifier-torch'
+ model = ml_client.models.create_or_update(
+ Model(name=model_name, path='deployment-torch/model/', type=AssetTypes.CUSTOM_MODEL)
+ )
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Models__ tab on the side menu.
+
+ 1. Select __Register__ > __From local files__.
+
+ 1. In the wizard, leave the option *Model type* as __Unspecified type__.
+
+ 1. Select __Browse__ > __Browse folder__ > Select the folder `deployment-torch/model` > __Next__.
+
+ 1. Configure the name of the model: `mnist-classifier-torch`. You can leave the rest of the fields as they are.
- > [!NOTE]
- > For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+ 1. Select __Register__.
- > [!WARNING]
- > If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
+1. Now it's time to create a scoring script. Batch deployments require a scoring script that indicates how a given model should be executed and how input data must be processed. Batch Endpoints support scripts created in Python. In this case, we're deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
- __deployment-torch/code/batch_driver.py__
+ > [!NOTE]
+ > For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+ > [!WARNING]
+ > If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
- :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/code/batch_driver.py" :::
+ __deployment-torch/code/batch_driver.py__
+
+ :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/code/batch_driver.py" :::
1. Create an environment where your batch deployment will run. This environment needs to include the packages `azureml-core` and `azureml-dataset-runtime[fuse]`, which are required by batch endpoints, plus any dependency your code requires for running. In this case, the dependencies have been captured in a `conda.yml`:
A deployment is a set of resources required for hosting the model that does the
The environment definition will be included in the deployment definition itself as an anonymous environment. You'll see it in the following lines of the deployment definition:
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/deployment.yml" range="10-13":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/deployment.yml" range="11-14":::
# [Python](#tab/python)
A deployment is a set of resources required for hosting the model that does the
```python
env = Environment(
+ name="batch-torch-py38",
    conda_file="deployment-torch/environment/conda.yml",
    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
)
A deployment is a set of resources required for hosting the model that does the
# [Azure CLI](#tab/azure-cli)
- __mnist-torch-deployment.yml__
+ __deployment-torch/deployment.yml__
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/deployment.yml":::
A deployment is a set of resources required for hosting the model that does the
:::image type="content" source="./media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
-
- > [!NOTE]
- > __How is work distributed?__:
- >
- > Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this will happen regardless of the size of the files involved. If your files are too big to be processed in large mini-batches we suggest to either split the files in smaller files to achieve a higher level of parallelism or to decrease the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file's size distribution.
1. Check batch endpoint and deployment details.
A deployment is a set of resources required for hosting the model that does the
Use `show` to check endpoint and deployment details. To check a batch deployment, run the following code:

:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="check_batch_deployment_detail" :::
-
# [Python](#tab/python)
A deployment is a set of resources required for hosting the model that does the
:::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
-## Run endpoint and configure inputs and outputs
+## Run batch endpoints and access results
+
+Invoking a batch endpoint triggers a batch scoring job. A job `name` will be returned from the invoke response and can be used to track the batch scoring progress.
-Invoking a batch endpoint triggers a batch scoring job. A job `name` will be returned from the invoke response and can be used to track the batch scoring progress. The batch scoring job runs for some time. It splits the entire inputs into multiple `mini_batch` and processes in parallel on the compute cluster. The batch scoring job outputs will be stored in cloud storage, either in the workspace's default blob storage, or the storage you specified.
+When running models for scoring in Batch Endpoints, you need to indicate the input data path where the endpoints should look for the data you want to score. The following example shows how to start a new job over sample data from the MNIST dataset stored in an Azure Storage Account:
+
+> [!NOTE]
+> __How does parallelization work?__:
+>
+> Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each, as illustrated below. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file size distribution.
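As a quick illustration of that arithmetic, here is a hypothetical helper (not part of any SDK):

```python
import math

def mini_batch_count(file_count: int, mini_batch_size: int) -> int:
    # Work is split purely by file count; file sizes are not considered
    return math.ceil(file_count / mini_batch_size)

print(mini_batch_count(100, 10))  # 10 mini-batches, processed in parallel
```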
# [Azure CLI](#tab/azure-cli)
job = ml_client.batch_endpoints.invoke(
-### Configure job's inputs
-
-Batch endpoints support reading files or folders that are located in different locations. To learn more about how the supported types and how to specify them read [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+Batch endpoints support reading files or folders that are located in different locations. To learn more about the supported types and how to specify them, read [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
> [!TIP] > Local data folders/files can be used when executing batch endpoints from the Azure Machine Learning CLI or Azure Machine Learning SDK for Python. However, that operation will result in the local data being uploaded to the default Azure Machine Learning Data Store of the workspace you are working on.
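As an illustration with the Python SDK, each of these shapes is just a different `path` on an `Input` (a sketch; the asset and folder names are illustrative):

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# From a registered data asset
input_from_asset = Input(type=AssetTypes.URI_FOLDER, path="azureml:mnist-sample-unlabeled@latest")

# From a publicly accessible URL
input_from_url = Input(
    type=AssetTypes.URI_FOLDER,
    path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample",
)

# From a local folder (uploaded to the workspace's default data store on invocation)
input_from_local = Input(type=AssetTypes.URI_FOLDER, path="data/mnist/sample")
```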
Batch endpoints support reading files or folders that are located in different l
> [!IMPORTANT] > __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 dataset.
+### Monitor batch job execution progress
+
+Batch scoring jobs usually take some time to process the entire set of inputs.
+
+# [Azure CLI](#tab/azure-cli)
+
+You can use the CLI command `job show` to view the job. Run the following code to check the job status from the previous endpoint invocation. To learn more about job commands, run `az ml job -h`.
++
+# [Python](#tab/python)
+
+The following code checks the job status and outputs a link to the Azure Machine Learning studio for further details.
+
+```python
+ml_client.jobs.get(job.name)
+```
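If you prefer to block until the job finishes, the SDK offers `ml_client.jobs.stream(job.name)`, or you can poll with a loop like the following sketch (the terminal states are assumed to be `Completed`, `Failed`, and `Canceled`):

```python
import time

job = ml_client.jobs.get(job.name)
while job.status not in ("Completed", "Failed", "Canceled"):
    time.sleep(30)  # poll every 30 seconds
    job = ml_client.jobs.get(job.name)
print(f"Job finished with status: {job.status}")
```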
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you want to monitor.
+
+1. Select the tab __Jobs__.
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
+
+1. You'll see a list of the jobs created for the selected endpoint.
+
+1. Select the last job that is running.
+
+1. You'll be redirected to the job monitoring page.
+++
+### Check batch scoring results
+
+The job outputs will be stored in cloud storage, either in the workspace's default blob storage or in the storage you specified. See [Configure the output location](#configure-the-output-location) to learn how to change the defaults. Use the following steps to view the scoring results in Azure Storage Explorer when the job is completed:
+
+1. Run the following code to open the batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="show_job_in_studio" :::
+
+1. In the graph of the job, select the `batchscoring` step.
+
+1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
+
+1. From __Data outputs__, select the icon to open __Storage Explorer__.
+
+ :::image type="content" source="media/how-to-use-batch-endpoint/view-data-outputs.png" alt-text="Studio screenshot showing view data outputs location." lightbox="media/how-to-use-batch-endpoint/view-data-outputs.png":::
+
+ The scoring results in Storage Explorer are similar to the following sample page:
+
+ :::image type="content" source="media/how-to-use-batch-endpoint/scoring-view.png" alt-text="Screenshot of the scoring output." lightbox="media/how-to-use-batch-endpoint/scoring-view.png":::
### Configure the output location

The batch scoring results are by default stored in the workspace's default blob store, within a folder named after the job (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint.
job = ml_client.batch_endpoints.invoke(
-### Monitor batch scoring job execution progress
-
-Batch scoring jobs usually take some time to process the entire set of inputs.
-
-# [Azure CLI](#tab/azure-cli)
-
-You can use CLI `job show` to view the job. Run the following code to check job status from the previous endpoint invoke. To learn more about job commands, run `az ml job -h`.
--
-# [Python](#tab/python)
-
-The following code checks the job status and outputs a link to the Azure Machine Learning studio for further details.
-
-```python
-ml_client.jobs.get(job.name)
-```
-
-# [Studio](#tab/azure-studio)
-
-1. Navigate to the __Endpoints__ tab on the side menu.
-
-1. Select the tab __Batch endpoints__.
-
-1. Select the batch endpoint you want to monitor.
-
-1. Select the tab __Jobs__.
-
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
-
-1. You'll see a list of the jobs created for the selected endpoint.
-
-1. Select the last job that is running.
-
-1. You'll be redirected to the job monitoring page.
---
-### Check batch scoring results
-
-Follow the following steps to view the scoring results in Azure Storage Explorer when the job is completed:
-
-1. Run the following code to open batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="show_job_in_studio" :::
-
-1. In the graph of the job, select the `batchscoring` step.
-
-1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
-
-1. From __Data outputs__, select the icon to open __Storage Explorer__.
-
- :::image type="content" source="media/how-to-use-batch-endpoint/view-data-outputs.png" alt-text="Studio screenshot showing view data outputs location." lightbox="media/how-to-use-batch-endpoint/view-data-outputs.png":::
-
- The scoring results in Storage Explorer are similar to the following sample page:
-
- :::image type="content" source="media/how-to-use-batch-endpoint/scoring-view.png" alt-text="Screenshot of the scoring output." lightbox="media/how-to-use-batch-endpoint/scoring-view.png":::
## Adding deployments to an endpoint

Once you have a batch endpoint with a deployment, you can continue to refine your model and add new deployments. Batch endpoints will continue serving the default deployment while you develop and deploy new models under the same endpoint. Deployments don't affect one another.
In this example, you'll learn how to add a second deployment __that solves the s
# [Azure CLI](#tab/azure-cli)
- *No extra step is required for the Azure Machine Learning CLI. The environment definition will be included in the deployment file as an anonymous environment.*
+ The environment definition will be included in the deployment definition itself as an anonymous environment. You'll see it in the following lines of the deployment definition:
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/deployment.yml" range="11-14":::
# [Python](#tab/python)
In this example, you'll learn how to add a second deployment __that solves the s
```python
env = Environment(
- conda_file="deployment-kera/environment/conda.yml",
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ name="batch-tensorflow-py38",
+ conda_file="deployment-keras/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
)
```
In this example, you'll learn how to add a second deployment __that solves the s
1. On __Select environment type__ select __Use existing docker image with conda__.
- 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
+ 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.
- 1. On __Customize__ section copy the content of the file `./mnist-keras/environment/conda.yml` included in the repository into the portal.
+ 1. In the __Customize__ section, copy the content of the file `deployment-keras/environment/conda.yml` included in the repository into the portal.
1. Select __Next__, and then select __Create__.
machine-learning Reference Managed Online Endpoints Vm Sku List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md
This table shows the VM SKUs that are supported for Azure Machine Learning manag
| X-Large| - | Standard_F32s_v2 <br/> Standard_F48s_v2 <br/> Standard_F64s_v2 <br/> Standard_F72s_v2 <br/> Standard_FX24mds <br/> Standard_FX36mds <br/> Standard_FX48mds| Standard_E32s_v3 <br/> Standard_E48s_v3 <br/> Standard_E64s_v3 | Standard_ND40rs_v2 <br/> Standard_ND96asr_v4 <br/> Standard_ND96amsr_A100_v4 <br/>| > [!CAUTION]
-> `Standard_DS1_v2` and `Standard_F2s_v2` may be too small for bigger models and may lead to container termination due to insufficient memory, not enough space on the disk, or probe failure as it takes too long to initiate the container. If you want to reduce the cost of deploying multiple models with managed online endpoint, see [the example for multi models](how-to-deploy-online-endpoints.md#use-more-than-one-model). If you face [OutOfQuota errors](how-to-troubleshoot-online-endpoints.md?tabs=cli#error-outofquota) or [ReourceNotReady errors](how-to-troubleshoot-online-endpoints.md?tabs=cli#error-resourcenotready), try bigger VM SKUs.
+`Standard_DS1_v2` and `Standard_F2s_v2` may be too small for bigger models and may lead to container termination due to insufficient memory, not enough space on the disk, or probe failure as it takes too long to initiate the container. If you want to reduce the cost of deploying multiple models with managed online endpoint, see [the example for multi models](how-to-deploy-online-endpoints.md#use-more-than-one-model-in-a-deployment). If you face [OutOfQuota errors](how-to-troubleshoot-online-endpoints.md?tabs=cli#error-outofquota) or [ResourceNotReady errors](how-to-troubleshoot-online-endpoints.md?tabs=cli#error-resourcenotready), try bigger VM SKUs.
network-function-manager Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/partners.md
We have a growing ecosystem of partners offering their network function as manag
| | | | | Affirmed Private Network Service | Mobile packet core |[Configuration guide](../private-multi-access-edge-compute-mec/deploy-affirmed-private-network-service-solution.md)| | NetFoundry ZTNA | Other| [Azure Marketplace](https://portal.azure.com/#create/netfoundryinc.application-ziti-private-edgeapp-edge-router)|
-| Nuage Networks SD-WAN From Nokia | SD-WAN| [Azure Marketplace](https://aka.ms/NokiaNuage)|
| Versa SD-WAN| SD-WAN |[Azure Marketplace](https://aka.ms/versa)| | VMware SD-WAN | SD-WAN | [Azure Marketplace](https://portal.azure.com/#create/vmware-inc.vmware_sdwan_edge_zonesvelo_ase)|
network-watcher Required Rbac Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/required-rbac-permissions.md
Title: Azure RBAC permissions required to use capabilities-
-description: Learn which Azure role-based access control permissions are required to work with Network Watcher capabilities.
+ Title: Azure RBAC permissions required to use Azure Network Watcher capabilities
+description: Learn which Azure role-based access control (Azure RBAC) permissions are required to use Azure Network Watcher capabilities.
Previously updated : 10/07/2022 Last updated : 04/03/2023 + # Azure role-based access control permissions required to use Network Watcher capabilities
-Azure role-based access control (Azure RBAC) enables you to assign only the specific actions to members of your organization that they require to complete their assigned responsibilities. To use Network Watcher capabilities, the account you log into Azure with, must be assigned to the [Owner](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#owner), [Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#contributor), or [Network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#network-contributor) built-in roles, or assigned to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) that is assigned the actions listed for each Network Watcher capability in the sections that follow. To learn more about Network Watcher's capabilities, see [What is Network Watcher?](network-watcher-monitoring-overview.md).
+Azure role-based access control (Azure RBAC) enables you to assign only the specific actions to members of your organization that they require to complete their assigned responsibilities. To use Azure Network Watcher capabilities, the account you use to log into Azure must be assigned to the [Owner](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#owner), [Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#contributor), or [Network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#network-contributor) built-in roles, or assigned to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) that is assigned the actions listed for each Network Watcher capability in the sections that follow. To learn more about Network Watcher's capabilities, see [What is Network Watcher?](network-watcher-monitoring-overview.md).
+
+> [!IMPORTANT]
+> [Network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#network-contributor) does not cover Microsoft.Storage/* or Microsoft.Compute/* actions listed in the [Additional actions](#additional-actions) section.
## Network Watcher
Network Watcher capabilities also require the following actions:
| Microsoft.Storage/storageAccounts/Read | Used to get the properties for the specified storage account | | Microsoft.Storage/storageAccounts/listServiceSas/Action, </br> Microsoft.Storage/storageAccounts/listAccountSas/Action, <br> Microsoft.Storage/storageAccounts/listKeys/Action| Used to fetch shared access signatures (SAS) enabling [secure access to storage account](../storage/common/storage-sas-overview.md) and write to the storage account | | Microsoft.Compute/virtualMachines/Read, </br> Microsoft.Compute/virtualMachines/Write| Used to log in to the VM, do a packet capture and upload it to storage account|
-| Microsoft.Compute/virtualMachines/extensions/Read </br> Microsoft.Compute/virtualMachines/extensions/Write| Used to check if Network Watcher extension is present, and install if required |
+| Microsoft.Compute/virtualMachines/extensions/Read </br> Microsoft.Compute/virtualMachines/extensions/Write| Used to check if Network Watcher extension is present, and install if necessary |
| Microsoft.Compute/virtualMachineScaleSets/Read, </br> Microsoft.Compute/virtualMachineScaleSets/Write| Used to access virtual machine scale sets, do packet captures and upload them to storage account|
-| Microsoft.Compute/virtualMachineScaleSets/extensions/Read, </br> Microsoft.Compute/virtualMachineScaleSets/extensions/Write| Used to check if Network Watcher extension is present, and install if required |
+| Microsoft.Compute/virtualMachineScaleSets/extensions/Read, </br> Microsoft.Compute/virtualMachineScaleSets/extensions/Write| Used to check if Network Watcher extension is present, and install if necessary |
| Microsoft.Insights/alertRules/* | Used to set up metric alerts |
-| Microsoft.Support/* | Used to create and update support tickets from Network Watcher |
+| Microsoft.Support/* | Used to create and update support tickets from Network Watcher |
postgresql Howto Alert On Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-alert-on-metrics.md
Title: Configure alerts - Azure portal - Azure Database for PostgreSQL - Flexible Server description: This article describes how to configure and access metric alerts for Azure Database for PostgreSQL - Flexible Server from the Azure portal.--++
postgresql Howto Configure And Access Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-and-access-logs.md
Title: Configure and Access Logs - Flexible Server - Azure Database for PostgreSQL description: How to access database logs for Azure Database for PostgreSQL - Flexible Server--++ Previously updated : 11/30/2021 Last updated : 4/3/2023 # Configure and Access Logs in Azure Database for PostgreSQL - Flexible Server
To enable resource logs using the Azure portal:
3. Name this setting.
-4. Select your preferred endpoint (storage account, event hub, log analytics).
+4. Select your preferred endpoint (Log Analytics workspace, Storage account, Event hub).
-5. Select the log type **PostgreSQLLogs**.
- :::image type="content" source="media/howto-logging/diagnostic-create-setting.png" alt-text="Choose PostgreSQL logs":::
+5. Select the log type from the list of categories (Server Logs, Sessions data, Query Store Runtime / Wait Statistics, and so on).
+ :::image type="content" source="media/howto-logging/diagnostic-setting-log-category.png" alt-text="Screenshot of choosing log categories.":::
7. Save your setting.
postgresql Howto Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-server-parameters-using-portal.md
Title: Configure server parameters - Azure portal - Azure Database for PostgreSQL - Flexible Server description: This article describes how to configure the Postgres parameters in Azure Database for PostgreSQL - Flexible Server through the Azure portal.--++
postgresql How To Migrate Using Export And Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-using-export-and-import.md
Last updated 09/22/2020 + # Migrate your PostgreSQL database using export and import [!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a script file and [psql](https://www.postgresql.org/docs/current/static/app-psql.html) to import the data into the target database from that file.
+You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a script file and [psql](https://www.postgresql.org/docs/current/static/app-psql.html) to import the data into the target database from that file. If you want to migrate all the databases, you can use [pg_dumpall](https://www.postgresql.org/docs/current/app-pg-dumpall.html) to dump all the databases into one script file.
## Prerequisites To step through this how-to guide, you need:
psql --file=testdb.sql --host=mydemoserver.database.windows.net --port=5432 --us
++ ## Next steps - To migrate a PostgreSQL database using dump and restore, see [Migrate your PostgreSQL database using dump and restore](how-to-migrate-using-dump-and-restore.md). - For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).+
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
Last updated 01/27/2022
+zone_pivot_groups: ap5gc-portal-powershell
# Create a site using the Azure portal
private-5g-core Data Plane Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/data-plane-packet-capture.md
Data plane packet capture works by mirroring packets to a Linux kernel interface
1. Remove the output files:
- `kubectl exec -it -n core core-upf-pp-0 -c troubleshooter ΓÇô- bash rm <path to output file>`
+ `kubectl exec -it -n core core-upf-pp-0 -c troubleshooter -- rm <path to output file>`
## Next steps
private-5g-core Deploy Private Mobile Network With Site Command Line https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-command-line.md
+
+ Title: Deploy a private mobile network and site - Azure CLI
+
+description: Learn how to deploy a private mobile network and site using Azure Command-Line Interface (Azure CLI).
+++++ Last updated : 03/15/2023++
+# Quickstart: Deploy a private mobile network and site - Azure CLI
+
+Azure Private 5G Core is an Azure cloud service for deploying and managing 5G core network functions on an Azure Stack Edge device, as part of an on-premises private mobile network for enterprises. This quickstart describes how to use the Azure CLI to deploy the following:
+
+- A private mobile network.
+- A site.
+- The default service and SIM policy (as described in [Default service and SIM policy](default-service-sim-policy.md)).
+- Optionally, one or more SIMs, and a SIM group.
++
+## Prerequisite: Prepare to deploy a private mobile network and site
+
+- [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md) and [Commission the AKS cluster](commission-cluster.md).
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor or Owner role at the subscription scope.
+- [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md). If you want to provision SIMs, you'll need to prepare a JSON file containing your SIM information, as described in [JSON file format for provisioning SIMs](collect-required-information-for-private-mobile-network.md#json-file-format-for-provisioning-sims).
+- Identify the names of the interfaces corresponding to ports 5 and 6 on the Azure Stack Edge Pro device in the site.
+- [Collect the required information for a site](collect-required-information-for-a-site.md).
+- Refer to the release notes for the current version of packet core and check whether it's supported by the version your Azure Stack Edge (ASE) is currently running. If your ASE version is incompatible with the latest packet core, [update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md).
+
+## Azure CLI commands used in this article
+
+- [az mobile-network create](/cli/azure/mobile-network#az-mobile-network-create)
+- [az mobile-network site create](/cli/azure/mobile-network/site#az-mobile-network-site-create)
+- [az mobile-network pccp create](/cli/azure/mobile-network/pccp#az-mobile-network-pccp-create)
+- [az mobile-network pcdp create](/cli/azure/mobile-network/pcdp#az-mobile-network-pcdp-create)
+- [az mobile-network data-network create](/cli/azure/mobile-network/data-network#az-mobile-network-data-network-create)
+- [az mobile-network sim group create](/cli/azure/mobile-network/sim/group#az-mobile-network-sim-group-create)
+- [az mobile-network slice create](/cli/azure/mobile-network/slice#az-mobile-network-slice-create)
+- [az mobile-network service create](/cli/azure/mobile-network/service#az-mobile-network-service-create)
+- [az mobile-network sim policy create](/cli/azure/mobile-network/sim/policy#az-mobile-network-sim-policy-create)
+- [az mobile-network sim create](/cli/azure/mobile-network/sim#az-mobile-network-sim-create)
+- [az mobile-network attached-data-network create](/cli/azure/mobile-network/attached-data-network#az-mobile-network-attached-data-network-create)
++
+## Deploy a private mobile network, site and SIM
+
+You must complete the following steps in order to successfully deploy a private mobile network, site, and SIM. Each step must be fully completed before proceeding to the next.
+
+### Create a Mobile Network resource
+
+Use `az mobile-network create` to create a new **Mobile Network** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<MOBILENETWORK>` | Enter a name for the private mobile network. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+
+```azurecli
+az mobile-network create --location eastus -n <MOBILENETWORK> -g <RESOURCEGROUP> --identifier mcc=001 mnc=01
+```
+
+### Create a Site resource
+
+Use `az mobile-network site create` to create a new **Site** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<MOBILENETWORK>` | Enter the name of the private mobile network you created. |
+| `<SITE>` | Enter the name for the site. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+
+```azurecli
+az mobile-network site create --mobile-network-name <MOBILENETWORK> -n <SITE> -g <RESOURCEGROUP>
+```
+
+### Create a Packet Core Control Plane resource
+
+Use `az mobile-network pccp create` to create a new **Packet Core Control Plane** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<ASE>` | Enter the name of the ASE. |
+| `<CUSTOMLOCATION>` | Enter the name of the custom location. |
+| `<MOBILENETWORK>` | Enter the name of the mobile network. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<CONTROLPLANE>` | Enter the name for the packet core control plane. |
+| `<SITE>` | Enter the name of the site. |
+| `<IPV4ADDRESS>` | Enter the IPv4 address of the site. |
+
+Obtain the ASE ID and assign it to a variable.
+
+```azurecli
+ASE_ID=$(az databoxedge device show --device-name <ASE> -g <RESOURCEGROUP> --query "id" -o tsv)
+```
+
+Obtain the custom location ID and assign it to a variable.
+
+```azurecli
+CUSTOM_LOCATION_ID=$(az customlocation show --name <CUSTOMLOCATION> -g <RESOURCEGROUP> --query "id" -o tsv)
+```
+
+Obtain the site ID and assign it to a variable.
+
+```azurecli
+SITE_ID=$(az mobile-network site show --mobile-network-name <MOBILENETWORK> -g <RESOURCEGROUP> -n <SITE> --query "id" -o tsv)
+```
+
+Create the packet core control plane.
+
+```azurecli
+az mobile-network pccp create -n <CONTROLPLANE> -g <RESOURCEGROUP> --access-interface name=N2 ipv4Address=<IPV4ADDRESS> --local-diagnostics authentication-type=Password --platform type=AKS-HCI azure-stack-edge-device="{id:$ASE_ID}" customLocation="{id:$CUSTOM_LOCATION_ID}" --sites "[{id:$SITE_ID}]" --sku G0 --location eastus
+```
+
+### Create a Packet Core Data Plane resource
+
+Use `az mobile-network pcdp create` to create a new **Packet Core Data Plane** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<DATAPLANE>` | Enter the name for the data plane. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<CONTROLPLANE>` | Enter the name of the packet core control plane. |
+
+```azurecli
+az mobile-network pcdp create -n <DATAPLANE> -g <RESOURCEGROUP> --pccp-name <CONTROLPLANE> --access-interface name=N3
+```
+
+### Create a Data Network
+
+Use `az mobile-network data-network create` to create a new **Data Network** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<DATANETWORK>` | Enter the name for the data network. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<MOBILENETWORK>` | Enter the name of the private mobile network. |
+
+```azurecli
+az mobile-network data-network create -n <DATANETWORK> -g <RESOURCEGROUP> --mobile-network-name <MOBILENETWORK> --location eastus
+```
+
+### Create a SIM Group
+
+Use `az mobile-network sim group create` to create a new **SIM Group** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<MOBILENETWORK>` | Enter the name of the private mobile network. |
+| `<SIMGROUP>` | Enter the name for the sim group. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+
+Obtain the mobile network ID and assign it to a variable.
+
+```azurecli
+NETWORK_ID=$(az mobile-network show --mobile-network-name <MOBILENETWORK> -g <RESOURCEGROUP> --query "id" -o tsv)
+```
+
+Create the SIM group.
+
+```azurecli
+az mobile-network sim group create -n <SIMGROUP> -g <RESOURCEGROUP> --mobile-network "{id:$NETWORK_ID}"
+```
+
+### Create a Slice
+
+Use `az mobile-network slice create` to create a new **Slice**. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<MOBILENETWORK>` | Enter the name for the private mobile network. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<SLICE>` | Enter the name of the slice. |
+
+```azurecli
+az mobile-network slice create --mobile-network-name <MOBILENETWORK> -n <SLICE> -g <RESOURCEGROUP> --snssai "{sst:1,sd:123abc}"
+```
+
+### Create a Service
+
+Use `az mobile-network service create` to create a new **Service**. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<SERVICE>` | Enter the name of the service. |
+| `<MOBILENETWORK>` | Enter the name for the private mobile network. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+
+```azurecli
+az mobile-network service create -n <SERVICE> -g <RESOURCEGROUP> --mobile-network-name <MOBILENETWORK> --pcc-rules "[{ruleName:default-rule,rulePrecedence:10,serviceDataFlowTemplates:[{templateName:IP-to-server,direction:Uplink,protocol:[ip],remoteIpList:[10.3.4.0/24]}]}]" --service-precedence 10
+```
+
+### Create a SIM Policy
+
+Use `az mobile-network sim policy create` to create a new **SIM Policy**. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<SLICE>` | Enter the name of the slice. |
+| `<DATANETWORK>` | Enter the name of the data network. |
+| `<SERVICE>` | Enter the name of the service. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<SIMPOLICY>` | Enter the name for the SIM policy. |
+| `<MOBILENETWORK>` | Enter the name for the private mobile network. |
+
+Obtain the slice ID and assign it to a variable.
+
+```azurecli
+SLICE_ID=$(az mobile-network slice show --mobile-network-name <MOBILENETWORK> -g <RESOURCEGROUP> -n <SLICE> --query "id" -o tsv)
+```
+
+Obtain the data network ID and assign it to a variable.
+
+```azurecli
+DATANETWORK_ID=$(az mobile-network data-network show -n <DATANETWORK> --mobile-network-name <MOBILENETWORK> -g <RESOURCEGROUP> --query "id" -o tsv)
+```
+
+Obtain the service ID and assign it to a variable.
+
+```azurecli
+SERVICE_ID=$(az mobile-network service show -n <SERVICE> --mobile-network-name <MOBILENETWORK> -g <RESOURCEGROUP> --query "id" -o tsv)
+```
+
+Create the SIM policy.
+
+```azurecli
+az mobile-network sim policy create -g <RESOURCEGROUP> -n <SIMPOLICY> --mobile-network-name <MOBILENETWORK> --ue-ambr "{downlink:'1 Gbps',uplink:'500 Mbps'}" --default-slice "{id:$SLICE_ID}" --slice-config "[{slice:{id:$SLICE_ID},defaultDataNetwork:{id:$DATANETWORK_ID},dataNetworkConfigurations:[{dataNetwork:{id:$DATANETWORK_ID},allowedServices:[{id:$SERVICE_ID}],sessionAmbr:{downlink:'1 Gbps',uplink:'500 Mbps'}}]}]"
+```
+
+### Create a SIM
+
+Use `az mobile-network sim create` to create a new **SIM**. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<SIMGROUP>` | Enter the name of the SIM group. |
+| `<SIM>` | Enter the name for the SIM. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+
+```azurecli
+az mobile-network sim create -g <RESOURCEGROUP> --sim-group-name <SIMGROUP> -n <SIM> --international-msi 0000000000 --operator-key-code 00000000000000000000000000000000 --authentication-key 00000000000000000000000000000000
+```
+
+### Attach the Data Network
+
+Use `az mobile-network attached-data-network create` to attach the **Data Network** you created. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<DATANETWORK>` | Enter the name for the data network. |
+| `<CONTROLPLANE>` | Enter the name of the packet core control plane. |
+| `<DATAPLANE>` | Enter the name of the packet core data plane. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+
+```azurecli
+az mobile-network attached-data-network create -n <DATANETWORK> -g <RESOURCEGROUP> --pccp-name <CONTROLPLANE> --pcdp-name <DATAPLANE> --dns-addresses "[1.1.1.1]" --data-interface name=N6 --address-pool 192.168.1.0/24
+```
+
+## Clean up resources
+
+If you do not want to keep your deployment, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group).
+
+## Next steps
+
+If you have kept your deployment, you can either begin designing policy control to determine how your private mobile network handles traffic, or you can add more sites to your private mobile network.
+
+- [Learn more about designing the policy control configuration for your private mobile network](policy-control.md).
+- [Collect the required information for a site](collect-required-information-for-a-site.md).
private-5g-core Deploy Private Mobile Network With Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-powershell.md
+
+ Title: Deploy a private mobile network and site - Azure PowerShell
+description: Learn how to deploy a private mobile network and site using Azure PowerShell.
+ Last updated : 03/15/2023
+# Quickstart: Deploy a private mobile network and site - Azure PowerShell
+
+Azure Private 5G Core is an Azure cloud service for deploying and managing 5G core network functions on an Azure Stack Edge device, as part of an on-premises private mobile network for enterprises. This quickstart describes how to use Azure PowerShell to deploy the following:
+
+- A private mobile network.
+- A site.
+- The default service and SIM policy (as described in [Default service and SIM policy](default-service-sim-policy.md)).
+- Optionally, one or more SIMs, and a SIM group.
+## Prerequisite: Prepare to deploy a private mobile network and site
+
+- [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md) and [Commission the AKS cluster](commission-cluster.md).
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor or Owner role at the subscription scope.
+- [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md). If you want to provision SIMs, you'll need to prepare a JSON file containing your SIM information, as described in [JSON file format for provisioning SIMs](collect-required-information-for-private-mobile-network.md#json-file-format-for-provisioning-sims).
+- Identify the names of the interfaces corresponding to ports 5 and 6 on the Azure Stack Edge Pro device in the site.
+- [Collect the required information for a site](collect-required-information-for-a-site.md).
+- Refer to the release notes for the current version of packet core and check whether it's supported by the version your Azure Stack Edge (ASE) is currently running. If your ASE version is incompatible with the latest packet core, [update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md).
+
+## Azure PowerShell commands used in this article
+
+- [New-AzMobileNetwork](/powershell/module/az.mobilenetwork/new-azmobilenetwork)
+- [New-AzMobileNetworkSite](/powershell/module/az.mobilenetwork/new-azmobilenetworksite)
+- [New-AzMobileNetworkSiteResourceIdObject](/powershell/module/az.mobilenetwork/new-azmobilenetworksiteresourceidobject)
+- [New-AzMobileNetworkPacketCoreControlPlane](/powershell/module/az.mobilenetwork/new-azmobilenetworkpacketcorecontrolplane)
+- [New-AzMobileNetworkPacketCoreDataPlane](/powershell/module/az.mobilenetwork/new-azmobilenetworkpacketcoredataplane)
+- [New-AzMobileNetworkDataNetwork](/powershell/module/az.mobilenetwork/new-azmobilenetworkdatanetwork)
+- [New-AzMobileNetworkDataNetworkConfigurationObject](/powershell/module/az.mobilenetwork/new-azmobilenetworkdatanetworkconfigurationobject)
+- [New-AzMobileNetworkSimGroup](/powershell/module/az.mobilenetwork/new-azmobilenetworksimgroup)
+- [New-AzMobileNetworkSlice](/powershell/module/az.mobilenetwork/new-azmobilenetworkslice)
+- [New-AzMobileNetworkSliceConfigurationObject](/powershell/module/az.mobilenetwork/new-azmobilenetworksliceconfigurationobject)
+- [New-AzMobileNetworkServiceDataFlowTemplateObject](/powershell/module/az.mobilenetwork/new-azmobilenetworkservicedataflowtemplateobject)
+- [New-AzMobileNetworkPccRuleConfigurationObject](/powershell/module/az.mobilenetwork/new-azmobilenetworkpccruleconfigurationobject)
+- [New-AzMobileNetworkService](/powershell/module/az.mobilenetwork/new-azmobilenetworkservice)
+- [New-AzMobileNetworkServiceResourceIdObject](/powershell/module/az.mobilenetwork/new-azmobilenetworkserviceresourceidobject)
+- [New-AzMobileNetworkSimPolicy](/powershell/module/az.mobilenetwork/new-azmobilenetworksimpolicy)
+- [New-AzMobileNetworkSimStaticIPPropertiesObject](/powershell/module/az.mobilenetwork/new-azmobilenetworksimstaticippropertiesobject)
+- [New-AzMobileNetworkSim](/powershell/module/az.mobilenetwork/new-azmobilenetworksim)
+- [New-AzMobileNetworkAttachedDataNetwork](/powershell/module/az.mobilenetwork/new-azmobilenetworkattacheddatanetwork)
+
+## Sign in to Azure
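+
+Sign in with `Connect-AzAccount` and select the subscription you identified in the prerequisites; for example:
+
+```powershell
+Connect-AzAccount
+Set-AzContext -Subscription "<SUB_ID>"
+```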
+## Deploy a private mobile network, site and SIM
+
+You must complete the following steps in order to successfully deploy a private mobile network, site, and SIM. Each step must be fully completed before proceeding to the next.
+
+Several commands will require the ID of the Azure subscription in which the Azure resources are to be deployed. This appears as `<SUB_ID>` in the commands below. Obtain that value before you proceed.
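+
+If you're signed in, you can read the subscription ID from your current context rather than looking it up in the portal; for example:
+
+```powershell
+$subId = (Get-AzContext).Subscription.Id
+$subId
+```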
+
+### Create a Mobile Network resource
+
+Use `New-AzMobileNetwork` to create a new **Mobile Network** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<MOBILENETWORK>` | Enter a name for the private mobile network. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+
+```powershell
+New-AzMobileNetwork -Name <MOBILENETWORK> -ResourceGroupName <RESOURCEGROUP> -Location eastus -PublicLandMobileNetworkIdentifierMcc 001 -PublicLandMobileNetworkIdentifierMnc 01
+```
+
+### Create a Site resource
+
+Use `New-AzMobileNetworkSite` to create a new **Site** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<MOBILENETWORK>` | Enter the name of the private mobile network you created. |
+| `<SITE>` | Enter the name for the site. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<SUB_ID>` | The ID of the Azure subscription in which the Azure resources are to be deployed. |
+
+```powershell
+New-AzMobileNetworkSite -MobileNetworkName <MOBILENETWORK> -Name <SITE> -ResourceGroupName <RESOURCEGROUP> -Location eastus
+```
+
+Create a variable containing the **Site** resource's ID.
+
+```powershell
+$siteResourceId = New-AzMobileNetworkSiteResourceIdObject -Id "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.MobileNetwork/mobileNetworks/<MOBILENETWORK>/sites/<SITE>"
+```
+
+### Create a Packet Core Control Plane resource
+
+Use `New-AzMobileNetworkPacketCoreControlPlane` to create a new **Packet Core Control Plane** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<CONTROLPLANE>` | Enter the name for the packet core control plane. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<SUB_ID>` | The ID of the Azure subscription in which the Azure resources are to be deployed. |
+| `<ASE>` | Enter the name of the Azure Stack Edge device in the site. |
+| `<CUSTOMLOCATION>` | Enter the name of the custom location created when you commissioned the AKS cluster. |
+
+```powershell
+$aseId = "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.DataBoxEdge/DataBoxEdgeDevices/<ASE>"
+$customLocationId = "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.ExtendedLocation/customLocations/<CUSTOMLOCATION>"
+New-AzMobileNetworkPacketCoreControlPlane -Name <CONTROLPLANE> -ResourceGroupName <RESOURCEGROUP> -LocalDiagnosticAccessAuthenticationType Password -Location eastus -PlatformType AKS-HCI -Site $siteResourceId -Sku G0 -ControlPlaneAccessInterfaceIpv4Address 10.232.44.56 -ControlPlaneAccessInterfaceName N2 -AzureStackEdgeDeviceId $aseId -CustomLocationId $customLocationId
+```
+
+### Create a Packet Core Data Plane resource
+
+Use `New-AzMobileNetworkPacketCoreDataPlane` to create a new **Packet Core Data Plane** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<DATAPLANE>` | Enter the name for the data plane. |
+| `<CONTROLPLANE>` | Enter the name of the packet core control plane. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+
+```powershell
+New-AzMobileNetworkPacketCoreDataPlane -Name <DATAPLANE> -PacketCoreControlPlaneName <CONTROLPLANE> -ResourceGroupName <RESOURCEGROUP> -Location eastus -UserPlaneAccessInterfaceName N3
+```
+
+### Create a Data Network
+
+Use `New-AzMobileNetworkDataNetwork` to create a new **Data Network** resource. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<MOBILENETWORK>` | Enter the name of the private mobile network. |
+| `<DATANETWORK>` | Enter the name for the data network. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<SUB_ID>` | The ID of the Azure subscription in which the Azure resources are to be deployed. |
+
+```powershell
+New-AzMobileNetworkDataNetwork -MobileNetworkName <MOBILENETWORK> -Name <DATANETWORK> -ResourceGroupName <RESOURCEGROUP> -Location eastus
+```
+
+Create a variable for the **Data Network** resource's configuration. This command references `$serviceResourceId`, which is created in [Create a Service](#create-a-service) below, so complete that section before running this command.
+
+```powershell
+$dataNetworkConfiguration = New-AzMobileNetworkDataNetworkConfigurationObject -AllowedService $serviceResourceId -DataNetworkId "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.MobileNetwork/mobileNetworks/<MOBILENETWORK>/dataNetworks/<DATANETWORK>" -SessionAmbrDownlink "1 Gbps" -SessionAmbrUplink "500 Mbps" -FiveQi 9 -AllocationAndRetentionPriorityLevel 9 -DefaultSessionType 'IPv4' -MaximumNumberOfBufferedPacket 200 -PreemptionCapability 'NotPreempt' -PreemptionVulnerability 'Preemptable'
+```
+
+### Create a SIM Group
+
+Use `New-AzMobileNetworkSimGroup` to create a new **SIM Group**. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<SIMGROUP>` | Enter the name for the SIM group. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<MOBILENETWORK>` | Enter the name of the private mobile network. |
+| `<SUB_ID>` | The ID of the Azure subscription in which the Azure resources are to be deployed. |
+
+```powershell
+New-AzMobileNetworkSimGroup -Name <SIMGROUP> -ResourceGroupName <RESOURCEGROUP> -Location eastus -MobileNetworkId "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.MobileNetwork/mobileNetworks/<MOBILENETWORK>"
+```
+
+Confirm that you want to perform the action by typing <kbd>Y</kbd>.
+
+### Create a Slice
+
+Use `New-AzMobileNetworkSlice` to create a new **Slice**. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<MOBILENETWORK>` | Enter the name of the private mobile network. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<SLICE>` | Enter the name of the slice. |
+| `<SUB_ID>` | The ID of the Azure subscription in which the Azure resources are to be deployed. |
+
+```powershell
+New-AzMobileNetworkSlice -MobileNetworkName <MOBILENETWORK> -ResourceGroupName <RESOURCEGROUP> -SliceName <SLICE> -Location eastus -SnssaiSst 1
+```
+
+Create a variable for the **Slice** resource's configuration.
+
+```powershell
+$sliceConfiguration = New-AzMobileNetworkSliceConfigurationObject -DataNetworkConfiguration $dataNetworkConfiguration -DefaultDataNetworkId "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.MobileNetwork/mobileNetworks/<MOBILENETWORK>/dataNetworks/<DATANETWORK>" -SliceId "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.MobileNetwork/mobileNetworks/<MOBILENETWORK>/slices/<SLICE>"
+```
+
+### Create a Service
+
+Use `New-AzMobileNetworkService` to create a new **Service**. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<MOBILENETWORK>` | Enter the name of the private mobile network. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<SERVICE>` | Enter the name of the service. |
+| `<SUB_ID>` | The ID of the Azure subscription in which the Azure resources are to be deployed. |
+
+```powershell
+$dataFlowTemplates = New-AzMobileNetworkServiceDataFlowTemplateObject -Direction Bidirectional -Protocol ip -RemoteIPList any -TemplateName any
+
+$pccRule = New-AzMobileNetworkPccRuleConfigurationObject -RuleName rule_any -RulePrecedence 199 -ServiceDataFlowTemplate $dataFlowTemplates
+
+New-AzMobileNetworkService -MobileNetworkName <MOBILENETWORK> -Name <SERVICE> -ResourceGroupName <RESOURCEGROUP> -Location eastus -PccRule $pccRule -ServicePrecedence 255
+```
+
+Create a variable for the **Service** resource's ID.
+
+```powershell
+$serviceResourceId = New-AzMobileNetworkServiceResourceIdObject -Id "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.MobileNetwork/mobileNetworks/<MOBILENETWORK>/services/<SERVICE>"
+```
+
+### Create a SIM Policy
+
+Use `New-AzMobileNetworkSimPolicy` to create a new **SIM Policy**. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<MOBILENETWORK>` | Enter the name of the private mobile network. |
+| `<SERVICE>` | Enter the name of the service. |
+| `<DATANETWORK>` | Enter the name of the data network. |
+| `<SLICE>` | Enter the name of the slice. |
+| `<SIMPOLICY>` | Enter the name for the SIM policy. |
+| `<SUB_ID>` | The ID of the Azure subscription in which the Azure resources are to be deployed. |
+
+```powershell
+New-AzMobileNetworkSimPolicy -MobileNetworkName <MOBILENETWORK> -Name <SIMPOLICY> -ResourceGroupName <RESOURCEGROUP> -DefaultSliceId "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.MobileNetwork/mobileNetworks/<MOBILENETWORK>/slices/<SLICE>" -Location eastus -SliceConfiguration $sliceConfiguration -UeAmbrDownlink "2 Gbps" -UeAmbrUplink "2 Gbps"
+```
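+
+To confirm the SIM policy was created as expected, you can retrieve it; for example:
+
+```powershell
+Get-AzMobileNetworkSimPolicy -MobileNetworkName <MOBILENETWORK> -ResourceGroupName <RESOURCEGROUP> -Name <SIMPOLICY>
+```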
+
+### Create a SIM
+
+Use `New-AzMobileNetworkSim` to create a new **SIM**. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<SIMGROUP>` | Enter the name of the SIM group. |
+| `<SIM>` | Enter the name for the SIM. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+| `<MOBILENETWORK>` | Enter the name of the private mobile network. |
+| `<SERVICE>` | Enter the name of the service. |
+| `<DATANETWORK>` | Enter the name of the data network. |
+| `<SLICE>` | Enter the name of the slice. |
+| `<SIMPOLICY>` | Enter the name of the SIM policy. |
+| `<SUB_ID>` | The ID of the Azure subscription in which the Azure resources are to be deployed. |
+
+```powershell
+$staticIp = New-AzMobileNetworkSimStaticIPPropertiesObject -StaticIPIpv4Address 10.0.0.20
+
+New-AzMobileNetworkSim -GroupName <SIMGROUP> -Name <SIM> -ResourceGroupName <RESOURCEGROUP> -InternationalMobileSubscriberIdentity 000000000000001 -AuthenticationKey 00112233445566778899AABBCCDDEEFF -DeviceType Mobile -IntegratedCircuitCardIdentifier 8900000000000000001 -OperatorKeyCode 00000000000000000000000000000001 -SimPolicyId "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.MobileNetwork/mobileNetworks/<MOBILENETWORK>/simPolicies/<SIMPOLICY>" -StaticIPConfiguration $staticIp
+```
+
+### Attach the Data Network
+
+Use `New-AzMobileNetworkAttachedDataNetwork` to attach the **Data Network** you created. The example command uses the following placeholder values; replace them with the information gathered in [Prerequisite: Prepare to deploy a private mobile network and site](#prerequisite-prepare-to-deploy-a-private-mobile-network-and-site).
+
+|Placeholder|Value|
+|-|-|
+| `<DATANETWORK>` | Enter the name of the data network. |
+| `<CONTROLPLANE>` | Enter the name of the packet core control plane. |
+| `<DATAPLANE>` | Enter the name of the packet core data plane. |
+| `<RESOURCEGROUP>` | Enter the name of the resource group. |
+
+```powershell
+New-AzMobileNetworkAttachedDataNetwork -Name <DATANETWORK> -PacketCoreControlPlaneName <CONTROLPLANE> -PacketCoreDataPlaneName <DATAPLANE> -ResourceGroupName <RESOURCEGROUP> -DnsAddress "1.1.1.1" -Location eastus -UserPlaneDataInterfaceName N6 -UserEquipmentAddressPoolPrefix "192.168.1.0/24"
+```
+
+## Clean up resources
+
+If you do not want to keep your deployment, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group).
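+
+For example, from PowerShell:
+
+```powershell
+Remove-AzResourceGroup -Name "<RESOURCEGROUP>"
+```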
+
+## Next steps
+
+If you have kept your deployment, you can either begin designing policy control to determine how your private mobile network handles traffic, or you can add more sites to your private mobile network.
+
+- [Learn more about designing the policy control configuration for your private mobile network](policy-control.md).
+- [Collect the required information for a site](collect-required-information-for-a-site.md).
private-5g-core Provision Sims Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-azure-portal.md
Last updated 01/16/2022
+zone_pivot_groups: ap5gc-portal-powershell
# Provision new SIMs for Azure Private 5G Core - Azure portal
purview How To Policies Data Owner Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-storage.md
Previously updated : 10/10/2022 Last updated : 04/03/2023 # Access provisioning by data owner to Azure Storage datasets (Preview)
Follow this link for the steps to [update or delete a data owner policy in Micro
## Data Consumption

- Data consumer can access the requested dataset using tools such as Power BI or Azure Synapse Analytics workspace.
+- The Copy and Clone commands in Azure Storage Explorer require extra IAM permissions beyond the Allow Modify policy from Purview. Grant the Azure AD principal the Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action permission in IAM.
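+
+  As a sketch, one way to grant that action is to assign a built-in role that contains it, such as Storage Blob Delegator, at the storage account scope (the assignee and scope values below are placeholders):
+
+  ```azurecli
+  az role assignment create --assignee "<objectId-or-UPN>" --role "Storage Blob Delegator" --scope "/subscriptions/<SUB_ID>/resourceGroups/<RESOURCEGROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGEACCOUNT>"
+  ```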
- Sub-container access: Policy statements set below container level on a Storage account are supported. However, users will not be able to browse to the data asset using Azure portal's Storage Browser or Microsoft Azure Storage Explorer tool if access is granted only at file or folder level of the Azure Storage account. This is because these apps attempt to crawl down the hierarchy starting at container level, and the request fails because no access has been granted at that level. Instead, the App that requests the data must execute a direct access by providing a fully qualified name to the data object. The following documents show examples of how to perform a direct access. See also the blogs in the *Next steps* section of this how-to-guide. - [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster) - [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob)
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
The following file types are supported for scanning, for schema extraction, and
> * We do not support data type detection. The data type will be listed as "string" for all columns.
> * We only support comma (','), semicolon (';'), vertical bar ('|'), and tab ('\t') as delimiters.
> * Delimited files with fewer than three rows cannot be determined to be CSV files if they are using a custom delimiter. For example: files with ~ delimiter and fewer than three rows will not be able to be determined to be CSV files.
-> * If the field doesn't have quotes on the ends, or the field is a single quote char or there are quotes within the field, the row will be judged as error row. Rows that have different number of columns than the header row will be judged as error rows. (numbers of error rows / numbers of rows sampled ) must be less than 0.1.
+> * If a field contains double quotes, they can only appear at the beginning and end of the field and must be matched; see the example below. Double quotes that appear in the middle of the field, or that appear at the beginning and end but aren't matched, are recognized as bad data, and no schema is parsed from the file. Rows that have a different number of columns than the header row are judged as error rows. (The number of error rows divided by the number of rows sampled must be less than 0.1.)
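+>
+>   For example, under this quoting rule the first field below parses while the second is flagged as bad data:
+>
+>   ```text
+>   "New York, NY"   <- valid: the quotes wrap the whole field and are matched
+>   New "York" NY    <- invalid: quotes inside the field
+>   ```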
+> * For Parquet files, if you are using a self-hosted integration runtime, you need to install the **64-bit JRE 8 (Java Runtime Environment) or OpenJDK** on your IR machine. Check our [Java Runtime Environment section at the bottom of the page](manage-integration-runtimes.md#java-runtime-environment-installation) for an installation guide.

## Schema extraction
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-indexing-azure-blob-storage.md
Previously updated : 03/30/2023 Last updated : 04/03/2023 # Index data from Azure Blob Storage
If you don't set up inclusion or exclusion criteria, the indexer will report an
An indexer typically creates one search document per blob, where the text content and metadata are captured as searchable fields in an index. If blobs are whole files, you can potentially parse them into [multiple search documents](search-howto-index-one-to-many-blobs.md). For example, you can parse rows in a [CSV file](search-howto-index-csv-blobs.md) to create one search document per row.
+A compound or embedded document (such as a ZIP archive, a Word document with embedded Outlook email containing attachments, or an .MSG file with attachments) is also indexed as a single document. For example, all images extracted from the attachments of an .MSG file will be returned in the normalized_images field. If you have images, consider adding [AI enrichment](cognitive-search-concept-intro.md) to get more search utility from that content.
+
+Textual content of a document is extracted into a string field named "content". You can also extract standard and user-defined metadata.
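+
+As an illustrative sketch (the metadata field names shown are standard blob metadata properties, but which fields appear depends on your index definition), a search document produced from a blob might look like this:
+
+```json
+{
+  "content": "Full text extracted from the blob...",
+  "metadata_storage_name": "quarterly-report.docx",
+  "metadata_author": "Contoso"
+}
+```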
<a name="indexing-blob-metadata"></a>

### Indexing blob metadata
security Cyber Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/cyber-services.md
Title: Microsoft Services in Cybersecurity | Microsoft Docs
description: The article provides an introduction about Microsoft services related to cybersecurity and how to obtain more information about these services.
ms.assetid: 925ba3c6-fe35-413a-98ea-e1a1461f3022
Previously updated : 01/14/2019 Last updated : 04/03/2023
Microsoft services can create solutions that integrate, and enhance the latest s
Our team of technical professionals consists of highly trained experts who offer a wealth of security and identity experience.
-Learn more about services provided by Microsoft
-
-* [Security Risk Assessment](https://download.microsoft.com/download/5/D/0/5D06F4EA-EAA1-4224-99E2-0C0F45E941D0/Microsoft%20Security%20Risk%20Asessment%20Datasheet.pdf)
-* Dynamic Identity Framework Assessment
-* [Offline Assessment for Active Directory Services](https://download.microsoft.com/download/1/C/1/1C15BA51-840E-498D-86C6-4BD35D33C79E/Prerequisites_Offline_AD.pdf)
-* [Enhanced Security Administration Environment](https://download.microsoft.com/download/A/C/5/AC5D21A6-E04B-4DC4-B1F2-AE060319A4D7/Premier_Support_for_Security/Popis/Enhanced-Security-Admin-Environment-Solution-Datasheet-%5BEN%5D.pdf)
-* Azure AD Implementation Services
-* [Securing Against Lateral Account Movement](/azure-advanced-threat-protection/use-case-lateral-movement-path)
-* [Incident Response and Recovery](/microsoft-365/compliance/gdpr-breach-microsoft-support-professional-services#data-protection-incident-response-overview)
- [Learn more](https://aka.ms/cyberserv) about Microsoft Services Security consulting services.
security Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management.md
documentationcenter: na ms.assetid: 2431feba-3364-4a63-8e66-858926061dd3
Previously updated : 04/08/2020 Last updated : 04/03/2023

# Security management in Azure
-Azure subscribers may manage their cloud environments from multiple devices, including management workstations, developer PCs, and even privileged end-user devices that have task-specific permissions. In some cases, administrative functions are performed through web-based consoles such as the [Azure portal](https://azure.microsoft.com/features/azure-portal/). In other cases, there may be direct connections to Azure from on-premises systems over Virtual Private Networks (VPNs), Terminal Services, client application protocols, or (programmatically) the Azure Service Management API (SMAPI). Additionally, client endpoints can be either domain joined or isolated and unmanaged, such as tablets or smartphones.
+Azure subscribers may manage their cloud environments from multiple devices, including management workstations, developer PCs, and even privileged end-user devices that have task-specific permissions. In some cases, administrative functions are performed through web-based consoles such as the [Azure portal](https://azure.microsoft.com/features/azure-portal/). In other cases, there may be direct connections to Azure from on-premises systems over Virtual Private Networks (VPNs), Terminal Services, client application protocols, or (programmatically) the Azure classic deployment model. Additionally, client endpoints can be either domain joined or isolated and unmanaged, such as tablets or smartphones.
Although multiple access and management capabilities provide a rich set of options, this variability can add significant risk to a cloud deployment. It can be difficult to manage, track, and audit administrative actions. This variability may also introduce security threats through unregulated access to client endpoints that are used for managing cloud services. Using general or personal workstations for developing and managing infrastructure opens unpredictable threat vectors such as web browsing (for example, watering hole attacks) or email (for example, social engineering and phishing). ![A diagram showing the different ways a threat could mount an attack.](./media/management/typical-management-network-topology.png)
-The potential for attacks increases in this type of environment because it is challenging to construct security policies and mechanisms to appropriately manage access to Azure interfaces (such as SMAPI) from widely varied endpoints.
+The potential for attacks increases in this type of environment because it's challenging to construct security policies and mechanisms to appropriately manage access to Azure interfaces (such as SMAPI) from widely varied endpoints.
### Remote management threats
Attackers often attempt to gain privileged access by compromising account credentials (for example, through password brute forcing, phishing, and credential harvesting), or by tricking users into running harmful code (for example, from harmful websites with drive-by downloads or from harmful email attachments). In a remotely managed cloud environment, account breaches can lead to an increased risk due to anywhere, anytime access.
-Even with tight controls on primary administrator accounts, lower-level user accounts can be used to exploit weaknesses in oneΓÇÖs security strategy. Lack of appropriate security training can also lead to breaches through accidental disclosure or exposure of account information.
+Even with tight controls on primary administrator accounts, lower-level user accounts can be used to exploit weaknesses in one's security strategy. Lack of appropriate security training can also lead to breaches through accidental disclosure or exposure of account information.
When a user workstation is also used for administrative tasks, it can be compromised at many different points, whether a user is browsing the web, using third-party and open-source tools, or opening a harmful document file that contains a trojan. In general, most targeted attacks that result in data breaches can be traced to browser exploits, plug-ins (such as Flash, PDF, Java), and spear phishing (email) on desktop machines. These machines may have administrative-level or service-level permissions to access live servers or network devices for operations when used for development or management of other assets.

### Operational security fundamentals
-For more secure management and operations, you can minimize a clientΓÇÖs attack surface by reducing the number of possible entry points. This can be done through security principles: ΓÇ£separation of dutiesΓÇ¥ and ΓÇ£segregation of environments.ΓÇ¥
+For more secure management and operations, you can minimize a client's attack surface by reducing the number of possible entry points. This can be done through security principles: "separation of duties" and "segregation of environments."
Isolate sensitive functions from one another to decrease the likelihood that a mistake at one level leads to a breach in another. Examples:
-* Administrative tasks should not be combined with activities that might lead to a compromise (for example, malware in an administratorΓÇÖs email that then infects an infrastructure server).
-* A workstation used for high-sensitivity operations should not be the same system used for high-risk purposes such as browsing the Internet.
+* Administrative tasks shouldn't be combined with activities that might lead to a compromise (for example, malware in an administrator's email that then infects an infrastructure server).
+* A workstation used for high-sensitivity operations shouldn't be the same system used for high-risk purposes such as browsing the Internet.
-Reduce the systemΓÇÖs attack surface by removing unnecessary software. Example:
+Reduce the system's attack surface by removing unnecessary software. Example:
-* Standard administrative, support, or development workstation should not require installation of an email client or other productivity applications if the deviceΓÇÖs main purpose is to manage cloud services.
+* Standard administrative, support, or development workstation shouldn't require installation of an email client or other productivity applications if the device's main purpose is to manage cloud services.
Client systems that have administrator access to infrastructure components should be subjected to the strictest possible policy to reduce security risks. Examples:
Consolidating access resources and eliminating unmanaged endpoints also simplifi
### Providing security for Azure remote management
Azure provides security mechanisms to aid administrators who manage Azure cloud services and virtual machines. These mechanisms include:
-* Authentication and [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
+* Authentication and [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
* Monitoring, logging, and auditing.
* Certificates and encrypted communications.
* A web management portal.
* Network packet filtering.
-With client-side security configuration and datacenter deployment of a management gateway, it is possible to restrict and monitor administrator access to cloud applications and data.
+With client-side security configuration and datacenter deployment of a management gateway, it's possible to restrict and monitor administrator access to cloud applications and data.
> [!NOTE]
> Certain recommendations in this article may result in increased data, network, or compute resource usage, and may increase your license or subscription costs.
The goal of hardening a workstation is to eliminate all but the most critical fu
Within an on-premises enterprise environment, you can limit the attack surface of your physical infrastructure through dedicated management networks, server rooms that have card access, and workstations that run on protected areas of the network. In a cloud or hybrid IT model, being diligent about secure management services can be more complex because of the lack of physical access to IT resources. Implementing protection solutions requires careful software configuration, security-focused processes, and comprehensive policies.
-Using a least-privilege minimized software footprint in a locked-down workstation for cloud managementΓÇöand for application developmentΓÇöcan reduce the risk of security incidents by standardizing the remote management and development environments. A hardened workstation configuration can help prevent the compromise of accounts that are used to manage critical cloud resources by closing many common avenues used by malware and exploits. Specifically, you can use [Windows AppLocker](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd759117(v=ws.11)) and Hyper-V technology to control and isolate client system behavior and mitigate threats, including email or Internet browsing.
+Using a least-privilege minimized software footprint in a locked-down workstation for cloud management and for application development can reduce the risk of security incidents by standardizing the remote management and development environments. A hardened workstation configuration can help prevent the compromise of accounts that are used to manage critical cloud resources by closing many common avenues used by malware and exploits. Specifically, you can use [Windows AppLocker](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd759117(v=ws.11)) and Hyper-V technology to control and isolate client system behavior and mitigate threats, including email or Internet browsing.
-On a hardened workstation, the administrator runs a standard user account (which blocks administrative-level execution) and associated applications are controlled by an allow list. The basic elements of a hardened workstation are as follows:
+On a hardened workstation, the administrator runs a standard user account (which blocks administrative-level execution) and associated applications are controlled by an allowlist. The basic elements of a hardened workstation are as follows:
* Active scanning and patching. Deploy antimalware software, perform regular vulnerability scans, and update all workstations by using the latest security update in a timely fashion.
-* Limited functionality. Uninstall any applications that are not needed and disable unnecessary (startup) services.
+* Limited functionality. Uninstall any applications that aren't needed and disable unnecessary (startup) services.
* Network hardening. Use Windows Firewall rules to allow only valid IP addresses, ports, and URLs related to Azure management. Ensure that inbound remote connections to the workstation are also blocked.
-* Execution restriction. Allow only a set of predefined executable files that are needed for management to run (referred to as ΓÇ£default-denyΓÇ¥). By default, users should be denied permission to run any program unless it is explicitly defined in the allow list.
-* Least privilege. Management workstation users should not have any administrative privileges on the local machine itself. This way, they cannot change the system configuration or the system files, either intentionally or unintentionally.
+* Execution restriction. Allow only a set of predefined executable files that are needed for management to run (referred to as "default-deny"). By default, users should be denied permission to run any program unless it's explicitly defined in the allowlist (see the AppLocker sketch after this list).
+* Least privilege. Management workstation users shouldn't have any administrative privileges on the local machine itself. This way, they can't change the system configuration or the system files, either intentionally or unintentionally.
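+
+As a minimal sketch of the default-deny approach described in the execution restriction item (assuming the Windows AppLocker PowerShell module is available; the folder path is illustrative):
+
+```powershell
+# Inventory the executables and scripts in the approved management-tools folder
+$info = Get-AppLockerFileInformation -Directory 'C:\ManagementTools' -Recurse -FileType Exe, Script
+
+# Build allow rules (publisher rules where possible, hash rules otherwise) for all users
+$policy = New-AppLockerPolicy -FileInformation $info -RuleType Publisher, Hash -User Everyone -Optimize
+
+# Merge the generated rules into the effective local AppLocker policy
+Set-AppLockerPolicy -PolicyObject $policy -Merge
+```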
+You can enforce all this by using [Group Policy Objects](../../active-directory-domain-services/manage-group-policy.md) (GPOs) in Active Directory Domain Services (AD DS) and applying them through your (local) management domain to all management accounts.

### Managing services, applications, and data
Azure cloud services configuration is performed through either the Azure portal or SMAPI, via the Windows PowerShell command-line interface or a custom-built application that takes advantage of these RESTful interfaces. Services using these mechanisms include Azure Active Directory (Azure AD), Azure Storage, Azure Websites, Azure Virtual Network, and others.
-Virtual MachineΓÇôdeployed applications provide their own client tools and interfaces as needed, such as the Microsoft Management Console (MMC), an enterprise management console (such as Microsoft System Center or Windows Intune), or another management applicationΓÇöMicrosoft SQL Server Management Studio, for example. These tools typically reside in an enterprise environment or client network. They may depend on specific network protocols, such as Remote Desktop Protocol (RDP), that require direct, stateful connections. Some may have web-enabled interfaces that should not be openly published or accessible via the Internet.
+Virtual machine-deployed applications provide their own client tools and interfaces as needed, such as the Microsoft Management Console (MMC), an enterprise management console (such as Microsoft System Center or Windows Intune), or another management application, for example, Microsoft SQL Server Management Studio. These tools typically reside in an enterprise environment or client network. They may depend on specific network protocols, such as Remote Desktop Protocol (RDP), that require direct, stateful connections. Some may have web-enabled interfaces that shouldn't be openly published or accessible via the Internet.
-You can restrict access to infrastructure and platform services management in Azure by using [multi-factor authentication](../../active-directory/authentication/concept-mfa-howitworks.md), [X.509 management certificates](/archive/blogs/azuresecurity/certificate-management-in-azure-dos-and-donts), and firewall rules. The Azure portal and SMAPI require Transport Layer Security (TLS). However, services and applications that you deploy into Azure require you to take protection measures that are appropriate based on your application. These mechanisms can frequently be enabled more easily through a standardized hardened workstation configuration.
-
-### Management gateway
-To centralize all administrative access and simplify monitoring and logging, you can deploy a dedicated [Remote Desktop Gateway](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd560672(v=ws.10)) (RD Gateway) server in your on-premises network, connected to your Azure environment.
-
-A Remote Desktop Gateway is a policy-based RDP proxy service that enforces security requirements. Implementing RD Gateway together with Windows Server Network Access Protection (NAP) helps ensure that only clients that meet specific security health criteria established by Active Directory Domain Services (AD DS) Group Policy objects (GPOs) can connect. In addition:
-
-* Provision an [Azure management certificate](/previous-versions/azure/gg551722(v=azure.100)) on the RD Gateway so that it is the only host allowed to access the Azure portal.
-* Join the RD Gateway to the same [management domain](/previous-versions/windows/it-pro/windows-2000-server/bb727085(v=technet.10)) as the administrator workstations. This is necessary when you are using a site-to-site IPsec VPN or ExpressRoute within a domain that has a one-way trust to Azure AD, or if you are federating credentials between your on-premises AD DS instance and Azure AD.
-* Configure a [client connection authorization policy](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753324(v=ws.11)) to let the RD Gateway verify that the client machine name is valid (domain joined) and allowed to access the Azure portal.
-* Use IPsec for [Azure VPN](../../vpn-gateway/index.yml) to further protect management traffic from eavesdropping and token theft, or consider an isolated Internet link via [Azure ExpressRoute](../../expressroute/index.yml).
-* Enable multi-factor authentication (via [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md)) or smart-card authentication for administrators who log on through RD Gateway.
-* Configure source [IP address restrictions](https://azure.microsoft.com/blog/2013/08/27/confirming-dynamic-ip-address-restrictions-in-windows-azure-web-sites/) or [Network Security Groups](../../virtual-network/network-security-groups-overview.md) in Azure to minimize the number of permitted management endpoints.
+You can restrict access to infrastructure and platform services management in Azure by using [multi-factor authentication](../../active-directory/authentication/concept-mfa-howitworks.md), X.509 management certificates, and firewall rules. The Azure portal and SMAPI require Transport Layer Security (TLS). However, services and applications that you deploy into Azure require you to take protection measures that are appropriate based on your application. These mechanisms can frequently be enabled more easily through a standardized hardened workstation configuration.
## Security guidelines
-In general, helping to secure administrator workstations for use with the cloud is similar to the practices used for any workstation on-premisesΓÇöfor example, minimized build and restrictive permissions. Some unique aspects of cloud management are more akin to remote or out-of-band enterprise management. These include the use and auditing of credentials, security-enhanced remote access, and threat detection and response.
+In general, helping to secure administrator workstations for use with the cloud is similar to the practices used for any workstation on-premises, such as a minimized build and restrictive permissions. Some unique aspects of cloud management are more akin to remote or out-of-band enterprise management. These include the use and auditing of credentials, security-enhanced remote access, and threat detection and response.
### Authentication
You can use Azure logon restrictions to constrain source IP addresses for accessing administrative tools and audit access requests. To help Azure identify management clients (workstations and/or applications), you can configure both SMAPI (via customer-developed tools such as Windows PowerShell cmdlets) and the Azure portal to require client-side management certificates to be installed, in addition to TLS/SSL certificates. We also recommend that administrator access require multi-factor authentication.
-Some applications or services that you deploy into Azure may have their own authentication mechanisms for both end-user and administrator access, whereas others take full advantage of Azure AD. Depending on whether you are federating credentials via Active Directory Federation Services (AD FS), using directory synchronization or maintaining user accounts solely in the cloud, using [Microsoft Identity Manager](/microsoft-identity-manager/) (part of Azure AD Premium) helps you manage identity lifecycles between the resources.
+Some applications or services that you deploy into Azure may have their own authentication mechanisms for both end-user and administrator access, whereas others take full advantage of Azure AD. Depending on whether you're federating credentials via Active Directory Federation Services (AD FS), using directory synchronization or maintaining user accounts solely in the cloud, using [Microsoft Identity Manager](/microsoft-identity-manager/) (part of Azure AD Premium) helps you manage identity lifecycles between the resources.
### Connectivity
-Several mechanisms are available to help secure client connections to your Azure virtual networks. Two of these mechanisms, site-to-site VPN (S2S) and [point-to-site VPN](../../vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md) (P2S), enable the use of industry standard IPsec (S2S) or the [Secure Socket Tunneling Protocol](/previous-versions/technet-magazine/cc162322(v=msdn.10)) (SSTP) (P2S) for encryption and tunneling. When Azure is connecting to public-facing Azure services management such as the Azure portal, Azure requires Hypertext Transfer Protocol Secure (HTTPS).
+Several mechanisms are available to help secure client connections to your Azure virtual networks. Two of these mechanisms, site-to-site VPN (S2S) and [point-to-site VPN](../../vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md) (P2S), enable the use of industry standard IPsec (S2S) for encryption and tunneling. When Azure is connecting to public-facing Azure services management such as the Azure portal, Azure requires Hypertext Transfer Protocol Secure (HTTPS).
-A stand-alone hardened workstation that does not connect to Azure through an RD Gateway should use the SSTP-based point-to-site VPN to create the initial connection to the Azure Virtual Network, and then establish RDP connection to individual virtual machines from with the VPN tunnel.
+A stand-alone hardened workstation that doesn't connect to Azure through an RD Gateway should use the SSTP-based point-to-site VPN to create the initial connection to the Azure Virtual Network, and then establish an RDP connection to individual virtual machines from within the VPN tunnel.
### Management auditing vs. policy enforcement
Typically, there are two approaches for helping to secure management processes: auditing and policy enforcement. Doing both provides comprehensive controls, but may not be possible in all situations. In addition, each approach has different levels of risk, cost, and effort associated with managing security, particularly as it relates to the level of trust placed in both individuals and system architectures.
Monitoring, logging, and auditing provide a basis for tracking and understanding
Policy enforcement that includes strict access controls puts programmatic mechanisms in place that can govern administrator actions, and it helps ensure that all possible protection measures are being used. Logging provides proof of enforcement, in addition to a record of who did what, from where, and when. Logging also enables you to audit and crosscheck information about how administrators follow policies, and it provides evidence of activities.

## Client configuration
-We recommend three primary configurations for a hardened workstation. The biggest differentiators between them are cost, usability, and accessibility, while maintaining a similar security profile across all options. The following table provides a short analysis of the benefits and risks to each. (Note that ΓÇ£corporate PCΓÇ¥ refers to a standard desktop PC configuration that would be deployed for all domain users, regardless of roles.)
+We recommend three primary configurations for a hardened workstation. The biggest differentiators between them are cost, usability, and accessibility, while maintaining a similar security profile across all options. The following table provides a short analysis of the benefits and risks to each. (Note that "corporate PC" refers to a standard desktop PC configuration that would be deployed for all domain users, regardless of roles.)
| Configuration | Benefits | Cons |
| --- | --- | --- |
We recommend three primary configurations for a hardened workstation. The bigges
| Corporate PC as virtual machine |Reduced hardware costs | - |
| - | Segregation of role and applications | - |
-It is important that the hardened workstation is the host and not the guest, with nothing between the host operating system and the hardware. Following the ΓÇ£clean source principleΓÇ¥ (also known as ΓÇ£secure originΓÇ¥) means that the host should be the most hardened. Otherwise, the hardened workstation (guest) is subject to attacks on the system on which it is hosted.
+It's important that the hardened workstation is the host and not the guest, with nothing between the host operating system and the hardware. Following the "clean source principle" (also known as "secure origin") means that the host should be the most hardened. Otherwise, the hardened workstation (guest) is subject to attacks on the system on which it's hosted.
You can further segregate administrative functions through dedicated system images for each hardened workstation that have only the tools and permissions needed for managing select Azure and cloud applications, with specific local AD DS GPOs for the necessary tasks. For IT environments that have no on-premises infrastructure (for example, no access to a local AD DS instance for GPOs because all servers are in the cloud), a service such as [Microsoft Intune](/mem/intune/) can simplify deploying and maintaining workstation configurations.

### Stand-alone hardened workstation for management
-With a stand-alone hardened workstation, administrators have a PC or laptop that they use for administrative tasks and another, separate PC or laptop for non-administrative tasks. A workstation dedicated to managing your Azure services does not need other applications installed. Additionally, using workstations that support a [Trusted Platform Module](/previous-versions/windows/it-pro/windows-vista/cc766159(v=ws.10)) (TPM) or similar hardware-level cryptography technology aids in device authentication and prevention of certain attacks. TPM can also support full volume protection of the system drive by using [BitLocker Drive Encryption](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732774(v=ws.11)).
-
-In the stand-alone hardened workstation scenario (shown below), the local instance of Windows Firewall (or a non-Microsoft client firewall) is configured to block inbound connections, such as RDP. The administrator can log on to the hardened workstation and start an RDP session that connects to Azure after establishing a VPN connect with an Azure Virtual Network, but cannot log on to a corporate PC and use RDP to connect to the hardened workstation itself.
+With a stand-alone hardened workstation, administrators have a PC or laptop that they use for administrative tasks and another, separate PC or laptop for non-administrative tasks. In the stand-alone hardened workstation scenario (shown below), the local instance of Windows Firewall (or a non-Microsoft client firewall) is configured to block inbound connections, such as RDP. The administrator can log on to the hardened workstation and start an RDP session that connects to Azure after establishing a VPN connection with an Azure Virtual Network, but can't log on to a corporate PC and use RDP to connect to the hardened workstation itself.
![A diagram showing the stand-alone hardened workstation scenario.](./media/management/stand-alone-hardened-workstation-topology.png)
In cases where a separate stand-alone hardened workstation is cost prohibitive o
![A diagram showing the hardened workstation hosting a virtual machine to perform non-administrative tasks.](./media/management/hardened-workstation-enabled-with-hyper-v.png)
-To avoid several security risks that can arise from using one workstation for systems management and other daily work tasks, you can deploy a Windows Hyper-V virtual machine to the hardened workstation. This virtual machine can be used as the corporate PC. The corporate PC environment can remain isolated from the Host, which reduces its attack surface and removes the userΓÇÖs daily activities (such as email) from coexisting with sensitive administrative tasks.
+To avoid several security risks that can arise from using one workstation for systems management and other daily work tasks, you can deploy a Windows Hyper-V virtual machine to the hardened workstation. This virtual machine can be used as the corporate PC. The corporate PC environment can remain isolated from the Host, which reduces its attack surface and removes the user's daily activities (such as email) from coexisting with sensitive administrative tasks.
-The corporate PC virtual machine runs in a protected space and provides user applications. The host remains a ΓÇ£clean sourceΓÇ¥ and enforces strict network policies in the root operating system (for example, blocking RDP access from the virtual machine).
+The corporate PC virtual machine runs in a protected space and provides user applications. The host remains a "clean source" and enforces strict network policies in the root operating system (for example, blocking RDP access from the virtual machine).
## Best practices
-Consider the following additional guidelines when you are managing applications and data in Azure.
+Consider the following additional guidelines when you're managing applications and data in Azure.
### Dos and don'ts
-Don't assume that because a workstation has been locked down that other common security requirements do not need to be met. The potential risk is higher because of elevated access levels that administrator accounts generally possess. Examples of risks and their alternate safe practices are shown in the table below.
+Don't assume that because a workstation has been locked down that other common security requirements don't need to be met. The potential risk is higher because of elevated access levels that administrator accounts generally possess. Examples of risks and their alternate safe practices are shown in the table below.
| Don't | Do |
| --- | --- |
| Don't email credentials for administrator access or other secrets (for example, TLS/SSL or management certificates) |Maintain confidentiality by delivering account names and passwords by voice (but not storing them in voice mail), perform a remote installation of client/server certificates (via an encrypted session), download from a protected network share, or distribute by hand via removable media. |
| - | Proactively manage your management certificate life cycles. |
| Don't store account passwords unencrypted or un-hashed in application storage (such as in spreadsheets, SharePoint sites, or file shares). |Establish security management principles and system hardening policies, and apply them to your development environment. |
-| - | Use [Enhanced Mitigation Experience Toolkit 5.5](https://technet.microsoft.com/security/jj653751) certificate pinning rules to ensure proper access to Azure SSL/TLS sites. |
-| Don't share accounts and passwords between administrators, or reuse passwords across multiple user accounts or services, particularly those for social media or other nonadministrative activities. |Create a dedicated Microsoft account to manage your Azure subscriptionΓÇöan account that is not used for personal email. |
+| Don't share accounts and passwords between administrators, or reuse passwords across multiple user accounts or services, particularly those for social media or other nonadministrative activities. |Create a dedicated Microsoft account to manage your Azure subscription, an account that is not used for personal email. |
| Don't email configuration files. |Configuration files and profiles should be installed from a trusted source (for example, an encrypted USB flash drive), not from a mechanism that can be easily compromised, such as email. |
| Don't use weak or simple logon passwords. |Enforce strong password policies, expiration cycles (change on first use), console timeouts, and automatic account lockouts. Use a client password management system with multi-factor authentication for password vault access. |
-| Don't expose management ports to the Internet. |Lock down Azure ports and IP addresses to restrict management access. For more information, see the [Azure Network Security](https://download.microsoft.com/download/4/3/9/43902EC9-410E-4875-8800-0788BE146A3D/Windows%20Azure%20Network%20Security%20Whitepaper%20-%20FINAL.docx) white paper. |
+| Don't expose management ports to the Internet. |Lock down Azure ports and IP addresses to restrict management access. |
| - | Use firewalls, VPNs, and NAP for all management connections. |

## Azure operations
-Within MicrosoftΓÇÖs operation of Azure, operations engineers and support personnel who access AzureΓÇÖs production systems use [hardened workstation PCs with VMs](#stand-alone-hardened-workstation-for-management) provisioned on them for internal corporate network access and applications (such as e-mail, intranet, etc.). All management workstation computers have TPMs, the host boot drive is encrypted with BitLocker, and they are joined to a special organizational unit (OU) in MicrosoftΓÇÖs primary corporate domain.
+Within Microsoft's operation of Azure, operations engineers and support personnel who access Azure's production systems use [hardened workstation PCs with VMs](#stand-alone-hardened-workstation-for-management) provisioned on them for internal corporate network access and applications (such as e-mail, intranet, etc.). All management workstation computers have TPMs, the host boot drive is encrypted with BitLocker, and they're joined to a special organizational unit (OU) in Microsoft's primary corporate domain.
System hardening is enforced through Group Policy, with centralized software updating. For auditing and analysis, event logs (such as security and AppLocker) are collected from management workstations and saved to a central location.
-In addition, dedicated jump-boxes on MicrosoftΓÇÖs network that require two-factor authentication are used to connect to AzureΓÇÖs production network.
+In addition, dedicated jump-boxes on Microsoft's network that require two-factor authentication are used to connect to Azure's production network.
## Azure security checklist

Minimizing the number of tasks that administrators can perform on a hardened workstation helps minimize the attack surface in your development and management environment. Use the following technologies to help protect your hardened workstation:
-* IE hardening. The Internet Explorer browser (or any web browser, for that matter) is a key entry point for harmful code due to its extensive interactions with external servers. Review your client policies and enforce running in protected mode, disabling add-ons, disabling file downloads, and using [Microsoft SmartScreen](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj618329(v=ws.11)) filtering. Ensure that security warnings are displayed. Take advantage of Internet zones and create a list of trusted sites for which you have configured reasonable hardening. Block all other sites and in-browser code, such as ActiveX and Java.
-* Standard user. Running as a standard user brings a number of benefits, the biggest of which is that stealing administrator credentials via malware becomes more difficult. In addition, a standard user account does not have elevated privileges on the root operating system, and many configuration options and APIs are locked out by default.
-* AppLocker. You can use [AppLocker](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee619725(v=ws.10)) to restrict the programs and scripts that users can run. You can run AppLocker in audit or enforcement mode. By default, AppLocker has an allow rule that enables users who have an admin token to run all code on the client. This rule exists to prevent administrators from locking themselves out, and it applies only to elevated tokens. See also Code Integrity as part of Windows Server [core security](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd348705(v=ws.10)).
-* Code signing. Code signing all tools and scripts used by administrators provides a manageable mechanism for deploying application lockdown policies. Hashes do not scale with rapid changes to the code, and file paths do not provide a high level of security. You should combine AppLocker rules with a PowerShell [execution policy](/previous-versions/windows/it-pro/windows-powershell-1.0/ee176961(v=technet.10)) that only allows specific signed code and scripts to be [executed](/powershell/module/microsoft.powershell.security/set-executionpolicy).
+* A web browser is a key entry point for harmful code due to its extensive interactions with external servers. Review your client policies and enforce running in protected mode, disabling add-ons, and disabling file downloads. Ensure that security warnings are displayed. Take advantage of Internet zones and create a list of trusted sites for which you have configured reasonable hardening. Block all other sites and in-browser code, such as ActiveX and Java.
+* Standard user. Running as a standard user brings a number of benefits, the biggest of which is that stealing administrator credentials via malware becomes more difficult. In addition, a standard user account doesn't have elevated privileges on the root operating system, and many configuration options and APIs are locked out by default.
+* Code signing. Code signing all tools and scripts used by administrators provides a manageable mechanism for deploying application lockdown policies. Hashes don't scale with rapid changes to the code, and file paths don't provide a high level of security. [Set the PowerShell execution policies for Windows computers](/powershell/module/microsoft.powershell.security/set-executionpolicy).
* Group Policy. Create a global administrative policy that is applied to any domain workstation that is used for management (and block access from all others), and to user accounts authenticated on those workstations.
* Security-enhanced provisioning. Safeguard your baseline hardened workstation image to help protect against tampering. Use security measures like encryption and isolation to store images, virtual machines, and scripts, and restrict access (perhaps use an auditable check-in/check-out process).
-* Patching. Maintain a consistent build (or have separate images for development, operations, and other administrative tasks), scan for changes and malware routinely, keep the build up to date, and only activate machines when they are needed.
-* Encryption. Make sure that management workstations have a TPM to more securely enable [Encrypting File System](/previous-versions/tn-archive/cc700811(v=technet.10)) (EFS) and BitLocker.
-* Governance. Use AD DS GPOs to control all the administratorsΓÇÖ Windows interfaces, such as file sharing. Include management workstations in auditing, monitoring, and logging processes. Track all administrator and developer access and usage.
+* Patching. Maintain a consistent build (or have separate images for development, operations, and other administrative tasks), scan for changes and malware routinely, keep the build up to date, and only activate machines when they're needed.
+* Governance. Use AD DS GPOs to control all the administrators' Windows interfaces, such as file sharing. Include management workstations in auditing, monitoring, and logging processes. Track all administrator and developer access and usage.
## Summary

Using a hardened workstation configuration for administering your Azure cloud services, Virtual Machines, and applications can help you avoid numerous risks and threats that can come from remotely managing critical IT infrastructure. Both Azure and Windows provide mechanisms that you can employ to help protect and control communications, authentication, and client behavior.

## Next steps
-The following resources are available to provide more general information about Azure and related Microsoft services, in addition to specific items referenced in this paper:
+The following resources are available to provide more general information about Azure and related Microsoft services:
-* [Securing Privileged Access](/windows-server/identity/securing-privileged-access/securing-privileged-access) ΓÇô get the technical details for designing and building a secure administrative workstation for Azure management
-* [Microsoft Trust Center](https://microsoft.com/en-us/trustcenter/cloudservices/azure) - learn about Azure platform capabilities that protect the Azure fabric and the workloads that run on Azure
-* [Microsoft Security Response Center](https://www.microsoft.com/msrc) -- where Microsoft security vulnerabilities, including issues with Azure, can be reported or via email to [secure@microsoft.com](mailto:secure@microsoft.com)
+* [Securing Privileged Access](/windows-server/identity/securing-privileged-access/securing-privileged-access) - get the technical details for designing and building a secure administrative workstation for Azure management
+* [Microsoft Trust Center](https://microsoft.com/trustcenter/cloudservices/azure) - learn about Azure platform capabilities that protect the Azure fabric and the workloads that run on Azure
+* [Microsoft Security Response Center](https://www.microsoft.com/msrc) - where Microsoft security vulnerabilities, including issues with Azure, can be reported online or via email to [secure@microsoft.com](mailto:secure@microsoft.com)
sentinel Normalization Common Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-common-fields.md
The following fields are defined by ASIM for all schemas:
| <a name="eventsubtype"></a>**EventSubType** | Optional | Enumerated | Describes a subdivision of the operation reported in the [EventType](#eventtype) field. Each schema documents the list of values valid for this field. The original, source specific, value is stored in the [EventOriginalSubType](#eventoriginalsubtype) field. | | <a name="eventresult"></a>**EventResult** | Mandatory | Enumerated | One of the following values: **Success**, **Partial**, **Failure**, **NA** (Not Applicable).<br> <br>The value might be provided in the source record by using different terms, which should be normalized to these values. Alternatively, the source might provide only the [EventResultDetails](#eventresultdetails) field, which should be analyzed to derive the EventResult value.<br><br>Example: `Success`| | <a name="eventresultdetails"></a>**EventResultDetails** | Recommended | Enumerated | Reason or details for the result reported in the [EventResult](#eventresult) field. Each schema documents the list of values valid for this field. The original, source specific, value is stored in the [EventOriginalResultDetails](#eventoriginalresultdetails) field.<br><br>Example: `NXDOMAIN`|
-| <a name="eventuid"></a>**EventUid** | Recommended | String | The unique ID of the record, as assigned by Microsoft Sentinel. This is typically mapped to the `_ItemId` Log Analytics field. |
+| <a name="eventuid"></a>**EventUid** | Recommended | String | The unique ID of the record, as assigned by Microsoft Sentinel. This field is typically mapped to the `_ItemId` Log Analytics field. |
| <a name="eventoriginaluid"></a>**EventOriginalUid** | Optional | String | A unique ID of the original record, if provided by the source.<br><br>Example: `69f37748-ddcd-4331-bf0f-b137f1ea83b`|
-| <a name="eventoriginaltype"></a>**EventOriginalType** | Optional | String | The original event type or ID, if provided by the source. For example, this field will be used to store the original Windows event ID. This value is used to derive [EventType](#eventtype), which should have only one of the values documented for each schema.<br><br>Example: `4624`|
-| <a name="eventoriginalsubtype"></a>**EventOriginalSubType** | Optional | String | The original event subtype or ID, if provided by the source. For example, this field will be used to store the original Windows logon type. This value is used to derive [EventSubType](#eventsubtype), which should have only one of the values documented for each schema.<br><br>Example: `2`|
+| <a name="eventoriginaltype"></a>**EventOriginalType** | Optional | String | The original event type or ID, if provided by the source. For example, this field is used to store the original Windows event ID. This value is used to derive [EventType](#eventtype), which should have only one of the values documented for each schema.<br><br>Example: `4624`|
+| <a name="eventoriginalsubtype"></a>**EventOriginalSubType** | Optional | String | The original event subtype or ID, if provided by the source. For example, this field is used to store the original Windows logon type. This value is used to derive [EventSubType](#eventsubtype), which should have only one of the values documented for each schema.<br><br>Example: `2`|
| <a name="eventoriginalresultdetails"></a>**EventOriginalResultDetails** | Optional | String | The original result details provided by the source. This value is used to derive [EventResultDetails](#eventresultdetails), which should have only one of the values documented for each schema. | | <a name="eventseverity"></a>**EventSeverity** | Recommended | Enumerated | The severity of the event. Valid values are: `Informational`, `Low`, `Medium`, or `High`. | | <a name="eventoriginalseverity"></a>**EventOriginalSeverity** | Optional | String | The original severity as provided by the reporting device. This value is used to derive [EventSeverity](#eventseverity). |
The following fields are defined by ASIM for all schemas:
### Device fields
-The role of the device fields is different for different schemas and event types. For example, for the Network Session schema, device fields provide information about the device which generated the event, while for the Process Event schema, the device fields provide information on the device on which the process is executed. Each schema document specifies the role of the device for the schema.
+The role of the device fields is different for different schemas and event types. For example:
+
+- For Network Session events, device fields usually provide information about the device that generated the event.
+- For Process events, the device fields provide information about the device on which the process is executed.
+
+Each schema document specifies the role of the device for the schema.
| Field | Class | Type | Description |
| --- | --- | --- | --- |
The role of the device fields is different for different schemas and event types
| <a name ="dvcipaddr"></a>**DvcIpAddr** | Recommended | IP address | The IP address of the device on which the event occurred or which reported the event, depending on the schema. <br><br>Example: `45.21.42.12` | | <a name ="dvchostname"></a>**DvcHostname** | Recommended | Hostname | The hostname of the device on which the event occurred or which reported the event, depending on the schema. <br><br>Example: `ContosoDc` | | <a name="dvcdomain"></a>**DvcDomain** | Recommended | String | The domain of the device on which the event occurred or which reported the event, depending on the schema.<br><br>Example: `Contoso` |
-| <a name="dvcdomaintype"></a>**DvcDomainType** | Conditional | Enumerated | The type of [DvcDomain](#dvcdomain). For a list of allowed values and further information refer to [DomainType](normalization-about-schemas.md#domaintype).<br><br>**Note**: This field is required if the [DvcDomain](#dvcdomain) field is used. |
+| <a name="dvcdomaintype"></a>**DvcDomainType** | Conditional | Enumerated | The type of [DvcDomain](#dvcdomain). For a list of allowed values and further information, refer to [DomainType](normalization-about-schemas.md#domaintype).<br><br>**Note**: This field is required if the [DvcDomain](#dvcdomain) field is used. |
| <a name="dvcfqdn"></a>**DvcFQDN** | Optional | String | The hostname of the device on which the event occurred or which reported the event, depending on the schema. <br><br> Example: `Contoso\DESKTOP-1282V4D`<br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [DvcDomainType](#dvcdomaintype) field reflects the format used. | | <a name = "dvcdescription"></a>**DvcDescription** | Optional | String | A descriptive text associated with the device. For example: `Primary Domain Controller`. | | <a name ="dvcid"></a>**DvcId** | Optional | String | The unique ID of the device on which the event occurred or which reported the event, depending on the schema. <br><br>Example: `41502da5-21b7-48ec-81c9-baeea8d7d669` |
-| <a name="dvcidtype"></a>**DvcIdType** | Conditional | Enumerated | The type of [DvcId](#dvcid). For a list of allowed values and further information refer to [DvcIdType](#dvcidtype).<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the list, and store the others by using the field names **DvcAzureResourceId** and **DvcMDEid**, respectively.<br><br>**Note**: This field is required if the [DvcId](#dvcid) field is used. |
+| <a name="dvcidtype"></a>**DvcIdType** | Conditional | Enumerated | The type of [DvcId](#dvcid). For a list of allowed values and further information, refer to [DvcIdType](#dvcidtype).<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the list, and store the others by using the field names **DvcAzureResourceId** and **DvcMDEid**, respectively.<br><br>**Note**: This field is required if the [DvcId](#dvcid) field is used. |
| <a name="dvcmacaddr"></a>**DvcMacAddr** | Optional | MAC | The MAC address of the device on which the event occurred or which reported the event. <br><br>Example: `00:1B:44:11:3A:B7` | | <a name="dvczone"></a>**DvcZone** | Optional | String | The network on which the event occurred or which reported the event, depending on the schema. The zone is defined by the reporting device.<br><br>Example: `Dmz` | | <a name="dvcos"></a>**DvcOs** | Optional | String | The operating system running on the device on which the event occurred or which reported the event. <br><br>Example: `Windows` | | <a name="dvcosversion"></a>**DvcOsVersion** | Optional | String | The version of the operating system on the device on which the event occurred or which reported the event. <br><br>Example: `10` | | <a name="dvcaction"></a>**DvcAction** | Recommended | String | For reporting security systems, the action taken by the system, if applicable. <br><br>Example: `Blocked` | | <a name="dvcoriginalaction"></a>**DvcOriginalAction** | Optional | String | The original [DvcAction](#dvcaction) as provided by the reporting device. |
-| <a name="dvcinterface"></a>**DvcInterface** | Optional | String | The network interface on which data was captured. This field is typically relevant to network related activity which is captured by an intermediate or tap device. |
+| <a name="dvcinterface"></a>**DvcInterface** | Optional | String | The network interface on which data was captured. This field is typically relevant to network related activity, which is captured by an intermediate or tap device. |
| <a name="dvcscopeid"></a>**DvcScopeId** | Optional | String | The cloud platform scope ID the device belongs to. **DvcScopeId** map to a subscription ID on Azure and to an account ID on AWS. | | <a name="dvcscope"></a>**DvcScope** | Optional | String | The cloud platform scope the device belongs to. **DvcScope** map to a subscription ID on Azure and to an account ID on AWS. |
The role of the device fields is different for different schemas and event types
### Schema updates

-- The `EventOwner` field has been added to the common fields on Dec 1st 2022, and therefore to all of the schemas.
-- The `EventUid` field has been added to the common fields on Dec 26th 2022, and therefore to all of the schemas.
+- The `EventOwner` field has been added to the common fields on Dec 1, 2022, and therefore to all of the schemas.
+- The `EventUid` field has been added to the common fields on Dec 26, 2022, and therefore to all of the schemas.
## Vendors and products
The currently supported list of vendors and products used in the [EventVendor](#
| Vendor | Products |
| --- | --- |
-| AWS | - CloudTrail<br> - VPC |
-| Cisco | - ASA<br> - Umbrella<br> - IOS |
-| Corelight | Zeek |
-| GCP | Cloud DNS |
-| Infoblox | NIOS |
-| Microsoft | - Microsoft Azure Active Directory (Azure AD)<br> - Azure<br> - Azure Firewall<br> - Azure Blob Storage<br> - Azure File Storage<br> - Azure NSG flows<br> - Azure Queue Storage<br> - Azure Table Storage <br> - DNS Server<br> - Microsoft 365 Defender for Endpoint<br> - Microsoft Defender for IoT<br> - Security Events<br>- SharePoint<br>- OneDrive<br>- Sysmon<br> - Sysmon for Linux<br> - VMConnection<br> - Windows Firewall<br> - WireData
-| Linux | - su<br> - sudo |
-| Okta | - Okta<br> - Auth0 |
-| OpenBSD | OpenSSH |
-| Palo Alto | - PanOS<br> - CDL |
-| PostgreSQL | PostgreSQL |
-| Squid | Squid Proxy |
-| Vectra AI | Vectra Steam |
-| WatchGuard | Fireware |
-| Zscaler | - ZIA DNS<br> - ZIA Firewall<br> - ZIA Proxy |
--
-If you are developing a parser for a vendor or a product which are not listed here, contact the [Microsoft Sentinel](mailto:azuresentinel@microsoft.com) team to allocate a new allowed vendor and product designators.
+| `AWS` | - `CloudTrail`<br> - `VPC` |
+| `Cisco` | - `ASA`<br> - `Umbrella`<br> - `IOS`<br> - `Meraki` |
+| `Corelight` | `Zeek` |
+| `Cynerio` | `Cynerio` |
+| `Dataminr` | `Dataminr Pulse` |
+| `GCP` | `Cloud DNS` |
+| `Infoblox` | `NIOS` |
+| `Microsoft` | - Microsoft Azure Active Directory (Azure AD)<br> - `Azure`<br> - `Azure Firewall`<br> - `Azure Blob Storage`<br> - `Azure File Storage`<br> - `Azure NSG flows`<br> - `Azure Queue Storage`<br> - `Azure Table Storage` <br> - `DNS Server`<br> - `Microsoft 365 Defender for Endpoint`<br> - `Microsoft Defender for IoT`<br> - `Security Events`<br>- `SharePoint`<br>- `OneDrive`<br>- `Sysmon`<br> - `Sysmon for Linux`<br> - `VMConnection`<br> - `Windows Firewall`<br> - `WireData`
+| `Linux` | - `su`<br> - `sudo`|
+| `Okta` | - `Okta`<br> - `Auth0` |
+| `OpenBSD` | `OpenSSH` |
+| `Palo Alto` | - `PanOS`<br> - `CDL` |
+| `PostgreSQL` | `PostgreSQL` |
+| `Squid` | `Squid Proxy`|
+| `Vectra AI` | `Vectra Steam` |
+| `WatchGuard` | `Fireware` |
+| `Zscaler` | - `ZIA DNS`<br> - `ZIA Firewall`<br> - `ZIA Proxy` |
++
+If you're developing a parser for a vendor or a product that isn't listed here, contact the [Microsoft Sentinel](mailto:azuresentinel@microsoft.com) team to allocate new allowed vendor and product designators.
## Next steps
sentinel Normalization Schema Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-network.md
Refer to the article [Managing ASIM parsers](normalization-manage-parsers.md) to
### Filtering parser parameters
-The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimizing-parsing-using-parameters). While these parsers are optional, they can improve your query performance.
+The Network Session parsers support [filtering parameters](normalization-about-parsers.md#optimizing-parsing-using-parameters). While these parameters are optional, they can improve your query performance.
The following filtering parameters are available:
For a full list of analytics rules that use normalized DNS events, see [Network
The Network Session information model is aligned with the [OSSEM Network entity schema](https://github.com/OTRF/OSSEM/blob/master/docs/cdm/entities/network.md).
+The Network Session schema serves several types of similar but distinct scenarios, which share the same fields. Those scenarios are identified by the EventType field:
+
+- `NetworkSession` - a network session reported by an intermediate device monitoring the network, such as a firewall, a router, or a network tap.
+- `L2NetworkSession` - a network session for which only layer 2 information is available. Such events include MAC addresses but not IP addresses.
+- `Flow` - an aggregated event that reports multiple similar network sessions, typically over a predefined time period, such as **Netflow** events.
+- `EndpointNetworkSession` - a network session reported by one of the endpoints of the session, including clients and servers. For such events, the schema supports the `remote` and `local` alias fields.
+- `IDS` - a network session reported as suspicious. Such an event has some of the inspection fields populated, and may have just one IP address field populated, either the source or the destination.
+
+Typically, a query should select just a subset of those event types, and may need to address the unique aspects of each use case separately. For example, IDS events don't reflect the entire network volume and shouldn't be taken into account in volume-based analytics.
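To make that guidance concrete, the following Go sketch runs a scoped ASIM query through the Azure Monitor query client, keeping only the `NetworkSession` and `Flow` event types. It assumes the `azquery` and `azidentity` packages from the Azure SDK for Go, that the ASIM `_Im_NetworkSession` unifying parser is deployed in your workspace, and that `<workspace-id>` is a placeholder for your Log Analytics workspace GUID; treat it as one illustrative way to run such a query, not the only one.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}

	client, err := azquery.NewLogsClient(cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Select only the event types this analysis cares about; IDS events,
	// for example, are excluded so they don't skew volume-based results.
	query := `_Im_NetworkSession(starttime=ago(1h))
| where EventType in ("NetworkSession", "Flow")
| summarize Sessions = sum(EventCount) by DvcAction`

	res, err := client.QueryWorkspace(context.TODO(), "<workspace-id>",
		azquery.Body{Query: to.Ptr(query)}, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, table := range res.Tables {
		fmt.Printf("table %s: %d rows\n", *table.Name, len(table.Rows))
	}
}
```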
+ Network session events use the descriptors `Src` and `Dst` to denote the roles of the devices and related users and applications involved in the session. So, for example, the source device hostname and IP address are named `SrcHostname` and `SrcIpAddr`. Other ASIM schemas typically use `Target` instead of `Dst`. For events reported by an endpoint and for which the event type is `EndpointNetworkSession`, the descriptors `Local` and `Remote` denote the endpoint itself and the device at the other end of the network session respectively.
The following list mentions fields that have specific guidelines for Network Ses
| Field | Class | Type | Description |
| --- | --- | --- | --- |
| **EventCount** | Mandatory | Integer | Netflow sources support aggregation, and the **EventCount** field should be set to the value of the Netflow **FLOWS** field. For other sources, the value is typically set to `1`. |
-| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For Network Session records, the allowed values are:<br> - `EndpointNetworkSession`: for sessions reported by endpoint systems, including clients and servers. For such systems, the schema supports the `remote` and `local` alias fields. <br> - `NetworkSession`: for sessions reported by intermediary systems and network taps. <br> - `L2NetworkSession`: for sessions reported by intermediary systems and network taps, but which for which only layer 2 information is available. Such events will include MAC addresses but not IP addresses. <br> - `Flow`: for `NetFlow` type aggregated flows, which group multiple similar sessions together. For such records, [EventSubType](#eventsubtype) should be left empty. |
-| <a name="eventsubtype"></a>**EventSubType** | Optional | String | Additional description of the event type, if applicable. <br> For Network Session records, supported values include:<br>- `Start`<br>- `End` |
+| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the scenario reported by the record.<br><br> For Network Session records, the allowed values are:<br> - `EndpointNetworkSession`<br> - `NetworkSession` <br> - `L2NetworkSession`<br>- `IDS` <br> - `Flow`<br><br>For more information on event types, refer to the the [schema overview](#schema-overview) |
+| <a name="eventsubtype"></a>**EventSubType** | Optional | String | Additional description of the event type, if applicable. <br> For Network Session records, supported values include:<br>- `Start`<br>- `End`<br><br>This is field is not relevant for `Flow` events. |
| <a name="eventresult"></a>**EventResult** | Mandatory | Enumerated | If the source device does not provide an event result, **EventResult** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`<br>, **EventResult** should be `Failure`. Otherwise, **EventResult** should be `Success`. | | **EventResultDetails** | Recommended | Enumerated | Reason or details for the result reported in the [EventResult](#eventresult) field. Supported values are:<br> - Failover <br> - Invalid TCP <br> - Invalid Tunnel<br> - Maximum Retry<br> - Reset<br> - Routing issue<br> - Simulation<br> - Terminated<br> - Timeout<br> - Transient error<br> - Unknown<br> - NA.<br><br>The original, source specific, value is stored in the [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails) field. | | **EventSchema** | Mandatory | String | The name of the schema documented here is `NetworkSession`. |
-| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.5`. |
+| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.6`. |
| <a name="dvcaction"></a>**DvcAction** | Recommended | Enumerated | The action taken on the network session. Supported values are:<br>- `Allow`<br>- `Deny`<br>- `Drop`<br>- `Drop ICMP`<br>- `Reset`<br>- `Reset Source`<br>- `Reset Destination`<br>- `Encrypt`<br>- `Decrypt`<br>- `VPNroute`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. The original value should be stored in the [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction) field.<br><br>Example: `drop` | | **EventSeverity** | Optional | Enumerated | If the source device does not provide an event severity, **EventSeverity** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`<br>, **EventSeverity** should be `Low`. Otherwise, **EventSeverity** should be `Informational`. | | **DvcInterface** | | | The DvcInterface field should alias either the [DvcInboundInterface](#dvcinboundinterface) or the [DvcOutboundInterface](#dvcoutboundinterface) fields. |
The following are the changes in version 0.2.4 of the schema:
The following are the changes in version 0.2.5 of the schema:

- Added the fields `DstUserScope`, `SrcUserScope`, `SrcDvcScopeId`, `SrcDvcScope`, `DstDvcScopeId`, `DstDvcScope`, `DvcScopeId`, and `DvcScope`.
+The following are the changes in version 0.2.6 of the schema:
+- Added `IDS` as an event type.
+ ## Next steps
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
If needed, you can [remove the user role and the optional CR installed on your A
The SAP PAHI table includes data on the history of the SAP system, the database, and SAP parameters. In some cases, the Microsoft Sentinel solution for SAP® applications can't monitor the SAP PAHI table at regular intervals, due to missing or faulty configuration (see the [SAP note](https://launchpad.support.sap.com/#/notes/12103) with more details on this issue). It's important to update the PAHI table and to monitor it frequently, so that the Microsoft Sentinel solution for SAP® applications can alert on suspicious actions that might happen at any time throughout the day.
+Learn more about how the Microsoft Sentinel solution for SAP® applications monitors [suspicious configuration changes to security parameters](sap-solution-security-content.md#monitoring-the-configuration-of-static-sap-security-parameters-preview).
+
> [!NOTE]
> For optimal results, in your machine's *systemconfig.ini* file, under the `[ABAP Table Selector]` section, enable both the `PAHI_FULL` and the `PAHI_INCREMENTAL` parameters.
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
For more information, see [Tutorial: Visualize and monitor your data](../monitor
## Built-in analytics rules
-### Monitoring the configuration of static SAP security parameters
+### Monitoring the configuration of static SAP security parameters (Preview)
To secure the SAP system, SAP has identified security-related parameters that need to be monitored for changes. With the "SAP - (Preview) Sensitive Static Parameter has Changed" rule, the Microsoft Sentinel solution for SAP® applications tracks [over 52 static security-related parameters](sap-suspicious-configuration-security-parameters.md) in the SAP system, which are built into Microsoft Sentinel.
+> [!NOTE]
+> For the Microsoft Sentinel solution for SAP® applications to successfully monitor the SAP security parameters, the solution needs to successfully monitor the SAP PAHI table at regular intervals. [Verify that the solution can successfully monitor the PAHI table](preparing-sap.md#verify-that-the-pahi-table-history-of-system-database-and-sap-parameters-is-updated-at-regular-intervals).
+ To understand parameter changes in the system, the Microsoft Sentinel solution for SAP® applications uses the parameter history table, which records changes made to system parameters every hour. The parameters are also reflected in the [SAPSystemParameters watchlist](#systemparameters). This watchlist allows users to add new parameters, disable existing parameters, and modify the values and severities per parameter and system role in production or non-production environments.
These watchlists provide the configuration for the Microsoft Sentinel solution f
| <a name="roles"></a>**SAP - Sensitive Roles** | Sensitive roles, where assignment should be governed. <br><br>- **Role**: SAP authorization role, such as `SAP_BC_BASIS_ADMIN` <br>- **Description**: A meaningful role description. | | <a name="transactions"></a>**SAP - Sensitive Transactions** | Sensitive transactions where execution should be governed. <br><br>- **TransactionCode**: SAP transaction code, such as `RZ11` <br>- **Description**: A meaningful code description. | | <a name="systems"></a>**SAP - Systems** | Describes the landscape of SAP systems according to role and usage.<br><br>- **SystemID**: the SAP system ID (SYSID) <br>- **SystemRole**: the SAP system role, one of the following values: `Sandbox`, `Development`, `Quality Assurance`, `Training`, `Production` <br>- **SystemUsage**: The SAP system usage, one of the following values: `ERP`, `BW`, `Solman`, `Gateway`, `Enterprise Portal` |
-| <a name="systemparameters"></a>**SAPSystemParameters** | Parameters to watch for [suspicious configuration changes](#monitoring-the-configuration-of-static-sap-security-parameters). This watchlist is prefilled with recommended values (according to SAP best practice), and you can extend the watchlist to include more parameters. If you don't want to receive alerts for a parameter, set `EnableAlerts` to `false`.<br><br>- **ParameterName**: The name of the parameter.<br>- **Comment**: The SAP standard parameter description.<br>- **EnableAlerts**: Defines whether to enable alerts for this parameter. Values are `true` and `false`.<br>- **Option**: Defines in which case to trigger an alert: If the parameter value is greater or equal (`GE`), less or equal (`LE`), or equal (`EQ`).<br> For example, if the `login/fails_to_user_lock` SAP parameter is set to `LE` (less or equal), and a value of `5`, once Microsoft Sentinel detects a change to this specific parameter, it compares the newly-reported value and the expected value. If the new value is `4`, Microsoft Sentinel doesn't trigger an alert. If the new value is `6`, Microsoft Sentinel triggers an alert.<br>- **ProductionSeverity**: The incident severity for production systems.<br>- **ProductionValues**: Permitted values for production systems.<br>- **NonProdSeverity**: The incident severity for non-production systems.<br>- **NonProdValues**: Permitted values for non-production systems. |
+| <a name="systemparameters"></a>**SAPSystemParameters** | Parameters to watch for [suspicious configuration changes](#monitoring-the-configuration-of-static-sap-security-parameters-preview). This watchlist is prefilled with recommended values (according to SAP best practice), and you can extend the watchlist to include more parameters. If you don't want to receive alerts for a parameter, set `EnableAlerts` to `false`.<br><br>- **ParameterName**: The name of the parameter.<br>- **Comment**: The SAP standard parameter description.<br>- **EnableAlerts**: Defines whether to enable alerts for this parameter. Values are `true` and `false`.<br>- **Option**: Defines in which case to trigger an alert: If the parameter value is greater or equal (`GE`), less or equal (`LE`), or equal (`EQ`).<br> For example, if the `login/fails_to_user_lock` SAP parameter is set to `LE` (less or equal), and a value of `5`, once Microsoft Sentinel detects a change to this specific parameter, it compares the newly-reported value and the expected value. If the new value is `4`, Microsoft Sentinel doesn't trigger an alert. If the new value is `6`, Microsoft Sentinel triggers an alert.<br>- **ProductionSeverity**: The incident severity for production systems.<br>- **ProductionValues**: Permitted values for production systems.<br>- **NonProdSeverity**: The incident severity for non-production systems.<br>- **NonProdValues**: Permitted values for non-production systems. |
| <a name="users"></a>**SAP - Excluded Users** | System users that are logged in and need to be ignored, such as for the Multiple logons by user alert. <br><br>- **User**: SAP User <br>- **Description**: A meaningful user description | | <a name="networks"></a>**SAP - Excluded Networks** | Maintain internal, excluded networks for ignoring web dispatchers, terminal servers, and so on. <br><br>- **Network**: Network IP address or range, such as `111.68.128.0/17` <br>- **Description**: A meaningful network description | | <a name="modules"></a>**SAP - Obsolete Function Modules** | Obsolete function modules, whose execution should be governed. <br><br>- **FunctionModule**: ABAP Function Module, such as TH_SAPREL <br>- **Description**: A meaningful function module description |
sentinel Sap Suspicious Configuration Security Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-suspicious-configuration-security-parameters.md
Last updated 03/26/2023
# Monitored SAP security parameters for detecting suspicious configuration changes
-This article details the security parameters in the SAP system that the Microsoft Sentinel solution for SAP® applications monitors as part of the ["SAP - (Preview) Sensitive Static Parameter has Changed" analytics rule](sap-solution-security-content.md#monitoring-the-configuration-of-static-sap-security-parameters).
+This article details the security parameters in the SAP system that the Microsoft Sentinel solution for SAP® applications monitors as part of the ["SAP - (Preview) Sensitive Static Parameter has Changed" analytics rule](sap-solution-security-content.md#monitoring-the-configuration-of-static-sap-security-parameters-preview).
The Microsoft Sentinel solution for SAP® applications will provide updates for this content according to SAP best practice changes. You can also add parameters to watch for, change values according to your organization's needs, and disable specific parameters in the [SAPSystemParameters watchlist](sap-solution-security-content.md#systemparameters).
+> [!NOTE]
+> For the Microsoft Sentinel solution for SAP® applications to successfully monitor the SAP security parameters, the solution needs to successfully monitor the SAP PAHI table at regular intervals. [Verify that the solution can successfully monitor the PAHI table](preparing-sap.md#verify-that-the-pahi-table-history-of-system-database-and-sap-parameters-is-updated-at-regular-intervals).
+
## Monitored static SAP security parameters

This list includes the static SAP security parameters that the Microsoft Sentinel solution for SAP® applications monitors to protect your SAP system. The list isn't a recommendation for configuring these parameters. For configuration considerations, consult your SAP admins.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
See these [important announcements](#announcements) about recent changes to feat
## March 2023 - [Work with the Microsoft Sentinel solution for SAP® applications across multiple workspaces (Preview)](#work-with-the-microsoft-sentinel-solution-for-sap-applications-across-multiple-workspaces-preview)
+- [Monitoring the configuration of static SAP security parameters](#monitoring-the-configuration-of-static-sap-security-parameters-preview)
- [Stream log data from the Google Cloud Platform into Microsoft Sentinel (Preview)](#stream-log-data-from-the-google-cloud-platform-into-microsoft-sentinel-preview) - [Microsoft Defender Threat Intelligence data connector (Preview)](#microsoft-defender-threat-intelligence-data-connector-preview) - [Microsoft Defender Threat Intelligence solution (Preview)](#microsoft-defender-threat-intelligence-solution-preview)
See these [important announcements](#announcements) about recent changes to feat
You can now [work with the Microsoft Sentinel solution for SAP® applications across multiple workspaces](sap/cross-workspace.md) in different scenarios. This feature allows improved flexibility for managed security service providers (MSSPs) or a global or federated SOC, data residency requirements, organizational hierarchy/IT design, and insufficient role-based access control (RBAC) in a single workspace. One common use case is the need for collaboration between the security operations center (SOC) and SAP teams in your organization. Read about [the scenarios that address this use case](sap/cross-workspace.md).
+### Monitoring the configuration of static SAP security parameters (Preview)
+
+To secure the SAP system, SAP has identified security-related parameters that need to be monitored for changes. With the ["SAP - (Preview) Sensitive Static Parameter has Changed" analytics rule](sap/sap-solution-security-content.md#monitoring-the-configuration-of-static-sap-security-parameters-preview), the Microsoft Sentinel solution for SAP® applications tracks [over 52 security-related parameters](sap/sap-suspicious-configuration-security-parameters.md) in the SAP system, and triggers an alert when these parameters are changed in a way that doesn't comply with the policy.
+
+For the Microsoft Sentinel solution for SAP® applications to successfully monitor the SAP security parameters, the solution needs to successfully monitor the SAP PAHI table at regular intervals. [Verify that the solution can successfully monitor the PAHI table](sap/preparing-sap.md#verify-that-the-pahi-table-history-of-system-database-and-sap-parameters-is-updated-at-regular-intervals).
+
### Stream log data from the Google Cloud Platform into Microsoft Sentinel (Preview)

You can now [stream audit log data from the Google Cloud Platform (GCP) into Microsoft Sentinel](connect-google-cloud-platform.md) using the **GCP Pub/Sub Audit Logs** connector, based on our [Codeless Connector Platform](create-codeless-connector.md?tabs=deploy-via-arm-template%2Cconnect-via-the-azure-portal) (CCP). The new connector ingests logs from your GCP environment using the GCP [Pub/Sub capability](https://cloud.google.com/pubsub/docs/overview).
Enabling this solution helps your security team achieve the following goals:
- respond more effectively to threats
- maximize impact of existing security incident response
-See the [MDTI solution blog post](https://aka.ms/sentinel-playbooks) to learn more about the three playbooks at launch and what's required. Also, check out this [MDTI Tech Community blog](https://techcommunity.microsoft.com/t5/microsoft-defender-threat/what-s-new-at-microsoft-secure/ba-p/3773576) for more information on announcements from Microsoft Secure.
+Check out the [Tech Community blog](https://techcommunity.microsoft.com/t5/microsoft-defender-threat/what-s-new-at-microsoft-secure/ba-p/3773576) for more information on announcements from Microsoft Secure.
### Automatically update the SAP data connector agent
service-bus-messaging Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/network-security.md
You can use service tags to define network access controls on [network security
## IP firewall

By default, Service Bus namespaces are accessible from the internet as long as the request comes with valid authentication and authorization. With IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation.
-This feature is helpful in scenarios in which Azure Service Bus should be only accessible from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 addresses. For example, if you use Service Bus with [Azure Express Route][express-route], you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses or addresses of a corporate NAT gateway.
+This feature is helpful in scenarios in which Azure Service Bus should be only accessible from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 addresses. For example, if you use Service Bus with [Azure Express Route](../expressroute/expressroute-introduction.md), you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses or addresses of a corporate NAT gateway.
The IP firewall rules are applied at the Service Bus namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that does not match an allowed IP rule on the Service Bus namespace is rejected as unauthorized. The response does not mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
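To make the first-match behavior concrete, the following standalone Go sketch uses only the standard library's `net/netip` to evaluate an ordered list of CIDR rules. It's an illustration of first-match rule evaluation in general, not how Service Bus implements its firewall; the type and function names are illustrative.

```go
package main

import (
	"fmt"
	"net/netip"
)

// ipRule pairs a CIDR range with an accept/reject action, mimicking an
// ordered firewall rule list.
type ipRule struct {
	prefix netip.Prefix
	allow  bool
}

// evaluate walks the rules in order; the first matching rule decides.
func evaluate(rules []ipRule, addr netip.Addr) bool {
	for _, r := range rules {
		if r.prefix.Contains(addr) {
			return r.allow // first matching rule wins
		}
	}
	return false // no rule matched: rejected as unauthorized
}

func main() {
	rules := []ipRule{
		// 203.0.113.0/24 stands in for a corporate NAT gateway range.
		{netip.MustParsePrefix("203.0.113.0/24"), true},
	}
	fmt.Println(evaluate(rules, netip.MustParseAddr("203.0.113.7")))  // true: allowed
	fmt.Println(evaluate(rules, netip.MustParseAddr("198.51.100.1"))) // false: rejected
}
```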
See the following articles:
- [How to configure IP firewall for a Service Bus namespace](service-bus-ip-filtering.md)
- [How to configure virtual network service endpoints for a Service Bus namespace](service-bus-service-endpoints.md)
-- [How to configure private endpoints for a Service Bus namespace](private-link-service.md)
+- [How to configure private endpoints for a Service Bus namespace](private-link-service.md)
service-bus-messaging Service Bus Go How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-go-how-to-use-queues.md
func SendMessageBatch(messages []string, client *azservicebus.Client) {
if err != nil { panic(err) }-
+ defer sender.Close(context.TODO())
+
batch, err := sender.NewMessageBatch(context.TODO(), nil)
 if err != nil {
 panic(err)
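For context, here's a self-contained sketch of the batch-send flow that the fragment above comes from, including the added `defer sender.Close(...)`. It assumes the `azservicebus` module, a connection string in the `SERVICEBUS_CONNECTION_STRING` environment variable, and a hypothetical queue name `myqueue`; treat it as a sketch rather than the article's exact sample.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus"
)

func SendMessageBatch(messages []string, client *azservicebus.Client) {
	// "myqueue" is a placeholder queue name.
	sender, err := client.NewSender("myqueue", nil)
	if err != nil {
		panic(err)
	}
	// Close the sender when done, as the diff above adds.
	defer sender.Close(context.TODO())

	batch, err := sender.NewMessageBatch(context.TODO(), nil)
	if err != nil {
		panic(err)
	}
	for _, m := range messages {
		// AddMessage returns an error when the batch can't hold the message.
		if err := batch.AddMessage(&azservicebus.Message{Body: []byte(m)}, nil); err != nil {
			panic(err)
		}
	}
	if err := sender.SendMessageBatch(context.TODO(), batch, nil); err != nil {
		panic(err)
	}
}

func main() {
	// SERVICEBUS_CONNECTION_STRING is assumed to hold a valid connection string.
	client, err := azservicebus.NewClientFromConnectionString(
		os.Getenv("SERVICEBUS_CONNECTION_STRING"), nil)
	if err != nil {
		log.Fatal(err)
	}
	SendMessageBatch([]string{"hello", "world"}, client)
}
```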
service-bus-messaging Service Bus Messaging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-overview.md
Azure Service Bus is a fully managed enterprise message broker with message queu
- Safely routing and transferring data and control across service and application boundaries
- Coordinating transactional work that requires a high degree of reliability
-> [!NOTE]
-> For a comparison of Azure messaging services, see [Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus](../event-grid/compare-messaging-services.md?toc=%2fazure%2fservice-bus-messaging%2ftoc.json&bc=%2fazure%2fservice-bus-messaging%2fbreadcrumb%2ftoc.json).
## Overview

Data is transferred between different applications and services using **messages**. A message is a container decorated with metadata, and contains data. The data can be any kind of information, including structured data encoded with common formats such as the following ones: JSON, XML, Apache Avro, Plain Text.
Service Bus fully integrates with many Microsoft and Azure services, for instanc
To get started using Service Bus messaging, see the following articles:

-- [Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus](../event-grid/compare-messaging-services.md?toc=%2fazure%2fservice-bus-messaging%2ftoc.json&bc=%2fazure%2fservice-bus-messaging%2fbreadcrumb%2ftoc.json).
- [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md)
- Quickstarts: [.NET](service-bus-dotnet-get-started-with-queues.md), [Java](service-bus-java-how-to-use-queues.md), or [JMS](service-bus-java-how-to-use-jms-api-amqp.md).
- [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/).
site-recovery How To Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-migrate-run-as-accounts-managed-identity.md
You can configure your managed identities through:
**To migrate your Azure Automation account authentication type from a Run As to a managed identity authentication, follow these steps:**
+
1. In the [Azure portal](https://portal.azure.com), select the recovery services vault for which you want to migrate the runbooks.
1. On the homepage of your recovery services vault page, do the following:
You can configure your managed identities through:
:::image type="content" source="./media/how-to-migrate-from-run-as-to-managed-identities/extension-update-settings.png" alt-text="Screenshot of the Create Recovery Services vault page.":::
-1. After the successful migration of your automation account, the authentication type for the linked account details on the **Extension update settings** page is updated.
+
+> [!NOTE]
+> Ensure that the system-assigned managed identity is turned off for the Automation account for the _"Migrate"_ button to appear. If the account isn't migrated and the _"Migrate"_ button doesn't appear, turn off the managed identity for the Automation account and try again.
+
+3. After the successful migration of your automation account, the authentication type for the linked account details on the **Extension update settings** page is updated.
+1. Once the _Migrate_ operation is completed, toggle the **Site Recovery to manage** button to turn it _On_ again.
When you successfully migrate from a Run As account to a managed identity account, the following changes are reflected on the Automation Run As accounts:
To link an existing managed identity Automation account to your Recovery Service
1. Select the **Select** option.

   :::image type="content" source="./media/how-to-migrate-from-run-as-to-managed-identities/select-mi.png" alt-text="Screenshot that shows select managed identity settings page.":::
1. Select **Review + assign**.
--
+1. Navigate to the **Extension update settings** under the Recovery Services vault, and toggle the **Site Recovery to manage** button to turn it _On_ again.
## Next steps
spring-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/overview.md
As part of the Azure ecosystem, Azure Spring Apps allows easy binding to other A
* Azure Spring Apps is a fully managed service for Spring Boot apps that lets you focus on building and running apps without the hassle of managing infrastructure.
-* Simply deploy your JARs or code for your Spring Boot app or Zip for your Steeltoe app, and Azure Spring Apps will automatically wire your apps with Spring service runtime and built-in app lifecycle.
+* Deploy your JARs or code for your Spring Boot app or Zip for your Steeltoe app, and Azure Spring Apps automatically wires your apps with Spring service runtime and built-in app lifecycle.
* Monitoring is simple. After deployment you can monitor app performance, fix errors, and rapidly improve applications.
As part of the Azure ecosystem, Azure Spring Apps allows easy binding to other A
### Get started with Azure Spring Apps
-The following quickstarts will help you get started:
+The following articles help you get started:
* [Launch your first app](quickstart.md) * [Introduction to the sample app](quickstart-sample-app-introduction.md)
-The following documents will help you migrate existing Spring Boot apps to Azure Spring Apps:
+The following articles help you migrate existing Spring Boot apps to Azure Spring Apps:
* [Migrate Spring Boot applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-apps) * [Migrate Spring Cloud applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps?pivots=sc-standard-tier)
-The following quickstarts apply to Basic/Standard only. For Enterprise quickstarts, see the next section.
+The following quickstarts apply to the Basic/Standard plan only. For Enterprise quickstarts, see the next section.
* [Provision an Azure Spring Apps service instance](quickstart-provision-service-instance.md) * [Set up the configuration server](quickstart-setup-config-server.md)
The following quickstarts apply to Basic/Standard only. For Enterprise quickstar
The Standard consumption plan provides a flexible billing model where you pay only for compute time used instead of provisioning resources. Start with as little as 0.25 vCPU and dynamically scale out based on HTTP or events powered by Kubernetes Event-Driven Autoscaling (KEDA). You can also scale your app instance to zero and stop all charges related to the app when there are no requests to process.
-Standard consumption simplifies the virtual network experience for running polyglot apps. All your apps will share the same virtual network when you deploy frontend apps as containers in Azure Container Apps and Spring apps in Standard consumption, in the same Azure Container Apps environment. There's no need to create disparate subnets and Network Security Groups for frontend apps, Spring apps, and the Spring service runtime.
+The Standard consumption plan simplifies the virtual network experience for running polyglot apps. When you deploy frontend apps as containers in Azure Container Apps and Spring apps in the Standard consumption plan, all your apps share the same virtual network in the same Azure Container Apps environment. There's no need to create disparate subnets and Network Security Groups for frontend apps, Spring apps, and the Spring service runtime.
:::image type="content" source="media/overview/standard-consumption-plan.png" alt-text="Diagram showing app architecture with Azure Spring Apps standard consumption plan." lightbox="media/overview/standard-consumption-plan.png" border="false":::

## Enterprise plan
-Based on our learnings from customer engagements, we built Azure Spring Apps Enterprise tier with commercially supported Spring runtime components to help enterprise customers to ship faster and unlock SpringΓÇÖs full potential, including feature parity and region parity with Standard tier.
+The Enterprise plan provides commercially supported Tanzu components with SLA assurance. For more information, see the [SLA for Azure Spring Apps](https://azure.microsoft.com/support/legal/sla/spring-apps). This support helps enterprise customers ship faster for mission-critical workloads with peace of mind. The Enterprise plan helps unlock Spring's full potential while including feature parity and region parity with the Standard plan.
-The following video introduces Azure Spring Apps Enterprise tier.
+The following video introduces the Azure Spring Apps Enterprise plan.
<br>
The following video introduces Azure Spring Apps Enterprise tier.
### Deploy and manage Spring and polyglot applications
-The fully managed VMware Tanzu® Build Service™ in Azure Spring Apps Enterprise tier automates container creation, management and governance at enterprise scale using open-source [Cloud Native Buildpacks](https://buildpacks.io/) and commercial [VMware Tanzu® Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/). Tanzu Build Service offers a higher-level abstraction for building apps and provides a balance of control that reduces the operational burden on developers and supports enterprise IT operators who manage applications at scale. You can configure what Buildpacks to apply and build Spring applications and polyglot applications that run alongside Spring applications on Azure Spring Apps.
+The fully managed VMware Tanzu® Build Service™ in the Azure Spring Apps Enterprise plan automates container creation, management and governance at enterprise scale using open-source [Cloud Native Buildpacks](https://buildpacks.io/) and commercial [VMware Tanzu® Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/). Tanzu Build Service offers a higher-level abstraction for building apps. Tanzu Build Service also provides a balance of control that reduces the operational burden on developers and supports enterprise IT operators who manage applications at scale. You can configure what Buildpacks to apply and build Spring applications and polyglot applications that run alongside Spring applications on Azure Spring Apps.
Tanzu Buildpacks makes it easier to build Spring, Java, NodeJS, Python, Go and .NET Core applications and configure application performance monitoring agents such as Application Insights, New Relic, Dynatrace, AppDynamics, and Elastic.
Tanzu Buildpacks makes it easier to build Spring, Java, NodeJS, Python, Go and .
You can manage and discover request routes and APIs exposed by applications using the fully managed Spring Cloud Gateway for VMware Tanzu® and API portal for VMware Tanzu®.
-Spring Cloud Gateway for Tanzu effectively routes diverse client requests to applications in Azure Spring Apps, Azure, and on-premises, and addresses cross-cutting considerations for applications behind the Gateway such as securing, routing, rate limiting, caching, monitoring, resiliency and hiding applications. You can configure:
+Spring Cloud Gateway for Tanzu effectively routes diverse client requests to applications in Azure Spring Apps, Azure, and on-premises. Spring Cloud Gateway also addresses cross-cutting considerations for applications behind the Gateway, such as securing, routing, rate limiting, caching, monitoring, resiliency and hiding applications. You can make the following configurations:
-* Single sign-on integration with your preferred identity provider without any additional code or dependencies.
+* Single sign-on integration with your preferred identity provider without any extra code or dependencies (see the sketch after this list).
* Dynamic routing rules to applications without any application redeployment.
* Request throttling without any backing services.
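The single sign-on settings can also be applied from the command line. The following is a hedged sketch only: the resource group, service instance, and identity provider values are placeholders, and it assumes the Azure CLI `spring` extension is installed.

```powershell
# A sketch with placeholder values; requires the Azure CLI 'spring' extension
az spring gateway update `
    --resource-group "<resource-group-name>" `
    --service "<Azure-Spring-Apps-instance-name>" `
    --client-id "<client-id>" `
    --client-secret "<client-secret>" `
    --issuer-uri "<issuer-uri>" `
    --scope "openid,profile,email"
```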
API Portal for VMware Tanzu provides API consumers with the ability to find and
### Use flexible and configurable VMware Tanzu components
-With Azure Spring Apps Enterprise tier, you can use fully managed VMware Tanzu components on Azure. You can select which VMware Tanzu components you want to use in your environment during Enterprise instance creation. Tanzu Build Service, Spring Cloud Gateway for Tanzu, API Portal for VMware Tanzu, Application Configuration Service for VMware Tanzu®, and VMware Tanzu® Service Registry are available during public preview.
+With the Azure Spring Apps Enterprise plan, you can use fully managed VMware Tanzu components on Azure without operational hassle. You can select which VMware Tanzu components you want to use in your environment, either during or after Enterprise instance creation. The following components are available today:
-VMware Tanzu components deliver increased value so you can:
+* Tanzu Build Service
+* Spring Cloud Gateway for Tanzu
+* API Portal for VMware Tanzu
+* Application Configuration Service for VMware Tanzu®
+* VMware Tanzu® Service Registry
+* Application Live View for VMware Tanzu®
+* Application Accelerator for VMware Tanzu®
+
+VMware Tanzu components deliver increased value so you can accomplish the following tasks:
* Grow your enterprise-grade application portfolio from a few applications to thousands with end-to-end observability while delegating operational complexity to Microsoft and VMware.
* Lift and shift Spring applications across Azure Spring Apps and any other compute environment.
* Control your build dependencies, deploy polyglot applications, and deploy Spring Cloud middleware components as needed.
-Microsoft and VMware will continue to add more enterprise-grade features, including Tanzu components such as Application Live View for VMware Tanzu®, Application Accelerator for VMware Tanzu®, and Spring Cloud Data Flow for VMware Tanzu®, although the Azure Spring Apps Enterprise tier roadmap is not confirmed and is subject to change.
- ### Unlock Spring's full potential with Long-Term Support (LTS)
-Azure Spring Apps Enterprise tier includes VMware Spring Runtime Support for application development and deployments. This support gives you access to Spring experts, enabling you to unlock the full potential of the Spring ecosystem to develop and deploy applications faster.
+The Azure Spring Apps Enterprise plan includes VMware Spring Runtime Support for application development and deployments. This support gives you access to Spring experts, enabling you to unlock the full potential of the Spring ecosystem to develop and deploy applications faster.
-Typically, open-source Spring project minor releases are supported for a minimum of 12 months from the date of initial release. In Azure Spring Apps Enterprise, Spring project minor releases will receive commercial support for a minimum of 24 months from the date of initial release through the VMware Spring Runtime Support entitlement. This extended support ensures the security and stability of your Spring application portfolio even after the open source end of life dates. For more information, see [Spring Boot support](https://spring.io/projects/spring-boot#support).
+Typically, open-source Spring project minor releases are supported for a minimum of 12 months from the date of initial release. In the Azure Spring Apps Enterprise plan, Spring project minor releases receive commercial support for a minimum of 24 months from the date of initial release through the VMware Spring Runtime Support entitlement. This extended support ensures the security and stability of your Spring application portfolio even after the open-source end-of-life dates. For more information, see [Spring Boot support](https://spring.io/projects/spring-boot#support).
### Fully integrate into the Azure and Java ecosystems
-Azure Spring Apps, including Enterprise tier, runs on Azure in a fully managed environment. You get all the benefits of Azure and the Java ecosystem, and the experience is familiar and intuitive, as shown in the following table:
+Azure Spring Apps, including the Enterprise plan, runs on Azure in a fully managed environment. You get all the benefits of Azure and the Java ecosystem, and the experience is familiar and intuitive, as shown in the following table:
| Best practice | Ecosystem |
|--|-|
Azure Spring Apps, including Enterprise tier, runs on Azure in a fully managed e
| Securely load app secrets and certificates. | Azure Key Vault |
| Use familiar development tools. | IntelliJ, Visual Studio Code, Eclipse, Spring Tool Suite, Maven, or Gradle |
-After you create your Enterprise tier service instance and deploy your applications, you can monitor with Application Insights or any other application performance management tools of your choice.
+After you create your Enterprise plan service instance and deploy your applications, you can monitor them with Application Insights or any other application performance management tool of your choice.
### Get started with the Standard consumption plan
-The following quickstarts and articles will help you get started using the Standard consumption plan:
+The following articles help you get started using the Standard consumption plan:
* [Provision a service instance](quickstart-provision-standard-consumption-service-instance.md)
* [Provision in an Azure Container Apps environment with a virtual network](quickstart-provision-standard-consumption-app-environment-with-virtual-network.md)
The following quickstarts and articles will help you get started using the Stand
* [Map a custom domain to Azure Spring Apps](quickstart-standard-consumption-custom-domain.md)
* [Analyze logs and metrics](quickstart-analyze-logs-and-metrics-standard-consumption.md)
* [Enable your own persistent storage](how-to-custom-persistent-storage-with-standard-consumption.md)
-* [Customer responsibilities for Standard consumption plan in a virtual network](standard-consumption-customer-responsibilities.md)
+* [Customer responsibilities for Azure Spring Apps Standard consumption plan in a virtual network](standard-consumption-customer-responsibilities.md)
-### Get started with Enterprise tier
+### Get started with the Enterprise plan
-The following quickstarts will help you get started using the Enterprise tier:
+The following articles help you get started using the Enterprise plan:
-* [Enterprise Tier in Azure Marketplace](how-to-enterprise-marketplace-offer.md)
+* [The Enterprise plan in Azure Marketplace](how-to-enterprise-marketplace-offer.md)
* [Introduction to Fitness Store sample](quickstart-sample-app-acme-fitness-store-introduction.md)
* [Build and deploy apps](quickstart-deploy-apps-enterprise.md)
* [Configure single sign-on](quickstart-configure-single-sign-on-enterprise.md)
The following quickstarts will help you get started using the Enterprise tier:
* [Set request rate limits](quickstart-set-request-rate-limits-enterprise.md)
* [Automate deployments](quickstart-automate-deployments-github-actions-enterprise.md)
-Most of the Azure Spring Apps documentation applies to all tiers. Some articles apply only to Enterprise tier or only to Basic/Standard tier, as indicated at the beginning of each article.
+Most of the Azure Spring Apps documentation applies to all the service plans. Some articles apply only to the Enterprise plan or only to the Basic/Standard plan, as indicated at the beginning of each article.
-As a quick reference, the articles listed above and the articles in the following list apply to Enterprise tier only, or contain significant content that applies only to Enterprise tier:
+As a quick reference, the articles listed previously and the articles in the following list apply to the Enterprise plan only, or contain significant content that applies only to the Enterprise plan:
* [Use Application Configuration Service for Tanzu](how-to-enterprise-application-configuration-service.md)
* [Use Tanzu Build Service](how-to-enterprise-build-service.md)
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md
Title: Assign an Azure role for access to blob data description: Learn how to assign permissions for blob data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD. Last updated 04/19/2022 ms.devlang: powershell, azurecli
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md
Title: Authorize access to blobs using Active Directory description: Authorize access to Azure blobs using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account. Last updated 03/17/2023
storage Authorize Data Operations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-cli.md
Title: Authorize access to blob data with Azure CLI description: Specify how to authorize data operations against blob data with the Azure CLI. You can authorize data operations using Azure AD credentials, with the account access key, or with a shared access signature (SAS) token. Last updated 07/12/2021 ms.devlang: azurecli
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-portal.md
Title: Authorize access to blob data in the Azure portal description: When you access blob data using the Azure portal, the portal makes requests to Azure Storage under the covers. These requests to Azure Storage can be authenticated and authorized using either your Azure AD account or the storage account access key. Last updated 12/10/2021
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-powershell.md
Title: Run PowerShell commands with Azure AD credentials to access blob data description: PowerShell supports signing in with Azure AD credentials to run commands on blob data in Azure Storage. An access token is provided for the session and used to authorize calling operations. Permissions depend on the Azure role assigned to the Azure AD security principal. Last updated 05/12/2022 ms.devlang: powershell
storage Encryption Scope Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-manage.md
Title: Create and manage encryption scopes
description: Learn how to create an encryption scope to isolate blob data at the container or blob level. Last updated 03/10/2023 ms.devlang: powershell, azurecli
storage Scalability Targets Premium Block Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets-premium-block-blobs.md
Title: Scalability targets for premium block blob storage accounts description: Learn about premium-performance block blob storage accounts. Block blob storage accounts are optimized for applications that use smaller, kilobyte-range objects. Last updated 12/18/2019
storage Scalability Targets Premium Page Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets-premium-page-blobs.md
Title: Scalability targets for premium page blob storage accounts description: A premium performance page blob storage account is optimized for read/write operations. This type of storage account backs an unmanaged disk for an Azure virtual machine. Last updated 09/24/2021
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets.md
Title: Scalability and performance targets for Blob storage description: Learn about scalability and performance targets for Blob storage. Last updated 01/11/2023
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/security-recommendations.md
Title: Security recommendations for Blob storage description: Learn about security recommendations for Blob storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model. Last updated 05/12/2022
storage Storage Blob Block Blob Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md
Title: Premium block blob storage accounts description: Achieve lower and consistent latencies for Azure Storage workloads that require fast and consistent response times. Last updated 10/14/2021
storage Storage Blob Encryption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-encryption-status.md
Title: Check the encryption status of a blob
description: Learn how to use Azure portal, PowerShell, or Azure CLI to check whether a given blob is encrypted. Last updated 02/09/2023
storage Storage Blob Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-reserved-capacity.md
Title: Optimize costs for Blob storage with reserved capacity
description: Learn about purchasing Azure Storage reserved capacity to save costs on block blob and Azure Data Lake Storage Gen2 resources. Last updated 05/17/2021
storage Storage Blob User Delegation Sas Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-cli.md
Title: Use Azure CLI to create a user delegation SAS for a container or blob
description: Learn how to create a user delegation SAS with Azure Active Directory credentials by using Azure CLI. Last updated 12/18/2019
storage Storage Blob User Delegation Sas Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-powershell.md
Title: Use PowerShell to create a user delegation SAS for a container or blob
description: Learn how to create a user delegation SAS with Azure Active Directory credentials by using PowerShell. Last updated 12/18/2019
storage Storage Blobs Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-latency.md
Title: Latency in Blob storage
description: Understand and measure latency for Blob storage operations, and learn how to design your Blob storage applications for low latency. Last updated 09/05/2019
storage Authorization Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorization-resource-provider.md
Title: Use the Azure Storage resource provider to access management resources description: The Azure Storage resource provider is a service that provides access to management resources for Azure Storage. You can use the Azure Storage resource provider to create, update, manage, and delete resources such as storage accounts, private endpoints, and account access keys. Last updated 12/12/2019
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
Title: Authorize operations for data access
description: Learn about the different ways to authorize access to data in Azure Storage. Azure Storage supports authorization with Azure Active Directory, Shared Key authorization, or shared access signatures (SAS), and also supports anonymous access to blobs. Last updated 10/25/2022
storage Lock Account Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/lock-account-resource.md
Title: Apply an Azure Resource Manager lock to a storage account
description: Learn how to apply an Azure Resource Manager lock to a storage account. Last updated 03/09/2021
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage
description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 02/21/2023
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
Title: Configure an expiration policy for shared access signatures (SAS)
description: Configure a policy on the storage account that defines the length of time that a shared access signature (SAS) should be valid. Learn how to monitor policy violations to remediate security risks. Last updated 12/12/2022
storage Scalability Targets Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-resource-provider.md
Title: Scalability for the Azure Storage resource provider
+ Title: Scalability targets for the Azure Storage resource provider
description: Scalability and performance targets for operations against the Azure Storage resource provider. The resource provider implements Azure Resource Manager for Azure Storage. Last updated 12/18/2019
storage Scalability Targets Standard Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-standard-account.md
Title: Scalability and performance targets for standard storage accounts
description: Learn about scalability and performance targets for standard storage accounts. Last updated 05/25/2022
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Storage
description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 02/14/2023
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
Title: Prevent authorization with Shared Key
description: To require clients to use Azure AD to authorize requests, you can disallow requests to the storage account that are authorized with Shared Key. Last updated 11/14/2022 ms.devlang: azurecli
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Title: Create a storage account
description: Learn to create a storage account to store blobs, files, queues, and tables. An Azure storage account provides a unique namespace in Microsoft Azure for reading and writing your data. Last updated 01/10/2023
storage Storage Account Get Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-get-info.md
Title: Get storage account configuration information
description: Use the Azure portal, PowerShell, or Azure CLI to retrieve storage account configuration properties, including the Azure Resource Manager resource ID, account location, account type, or replication SKU. Last updated 12/12/2022
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-keys-manage.md
Title: Manage account access keys
description: Learn how to view, manage, and rotate your storage account access keys. Last updated 03/22/2023
storage Storage Account Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-move.md
Title: Move an Azure Storage account to another region description: Shows you how to move an Azure Storage account to another region. Last updated 06/15/2022
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
Title: Storage account overview
description: Learn about the different types of storage accounts in Azure Storage. Review account naming, performance tiers, access tiers, redundancy, encryption, endpoints, and more. Last updated 06/28/2022
storage Storage Account Recover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-recover.md
Title: Recover a deleted storage account
description: Learn how to recover a deleted storage account within the Azure portal. Last updated 01/25/2023
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
Title: Upgrade to a general-purpose v2 storage account
description: Upgrade to general-purpose v2 storage accounts using the Azure portal, PowerShell, or the Azure CLI. Specify an access tier for blob data. Last updated 04/29/2021
storage Storage Configure Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md
Title: Configure a connection string
description: Configure a connection string for an Azure storage account. A connection string contains the information needed to authorize access to a storage account from your application at runtime using Shared Key authorization. Last updated 01/24/2023
storage Storage Powershell Independent Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-powershell-independent-clouds.md
Title: Use PowerShell to manage data in Azure independent clouds
description: Managing Storage in the China Cloud, Government Cloud, and German Cloud Using Azure PowerShell. Last updated 12/04/2019
storage Storage Sas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md
Title: Grant limited access to data with shared access signatures (SAS)
description: Learn about using shared access signatures (SAS) to delegate access to Azure Storage resources, including blobs, queues, tables, and files. Last updated 02/16/2023
storage Storage Use Azcopy Optimize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-optimize.md
If you're copying blobs between storage accounts, consider setting the value of
#### Decrease the number of logs generated
-You can improve performance by reducing the number of log entries that AzCopy creates as it completes an operation. By default, AzCopy logs all activity related to an operation. To achieve optimal performance, consider setting the `log-level` parameter of your copy, sync, or remove command to `ERROR`. That way, AzCopy logs only errors. By default, the value log level is set to `INFO`.
+You can improve performance by reducing the number of log entries that AzCopy creates as it completes an operation. By default, AzCopy logs all activity related to an operation. To achieve optimal performance, consider setting the `--log-level` parameter of your copy, sync, or remove command to `ERROR`. That way, AzCopy logs only errors. By default, the log level is set to `INFO`.
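For example, the following sketch copies a local folder to a container while logging only errors. The local path, storage account, container, and SAS token are placeholder values.

```powershell
# Placeholder paths and SAS token; --log-level=ERROR suppresses all but error entries
azcopy copy "C:\local\data" "https://<storage-account>.blob.core.windows.net/<container>?<SAS-token>" `
    --recursive `
    --log-level=ERROR
```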
#### Turn off length checking
storage Storage Files Identity Multiple Forests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-multiple-forests.md
To use this method, complete the following steps:
Now, from domain-joined clients, you should be able to use storage accounts joined to any forest.
+> [!NOTE]
+> Ensure that the hostname part of the FQDN matches the storage account name, as described above. Otherwise, you'll get an access denied error: "The filename, directory name, or volume label syntax is incorrect." A network trace will show a STATUS_OBJECT_NAME_INVALID (0xc0000033) message during the SMB session setup.
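As a quick, hedged check (the names below are placeholders; requires the Windows `Resolve-DnsName` cmdlet), you can confirm that the suffix resolves through a CNAME record whose hostname matches the storage account name:

```powershell
# The answer should be a CNAME chain that begins with the storage account name
Resolve-DnsName -Name "<storage-account-name>.file.<custom-domain-suffix>" -Type CNAME
```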
### Add custom name suffix and routing rule

If you've already modified the storage account name suffix and added a CNAME record as described in the previous section, you can skip this step. If you'd rather not make DNS changes or modify the storage account name suffix, you can configure a suffix routing rule from **Forest 1** to **Forest 2** for a custom suffix of **file.core.windows.net**.
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/assign-azure-role-data-access.md
Title: Assign an Azure role for access to queue data
description: Learn how to assign permissions for queue data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD. Last updated 07/13/2021
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-access-azure-active-directory.md
Title: Authorize access to queues using Active Directory description: Authorize access to Azure queues using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account. Last updated 03/17/2023
storage Authorize Data Operations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-cli.md
Title: Choose how to authorize access to queue data with Azure CLI description: Specify how to authorize data operations against queue data with the Azure CLI. You can authorize data operations using Azure AD credentials, with the account access key, or with a shared access signature (SAS) token. Last updated 02/10/2021
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-portal.md
Title: Choose how to authorize access to queue data in the Azure portal description: When you access queue data using the Azure portal, the portal makes requests to Azure Storage under the covers. These requests to Azure Storage can be authenticated and authorized using either your Azure AD account or the storage account access key. Last updated 12/13/2021
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-powershell.md
Title: Run PowerShell commands with Azure AD credentials to access queue data description: PowerShell supports signing in with Azure AD credentials to run commands on Azure Queue Storage data. An access token is provided for the session and used to authorize calling operations. Permissions depend on the Azure role assigned to the Azure AD security principal. Last updated 02/10/2021
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/scalability-targets.md
Title: Scalability and performance targets for Queue Storage description: Learn about scalability and performance targets for Queue Storage. Last updated 12/18/2019
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/security-recommendations.md
Title: Security recommendations for Queue Storage description: Learn about security recommendations for Queue Storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model. Last updated 05/12/2022
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/assign-azure-role-data-access.md
Title: Assign an Azure role for access to table data
description: Learn how to assign permissions for table data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD. Last updated 03/03/2022
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/authorize-access-azure-active-directory.md
Title: Authorize access to tables using Active Directory
description: Authorize access to Azure tables using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account. Last updated 02/09/2023
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/scalability-targets.md
Title: Scalability and performance targets for Table storage
description: Learn about scalability and performance targets for Table storage. Last updated 03/09/2020
virtual-desktop Configure Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-rdp-shortpath.md
RDP Shortpath is a feature of Azure Virtual Desktop that establishes a direct UDP-based transport between a supported Windows Remote Desktop client and session host. This article shows you how to configure RDP Shortpath for managed networks and public networks. For more information, see [RDP Shortpath](rdp-shortpath.md).
+> [!IMPORTANT]
+> RDP Shortpath is only available in the Azure public cloud.
## Prerequisites

Before you can enable RDP Shortpath, you'll need to meet the prerequisites. Select a tab below for your scenario.
virtual-desktop Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-shortpath.md
RDP Shortpath can be used in two ways:
The transport used for RDP Shortpath is based on the [Universal Rate Control Protocol (URCP)](https://www.microsoft.com/research/publication/urcp-universal-rate-control-protocol-for-real-time-communication-applications/). URCP enhances UDP with active monitoring of the network conditions and provides fair and full link utilization. URCP operates at low delay and loss levels as needed.

> [!IMPORTANT]
-> During the preview, TURN is only available for connections to session hosts in a validation host pool. To configure your host pool as a validation environment, see [Define your host pool as a validation environment](create-validation-host-pool.md#define-your-host-pool-as-a-validation-host-pool).
+> - RDP Shortpath is only available in the Azure public cloud.
+>
+> - During the preview, TURN is only available for connections to session hosts in a validation host pool. To configure your host pool as a validation environment, see [Define your host pool as a validation environment](create-validation-host-pool.md#define-your-host-pool-as-a-validation-host-pool).
## Key benefits
To provide the best chance of a UDP connection being successful when using a pub
When a connection is being established, Interactive Connectivity Establishment (ICE) coordinates the management of STUN and TURN to optimize the likelihood of a connection being established, and ensure that precedence is given to preferred network communication protocols.
-Each RDP session uses a dynamically assigned UDP port from an ephemeral port range (**49152** to **65535** by default) that accepts the RDP Shortpath traffic. You can also use a smaller, predictable port range. For more information, see [Limit the port range used by clients for public networks](configure-rdp-shortpath-limit-ports-public-networks.md).
+Each RDP session uses a dynamically assigned UDP port from an ephemeral port range (**49152** to **65535** by default) that accepts the RDP Shortpath traffic. Port 65330 is excluded from this range because it's reserved for internal use by Azure. You can also use a smaller, predictable port range. For more information, see [Limit the port range used by clients for public networks](configure-rdp-shortpath-limit-ports-public-networks.md).
> [!TIP]
> RDP Shortpath for public networks works automatically without any additional configuration, provided that networks and firewalls allow the traffic through and the RDP transport settings in the Windows operating system for session hosts and clients use their default values.
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
This article assumes that you're familiar with:
## When to use the Application Health extension
-The Application Health extension is deployed inside a Virtual Machine Scale Set instance and reports on VM health from inside the scale set instance. You can configure the extension to probe on an application endpoint and update the status of the application on that instance. This instance status is checked by Azure to determine whether an instance is eligible for upgrade operations.
- The Application Health Extension is deployed inside a Virtual Machine Scale Set instance and reports on application health from inside the scale set instance. The extension probes on a local application endpoint and will update the health status based on TCP/HTTP(S) responses received from the application. This health status is used by Azure to initiate repairs on unhealthy instances and to determine if an instance is eligible for upgrade operations. The extension reports health from within a VM and can be used in situations where an external probe such as the [Azure Load Balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) can't be used.
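As a hedged sketch (the resource names and probe settings below are placeholders, not values from this article), adding the extension to an existing scale set model with Azure PowerShell can look like this:

```powershell
# Placeholder names; probes an HTTP health endpoint exposed by the application on each instance
$vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myScaleSet>"
$settings = @{ "protocol" = "http"; "port" = 80; "requestPath" = "/health" }

Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
    -Name "ApplicationHealthWindows" `
    -Publisher "Microsoft.ManagedServices" `
    -Type "ApplicationHealthWindows" `
    -TypeHandlerVersion "1.0" `
    -AutoUpgradeMinorVersion $true `
    -Setting $settings

# Push the updated model to the scale set
Update-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myScaleSet>" -VirtualMachineScaleSet $vmss
```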
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
For more information, see [Share images using a community gallery](./share-galle
> [!IMPORTANT]
> Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
->To publish a community gallery, you'll need to [set up preview features in your Azure subscription](/azure/azure-resource-manager/management/preview-features?tabs=azure-portal). Creating VMs from community gallery images is open to all Azure users.
->
-> During the preview, the gallery must be created as a community gallery (for CLI, this means using the `--permissions community` parameter) you currently can't migrate a regular gallery to a community gallery.
+>To publish a community gallery, you'll need to enable the preview feature using the Azure CLI: `az feature register --name CommunityGallery --namespace Microsoft.Compute` or PowerShell: `Register-AzProviderFeature -FeatureName "CommunityGallery" -ProviderNamespace "Microsoft.Compute"`. For more information on enabling preview features and checking the status, see [Set up preview features in your Azure subscription](../azure-resource-manager/management/preview-features.md). Creating VMs from community gallery images is open to all Azure users.
> > You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
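After registering the feature, you can verify that it shows as `Registered` before creating the gallery. The following is a minimal sketch using Azure PowerShell:

```powershell
# Registration can take several minutes; RegistrationState should show 'Registered' when complete
Get-AzProviderFeature -FeatureName "CommunityGallery" -ProviderNamespace "Microsoft.Compute"
```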
virtual-machines Extensions Rmpolicy Howto Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/extensions-rmpolicy-howto-ps.md
When you're done, hit the **Ctrl + O** and then **Enter** to save the file. Hit
A policy definition is an object used to store the configuration that you would like to use. The policy definition uses the rules and parameters files to define the policy. Create a policy definition using the [New-AzPolicyDefinition](/powershell/module/az.resources/new-azpolicydefinition) cmdlet.
- The policy rules and parameter values below are the files you created and stored as .json files in your Cloud Shell. Replace the file paths as needed.
+
+ The policy rules and parameters are the files you created and stored as .json files in your Cloud Shell. Replace the example `-Policy` and `-Parameter` file paths as needed.
+ ```azurepowershell-interactive
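# A hedged sketch: the definition name and the .json file paths below are
# placeholders for the rules and parameters files you created earlier.
$definition = New-AzPolicyDefinition `
   -Name "not-allowed-vmextension-windows" `
   -DisplayName "Not allowed VM extensions" `
   -Policy 'azurepolicy.rules.json' `
   -Parameter 'azurepolicy.parameters.json'
```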
virtual-machines Features Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-windows.md
Previously updated : 03/06/2023 Last updated : 04/03/2023

# Virtual machine extensions and features for Windows
-Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, antivirus protection, or the ability to run a script inside it, you can use a VM extension.
+Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, antivirus protection, or the ability to run a script inside of the VM, you can use a VM extension.
-You can run Azure VM extensions by using the Azure CLI, PowerShell, Azure Resource Manager templates (ARM templates), and the Azure portal. You can bundle extensions with a new VM deployment or run them against any existing system.
+You can run Azure VM extensions by using the Azure CLI, PowerShell, Azure Resource Manager (ARM) templates, and the Azure portal. You can bundle extensions with a new VM deployment or run them against any existing system.
-This article provides an overview of Azure VM extensions, prerequisites for using them, and guidance on how to detect, manage, and remove them. This article provides generalized information because many VM extensions are available. Each has a potentially unique configuration and its own documentation.
+This article provides an overview of Azure VM extensions, including prerequisites and guidance on how to detect, manage, and remove extensions. This article provides generalized information because many VM extensions are available. Each extension has a potentially unique configuration and its own documentation.
## Use cases and samples
-Each Azure VM extension has a specific use case. Examples include:
+Each Azure VM extension has a specific use case. Here are some examples:
- Apply PowerShell desired state configurations (DSCs) to a VM by using the [DSC extension for Windows](dsc-overview.md).
-- Configure monitoring of a VM by using the [Log Analytics Agent VM extension](../../azure-monitor/vm/monitor-virtual-machine.md).
+- Configure monitoring of a VM by using the [Azure Monitor agent](/azure/azure-monitor/vm/monitor-virtual-machine) and [VM insights](/azure/azure-monitor/vm/vminsights-overview).
- Configure an Azure VM by using [Chef](/azure/developer/chef/windows-vm-configure).
-- Configure monitoring of your Azure infrastructure by using the [Datadog extension](https://www.datadoghq.com/blog/introducing-azure-monitoring-with-one-click-datadog-deployment/).
+- Configure monitoring of your Azure infrastructure by using the [Datadog extension](https://www.datadoghq.com/blog/introducing-azure-monitoring-with-one-click-datadog-deployment/).
-In addition to process-specific extensions, a Custom Script extension is available for both Windows and Linux virtual machines. The [Custom Script extension for Windows](custom-script-windows.md) allows any PowerShell script to be run on a VM. Custom scripts are useful for designing Azure deployments that require configuration beyond what native Azure tooling can provide.
+In addition to process-specific extensions, a Custom Script Extension is available for both Windows and Linux virtual machines. The [Custom Script Extension for Windows](custom-script-windows.md) allows any PowerShell script to run on a VM. Custom scripts are useful for designing Azure deployments that require configuration beyond what native Azure tooling can provide.
## Prerequisites
+Review the following prerequisites for working with Azure VM extensions.
+ ### Azure VM Agent
-To handle the extension on the VM, you need the [Azure VM Agent for Windows](agent-windows.md) (also called the Windows Guest Agent) installed. Some individual extensions have prerequisites, such as access to resources or dependencies.
+To handle extensions on the VM, you need the [Azure Virtual Machine Agent for Windows](agent-windows.md) installed. This agent is also referred to as the Azure VM Agent or the Windows Guest Agent. As you prepare to install extensions, keep in mind that some extensions have individual prerequisites, such as access to resources or dependencies.
The Azure VM Agent manages interactions between an Azure VM and the Azure fabric controller. The agent is responsible for many functional aspects of deploying and managing Azure VMs, including running VM extensions.
-The Azure VM Agent is preinstalled on Azure Marketplace images. It can also be installed manually on supported operating systems.
+The Azure VM Agent is preinstalled on Azure Marketplace images. The agent can also be installed manually on supported operating systems.
-The agent runs on multiple operating systems. However, the extensions framework has a [limit for the operating systems that extensions use](https://support.microsoft.com/en-us/help/4078134/azure-extension-supported-operating-systems). Some extensions aren't supported across all operating systems and might emit error code 51 ("Unsupported OS"). Check the individual extension documentation for supportability.
+The agent runs on multiple operating systems. However, the extensions framework has a [limit for the operating systems that extensions use](/troubleshoot/azure/virtual-machines/extension-supported-os). Some extensions aren't supported across all operating systems and might emit error code 51 ("Unsupported OS"). Check the individual extension documentation for supportability.
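One hedged way to confirm that the agent is present (the resource names below are placeholders) is to query the VM instance view, which reports the provisioned agent version:

```powershell
# Placeholder names; VMAgent in the instance view reports the installed VM Agent version
$vm = Get-AzVM -ResourceGroupName "<myResourceGroup>" -Name "<myVM>" -Status
$vm.VMAgent.VmAgentVersion
```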
### Network access

Extension packages are downloaded from the Azure Storage extension repository. Extension status uploads are posted to Azure Storage.
-If you use a [supported version of the Azure VM Agent](https://support.microsoft.com/help/4049215/extensions-and-virtual-machine-agent-minimum-version-support), you don't need to allow access to Azure Storage in the VM region. You can use the agent to redirect the communication to the Azure fabric controller for agent communications (HostGAPlugin feature through the privileged channel on private IP [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md)). If you're on an unsupported version of the agent, you need to allow outbound access to Azure Storage in that region from the VM.
+If you use a [supported version of the Azure VM Agent](/troubleshoot/azure/virtual-machines/support-extensions-agent-version), you don't need to allow access to Azure Storage in the VM region. You can use the VM Agent to redirect the communication to the Azure fabric controller for agent communications (via the `HostGAPlugin` feature through the privileged channel on private IP address [168.63.129.16](/azure/virtual-network/what-is-ip-address-168-63-129-16)). If you're on an unsupported version of the VM Agent, you need to allow outbound access to Azure Storage in that region from the VM.
> [!IMPORTANT]
-> If you've blocked access to 168.63.129.16 by using the guest firewall or by using a proxy, extensions fail even if you're using a supported version of the agent or you've configured outbound access. Ports 80, 443, and 32526 are required.
+> If you block access to IP address 168.63.129.16 by using the guest firewall or via a proxy, extensions fail. Failure occurs even if you use a supported version of the VM Agent or you configure outbound access. Ports 80, 443, and 32526 are required.
-Agents can only be used to download extension packages and reporting status. For example, if an extension installation needs to download a script from GitHub (Custom Script extension) or needs access to Azure Storage (Azure Backup), then you need to open additional firewall or network security group (NSG) ports. Different extensions have different requirements, because they're applications in their own right. For extensions that require access to Azure Storage or Azure Active Directory, you can allow access by using Azure NSG [service tags](../../virtual-network/network-security-groups-overview.md#service-tags).
+Agents can only be used to download extension packages and report status. For example, if an extension installation needs to download a script from GitHub (Custom Script Extension) or requires access to Azure Storage (Azure Backup), then you need to open other firewall or network security group (NSG) ports. Different extensions have different requirements because they're applications in their own right. For extensions that require access to Azure Storage or Azure Active Directory, you can allow access by using Azure NSG [service tags](/azure/virtual-network/network-security-groups-overview#service-tags).
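For instance, an outbound rule scoped to the `Storage` service tag might be sketched as follows; the rule name, priority, and port are placeholder choices, not requirements from this article:

```powershell
# Placeholder values; allows outbound HTTPS traffic to Azure Storage by service tag
$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-Storage-Outbound" `
    -Access Allow -Direction Outbound -Priority 200 -Protocol Tcp `
    -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
    -DestinationAddressPrefix "Storage" -DestinationPortRange "443"
```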
-The Azure VM Agent doesn't have proxy server support for you to redirect agent traffic requests through. That means the Azure VM Agent relies on your custom proxy (if you have one) to access resources on the internet or on the host through IP 168.63.129.16.
+The Azure VM Agent doesn't provide proxy server support to enable redirection of agent traffic requests. The VM Agent relies on your custom proxy (if you have one) to access resources on the internet or on the host through IP address 168.63.129.16.
## Discover VM extensions
-Many VM extensions are available for use with Azure VMs. To see a complete list, use [Get-AzVMExtensionImage](/powershell/module/az.compute/get-azvmextensionimage). The following example lists all available extensions in the *WestUS* location:
+Many VM extensions are available for use with Azure VMs. To see a complete list, use the [`Get-AzVMExtensionImage`](/powershell/module/az.compute/get-azvmextensionimage) PowerShell cmdlet.
+
+The following command lists all available VM extensions in the West US region:
```powershell
-Get-AzVmImagePublisher -Location "WestUS" |
+Get-AzVmImagePublisher -Location "West US" |
Get-AzVMExtensionImageType | Get-AzVMExtensionImage | Select Type, Version
```
+This command provides output similar to the following example:
+
+```powershell
+Type Version
+---- -------
+AcronisBackup 1.0.33
+AcronisBackup 1.0.51
+AcronisBackupLinux 1.0.33
+AlertLogicLM 1.3.0.1
+AlertLogicLM 1.3.0.0
+AlertLogicLM 1.4.0.1
+```
+ ## Run VM extensions
-Azure VM extensions run on existing VMs. That's useful when you need to make configuration changes or recover connectivity on an already deployed VM. VM extensions can also be bundled with ARM template deployments. By using extensions with ARM templates, you can deploy and configure Azure VMs without post-deployment intervention.
+Azure VM extensions run on existing VMs, which is useful when you need to make configuration changes or recover connectivity on an already deployed VM. VM extensions can also be bundled with ARM template deployments. By using extensions with ARM templates, you can deploy and configure Azure VMs without post-deployment intervention.
-You can use the following methods to run an extension against an existing VM.
+You can use the following methods to run an extension against an existing VM.
+
+> [!NOTE]
+> Some of the following examples use `"<placeholder>"` parameter values in the commands. Before you run each command, make sure to replace any `"<placeholder>"` values with specific values for your configuration.
### PowerShell
-Several PowerShell commands exist for running individual extensions. To see a list, use [Get-Command](/powershell/module/microsoft.powershell.core/get-command) and filter on *Extension*:
+Several PowerShell commands exist for running individual extensions. To see a list, use the [Get-Command](/powershell/module/microsoft.powershell.core/get-command) command and filter on *Extension*:
```powershell
Get-Command Set-Az*Extension* -Module Az.Compute
```
-This command provides output similar to the following:
+This command provides output similar to the following example:
```powershell
CommandType Name Version Source
Cmdlet Set-AzVMSqlServerExtension 4.5.0 Az.Comp
Cmdlet Set-AzVmssDiskEncryptionExtension 4.5.0 Az.Compute
```
-The following example uses the [Custom Script extension](custom-script-windows.md) to download a script from a GitHub repository onto the target virtual machine and then run the script:
+The following example uses the [Custom Script Extension](custom-script-windows.md) to download a script from a GitHub repository onto the target virtual machine and then run the script.
```powershell
-Set-AzVMCustomScriptExtension -ResourceGroupName "myResourceGroup" `
- -VMName "myVM" -Name "myCustomScript" `
+Set-AzVMCustomScriptExtension -ResourceGroupName "<myResourceGroup>" `
+ -VMName "<myVM>" -Name "<myCustomScript>" `
-FileUri "https://raw.githubusercontent.com/neilpeterson/nepeters-azure-templates/master/windows-custom-script-simple/support-scripts/Create-File.ps1" `
- -Run "Create-File.ps1" -Location "West US"
+ -Run "Create-File.ps1" -Location "<myVMregion>"
``` The following example uses the [VMAccess extension](/troubleshoot/azure/virtual-machines/reset-rdp#reset-by-using-the-vmaccess-extension-and-powershell) to reset the administrative password of a Windows VM to a temporary password. After you run this code, you should reset the password at first sign-in.
+<!-- Note for reviewers: The following command fails on the -UserName and -Password parameters. -->
```powershell
$cred = Get-Credential
Set-AzVMAccessExtension -ResourceGroupName "myResourceGroup" -VMName "myVM" -Name "myVMAccess" `
- -Location WestUS -UserName $cred.GetNetworkCredential().Username `
+ -Location "myVMregion" -UserName $cred.GetNetworkCredential().Username `
    -Password $cred.GetNetworkCredential().Password -typeHandlerVersion "2.0"
```

You can use the [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) command to start any VM extension.

### Azure portal
-You can apply VM extensions to an existing VM through the Azure portal. Select the VM in the portal, select **Extensions + applications**, and then select **Add**. Choose the extension that you want from the list of available extensions, and follow the instructions in the wizard.
+You can apply VM extensions to an existing VM through the Azure portal. Select the VM in the portal, select **Extensions + Applications**, and then select **+ Add**. Choose the extension that you want from the list of available extensions, and follow the instructions in the wizard.
The following example shows the installation of the Microsoft Antimalware extension from the Azure portal:
-![Screenshot of the dialog for installing the Microsoft Antimalware extension.](./media/features-windows/installantimalwareextension.png)
### Azure Resource Manager templates

You can add VM extensions to an ARM template and run them with the deployment of the template. When you deploy an extension with a template, you can create fully configured Azure deployments.
-For example, the following JSON is taken from a [full ARM template](https://github.com/Microsoft/dotnet-core-sample-templates/tree/master/dotnet-core-music-windows) that deploys a set of load-balanced VMs and an Azure SQL database, and then installs a .NET Core application on each VM. The VM extension takes care of the software installation.
+The following JSON example is from an [ARM template](https://github.com/Microsoft/dotnet-core-sample-templates/tree/master/dotnet-core-music-windows) that deploys a set of load-balanced VMs and an Azure SQL database, and then installs a .NET Core application on each VM. The VM extension takes care of the software installation.
```json
{
For example, the following JSON is taken from a [full ARM template](https://gith
}
```
-For more information on creating ARM templates, see [Virtual machines in an Azure Resource Manager template](../windows/template-description.md#extensions).
+For more information on creating ARM templates, see [Virtual machines in an ARM template](../windows/template-description.md#extensions).
## Help secure VM extension data
-When you run a VM extension, it might be necessary to include sensitive information such as credentials, storage account names, and access keys. Many VM extensions include a protected configuration that encrypts data and only decrypts it inside the target VM. Each extension has a specific protected configuration schema, and each is detailed in extension-specific documentation.
+When you run a VM extension, it might be necessary to include sensitive information such as credentials, storage account names, and access keys. Many VM extensions include a protected configuration that encrypts data and only decrypts it inside the target VM. Each extension has a specific protected configuration schema, and each schema is detailed in extension-specific documentation.
-The following example shows an instance of the Custom Script extension for Windows. The command to run includes a set of credentials. In this example, the command to run isn't encrypted.
+The following JSON example shows an instance of the Custom Script Extension for Windows. The command to run includes a set of credentials. In this example, the command to run isn't encrypted.
```json
{
Moving the `commandToExecute` property to the `protected` configuration helps se
On an Azure infrastructure as a service (IaaS) VM that uses extensions, in the certificates console, you might see certificates that have the subject **Windows Azure CRP Certificate Generator**. On a classic RedDog Front End (RDFE) VM, these certificates have the subject name **Windows Azure Service Management for Extensions**.
-These certificates secure the communication between the VM and its host during the transfer of protected settings (password and other credentials) that extensions use. The certificates are built by the Azure fabric controller and passed to the Azure VM Agent. If you stop and start the VM every day, the fabric controller might create a new certificate. The certificate is stored in the computer's personal certificate store. These certificates can be deleted. The Azure VM Agent re-creates certificates if needed.
+These certificates secure the communication between the VM and its host during the transfer of protected settings (password and other credentials) that extensions use. The Azure fabric controller builds the certificates and passes them to the Azure VM Agent. If you stop and start the VM every day, the fabric controller might create a new certificate. The certificate is stored in the computer's personal certificate store. These certificates can be deleted. The Azure VM Agent re-creates certificates if needed.
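To see these certificates on a VM, one option is to query the local machine's personal certificate store from inside the VM:

```powershell
# Run inside the VM; lists the transport certificates created by the fabric controller
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*Windows Azure CRP Certificate Generator*" } |
    Select-Object Subject, NotAfter
```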
### How agents and extensions are updated

Agents and extensions share the same automatic update mechanism.
-When an update is available and automatic updates are enabled, the update is installed on the VM only after there's a change to an extension or after other VM model changes, such as:
+When an update is available and automatic updates are enabled, the update is installed on the VM only after an extension or other VM model changes. Changes can include:
- Data disks
- Extensions
When an update is available and automatic updates are enabled, the update is ins
- VM size
- Network profile
-Publishers make updates available to regions at various times, so it's possible that you can have VMs in different regions on different versions.
+Publishers make updates available to regions at various times. It's possible you can have VMs in different regions on different versions.
> [!NOTE]
-> Some updates might require additional firewall rules. See [Network access](#network-access).
+> Some updates might require additional firewall rules. For more information, see [Network access](#network-access).
+
+#### List extensions deployed to a VM
-#### Listing extensions deployed to a VM
+You can use the following command to list the extensions deployed to a VM:
```powershell
-$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -VMName "myVM"
+$vm = Get-AzVM -ResourceGroupName "<myResourceGroup>" -VMName "<myVM>"
$vm.Extensions | select Publisher, VirtualMachineExtensionType, TypeHandlerVersion
```
+This command produces output similar to the following example:
```powershell
Publisher VirtualMachineExtensionType TypeHandlerVersion
Microsoft.Compute CustomScriptExtension 1.9
The Azure VM Agent contains only *extension-handling code*. The *Windows provisioning code* is separate. You can uninstall the Azure VM Agent. You can't disable the automatic update of the Azure VM Agent.
-The extension-handling code is responsible for:
+The extension-handling code is responsible for the following tasks:
-- Communicating with the Azure fabric.
-- Handling the VM extension operations, such as installations, reporting status, updating the individual extensions, and removing extensions. Updates contain security fixes, bug fixes, and enhancements to the extension-handling code.
+- Communicate with the Azure fabric.
+- Handle the VM extension operations, such as installations, reporting status, updating the individual extensions, and removing extensions. Updates contain security fixes, bug fixes, and enhancements to the extension-handling code.
-To check what version you're running, see [Detect the VM Agent](agent-windows.md#detect-the-vm-agent).
+To check what version you're running, see [Detect the Azure VM Agent](agent-windows.md#detect-the-vm-agent).
#### Extension updates
-When an extension update is available and automatic updates are enabled, after a [change to the VM model](#how-agents-and-extensions-are-updated) occurs, the Azure VM Agent downloads and upgrades the extension.
+When an extension update is available and automatic updates are enabled, the Azure VM Agent downloads and upgrades the extension after a [VM model change](#how-agents-and-extensions-are-updated) occurs.
-Automatic extension updates are either *minor* or *hotfix*. You can opt in or opt out of minor updates when you provision the extension. The following example shows how to automatically upgrade minor versions in an ARM template by using `"autoUpgradeMinorVersion": true,`:
+Automatic extension updates are either *minor* or *hotfix*. You can opt in or opt out of minor updates when you provision the extension. The following example shows how to automatically upgrade minor versions in an ARM template by setting the `autoUpgradeMinorVersion` property to `true`:
```json
"properties": {
  "autoUpgradeMinorVersion": true,
  ...
}
```
To get the latest minor-release bug fixes, we highly recommend that you always select automatic update in your extension deployments. You can't opt out of hotfix updates that carry security or key bug fixes.
-If you disable automatic updates or you need to upgrade a major version, use [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) and specify the target version.
+If you disable automatic updates or you need to upgrade a major version, use the [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) command and specify the target version.
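For illustration, a pinned upgrade might look like the following sketch; the extension identity matches the Custom Script examples earlier in this article, but the target version and resource names are placeholders:

```powershell
# Hypothetical example: pin the Custom Script Extension to a specific
# target version instead of relying on automatic minor-version upgrades.
Set-AzVMExtension -ResourceGroupName "<myResourceGroup>" -VMName "<myVM>" `
    -Name "CustomScriptExtension" `
    -Publisher "Microsoft.Compute" `
    -ExtensionType "CustomScriptExtension" `
    -TypeHandlerVersion "1.10" `
    -DisableAutoUpgradeMinorVersion `
    -Location "<region>"
```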
### How to identify extension updates
+There are a few ways you can identify updates for an extension.
+
#### Identify if the extension is set with autoUpgradeMinorVersion on a VM
-You can see from the VM model if the extension was provisioned with `autoUpgradeMinorVersion`. To check, use [Get-AzVm](/powershell/module/az.compute/get-azvm) and provide the resource group and VM name as follows:
+You can view the VM model to determine if the extension is provisioned with the `autoUpgradeMinorVersion` parameter. To check the VM model, use the [Get-AzVm](/powershell/module/az.compute/get-azvm) command and provide the resource group and VM name as follows:
```powershell
$vm = Get-AzVm -ResourceGroupName "myResourceGroup" -VMName "myVM"
$vm.Extensions
```
-The following example output shows that `autoUpgradeMinorVersion` is set to `true`:
+The following example output shows that the `autoUpgradeMinorVersion` property is set to `true`:
```powershell
ForceUpdateTag :
TypeHandlerVersion : 1.9
AutoUpgradeMinorVersion : True
```
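To narrow that output to just the upgrade setting, a filter along the following lines may help; the property names are taken from the output shown above:

```powershell
# List each extension's type and whether minor-version auto-upgrade is enabled.
$vm = Get-AzVm -ResourceGroupName "myResourceGroup" -VMName "myVM"
$vm.Extensions | Select-Object VirtualMachineExtensionType, AutoUpgradeMinorVersion
```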
-#### Identify when an autoUpgradeMinorVersion event occurred
+#### Identify when an autoUpgradeMinorVersion event occurs
-To see when an update to the extension occurred, review the agent logs on the VM at *C:\WindowsAzure\Logs\WaAppAgent.log*.
+To see when an update to the extension occurred, you can review the agent logs on the VM at *C:\WindowsAzure\Logs\WaAppAgent.log*.
-In the following example, the VM had `Microsoft.Compute.CustomScriptExtension` version `1.8` installed. A hotfix was available to version `1.9`.
+The following example shows the VM with `Microsoft.Compute.CustomScriptExtension` version `1.8` installed, and a hotfix available for version `1.9`.
```powershell
[INFO] Getting plugin locations for plugin 'Microsoft.Compute.CustomScriptExtension'. Current Version: '1.8', Requested Version: '1.9'
```
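To surface entries like this without reading the whole log, a quick search such as the following sketch may help; the log path comes from the paragraph above, and the pattern matches the sample entry:

```powershell
# Run on the VM: find extension version-change entries in the agent log.
Select-String -Path "C:\WindowsAzure\Logs\WaAppAgent.log" -Pattern "Requested Version"
```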
## Agent permissions
-To perform its tasks, the agent needs to run as *Local System*.
+To perform its tasks, the Azure VM Agent needs to run as *Local System*.
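If you want to confirm this on a given VM, one option is to inspect the account the agent's services run under. The service names below are an assumption based on typical Windows VM Agent installations, not something this article specifies:

```powershell
# Run on the VM: show the account each agent service runs under.
# "WindowsAzureGuestAgent" and "RdAgent" are assumed service names.
Get-CimInstance Win32_Service -Filter "Name='WindowsAzureGuestAgent' OR Name='RdAgent'" |
    Select-Object Name, StartName, State
```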
## Troubleshoot VM extensions
-Each VM extension might have specific troubleshooting steps. For example, when you use the Custom Script extension, you can find script execution details locally on the VM where the extension was run.
+Each VM extension might have specific troubleshooting steps. For example, when you use the Custom Script Extension, you can find script execution details locally on the VM where the extension is run.
The following troubleshooting actions apply to all VM extensions:
### Common reasons for extension failures

-- Extensions have 20 minutes to run. (Exceptions are Custom Script, Chef, and DSC, which have 90 minutes.) If your deployment exceeds this time, it's marked as a timeout. The cause of this can be low-resource VMs, or other VM configurations or startup tasks are consuming large amounts of resources while the extension is trying to provision.
+Here are some common reasons an extension can fail:
+
+- Extensions have 20 minutes to run. (Exceptions are Custom Script, Chef, and DSC, which have 90 minutes.) If your deployment exceeds this time, it's marked as a timeout. This issue can occur because the VM has low resources, or because other VM configurations or startup tasks consume large amounts of resources while the extension is trying to provision.
- Minimum prerequisites aren't met. Some extensions have dependencies on VM SKUs, such as HPC images. Extensions might have certain networking access requirements, such as communicating with Azure Storage or public services. Other examples might be access to package repositories, running out of disk space, or security restrictions.
### View extension status
-After a VM extension has been run against a VM, use [Get-AzVM](/powershell/module/az.compute/get-azvm) to return extension status. `Substatuses[0]` shows that the extension provisioning succeeded, meaning that it successfully deployed to the VM. But `Substatuses[1]` shows that the execution of the extension inside the VM failed.
+After a VM extension is run against a VM, use the [Get-AzVM](/powershell/module/az.compute/get-azvm) command to return extension status. In the following example, the `Substatuses[0]` result shows that the extension provisioning succeeded, which means it successfully deployed to the VM, but the `Substatuses[1]` result shows that the execution of the extension inside the VM failed.
```powershell
Get-AzVM -ResourceGroupName "myResourceGroup" -VMName "myVM" -Status
```
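To drill into those substatuses directly, a sketch like the following may help; it relies on the instance view returned by `Get-AzVM -Status`:

```powershell
# Show each extension's substatus codes and messages from the instance view.
$status = Get-AzVM -ResourceGroupName "myResourceGroup" -VMName "myVM" -Status
$status.Extensions | ForEach-Object { $_.Substatuses | Select-Object Code, Message }
```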
You can also find extension execution status in the Azure portal. Select the VM, and then select **Extensions**.
### Rerun a VM extension
-There might be cases in which a VM extension needs to be rerun. You can rerun an extension by removing it, and then rerunning the extension with an execution method of your choice. To remove an extension, use [Remove-AzVMExtension](/powershell/module/az.compute/remove-azvmextension) as follows:
+In certain cases, you might need to rerun a VM extension. You can rerun an extension by removing the extension, and then rerunning the extension with an execution method of your choice. To remove an extension, use the [Remove-AzVMExtension](/powershell/module/az.compute/remove-azvmextension) command as follows:
```powershell
Remove-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myVM" -Name "myExtensionName"
```
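After removal, one way to rerun the extension is to add it back with `Set-AzVMExtension`. The sketch below assumes the Custom Script Extension; the publisher, type, version, and settings are illustrative placeholders:

```powershell
# Hypothetical re-deployment of the extension that was just removed.
Set-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myVM" `
    -Name "myExtensionName" `
    -Publisher "Microsoft.Compute" `
    -ExtensionType "CustomScriptExtension" `
    -TypeHandlerVersion "1.10" `
    -Settings @{ commandToExecute = 'powershell -File myscript.ps1' } `
    -Location "<region>"
```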
-You can also remove an extension in the Azure portal:
-
-1. Select a VM.
-2. Select **Extensions**.
-3. Select the extension.
-4. Select **Uninstall**.
+You can also remove an extension in the Azure portal. Select a VM, select **Extensions**, and then select the desired extension. Select **Uninstall**.
## Common VM extension reference
+
+The following table provides some common references for VM extensions.
+
| Extension name | Description |
| --- | --- |
-| [Custom Script extension for Windows](custom-script-windows.md) |Run scripts against an Azure virtual machine. |
-| [DSC extension for Windows](dsc-overview.md) |Apply PowerShell desired state configurations to a virtual machine. |
-| [Azure Diagnostics extension](https://azure.microsoft.com/blog/windows-azure-virtual-machine-monitoring-with-wad-extension/) |Manage Azure Diagnostics. |
-| [VMAccess extension](https://azure.microsoft.com/blog/using-vmaccess-extension-to-reset-login-credentials-for-linux-vm/) |Manage users and credentials. |
+| [Custom Script Extension for Windows](custom-script-windows.md) | Run scripts against an Azure virtual machine. |
+| [DSC extension for Windows](dsc-overview.md) | Apply PowerShell desired state configurations to a virtual machine. |
+| [Azure Diagnostics extension](https://azure.microsoft.com/blog/windows-azure-virtual-machine-monitoring-with-wad-extension/) | Manage Azure Diagnostics. |
+| [VMAccess extension](https://azure.microsoft.com/blog/using-vmaccess-extension-to-reset-login-credentials-for-linux-vm/) | Manage users and credentials. |
## Next steps
virtual-machines Share Gallery Community https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-community.md
Sharing images to the community is a new capability in [Azure Compute Gallery](.
> [!IMPORTANT]
> Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
-> To publish a community gallery, you'll need to [set up preview features in your Azure subscription](/azure/azure-resource-manager/management/preview-features?tabs=azure-portal) and set up 'CommunityGallery'. Creating VMs from community gallery images is open to all Azure users.
+> To publish a community gallery, you'll need to enable the preview feature by using the Azure CLI: `az feature register --name CommunityGallery --namespace Microsoft.Compute` or Azure PowerShell: `Register-AzProviderFeature -FeatureName "CommunityGallery" -ProviderNamespace "Microsoft.Compute"`. For more information on enabling preview features and checking the status, see [Set up preview features in your Azure subscription](../azure-resource-manager/management/preview-features.md). Creating VMs from community gallery images is open to all Azure users. (A registration status check is sketched after this note.)
> During the preview, the gallery must be created as a community gallery (for the CLI, this means using the `--permissions community` parameter). You currently can't migrate a regular gallery to a community gallery.
>
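To confirm the feature registration has completed before creating the gallery, a check along these lines may help; the feature and namespace names are taken from the note above:

```powershell
# Returns the RegistrationState; wait until it reports "Registered".
Get-AzProviderFeature -FeatureName "CommunityGallery" -ProviderNamespace "Microsoft.Compute"
```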
virtual-network Create Public Ip Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-prefix-powershell.md
The removal of the **`-Zone`** parameter in the command is valid in all regions.
The removal of the **`-Zone`** parameter is the default selection for standard public IP addresses in regions without [Availability Zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones).
+# [**Routing Preference Internet IPv4 prefix**](#tab/ipv4-routing-pref)
+
+To create an IPv4 public IP prefix, enter **IPv4** in the **`-IpAddressVersion`** parameter. Remove the **`-Zone`** parameter to create a non-zonal IP prefix.
+
+```azurepowershell-interactive
+$tagproperty = @{
+    IpTagType = 'RoutingPreference'
+    Tag = 'Internet'
+}
+$routingprefinternettag = New-Object -TypeName Microsoft.Azure.Commands.Network.Models.PSPublicIpPrefixTag -Property $tagproperty
+$ipv4 = @{
+ Name = 'myPublicIpPrefix-routingprefinternet'
+ ResourceGroupName = 'QuickStartCreateIPPrefix-rg'
+ Location = 'eastus2'
+ PrefixLength = '28'
+ IpAddressVersion = 'IPv4'
+ IpTag = $routingprefinternettag
+}
+New-AzPublicIpPrefix @ipv4
+```
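To verify the prefix after creation, including its routing preference tag, a lookup like the following may help; the names reuse the values from the example above:

```powershell
# Retrieve the new prefix and confirm its configuration.
Get-AzPublicIpPrefix -ResourceGroupName 'QuickStartCreateIPPrefix-rg' -Name 'myPublicIpPrefix-routingprefinternet'
```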
## IPv6