Updates from: 11/11/2022 02:15:03
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
To use MS Graph API, and interact with resources in your Azure AD B2C tenant, yo
- [Update a user](/graph/api/user-update)
- [Delete a user](/graph/api/user-delete)
-## User phone number management (beta)
+## User phone number management
Manage a phone number that a user can use to sign in via [SMS or voice calls](sign-in-options.md#phone-sign-in), or for [multifactor authentication](multi-factor-authentication.md). For more information, see [Azure AD authentication methods API](/graph/api/resources/phoneauthenticationmethod).
Note, the [list](/graph/api/authentication-list-phonemethods) operation returns
![Enable phone sign-in](./media/microsoft-graph-operations/enable-phone-sign-in.png)

> [!NOTE]
-> In the current beta version, this API works only if the phone number is stored with a space between the country code and the phone number. The Azure AD B2C service doesn't currently add this space by default.
+> A correctly represented phone number is stored with a space between the country code and the phone number. The Azure AD B2C service doesn't currently add this space by default.
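As a minimal sketch of adding a phone method with the space-separated format, the following C# calls Microsoft Graph directly. It assumes you've already acquired a Graph access token with a permission such as `UserAuthenticationMethod.ReadWrite.All`; the token, user ID, and phone number are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class AddPhoneMethod
{
    static async Task Main()
    {
        var accessToken = "<access-token>";            // placeholder
        var userId = "user@contoso.onmicrosoft.com";   // placeholder

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Note the space between the country code and the rest of the number.
        var body = @"{ ""phoneNumber"": ""+1 5555551234"", ""phoneType"": ""mobile"" }";

        var response = await client.PostAsync(
            $"https://graph.microsoft.com/v1.0/users/{userId}/authentication/phoneMethods",
            new StringContent(body, Encoding.UTF8, "application/json"));

        Console.WriteLine(response.StatusCode);
    }
}
```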
-## Self-service password reset email address (beta)
+## Self-service password reset email address
An email address that can be used by a [username sign-in account](sign-in-options.md#username-sign-in) to reset the password. For more information, see [Azure AD authentication methods API](/graph/api/resources/emailauthenticationmethod).
An email address that can be used by a [username sign-in account](sign-in-option
- [Update](/graph/api/emailauthenticationmethod-update)
- [Delete](/graph/api/emailauthenticationmethod-delete)
-## Software OATH token authentication method (beta)
+## Software OATH token authentication method
A software OATH token is a software-based number generator that uses the OATH time-based one-time password (TOTP) standard for multifactor authentication via an authenticator app. Use the Microsoft Graph API to manage a software OATH token registered to a user:
An email address that can be used by a [username sign-in account](sign-in-option
Manage the [identity providers](add-identity-provider.md) available to your user flows in your Azure AD B2C tenant.

-- [List identity providers registered in the Azure AD B2C tenant](/graph/api/identityprovider-list)
-- [Create an identity provider](/graph/api/identityprovider-post-identityproviders)
-- [Get an identity provider](/graph/api/identityprovider-get)
-- [Update identity provider](/graph/api/identityprovider-update)
-- [Delete an identity provider](/graph/api/identityprovider-delete)
+- [List identity providers available in the Azure AD B2C tenant](/graph/api/identityproviderbase-availableprovidertypes)
+- [List identity providers configured in the Azure AD B2C tenant](/graph/api/identitycontainer-list-identityproviders)
+- [Create an identity provider](/graph/api/identitycontainer-post-identityproviders)
+- [Get an identity provider](/graph/api/identityproviderbase-get)
+- [Update identity provider](/graph/api/identityproviderbase-update)
+- [Delete an identity provider](/graph/api/identityproviderbase-delete)
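For example, creating a social identity provider is a single POST. This sketch reuses an `HttpClient` configured with a bearer token as in the earlier phone-method example; the provider type and app credentials are placeholders, and the token is assumed to carry the `IdentityProvider.ReadWrite.All` permission.

```csharp
// `client` is an HttpClient with a bearer token (see the earlier sketch).
var provider = @"{
  ""@odata.type"": ""microsoft.graph.socialIdentityProvider"",
  ""displayName"": ""Login with Amazon"",
  ""identityProviderType"": ""Amazon"",
  ""clientId"": ""<client-id>"",
  ""clientSecret"": ""<client-secret>""
}";

var response = await client.PostAsync(
    "https://graph.microsoft.com/v1.0/identity/identityProviders",
    new StringContent(provider, Encoding.UTF8, "application/json"));
```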
-## User flow
+## User flow (beta)
Configure pre-built policies for sign-up, sign-in, combined sign-up and sign-in, password reset, and profile update.
Choose a mechanism for letting users register via local accounts. Local accounts
- [Get](/graph/api/b2cauthenticationmethodspolicy-get)
- [Update](/graph/api/b2cauthenticationmethodspolicy-update)
-## Custom policies
+## Custom policies (beta)
The following operations allow you to manage your Azure AD B2C Trust Framework policies, known as [custom policies](custom-policy-overview.md).
The following operations allow you to manage your Azure AD B2C Trust Framework p
- [Update or create trust framework policy](/graph/api/trustframework-put-trustframeworkpolicy)
- [Delete an existing trust framework policy](/graph/api/trustframeworkpolicy-delete)
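As a rough illustration of the update-or-create operation, uploading a custom policy is a single PUT of the policy XML against the beta endpoint. The sketch assumes an authenticated `HttpClient` (as in the earlier examples) whose token grants `Policy.ReadWrite.TrustFramework`; the file name is a placeholder.

```csharp
// Upload (create or overwrite) a custom policy's XML content.
// Requires `using System.IO;` for File.ReadAllText.
var policyXml = File.ReadAllText("TrustFrameworkBase.xml"); // placeholder file
var request = new HttpRequestMessage(
    HttpMethod.Put,
    "https://graph.microsoft.com/beta/trustFramework/policies/B2C_1A_TrustFrameworkBase/$value")
{
    Content = new StringContent(policyXml, Encoding.UTF8, "application/xml")
};
var response = await client.SendAsync(request);
```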
-## Policy keys
+## Policy keys (beta)
The Identity Experience Framework stores the secrets referenced in a custom policy to establish trust between components. These secrets can be symmetric or asymmetric keys/values. In the Azure portal, these entities are shown as **Policy keys**.
For more information about accessing Azure AD B2C audit logs, see [Accessing Azu
## Conditional Access

-- [List all of the Conditional Access policies](/graph/api/conditionalaccessroot-list-policies?tabs=http)
+- [List the built-in templates for Conditional Access policy scenarios](/graph/api/conditionalaccessroot-list-templates)
+- [List all of the Conditional Access policies](/graph/api/conditionalaccessroot-list-policies)
- [Read properties and relationships of a Conditional Access policy](/graph/api/conditionalaccesspolicy-get)
- [Create a new Conditional Access policy](/graph/api/conditionalaccessroot-post-policies)
- [Update a Conditional Access policy](/graph/api/conditionalaccesspolicy-update)
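For example, listing the policies is a single GET. The sketch reuses the authenticated `HttpClient` from the earlier examples and assumes the token grants at least `Policy.Read.All`.

```csharp
// Returns a JSON collection of conditionalAccessPolicy objects.
var json = await client.GetStringAsync(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies");
Console.WriteLine(json);
```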
For more information about accessing Azure AD B2C audit logs, see [Accessing Azu
## Retrieve or restore deleted users and applications
-Deleted items can only be restored if they were deleted within the last 30 days.
+Deleted users and apps can only be restored if they were deleted within the last 30 days.
- [List deleted items](/graph/api/directory-deleteditems-list)
- [Get a deleted item](/graph/api/directory-deleteditems-get)
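A restore is an empty POST against the deleted item, as in this sketch; it reuses the authenticated `HttpClient` from the earlier examples, and the object ID is a placeholder.

```csharp
// Restores a user or application deleted within the last 30 days.
var objectId = "<deleted-object-id>"; // placeholder
var request = new HttpRequestMessage(
    HttpMethod.Post,
    $"https://graph.microsoft.com/v1.0/directory/deletedItems/{objectId}/restore");
var response = await client.SendAsync(request);
```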
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Applications and systems that support customization of the attribute list includ
> [!NOTE]
> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined, or if a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: `https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true`. You can then navigate to your application to view the attribute list as described [above](#editing-the-list-of-supported-attributes).
+> [!NOTE]
+> When a directory extension attribute in Azure AD does not show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acneCostCenter`, make sure you enter it in the same format as defined in the directory.
+
When editing the list of supported attributes, the following properties are provided:

- **Name** - The system name of the attribute, as defined in the target object's schema.
active-directory On Premises Migrate Microsoft Identity Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-migrate-microsoft-identity-manager.md
At this point, the MIM Sync server is no longer needed.
## Import a connector configuration
- 1. Install the ECMA Connector host and provisioning agent on a Windows Server, using the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#download-install-and-configure-the-azure-ad-connect-provisioning-agent-package) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#download-install-and-configure-the-azure-ad-connect-provisioning-agent-package) articles.
+ 1. Install the ECMA Connector host and provisioning agent on a Windows Server, using the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#3-install-and-configure-the-azure-ad-connect-provisioning-agent) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#download-install-and-configure-the-azure-ad-connect-provisioning-agent-package) articles.
1. Sign in to the Windows server as the account that the Azure AD ECMA Connector Host runs as.
1. Change to the directory C:\Program Files\Microsoft ECMA2host\Service\ECMA. Ensure there are one or more DLLs already present in that directory. Those DLLs correspond to Microsoft-delivered connectors.
1. Copy the MA DLL for your connector, and any of its prerequisite DLLs, to that same ECMA subdirectory of the Service directory.
1. Change to the directory C:\Program Files\Microsoft ECMA2Host\Wizard. Run the program Microsoft.ECMA2Host.ConfigWizard.exe to set up the ECMA Connector Host configuration.
1. A new window appears with a list of connectors. By default, no connectors will be present. Select **New connector**.
- 1. Specify the management agent XML file that was exported from MIM Sync earlier. Continue with the configuration and schema-mapping instructions from the section "Create a connector" in either the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#create-a-generic-sql-connector) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#configure-a-generic-ldap-connector) articles.
+ 1. Specify the management agent XML file that was exported from MIM Sync earlier. Continue with the configuration and schema-mapping instructions from the section "Create a connector" in either the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#6-create-a-generic-sql-connector) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#configure-a-generic-ldap-connector) articles.
## Next steps
active-directory Application Proxy Configure Native Client Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-native-client-application.md
After you edit the MSAL code with these parameters, your users can authenticate
## Next steps
-For more information about the native application flow, see [Native apps in Azure Active Directory](../azuread-dev/native-app.md).
+For more information about the native application flow, see [mobile](../develop/authentication-flows-app-scenarios.md#mobile-app-that-calls-a-web-api-on-behalf-of-an-interactive-user) and [desktop](../develop/authentication-flows-app-scenarios.md#desktop-app-that-calls-a-web-api-on-behalf-of-a-signed-in-user) apps in Azure Active Directory.
Learn about setting up [Single sign-on to applications in Azure Active Directory](../manage-apps/sso-options.md#choosing-a-single-sign-on-method).
active-directory Concept Certificate Based Authentication Smartcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-smartcard.md
Previously updated : 10/05/2022 Last updated : 11/10/2022
The Windows smart card sign-in works with the latest preview build of Windows 11
## Restrictions and caveats -- Azure AD CBA is supported on Windows Hybrid or Azure AD Joined.
+- Azure AD CBA is supported on Windows devices that are hybrid or Azure AD joined.
- Users must be in a managed domain or using Staged Rollout and can't use a federated authentication model.

## Next steps
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Now we'll walk through each step:
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/cert-picker.png" alt-text="Screenshot of the certificate picker." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/cert-picker.png":::

1. Azure AD verifies the certificate revocation list to make sure the certificate isn't revoked and is valid. Azure AD identifies the user by using the [username binding configured](how-to-certificate-based-authentication.md#step-4-configure-username-binding-policy) on the tenant to map the certificate field value to the user attribute value.
-1. If a unique user is found with a Conditional Access policy that requires multifactor authentication (MFA), and the [certificate authentication binding rule](how-to-certificate-based-authentication.md#step-3-configure-authentication-binding-policy) satisfies MFA, then Azure AD signs the user in immediately. If the certificate satisfies only a single factor, then it requests the user for a second factor to complete Azure AD Multi-Factor Authentication.
+1. If a unique user is found with a Conditional Access policy that requires multifactor authentication (MFA), and the [certificate authentication binding rule](how-to-certificate-based-authentication.md#step-3-configure-authentication-binding-policy) satisfies MFA, then Azure AD signs the user in immediately. If multifactor authentication is required but the certificate satisfies only a single factor, authentication will fail.
1. Azure AD completes the sign-in process by sending a primary refresh token back to indicate successful sign-in.
1. If the user sign-in is successful, the user can access the application.
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Previously updated : 09/23/2022 Last updated : 11/10/2022
Combined registration supports the authentication methods and actions in the fol
| Email | Yes | Yes | Yes |
| Security questions | Yes | No | Yes |
| App passwords* | Yes | No | Yes |
-| FIDO2 security keys*| Yes | Yes | Yes |
+| FIDO2 security keys*| Yes | No | Yes |
> [!NOTE]
> <b>Office phone</b> can only be registered in *Interrupt mode* if the user's *Business phone* property has been set. Office phone can be added by users in *Managed mode* from the [Security info](https://mysignins.microsoft.com/security-info) page without this requirement. <br />
For both modes, users who have previously registered a method that can be used f
### Interrupt mode
-Combined registration adheres to both multifactor authentication and SSPR policies, if both are enabled for your tenant. These policies control whether a user is interrupted for registration during sign-in and which methods are available for registration. If only an SSPR policy is enabled, then users will be able to skip the registration interruption and complete it at a later time.
+Combined registration adheres to both multifactor authentication and SSPR policies, if both are enabled for your tenant. These policies control whether a user is interrupted for registration during sign-in and which methods are available for registration. If only an SSPR policy is enabled, users can skip the registration interruption indefinitely and complete it at a later time.
The following are sample scenarios where users might be prompted to register or refresh their security info:
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
Previously updated : 05/04/2022 Last updated : 11/10/2022
The following Azure AD password policy options are defined. Unless noted, you ca
| Characters allowed |<ul><li>A – Z</li><li>a - z</li><li>0 – 9</li> <li>@ # $ % ^ & * - _ ! + = [ ] { } &#124; \ : ' , . ? / \` ~ " ( ) ; < ></li> <li>blank space</li></ul> |
| Characters not allowed | Unicode characters. |
| Password restrictions |<ul><li>A minimum of 8 characters and a maximum of 256 characters.</li><li>Requires three out of four of the following:<ul><li>Lowercase characters.</li><li>Uppercase characters.</li><li>Numbers (0-9).</li><li>Symbols (see the previous password restrictions).</li></ul></li></ul> |
-| Password expiry duration (Maximum password age) |<ul><li>Default value: **90** days.</li><li>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure Active Directory Module for Windows PowerShell.</li></ul> |
+| Password expiry duration (Maximum password age) |<ul><li>Default value: **90** days. If the tenant was created after 2021, it has no default expiration value. You can check current policy with [Get-MsolPasswordPolicy](/powershell/module/msonline/get-msolpasswordpolicy).</li><li>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure Active Directory Module for Windows PowerShell.</li></ul> |
| Password expiry notification (When users are notified of password expiration) |<ul><li>Default value: **14** days (before password expires).</li><li>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet.</li></ul> |
| Password expiry (Let passwords never expire) |<ul><li>Default value: **false** (indicates that passwords have an expiration date).</li><li>The value can be configured for individual user accounts by using the `Set-MsolUser` cmdlet.</li></ul> |
| Password change history | The last password *can't* be used again when the user changes a password. |
active-directory Howto Authentication Sms Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-sms-signin.md
Each user that's enabled in the text message authentication method policy must b
Users are now enabled for SMS-based authentication, but their phone number must be associated with the user profile in Azure AD before they can sign-in. The user can [set this phone number themselves](https://support.microsoft.com/account-billing/set-up-sms-sign-in-as-a-phone-verification-method-0aa5b3b3-a716-4ff2-b0d6-31d2bcfbac42) in *My Account*, or you can assign the phone number using the Azure portal. Phone numbers can be set by *global admins*, *authentication admins*, or *privileged authentication admins*.
-When a phone number is set for SMS-sign, it's also then available for use with [Azure AD Multi-Factor Authentication][tutorial-azure-mfa] and [self-service password reset][tutorial-sspr].
+When a phone number is set for SMS-based sign-in, it's also then available for use with [Azure AD Multi-Factor Authentication][tutorial-azure-mfa] and [self-service password reset][tutorial-sspr].
1. Search for and select **Azure Active Directory**.
1. From the navigation menu on the left-hand side of the Azure Active Directory window, select **Users**.
If you receive an error when you try to set a phone number for a user account in
[m365-licensing]: https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans
[o365-f1]: https://www.microsoft.com/microsoft-365/business/office-365-f1?market=af
[o365-f3]: https://www.microsoft.com/microsoft-365/business/office-365-f3?activetab=pivot%3aoverviewtab
-[azure-ad-pricing]: https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing
+[azure-ad-pricing]: https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Users with a Temporary Access Pass can navigate the setup process on Windows 10
For Azure AD Joined devices:

- During the Azure AD Join setup process, users can authenticate with a TAP (no password required) to join the device and register Windows Hello for Business.
- On already joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
-- If the [Web sign-in](https://learn.microsoft.com/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin) feature on Windows is also enabled, the user can use TAP to sign into the device. This is intended only for completing initial device setup, or recovery when the user does not know or have a password.
+- If the [Web sign-in](/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin) feature on Windows is also enabled, the user can use TAP to sign into the device. This is intended only for completing initial device setup, or recovery when the user does not know or have a password.
For Hybrid Azure AD Joined devices:

- Users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
With the policy applied, it can take up to 1 hour to propagate and for users to
### PowerShell

> [!NOTE]
-> This configuration option uses HRD policy. For more information, see [homeRealmDiscoveryPolicy resource type](/graph/api/resources/homeRealmDiscoveryPolicy?view=graph-rest-1.0).
+> This configuration option uses HRD policy. For more information, see [homeRealmDiscoveryPolicy resource type](/graph/api/resources/homeRealmDiscoveryPolicy?view=graph-rest-1.0&preserve-view=true).
Once users with the *ProxyAddresses* attribute applied are synchronized to Azure AD using Azure AD Connect, you need to enable the feature for users to sign-in with email as an alternate login ID for your tenant. This feature tells the Azure AD login servers to not only check the sign-in identifier against UPN values, but also against *ProxyAddresses* values for the email address.
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md
To change the per-user Azure AD Multi-Factor Authentication state for a user, co
After you enable users, notify them via email. Tell the users that a prompt is displayed to ask them to register the next time they sign in. Also, if your organization uses non-browser apps that don't support modern authentication, they need to create app passwords. For more information, see the [Azure AD Multi-Factor Authentication end-user guide](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) to help them get started.
-### Convert users from per-user MFA to Conditional Access based MFA
+### Convert per-user MFA enabled and enforced users to disabled
If your users were enabled using per-user enabled and enforced Azure AD Multi-Factor Authentication, the following PowerShell can assist you in making the conversion to Conditional Access based Azure AD Multi-Factor Authentication. Run this PowerShell in an ISE window, or save it as a `.PS1` file to run locally. The operation can only be done by using the [MSOnline module](/powershell/module/msonline#msonline).

```PowerShell
+# Connect to tenant
+Connect-MsolService
+
# Sets the MFA requirement state
function Set-MfaState {
    [CmdletBinding()]
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Smart lockout can be integrated with hybrid deployments that use password hash s
When using [pass-through authentication](../hybrid/how-to-connect-pta.md), the following considerations apply:

* The Azure AD lockout threshold is **less** than the AD DS account lockout threshold. Set the values so that the AD DS account lockout threshold is at least two or three times greater than the Azure AD lockout threshold.
-* The Azure AD lockout duration must be set longer than the AD DS reset account lockout counter after duration. The Azure AD duration is set in seconds, while the AD duration is set in minutes.
+* The Azure AD lockout duration must be set longer than the AD DS account lockout duration. The Azure AD duration is set in seconds, while the AD duration is set in minutes.
For example, if you want your Azure AD smart lockout duration to be higher than AD DS, then Azure AD would be 120 seconds (2 minutes) while your on-premises AD is set to 1 minute (60 seconds). If you want your Azure AD lockout threshold to be 5, then you want your on-premises AD lockout threshold to be 10. This configuration would ensure smart lockout prevents your on-premises AD accounts from being locked out by brute force attacks on your Azure AD accounts.
active-directory Azure Ad Endpoint Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/azure-ad-endpoint-comparison.md
Previously updated : 07/17/2020 Last updated : 11/09/2022 -+
active-directory Tutorial Single Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-single-forest.md
Previously updated : 12/05/2019 Last updated : 11/10/2022
This tutorial walks you through creating a hybrid identity environment using Azure Active Directory (Azure AD) Connect cloud sync.
-![Create](media/tutorial-single-forest/diagram-2.png)
+![Diagram that shows the Azure AD Connect cloud sync flow](media/tutorial-single-forest/diagram-2.png)
You can use the environment you create in this tutorial for testing or for getting more familiar with cloud sync.

## Prerequisites
+
### In the Azure Active Directory admin center

1. Create a cloud-only global administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
You can use the environment you create in this tutorial for testing or for getti
### In your on-premises environment
-1. Identify a domain-joined host server running Windows Server 2016 or greater with minimum of 4 GB RAM and .NET 4.7.1+ runtime
+1. Identify a domain-joined host server running Windows Server 2016 or greater with minimum of 4-GB RAM and .NET 4.7.1+ runtime
-2. If there is a firewall between your servers and Azure AD, configure the following items:
+2. If there's a firewall between your servers and Azure AD, configure the following items:
- Ensure that agents can make *outbound* requests to Azure AD over the following ports: | Port number | How it's used |
You can use the environment you create in this tutorial for testing or for getti
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.

- If your firewall or proxy allows you to specify safe suffixes, then add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
- Your agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well.
- - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products you may already have these URLs unblocked.
+ - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products, you may already have these URLs unblocked.
## Install the Azure AD Connect provisioning agent
-1. Sign in to the domain joined server. If you are using the [Basic A D and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1.
-2. Sign in to the Azure portal using cloud-only global admin credentials.
-3. On the left, select **Azure Active Directory**, click **Azure AD Connect**, and in the center select **Manage cloud sync**.
- ![Azure portal](media/how-to-install/install-6.png)
+1. Sign in to the domain joined server. If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1.
+
+1. Sign in to the Azure portal using cloud-only global admin credentials.
+
+1. On the left, select **Azure Active Directory**.
+
+1. Select **Azure AD Connect**, and in the center select **Manage Azure AD cloud sync**.
+
+ ![Screenshot that shows how to download the Azure AD cloud sync.](media/how-to-install/install-6.png)
+
+1. Select **Download agent**, and select **Accept terms & download**.
+
+ [![Screenshot that shows how to accept the terms and start the download of Azure AD cloud sync.](media/how-to-install/install-6a.png)](media/how-to-install/install-6a.png#lightbox)
+
+1. Run the **Azure AD Connect Provisioning Agent Package** AADConnectProvisioningAgentSetup.exe in your downloads folder.
+
+1. On the splash screen, select **I agree to the license and conditions**, and select **Install**.
-4. Click **Download agent**.
-5. Run the Azure AD Connect provisioning agent.
-6. On the splash screen, **Accept** the licensing terms and click **Install**.
+ ![Screenshot that shows the "Microsoft Azure AD Connect Provisioning Agent Package" splash screen.](media/how-to-install/install-1.png)
- ![Screenshot that shows the "Microsoft Azure A D Connect Provisioning Agent Package" splash screen.](media/how-to-install/install-1.png)
+1. Once this operation completes, the configuration wizard will launch. Sign in with your Azure AD global administrator account. If you have Internet Explorer enhanced security enabled, it will block the sign-in. If so, close the installation, [disable Internet Explorer enhanced security](/troubleshoot/developer/browsers/security-privacy/enhanced-security-configuration-faq), and restart the **Azure AD Connect Provisioning Agent Package** installation.
-7. Once this operation completes, the configuration wizard will launch. Sign in with your Azure AD global administrator account. Note that if you have IE enhanced security enabled this will block the sign-in. If this is the case, close the installation, disable IE enhanced security in Server Manager, and click the **AAD Connect Provisioning Agent Wizard** to restart the installation.
-8. On the **Connect Active Directory** screen, click **Add directory** and then sign in with your Active Directory domain administrator account. NOTE: The domain administrator account should not have password change requirements. If the password expires or changes, you will need to re-configure the agent with the new credentials. This operation will add your on-premises directory. Click **Next**.
+1. On the **Connect Active Directory** screen, select **Authenticate** and then sign in with your Active Directory domain administrator account. NOTE: The domain administrator account shouldn't have password change requirements. If the password expires or changes, you'll need to reconfigure the agent with the new credentials.
- ![Screenshot of the "Connect Active Directory" screen.](media/how-to-install/install-3a.png)
+ ![Screenshot of the "Connect Active Directory" screen.](media/how-to-install/install-3.png)
-9. On the **Configuration complete** screen, click **Confirm**. This operation will register and restart the agent.
+1. On the **Configure Service Account screen**, select **Create gMSA** and enter the Active Directory domain administrator credentials to create the group Managed Service Account. This account will be used to run the agent service. To continue, select **Next**.
+
+ [![Screenshot that shows create service account.](media/how-to-install/new-install-7.png)](media/how-to-install/new-install-7.png#lightbox)
+
+1. On the **Connect Active Directory** screen, select **Next**. Your current domain has been added automatically.
+
+ [![Screenshot that shows connecting to the Active Directory.](media/how-to-install/new-install-8.png)](media/how-to-install/new-install-8.png#lightbox)
+
+1. On the **Configuration complete** screen, select **Confirm**. This operation will register and restart the agent.
![Screenshot that shows the "Configuration complete" screen.](media/how-to-install/install-4a.png)
-10. Once this operation completes you should see a notice: **Your agent configuration was successfully verified.** You can click **Exit**.</br>
-![Welcome screen](media/how-to-install/install-5.png)</br>
-11. If you still see the initial splash screen, click **Close**.
+1. Once this operation completes, you should see a notice: **Your agent configuration was successfully verified.** You can select **Exit**.
+
+ ![Screenshot that shows the "configuration complete" screen.](media/how-to-install/install-5.png)
+
+1. If you still get the initial splash screen, select **Close**.
## Verify agent installation
+
Agent verification occurs in the Azure portal and on the local server that is running the agent.

### Azure portal agent verification
-To verify the agent is being seen by Azure follow these steps:
+
+To verify the agent is being registered by Azure AD, follow these steps:
1. Sign in to the Azure portal.
-2. On the left, select **Azure Active Directory**, click **Azure AD Connect** and in the center select **Manage cloud sync**.</br>
-![Azure portal](media/how-to-install/install-6.png)</br>
+1. On the left, select **Azure Active Directory**, select **Azure AD Connect** and in the center select **Manage Azure AD cloud sync**.
-3. On the **Azure AD Connect cloud sync** screen click **Review all agents**.
-![Azure A D Provisioning](media/how-to-install/install-7.png)</br>
+ ![Screenshot that shows how to manage the Azure AD could sync.](media/how-to-install/install-6.png)
+
+1. On the **Azure AD Connect cloud sync** screen, select
+**Review all agents**.
+
+ [![Screenshot that shows the Azure AD provisioning agents.](media/how-to-install/install-7.png)](media/how-to-install/install-7.png#lightbox)
-4. On the **On-premises provisioning agents screen** you will see the agents you have installed. Verify that the agent in question is there and is marked **active**.
-![Provisioning agents](media/how-to-install/verify-1.png)</br>
+1. On the **On-premises provisioning agents screen**, you'll see the agents you've installed. Verify that the agent in question is there and is marked **active**.
+
+ [![Screenshot that shows the status of a provisioning agent.](media/how-to-install/verify-1.png)](media/how-to-install/verify-1.png#lightbox)
### On the local server
-To verify that the agent is running follow these steps:
-1. Log on to the server with an administrator account
-2. Open **Services** by either navigating to it or by going to Start/Run/Services.msc.
-3. Under **Services**, make sure **Microsoft Azure AD Connect Agent Updater** and **Microsoft Azure AD Connect Provisioning Agent** are present and the status is **Running**.
-![Services](media/how-to-install/troubleshoot-1.png)
+To verify that the agent is running, follow these steps:
+
+1. Log on to the server with an administrator account
+
+1. Open **Services** by either navigating to it or by going to Start/Run/Services.msc.
+
+1. Under **Services**, make sure **Microsoft Azure AD Connect Agent Updater** and **Microsoft Azure AD Connect Provisioning Agent** are present and the status is **Running**.
+
+ [![Screenshot that shows the Windows services.](media/how-to-install/troubleshoot-1.png)](media/how-to-install/troubleshoot-1.png#lightbox)
## Configure Azure AD Connect cloud sync
- Use the following steps to configure provisioning
-
-1. Sign in to the Azure AD portal.
-2. Click **Azure Active Directory**
-3. Click **Azure AD Connect**
-4. Select **Manage cloud sync**
-![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-5. Click **New Configuration**
-![Screenshot of Azure A D Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
-7. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and click **Save**.
-![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)
-1. The configuration status should now be **Healthy**.
-![Screenshot of Azure A D Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)
+
+Use the following steps to configure and start the provisioning:
+
+1. Sign in to the Azure AD portal.
+1. Select **Azure Active Directory**
+1. Select **Azure AD Connect**
+1. Select **Manage cloud sync**
+
+ ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
+
+1. Select **New Configuration**
+
+ [![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)](media/tutorial-single-forest/configure-1.png#lightbox)
+
+1. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and select **Save**.
+
+ [![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)](media/how-to-configure/configure-2.png#lightbox)
+
+1. The configuration status should now be **Healthy**.
+
+ [![Screenshot of Azure AD Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)](media/how-to-configure/manage-4.png#lightbox)
## Verify users are created and synchronization is occurring
-You will now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. Be aware that this may take a few hours to complete. To verify users are synchronized do the following.
+
+You'll now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. The sync operation may take a few hours to complete. To verify users are synchronized, follow these steps:
1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
2. On the left, select **Azure Active Directory**
3. Under **Manage**, select **Users**.
-4. Verify that you see the new users in your tenant</br>
+4. Verify that the new users appear in your tenant
## Test signing in with one of your users

1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
-2. Sign in with a user account that was created in your tenant. You will need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.</br>
- ![Verify](media/tutorial-single-forest/verify-1.png)</br>
-You have now successfully configured a hybrid identity environment using Azure AD Connect cloud sync.
+1. Sign in with a user account that was created in your tenant. You'll need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.
+
+ ![Screenshot that shows the my apps portal with a signed in users.](media/tutorial-single-forest/verify-1.png)
+You've now successfully configured a hybrid identity environment using Azure AD Connect cloud sync.
## Next steps
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
This section covers the configuration options under optional claims for changing
| **name:** | Must be "groups" |
| **source:** | Not used. Omit or specify null |
| **essential:** | Not used. Omit or specify false |
- | **additionalProperties:** | List of additional properties. Valid options are "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name", "emit_as_roles" |
+ | **additionalProperties:** | List of additional properties. Valid options are "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name", "emit_as_roles" and "cloud_displayname" |
- In additionalProperties only one of "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name" are required. If more than one is present, the first is used and any others ignored.
+ In additionalProperties only one of "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name" is required. If more than one is present, the first is used and any others ignored. Additionally, you can add "cloud_displayname" to emit the display name of the cloud group. Note that this option works only when `groupMembershipClaims` is set to `ApplicationGroup`.
Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add "emit_as_roles" to additional properties. The group values will be emitted in the role claim.
This section covers the configuration options under optional claims for changing
  ]
}
```
+3) Emit group names in the format of samAccountName for on-prem synced groups and display name for cloud groups in SAML and OIDC ID Tokens for the groups assigned to the application:
+
+ **Application manifest entry:**
+
+ ```json
+ "groupMembershipClaims": "ApplicationGroup",
+ "optionalClaims": {
+ "saml2Token": [
+ {
+ "name": "groups",
+ "additionalProperties": [
+ "sam_account_name",
+ "cloud_displayname"
+ ]
+ }
+ ],
+ "idToken": [
+ {
+ "name": "groups",
+ "additionalProperties": [
+ "sam_account_name",
+ "cloud_displayname"
+ ]
+ }
+ ]
+ }
+ ```
## Optional claims example
active-directory Delegated And App Perms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-and-app-perms.md
Previously updated : 09/27/2021 Last updated : 11/10/2022
## Recommended documents

- Learn more about how client applications use [delegated and application permission requests](developer-glossary.md#permissions) to access resources.
+- Learn about [delegated and application permissions](permissions-consent-overview.md).
- See step-by-step instructions on how to [configure a client application's permission requests](quickstart-configure-app-access-web-apis.md)
- For more depth, learn how resource applications expose [scopes](developer-glossary.md#scopes) and [application roles](developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal.
active-directory Howto Authenticate Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-authenticate-service-principal-powershell.md
multiple Previously updated : 10/11/2021 Last updated : 11/09/2022
active-directory Msal Logging Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-dotnet.md
The following code snippets are examples of such an implementation. If you use t
#### Log level from configuration file
-It's highly recommended to configure your code to use a configuration file in your environment to set the log level as it will enable your code to change the MSAL logging level without needing to rebuild or restart the application. This is critical for diagnostic purposes, enabling us to quickly gather the required logs from the application that is currently deployed and in production. Verbose logging can be costly so it's best to use the *Information* level by default and enable verbose logging when an issue is encountered. [See JSON configuration provider](https://docs.microsoft.com/aspnet/core/fundamentals/configuration#json-configuration-provider) for an example on how to load data from a configuration file without restarting the application.
+It's highly recommended to configure your code to use a configuration file in your environment to set the log level as it will enable your code to change the MSAL logging level without needing to rebuild or restart the application. This is critical for diagnostic purposes, enabling us to quickly gather the required logs from the application that is currently deployed and in production. Verbose logging can be costly so it's best to use the *Information* level by default and enable verbose logging when an issue is encountered. [See JSON configuration provider](/aspnet/core/fundamentals/configuration#json-configuration-provider) for an example on how to load data from a configuration file without restarting the application.
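A rough sketch of that pattern follows; the `MsalLogLevel` key and the appsettings.json file are assumptions, not a fixed contract. Because the level is re-read on every callback and the file is loaded with `reloadOnChange`, editing the file changes the effective verbosity without a restart.

```csharp
using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Identity.Client;

// Hypothetical appsettings.json: { "MsalLogLevel": "Verbose" }
IConfiguration config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .Build();

LogLevel ConfiguredLevel() =>
    Enum.TryParse<LogLevel>(config["MsalLogLevel"], out var level) ? level : LogLevel.Info;

var app = PublicClientApplicationBuilder.Create("<client-id>") // placeholder
    .WithLogging(
        (level, message, containsPii) =>
        {
            // Filter against the current configuration on each message,
            // so edits to appsettings.json apply without a restart.
            if (level <= ConfiguredLevel())
            {
                Console.WriteLine($"MSAL {level}: {message}");
            }
        },
        LogLevel.Verbose) // let MSAL emit everything; the callback filters
    .Build();
```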
#### Log Level as Environment Variable
active-directory Msal Net Acquire Token Silently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-acquire-token-silently.md
# Get a token from the token cache using MSAL.NET
-When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should first call the `AcquireTokenSilent` method to verify if an acceptable token is in the cache. In many cases, it's possible to acquire another token with more scopes based on a token in the cache. It's also possible to refresh a token when it's getting close to expiration (as the token cache also contains a refresh token).
+When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should try to fetch it from the cache first.
+
+You can monitor the source of the tokens by inspecting the `AuthenticationResult.AuthenticationResultMetadata.TokenSource` property.
+
+## Websites and web APIs
+
+ASP.NET Core and ASP.NET Classic websites should integrate with [Microsoft.Identity.Web](microsoft-identity-web.md), a wrapper for MSAL.NET. Memory token caching or distributed token caching can be configured as described in [token cache serialization](msal-net-token-cache-serialization.md?tabs=aspnetcore).
+
+Web APIs on ASP.NET Core should use Microsoft.Identity.Web. Web APIs on ASP.NET classic use MSAL directly by calling `AcquireTokenOnBehalfOf`, and should configure memory or distributed caching. For more information, see [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md?tabs=aspnet). There's no need to call the `AcquireTokenSilent` API, and there's no API to clear the cache. Cache size can be managed by setting eviction policies on the underlying cache store, such as MemoryCache or Redis.
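A minimal on-behalf-of sketch for an ASP.NET classic web API follows. The client ID, secret, tenant, and `incomingToken` (the access token the caller presented to your API) are placeholders.

```csharp
var app = ConfidentialClientApplicationBuilder.Create("<api-client-id>")
    .WithClientSecret("<client-secret>")
    .WithAuthority("https://login.microsoftonline.com/<tenant-id>")
    .Build();

// AcquireTokenOnBehalfOf checks MSAL's OBO token cache before calling
// Azure AD, so no separate AcquireTokenSilent call is needed.
var result = await app.AcquireTokenOnBehalfOf(
        new[] { "https://graph.microsoft.com/.default" },
        new UserAssertion(incomingToken))
    .ExecuteAsync();
```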
+
+## Web service / Daemon apps
+
+Applications that request tokens for an app identity, with no user involved, by calling `AcquireTokenForClient` can either rely on MSAL's internal caching, or define their own memory or distributed token caching. For instructions and more information, see [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md?tabs=aspnet).
+
+Since no user is involved, there's no need to call the `AcquireTokenSilent` API; `AcquireTokenForClient` looks in the cache on its own. There's no API to clear the cache. Cache size is proportional to the number of tenants and resources you need tokens for, and can be managed by setting eviction policies on the underlying cache store, such as MemoryCache or Redis.
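A minimal client-credentials sketch (the client ID, secret, and tenant are placeholders):

```csharp
var app = ConfidentialClientApplicationBuilder.Create("<client-id>")
    .WithClientSecret("<client-secret>")
    .WithAuthority("https://login.microsoftonline.com/<tenant-id>")
    .Build();

// No AcquireTokenSilent needed: AcquireTokenForClient consults the app
// token cache first and only calls Azure AD on a cache miss.
var result = await app.AcquireTokenForClient(
        new[] { "https://graph.microsoft.com/.default" })
    .ExecuteAsync();

// TokenSource indicates whether the token came from the cache or Azure AD.
Console.WriteLine(result.AuthenticationResultMetadata.TokenSource);
```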
+
+## Desktop, command-line, and mobile applications
+
+Desktop, command-line, and mobile applications should first call the `AcquireTokenSilent` method to verify whether an acceptable token is in the cache. In many cases, it's possible to acquire another token with more scopes based on a token in the cache. It's also possible to refresh a token when it's getting close to expiration (as the token cache also contains a refresh token).
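The typical pattern looks like the following sketch (client ID and scopes are placeholders; it assumes `using Microsoft.Identity.Client;` and `using System.Linq;`):

```csharp
var app = PublicClientApplicationBuilder.Create("<client-id>").Build();
var scopes = new[] { "User.Read" };

var accounts = await app.GetAccountsAsync();
AuthenticationResult result;
try
{
    // Try the cache (and, if needed, the refresh token) first.
    result = await app.AcquireTokenSilent(scopes, accounts.FirstOrDefault())
        .ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    // Nothing usable in the cache: fall back to interactive sign-in.
    result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
}
```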
For authentication flows that require a user interaction, MSAL caches the access, refresh, and ID tokens, as well as the `IAccount` object, which represents information about a single account. Learn more about [IAccount](/dotnet/api/microsoft.identity.client.iaccount?view=azure-dotnet&preserve-view=true). For application flows, such as [client credentials](msal-authentication-flows.md#client-credentials), only access tokens are cached, because the `IAccount` object and ID token require a user, and the refresh token is not applicable.
if (result != null)
{
    // Use the token
}
```
+
+### Clearing the cache
+
+In public client applications, clearing the cache is achieved by removing the accounts from the cache. This does not remove the session cookie which is in the browser, though.
+
+```csharp
+var accounts = (await app.GetAccountsAsync()).ToList();
+
+// clear the cache
+while (accounts.Any())
+{
+ await app.RemoveAsync(accounts.First());
+ accounts = (await app.GetAccountsAsync()).ToList();
+}
+```
active-directory Msal Net Clear Token Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-clear-token-cache.md
# Clear the token cache using MSAL.NET
+## Web API and daemon apps
+
+There is no API to remove the tokens from the cache. Cache size should be handled by setting eviction policies on the underlying storage. See [Cache Serialization](msal-net-token-cache-serialization.md?tabs=aspnetcore) for details on how to use a memory cache or distributed cache.
+
+## Desktop, command line and mobile applications
When you [acquire an access token](msal-acquire-cache-tokens.md) using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should first call the `AcquireTokenSilent` method to verify if an acceptable token is in the cache.

Clearing the cache is achieved by removing the accounts from the cache. This does not remove the session cookie which is in the browser, though. The following example instantiates a public client application, gets the accounts for the application, and removes the accounts.
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
You can also specify options to limit the size of the in-memory token cache:
#### Distributed caches
-If you use `app.AddDistributedTokenCache`, the token cache is an adapter against the .NET `IDistributedCache` implementation. So you can choose between a SQL Server cache, a Redis cache, an Azure Cosmos DB cache, or any other cache implementing the [IDistributedCache](/dotnet/api/microsoft.extensions.caching.distributed.idistributedcache?view=dotnet-plat-ext-6.0) interface.
+If you use `app.AddDistributedTokenCache`, the token cache is an adapter against the .NET `IDistributedCache` implementation. So you can choose between a SQL Server cache, a Redis cache, an Azure Cosmos DB cache, or any other cache implementing the [IDistributedCache](/dotnet/api/microsoft.extensions.caching.distributed.idistributedcache?view=dotnet-plat-ext-6.0&preserve-view=true) interface.
For testing purposes only, you may want to use `services.AddDistributedMemoryCache()`, an in-memory implementation of `IDistributedCache`.
active-directory Perms For Given Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/perms-for-given-api.md
Previously updated : 07/15/2019 Last updated : 11/10/2022
## Recommended documents

- Learn more about how client applications use [delegated and application permission requests](./developer-glossary.md#permissions) to access resources.
+- Learn about [scopes and permissions in the Microsoft identity platform](scopes-oidc.md)
- See step-by-step instructions on how to [configure a client application's permission requests](./quickstart-configure-app-access-web-apis.md)
- For more depth, learn how resource applications expose [scopes](./developer-glossary.md#scopes) and [application roles](./developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal.

## Next steps
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
+[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Previously updated : 06/01/2021 Last updated : 11/09/2022 -+ # Publisher verification
App developers must meet a few requirements to complete the publisher verificati
- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. -- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center.
+- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center. The user who initiates the verification must have one of the required roles in both Azure AD and Partner Center.
- In Azure AD, this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Admin.
active-directory Reference App Multi Instancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-multi-instancing.md
The IDP initiated feature exposes two settings for each application.  
## Next steps

-- To explore the claims mapping policy in graph see [Claims mapping policy](/graph/api/resources/claimsMappingPolicy?view=graph-rest-1.0)
+- To explore the claims mapping policy in graph see [Claims mapping policy](/graph/api/resources/claimsMappingPolicy?view=graph-rest-1.0&preserve-view=true)
- To learn more about how to configure this policy see [Customize app SAML token claims](active-directory-saml-claims-customization.md)
active-directory Registration Config How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-how-to.md
Previously updated : 09/27/2021 Last updated : 11/09/2022
active-directory Scenario Spa Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-call-api.md
Title: Build single-page app calling a web API description: Learn how to build a single-page application that calls a web API -+
Last updated 09/27/2021-+ #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
active-directory Setup Multi Tenant App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/setup-multi-tenant-app.md
Previously updated : 07/15/2019 Last updated : 11/10/2022
Here is a list of recommended topics to learn more about multi-tenant applications:

- Get a general understanding of [what it means to be a multi-tenant application](./developer-glossary.md#multi-tenant-application)
+- Learn about [tenancy in Azure Active Directory](single-and-multi-tenant-apps.md)
- Get a general understanding of [how to configure an application to be multi-tenant](./howto-convert-app-to-be-multi-tenant.md)
- Get a step-by-step overview of [how the Azure AD consent framework is used to implement consent](./quickstart-register-app.md), which is required for multi-tenant applications
- For more depth, learn [how a multi-tenant application is configured and coded end-to-end](./howto-convert-app-to-be-multi-tenant.md), including how to register, use the "common" endpoint, implement "user" and "admin" consent, how to implement more advanced multi-tier scenarios

## Next steps
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
+[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Test Throttle Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-throttle-service-limits.md
Previously updated : 09/17/2021 Last updated : 11/09/2022 #Customer intent: As a developer, I want to understand the throttling and service limits I might hit so that I can test my app without interruption.
Throttling behavior can depend on the type and number of requests. For example,
When you exceed a throttling limit, you receive the HTTP status code `429 Too many requests` and your request fails. The response includes a `Retry-After` header value, which specifies the number of seconds your application should wait (or sleep) before sending the next request. Retry the request. If you send a request before the retry value has elapsed, your request isn't processed and a new retry value is returned. If the request fails again with a 429 error code, you are still being throttled. Continue to use the recommended `Retry-After` delay and retry the request until it succeeds.
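As a concrete illustration of this pattern, here's a minimal sketch of a retry loop that honors `Retry-After`; the endpoint URL and access token are placeholders, not values from this article:

```javascript
// Minimal sketch: retry a request when throttled (HTTP 429), honoring Retry-After.
async function fetchWithRetry(url, accessToken, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    if (response.status !== 429) {
      return response; // not throttled; the caller handles other statuses
    }
    // Retry-After is the number of seconds to wait before the next request.
    const retryAfterSeconds = Number(response.headers.get('Retry-After')) || 1;
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
  throw new Error(`Still throttled after ${maxAttempts} attempts`);
}
```

## Next steps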
-Learn how to [setup a test environment](test-setup-environment.md).
+Learn how to [set up a test environment](test-setup-environment.md).
active-directory Tutorial V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md
Fill in these details with the values you obtain from Azure app registration por
## Add code for user sign-in and token acquisition
-1. Create a new file named *auth.js* under the *router* folder and add the following code there:
+1. Create a new file named *auth.js* under the *routes* folder and add the following code there:
:::code language="js" source="~/ms-identity-node/App/routes/auth.js":::
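The included sample file is the authoritative source; purely as orientation, a minimal Express router built on `@azure/msal-node` might look like the following sketch. Route paths, environment variable names, and the session handling are illustrative assumptions, not the sample's exact code:

```javascript
// Illustrative sketch only; the tutorial's real code is in the linked sample.
const express = require('express');
const msal = require('@azure/msal-node');
const router = express.Router();

const cca = new msal.ConfidentialClientApplication({
  auth: {
    clientId: process.env.CLIENT_ID,        // from your app registration
    authority: `https://login.microsoftonline.com/${process.env.TENANT_ID}`,
    clientSecret: process.env.CLIENT_SECRET,
  },
});

// Redirect the user to Azure AD to sign in.
router.get('/signin', async (req, res, next) => {
  try {
    const authUrl = await cca.getAuthCodeUrl({
      scopes: ['user.read'],
      redirectUri: process.env.REDIRECT_URI,
    });
    res.redirect(authUrl);
  } catch (err) {
    next(err);
  }
});

// Exchange the returned authorization code for tokens.
router.get('/redirect', async (req, res, next) => {
  try {
    const tokenResponse = await cca.acquireTokenByCode({
      code: req.query.code,
      scopes: ['user.read'],
      redirectUri: process.env.REDIRECT_URI,
    });
    req.session.account = tokenResponse.account; // assumes express-session is configured
    res.redirect('/');
  } catch (err) {
    next(err);
  }
});

module.exports = router;
```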
active-directory Enterprise State Roaming Windows Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-windows-settings-reference.md
The following is a list of the settings that will be roamed or backed up in Wind
## Windows Settings details
-List of settings that can be configured to sync in recent Windows versions. These can be found in Windows 10 under **Settings** > **Accounts** > **Sync your settings** or **Settings** > **Accounts** > **Windows backup** > **Remember my preferences** on Windows 11.
+List of settings that can be configured to sync in recent Windows versions. These can be found under **Settings** > **Accounts** > **Sync your settings** on Windows 10, or under **Settings** > **Accounts** > **Windows backup** > **Remember my preferences** on Windows 11.
| Settings | Windows 10 (21H1 or newer) | | | |
active-directory Clean Up Stale Guest Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-stale-guest-accounts.md
As users collaborate with external partners, it's possible that many guest accounts get created in Azure Active Directory (Azure AD) tenants over time. When collaboration ends and the users no longer access your tenant, the guest accounts may become stale. Admins can use Access Reviews to automatically review inactive guest users and block them from signing in, and later, delete them from the directory.
-Learn more about [how to manage inactive user accounts in Azure AD](https://learn.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts).
+Learn more about [how to manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts).
There are a few recommended patterns that are effective at cleaning up stale guest accounts: 1. Create a multi-stage review whereby guests self-attest whether they still need access. A second-stage reviewer assesses results and makes a final decision. Guests with denied access are disabled and later deleted.
-2. Create a review to remove inactive external guests. Admins define inactive as period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](https://learn.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts).
+2. Create a review to remove inactive external guests. Admins define inactive as a period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts).
Use the following instructions to learn how to create Access Reviews that follow these patterns. Consider the configuration recommendations and then make the needed changes that suit your environment. ## Create a multi-stage review for guests to self-attest continued access
-1. Create a [dynamic group](https://learn.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+1. Create a [dynamic group](/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
`(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
-2. To [create an Access Review](https://learn.microsoft.com/azure/active-directory/governance/create-access-review)
+2. To [create an Access Review](/azure/active-directory/governance/create-access-review)
for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**. 3. Select **New access review**.
Use the following instructions to learn how to create Access Reviews that follow
## Create a review to remove inactive external guests
-1. Create a [dynamic group](https://learn.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+1. Create a [dynamic group](/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
`(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
-2. To [create an access review](https://learn.microsoft.com/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
+2. To [create an access review](/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
3. Select **New access review**.
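Step 1 in both patterns can also be scripted. As a hedged sketch (the group name and nickname are illustrative; assumes a token with `Group.ReadWrite.All`), the dynamic guest group can be created through Microsoft Graph:

```javascript
// Sketch: create the dynamic guest group from step 1 via Microsoft Graph.
async function createGuestReviewGroup(accessToken) {
  const response = await fetch('https://graph.microsoft.com/v1.0/groups', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      displayName: 'Guests under review',     // illustrative name
      mailEnabled: false,
      mailNickname: 'guests-under-review',    // illustrative nickname
      securityEnabled: true,
      groupTypes: ['DynamicMembership'],
      membershipRule:
        '(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)',
      membershipRuleProcessingState: 'On',
    }),
  });
  if (!response.ok) throw new Error(`Graph request failed: ${response.status}`);
  return response.json();
}
```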
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 10/28/2022 Last updated : 11/09/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on October 28th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on November 9th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Windows 10 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | | Windows 10 Enterprise E5 Commercial (GCC Compatible) | WINE5_GCC_COMPAT | 938fd547-d794-42a4-996c-1cc206619580 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118) | | Windows 10/11 Enterprise VDA | E3_VDA_only | d13ef257-988a-46f3-8fce-f47484dd4550 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>DATAVERSE_FOR_POWERAUTOMATE_DESKTOP (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>POWERAUTOMATE_DESKTOP_FOR_WIN (2d589a15-b171-4e61-9b5f-31d15eeb2872) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Dataverse for PAD (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>PAD for Windows (2d589a15-b171-4e61-9b5f-31d15eeb2872) |
-| Windows 365 Business 2 vCPU, 4 GB, 64 GB | CPC_B_2C_4RAM_64GB | 42e6818f-8966-444b-b7ac-0027c83fa8b5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>(CPC_B_2C_4RAM_64GB (a790cd6e-a153-4461-83c7-e127037830b6) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Business 2 vCPU, 4 GB, 64 GB (a790cd6e-a153-4461-83c7-e127037830b6) |
-| Windows 365 Business 4 vCPU, 16 GB, 128 GB (with Windows Hybrid Benefit) | CPC_B_4C_16RAM_128GB_WHB | 439ac253-bfbc-49c7-acc0-6b951407b5ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_B_4C_16RAM_128GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Business 4 vCPU, 16 GB, 128 GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) |
-| Windows 365 Enterprise 2 vCPU, 4 GB, 64 GB | CPC_E_2C_4GB_64GB | 7bb14422-3b90-4389-a7be-f1b745fc037f | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_2C_4GB_64GB (23a25099-1b2f-4e07-84bd-b84606109438) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 64 GB (23a25099-1b2f-4e07-84bd-b84606109438) |
-| Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB | CPC_E_2C_8GB_128GB | e2aebe6c-897d-480f-9d62-fff1381581f7 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_2 (3efff3fe-528a-4fc5-b1ba-845802cc764f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (3efff3fe-528a-4fc5-b1ba-845802cc764f) |
-| Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (Preview) | CPC_LVL_2 | 461cb62c-6db7-41aa-bf3c-ce78236cdb9e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_2 (3efff3fe-528a-4fc5-b1ba-845802cc764f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (3efff3fe-528a-4fc5-b1ba-845802cc764f) |
-| Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (Preview) | CPC_LVL_3 | bbb4bf6e-3e12-4343-84a1-54d160c00f40 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_256GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) |
+| Windows 365 Business 1 vCPU 2 GB 64 GB | CPC_B_1C_2RAM_64GB | 816eacd3-e1e3-46b3-83c8-1ffd37e053d9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_1C_2RAM_64GB (3b98b912-1720-4a1e-9630-c9a41dbb61d8) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 1 vCPU, 2 GB, 64 GB (3b98b912-1720-4a1e-9630-c9a41dbb61d8) |
+| Windows 365 Business 2 vCPU 4 GB 128 GB | CPC_B_2C_4RAM_128GB | 135bee78-485b-4181-ad6e-40286e311850 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_2C_4RAM_128GB (1a13832e-cd79-497d-be76-24186f55c8b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 4 GB, 128 GB (1a13832e-cd79-497d-be76-24186f55c8b0) |
+| Windows 365 Business 2 vCPU 4 GB 256 GB | CPC_B_2C_4RAM_256GB | 805d57c3-a97d-4c12-a1d0-858ffe5015d0 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_2C_4RAM_256GB (a0b1c075-51c9-4a42-b34c-308f3993bb7e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 4 GB, 256 GB (a0b1c075-51c9-4a42-b34c-308f3993bb7e) |
+| Windows 365 Business 2 vCPU 4 GB 64 GB | CPC_B_2C_4RAM_64GB | 42e6818f-8966-444b-b7ac-0027c83fa8b5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_2C_4RAM_64GB (a790cd6e-a153-4461-83c7-e127037830b6) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 4 GB, 64 GB (a790cd6e-a153-4461-83c7-e127037830b6) |
+| Windows 365 Business 2 vCPU 8 GB 128 GB | CPC_B_2C_8RAM_128GB | 71f21848-f89b-4aaa-a2dc-780c8e8aac5b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_SS_2 (9d2eed2c-b0c0-4a89-940c-bc303444a41b) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 8 GB, 128 GB (9d2eed2c-b0c0-4a89-940c-bc303444a41b) |
+| Windows 365 Business 2 vCPU 8 GB 256 GB | CPC_B_2C_8RAM_256GB | 750d9542-a2f8-41c7-8c81-311352173432 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_2C_8RAM_256GB (1a3ef005-2ef6-434b-8be1-faa56c892854) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 8 GB, 256 GB (1a3ef005-2ef6-434b-8be1-faa56c892854) |
+| Windows 365 Business 4 vCPU 16 GB 128 GB | CPC_B_4C_16RAM_128GB | ad83ac17-4a5a-4ebb-adb2-079fb277e8b9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_4C_16RAM_128GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 4 vCPU, 16 GB, 128 GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) |
+| Windows 365 Business 4 vCPU 16 GB 128 GB (with Windows Hybrid Benefit) | CPC_B_4C_16RAM_128GB_WHB | 439ac253-bfbc-49c7-acc0-6b951407b5ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_B_4C_16RAM_128GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Business 4 vCPU, 16 GB, 128 GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) |
+| Windows 365 Business 4 vCPU 16 GB 256 GB | CPC_B_4C_16RAM_256GB | b3891a9f-c7d9-463c-a2ec-0b2321bda6f9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_4C_16RAM_256GB (30f6e561-8805-41d0-80ce-f82698b72d7d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 4 vCPU, 16 GB, 256 GB (30f6e561-8805-41d0-80ce-f82698b72d7d) |
+| Windows 365 Business 4 vCPU 16 GB 512 GB | CPC_B_4C_16RAM_512GB | 1b3043ad-dfc6-427e-a2c0-5ca7a6c94a2b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_4C_16RAM_512GB (15499661-b229-4a1f-b0f9-bd5832ef7b3e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 4 vCPU, 16 GB, 512 GB (15499661-b229-4a1f-b0f9-bd5832ef7b3e) |
+| Windows 365 Business 8 vCPU 32 GB 128 GB | CPC_B_8C_32RAM_128GB | 3cb45fab-ae53-4ff6-af40-24c1915ca07b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_8C_32RAM_128GB (648005fc-b330-4bd9-8af6-771f28958ac0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 8 vCPU, 32 GB, 128 GB (648005fc-b330-4bd9-8af6-771f28958ac0) |
+| Windows 365 Business 8 vCPU 32 GB 256 GB | CPC_B_8C_32RAM_256GB | fbc79df2-da01-4c17-8d88-17f8c9493d8f | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_8C_32RAM_256GB (d7a5113a-0276-4dc2-94f8-ca9f2c5ae078) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 8 vCPU, 32 GB, 256 GB (d7a5113a-0276-4dc2-94f8-ca9f2c5ae078) |
+| Windows 365 Business 8 vCPU 32 GB 512 GB | CPC_B_8C_32RAM_512GB | 8ee402cd-e6a8-4b67-a411-54d1f37a2049 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_8C_32RAM_512GB (4229a0b4-7f34-4835-b068-6dc8d10be57c) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 8 vCPU, 32 GB, 512 GB (4229a0b4-7f34-4835-b068-6dc8d10be57c) |
+| Windows 365 Enterprise 1 vCPU 2 GB 64 GB | CPC_E_1C_2GB_64GB | 0c278af4-c9c1-45de-9f4b-cd929e747a2c | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_1C_2GB_64GB (86d70dbb-d4c6-4662-ba17-3014204cbb28) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 1 vCPU, 2 GB, 64 GB (86d70dbb-d4c6-4662-ba17-3014204cbb28) |
+| Windows 365 Enterprise 2 vCPU 4 GB 128 GB | CPC_E_2C_4GB_128GB | 226ca751-f0a4-4232-9be5-73c02a92555e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_1 (545e3611-3af8-49a5-9a0a-b7867968f4b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 128 GB (545e3611-3af8-49a5-9a0a-b7867968f4b0) |
+| Windows 365 Enterprise 2 vCPU 4 GB 256 GB | CPC_E_2C_4GB_256GB | 5265a84e-8def-4fa2-ab4b-5dc278df5025 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_2C_4GB_256GB (0d143570-9b92-4f57-adb5-e4efcd23b3bb) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 256 GB (0d143570-9b92-4f57-adb5-e4efcd23b3bb) |
+| Windows 365 Enterprise 2 vCPU 4 GB 64 GB | CPC_E_2C_4GB_64GB | 7bb14422-3b90-4389-a7be-f1b745fc037f | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_2C_4GB_64GB (23a25099-1b2f-4e07-84bd-b84606109438) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 64 GB (23a25099-1b2f-4e07-84bd-b84606109438) |
+| Windows 365 Enterprise 2 vCPU 8 GB 128 GB | CPC_E_2C_8GB_128GB | e2aebe6c-897d-480f-9d62-fff1381581f7 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_2 (3efff3fe-528a-4fc5-b1ba-845802cc764f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (3efff3fe-528a-4fc5-b1ba-845802cc764f) |
+| Windows 365 Enterprise 2 vCPU 8 GB 256 GB | CPC_E_2C_8GB_256GB | 1c79494f-e170-431f-a409-428f6053fa35 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_2C_8GB_256GB (d3468c8c-3545-4f44-a32f-b465934d2498) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 256 GB (d3468c8c-3545-4f44-a32f-b465934d2498) |
+| Windows 365 Enterprise 4 vCPU 16 GB 128 GB | CPC_E_4C_16GB_128GB | d201f153-d3b2-4057-be2f-fe25c8983e6f | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_128GB (2de9c682-ca3f-4f2b-b360-dfc4775db133) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 128 GB (2de9c682-ca3f-4f2b-b360-dfc4775db133) |
+| Windows 365 Enterprise 4 vCPU 16 GB 256 GB | CPC_E_4C_16GB_256GB | 96d2951e-cb42-4481-9d6d-cad3baac177e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_256GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) |
+| Windows 365 Enterprise 4 vCPU 16 GB 512 GB | CPC_E_4C_16GB_512GB | 0da63026-e422-4390-89e8-b14520d7e699 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_512GB (3bba9856-7cf2-4396-904a-00de74fba3a4) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 512 GB (3bba9856-7cf2-4396-904a-00de74fba3a4) |
+| Windows 365 Enterprise 8 vCPU 32 GB 128 GB | CPC_E_8C_32GB_128GB | c97d00e4-0c4c-4ec2-a016-9448c65de986 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_8C_32GB_128GB (2f3cdb12-bcde-4e37-8529-e9e09ec09e23) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 8 vCPU, 32 GB, 128 GB (2f3cdb12-bcde-4e37-8529-e9e09ec09e23) |
+| Windows 365 Enterprise 8 vCPU 32 GB 256 GB | CPC_E_8C_32GB_256GB | 7818ca3e-73c8-4e49-bc34-1276a2d27918 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_8C_32GB_256GB (69dc175c-dcff-4757-8389-d19e76acb45d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 8 vCPU, 32 GB, 256 GB (69dc175c-dcff-4757-8389-d19e76acb45d) |
+| Windows 365 Enterprise 8 vCPU 32 GB 512 GB | CPC_E_8C_32GB_512GB | 9fb0ba5f-4825-4e84-b239-5167a3a5d4dc | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_8C_32GB_512GB (0e837228-8250-4047-8a80-d4a34ba11658) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 8 vCPU, 32 GB, 512 GB (0e837228-8250-4047-8a80-d4a34ba11658) |
+| Windows 365 Enterprise 2 vCPU 4 GB 128 GB (Preview) | CPC_LVL_1 | bce09f38-1800-4a51-8d50-5486380ba84a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_1 (545e3611-3af8-49a5-9a0a-b7867968f4b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 128 GB (545e3611-3af8-49a5-9a0a-b7867968f4b0) |
+| Windows 365 Shared Use 2 vCPU 4 GB 64 GB | Windows_365_S_2vCPU_4GB_64GB | 1f9990ca-45d9-4c8d-8d04-a79241924ce1 | CPC_S_2C_4GB_64GB (64981bdb-a5a6-4a22-869f-a9455366d5bc) | Windows 365 Shared Use 2 vCPU, 4 GB, 64 GB (64981bdb-a5a6-4a22-869f-a9455366d5bc) |
+| Windows 365 Shared Use 2 vCPU 4 GB 128 GB | Windows_365_S_2vCPU_4GB_128GB | 90369797-7141-4e75-8f5e-d13f4b6092c1 | CPC_S_2C_4GB_128GB (51855c77-4d2e-4736-be67-6dca605f2b57) | Windows 365 Shared Use 2 vCPU, 4 GB, 128 GB (51855c77-4d2e-4736-be67-6dca605f2b57) |
+| Windows 365 Shared Use 2 vCPU 4 GB 256 GB | Windows_365_S_2vCPU_4GB_256GB | 8fe96593-34d3-49bb-aeee-fb794fed0800 | CPC_S_2C_4GB_256GB (aa8fbe7b-695c-4c05-8d45-d1dddf6f7616) | Windows 365 Shared Use 2 vCPU, 4 GB, 256 GB (aa8fbe7b-695c-4c05-8d45-d1dddf6f7616) |
+| Windows 365 Shared Use 2 vCPU 8 GB 128 GB | Windows_365_S_2vCPU_8GB_128GB | 2d21fc84-b918-491e-ad84-e24d61ccec94 | CPC_S_2C_8GB_128GB (057efbfe-a95d-4263-acb0-12b4a31fed8d) | Windows 365 for Shared Use 2 vCPU, 8 GB, 128 GB (057efbfe-a95d-4263-acb0-12b4a31fed8d) |
+| Windows 365 Shared Use 2 vCPU 8 GB 256 GB | Windows_365_S_2vCPU_8GB_256GB | 2eaa4058-403e-4434-9da9-ea693f5d96dc | CPC_S_2C_8GB_256GB (50ef7026-6174-40ba-bff7-f0e4fcddbf65) | Windows 365 for Shared Use 2 vCPU, 8 GB, 256 GB (50ef7026-6174-40ba-bff7-f0e4fcddbf65) |
+| Windows 365 Shared Use 4 vCPU 16 GB 128 GB | Windows_365_S_4vCPU_16GB_128GB | 1bf40e76-4065-4530-ac37-f1513f362f50 | CPC_S_4C_16GB_128GB (dd3801e2-4aa1-4b16-a44b-243e55497584) | Windows 365 Shared Use 4 vCPU, 16 GB, 128 GB (dd3801e2-4aa1-4b16-a44b-243e55497584) |
+| Windows 365 Shared Use 4 vCPU 16 GB 256 GB | Windows_365_S_4vCPU_16GB_256GB | a9d1e0df-df6f-48df-9386-76a832119cca | CPC_S_4C_16GB_256GB (2d1d344e-d10c-41bb-953b-b3a47521dca0) | Windows 365 Shared Use 4 vCPU, 16 GB, 256 GB (2d1d344e-d10c-41bb-953b-b3a47521dca0) |
+| Windows 365 Shared Use 4 vCPU 16 GB 512 GB | Windows_365_S_4vCPU_16GB_512GB | 469af4da-121c-4529-8c85-9467bbebaa4b | CPC_S_4C_16GB_512GB (48b82071-99a5-4214-b493-406a637bd68d) | Windows 365 Shared Use 4 vCPU, 16 GB, 512 GB (48b82071-99a5-4214-b493-406a637bd68d) |
+| Windows 365 Shared Use 8 vCPU 32 GB 128 GB | Windows_365_S_8vCPU_32GB_128GB | f319c63a-61a9-42b7-b786-5695bc7edbaf | CPC_S_8C_32GB_128GB (e4dee41f-a5c5-457d-b7d3-c309986fdbb2) | Windows 365 Shared Use 8 vCPU, 32 GB, 128 GB (e4dee41f-a5c5-457d-b7d3-c309986fdbb2) |
+| Windows 365 Shared Use 8 vCPU 32 GB 256 GB | Windows_365_S_8vCPU_32GB_256GB | fb019e88-26a0-4218-bd61-7767d109ac26 | CPC_S_8C_32GB_256GB (1e2321a0-f81c-4d43-a0d5-9895125706b8) | Windows 365 Shared Use 8 vCPU, 32 GB, 256 GB (1e2321a0-f81c-4d43-a0d5-9895125706b8) |
+| Windows 365 Shared Use 8 vCPU 32 GB 512 GB | Windows_365_S_8vCPU_32GB_512GB | f4dc1de8-8c94-4d37-af8a-1fca6675590a | CPC_S_8C_32GB_512GB (fa0b4021-0f60-4d95-bf68-95036285282a) | Windows 365 Shared Use 8 vCPU, 32 GB, 512 GB (fa0b4021-0f60-4d95-bf68-95036285282a) |
| Windows Store for Business | WINDOWS_STORE | 6470687e-a428-4b7a-bef2-8a291ad947c9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS STORE SERVICE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | | Windows Store for Business EDU Faculty | WSFB_EDU_FACULTY | c7e9d9e6-1981-4bf3-bb50-a5bdfaa06fb2 | Windows Store for Business EDU Store_faculty (aaa2cd24-5519-450f-a1a0-160750710ca1) | Windows Store for Business EDU Store_faculty (aaa2cd24-5519-450f-a1a0-160750710ca1) |
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
Last updated 08/30/2022
Developers can use Azure AD business-to-business APIs to customize the invitatio
## Collaborate with any partner using their identities
-With Azure AD B2B, the partner uses their own identity management solution, so there is no external administrative overhead for your organization. Guest users sign in to your apps and services with their own work, school, or social identities.
+With Azure AD B2B, the partner uses their own identity management solution, so there's no external administrative overhead for your organization. Guest users sign in to your apps and services with their own work, school, or social identities.
- The partner uses their own identities and credentials, whether or not they have an Azure AD account. - You don't need to manage external accounts or passwords.
B2B collaboration is enabled by default, but comprehensive admin settings let yo
- Use [external collaboration settings](external-collaboration-settings-configure.md) to define who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory. -- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and Microsoft Azure Government or Microsoft Azure China 21Vianet.
+- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and [Microsoft Azure Government](/azure/azure-government) or [Microsoft Azure China 21Vianet](/azure/china).
## Easily invite guest users from the Azure AD portal
As an administrator, you can easily add guest users to your organization in the
- [Create a new guest user](b2b-quickstart-add-guest-users-portal.md) in Azure AD, similar to how you'd add a new user. - Assign guest users to apps or groups.-- Send an invitation email that contains a redemption link, or send a direct link to an app you want to share.
+- [Send an invitation email](invitation-email-elements.md) that contains a redemption link, or send a direct link to an app you want to share.
-![Screenshot showing the New Guest User invitation entry page.](media/what-is-b2b/add-a-b2b-user-to-azure-portal.png)
- Guest users follow a few simple [redemption steps](redemption-experience.md) to sign in.
-![Screenshot showing the Review permissions page.](media/what-is-b2b/consentscreen.png)
## Allow self-service sign-up
With a self-service sign-up user flow, you can create a sign-up experience for e
You can also use [API connectors](api-connectors-overview.md) to integrate your self-service sign-up user flows with external cloud systems. You can connect with custom approval workflows, perform identity verification, validate user-provided information, and more.
-![Screenshot showing the user flows page.](media/what-is-b2b/self-service-sign-up-user-flow-overview.png)
## Use policies to securely share your apps and services
You can use authentication and authorization policies to protect your corporate
- At the application level. - For specific guest users to protect corporate apps and data.
-![Screenshot showing the Conditional Access option.](media/what-is-b2b/tutorial-mfa-policy-2.png)
## Let application and group owners manage their own guest users
You can delegate guest user management to application owners so that they can ad
- Administrators set up self-service app and group management. - Non-administrators use their [Access Panel](https://myapps.microsoft.com) to add guest users to applications or groups.
-![Screenshot showing the Access panel for a guest user.](media/what-is-b2b/access-panel-manage-app.png)
## Customize the onboarding experience for B2B guest users
Bring your external partners on board in ways customized to your organization's
## Integrate with Identity providers
-Azure AD supports external identity providers like Facebook, Microsoft accounts, Google, or enterprise identity providers. You can set up federation with identity providers so your external users can sign in with their existing social or enterprise accounts instead of creating a new account just for your application. Learn more about [identity providers for External Identities](identity-providers.md).
+Azure AD supports external identity providers like Facebook, Microsoft accounts, Google, or enterprise identity providers. You can set up federation with identity providers. This way, your external users can sign in with their existing social or enterprise accounts instead of creating a new account just for your application. Learn more about [identity providers for External Identities](identity-providers.md).
-![Screenshot showing the Identity providers page.](media/what-is-b2b/identity-providers.png)
## Integrate with SharePoint and OneDrive
active-directory 10 Secure Local Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/10-secure-local-guest.md
Azure Active Directory B2B (Azure AD B2B) allows external users to collaborate using their own identities. However, it isn't uncommon for organizations to issue local usernames and passwords to external users. This approach isn't recommended: the bring-your-own-identity (BYOI) capabilities provided by Azure AD B2B offer better security, lower cost, and reduced complexity compared to local account creation. Learn more
-[here.](https://learn.microsoft.com/azure/active-directory/fundamentals/secure-external-access-resources)
+[here.](/azure/active-directory/fundamentals/secure-external-access-resources)
If your organization currently issues local credentials that external users have to manage, and would like to migrate to Azure AD B2B instead, this document provides a guide to make the transition as seamless as possible.
If your organization currently issues local credentials that external users have
Before migrating local accounts to Azure AD B2B, admins should understand what applications and workloads these external users need to access. For example, if external users need access to an application that is hosted on-premises, admins will need to validate that the application is integrated with Azure AD and that a provisioning process is implemented to provision the user from Azure AD to the application. The existence and use of on-premises applications could be a reason why local accounts are created in the first place. Learn more about [provisioning B2B guests to on-premises
-applications.](https://learn.microsoft.com/azure/active-directory/external-identities/hybrid-cloud-to-on-premises)
+applications.](/azure/active-directory/external-identities/hybrid-cloud-to-on-premises)
All external-facing applications should have single sign-on (SSO) and provisioning integrated with Azure AD for the best end-user experience.
External users should be notified that the migration will be taking place and wh
## Migrate local guest accounts to Azure AD B2B
-Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account.](https://learn.microsoft.com/azure/active-directory/external-identities/invite-internal-users)
+Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account.](/azure/active-directory/external-identities/invite-internal-users)
This can be done in the UX or programmatically via PowerShell or the Microsoft Graph API. Once complete, the users will no longer authenticate with their local password, but will instead authenticate with their home identity/email that was populated in the user.mail attribute. You've successfully migrated to Azure AD B2B.
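For the Microsoft Graph route, a hedged sketch of the invitation call might look like the following; the id and email values are placeholders, and the linked article remains the authoritative reference for the exact request shape:

```javascript
// Sketch: invite an existing local account so it redeems as a B2B identity.
// userId and externalEmail are placeholders supplied by the caller.
async function inviteExistingLocalAccount(accessToken, userId, externalEmail) {
  const response = await fetch('https://graph.microsoft.com/v1.0/invitations', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      invitedUserEmailAddress: externalEmail, // the user.mail value populated earlier
      inviteRedirectUrl: 'https://myapps.microsoft.com',
      sendInvitationMessage: false,
      invitedUser: { id: userId },            // ties the invitation to the existing account
    }),
  });
  if (!response.ok) throw new Error(`Graph request failed: ${response.status}`);
  return response.json();
}
```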
active-directory Automate Provisioning To Applications Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-solutions.md
The Azure AD provisioning service enables organizations to [bring identities fro
### On-premises HR + joining multiple data sources
-To create a full user profile for an employee identity, organizations often merge information from multiple HR systems, databases, and other user data stores. MIM provides a rich set of [connectors](https://learn.microsoft.com/microsoft-identity-manager/supported-management-agents) and integration solutions interoperating with heterogeneous platforms both on-premises and in the cloud.
+To create a full user profile for an employee identity, organizations often merge information from multiple HR systems, databases, and other user data stores. MIM provides a rich set of [connectors](/microsoft-identity-manager/supported-management-agents) and integration solutions interoperating with heterogeneous platforms both on-premises and in the cloud.
MIM offers [rule extension](/previous-versions/windows/desktop/forefront-2010/ms698810(v=vs.100)?redirectedfrom=MSDN) and [workflow capabilities](https://microsoft.github.io/MIMWAL/) features for advanced scenarios requiring data transformation and consolidation from multiple sources. These connectors, rule extensions, and workflow capabilities enable organizations to aggregate user data in the MIM metaverse to form a single identity for each user. The identity can be [provisioned into downstream systems](/microsoft-identity-manager/microsoft-identity-manager-2016-supported-platforms) such as AD DS.
Use the numbered sections in the next two section to cross reference the followi
As customers transition identity management to the cloud, more users and groups are created directly in Azure AD. However, they still need a presence on-premises in AD DS to access various resources.
-3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](https://learn.microsoft.com/azure/active-directory/external-identities/hybrid-cloud-to-on-premises). Alternatively, customers can user [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises.
+3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](/azure/active-directory/external-identities/hybrid-cloud-to-on-premises). Alternatively, customers can use [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises.
1. When a group is created in Azure AD, it can be automatically synchronized to AD DS using [Azure AD Connect sync](../hybrid/how-to-connect-group-writeback-v2.md).
As customers transition identity management to the cloud, more users and groups
|No.| What | From | To | Technology | | - | - | - | - | - |
-| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](https://learn.microsoft.com/azure/active-directory/cloud-sync/what-is-cloud-sync) |
-| 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](https://learn.microsoft.com/azure/active-directory/hybrid/whatis-azure-ad-connect) |
+| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](/azure/active-directory/cloud-sync/what-is-cloud-sync) |
+| 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](/azure/active-directory/hybrid/whatis-azure-ad-connect) |
| 3 |Groups| Azure AD| AD DS| [Azure AD Connect Sync](../hybrid/how-to-connect-group-writeback-v2.md) | | 4 |Guest accounts| Azure AD| AD DS| [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario), [PowerShell](https://github.com/Azure-Samples/B2B-to-AD-Sync)| | 5 |Users, groups| Azure AD| Managed AD| [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) |
After users are provisioned into Azure AD, use Lifecycle Workflows (LCW) to auto
* **Leaver**: When users leave the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner.
-[Learn more about Azure AD Lifecycle Workflows](https://learn.microsoft.com/azure/active-directory/governance/what-are-lifecycle-workflows)
+[Learn more about Azure AD Lifecycle Workflows](/azure/active-directory/governance/what-are-lifecycle-workflows)
> [!Note] > For scenarios not covered by LCW, customers can leverage the extensibility of [Logic Applications](../..//logic-apps/logic-apps-overview.md).
active-directory Secure With Azure Ad Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-multiple-tenants.md
Another approach could have been to utilize the capabilities of Azure AD Connect
## Multi-tenant resource isolation
-A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Intune or Microsoft Endpoint Manager. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Intune or Microsoft Endpoint Manager. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
This will allow users to continue to use their corporate credentials, while achieving the benefits of separation as described above.
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
Subscriptions that enable [delegated resource management](../../lighthouse/conce
It's worth noting that Azure Lighthouse itself is modeled as an Azure resource provider, which means that aspects of the delegation across a tenant can be targeted through Azure Policies.
-**Microsoft 365 Lighthouse** - [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps Managed Service Providers (MSPs) secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+**Microsoft 365 Lighthouse** - [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps Managed Service Providers (MSPs) secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
## Azure resource management with Azure AD
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
If you are using the Azure portal to create a workflow, you can customize existi
1. On the **configure scope** page select the **Trigger type** and execution conditions to be used for this workflow. For more information on what can be configured, see: [Configure scope](understanding-lifecycle-workflows.md#configure-scope).
-1. Under rules, select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
+1. Under rules, select the **Property** and **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
:::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of Lifecycle Workflows template scope configuration options.":::
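The same scope surfaces in the Microsoft Graph beta API as a `ruleBasedSubjectSet`. As a hedged sketch of the shape only (the linked reference is authoritative; the trigger values are illustrative), the execution conditions for the sales example might look like:

```javascript
// Hedged sketch of the executionConditions shape in the Lifecycle Workflows
// beta API; consult the linked Graph reference for the exact schema.
const executionConditions = {
  '@odata.type': '#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions',
  scope: {
    '@odata.type': '#microsoft.graph.identityGovernance.ruleBasedSubjectSet',
    rule: "(department eq 'Sales')", // the Property, Operator, and value from the rule builder
  },
  trigger: {
    '@odata.type': '#microsoft.graph.identityGovernance.timeBasedAttributeTrigger',
    timeBasedAttribute: 'employeeHireDate',
    offsetInDays: -7, // illustrative: run seven days before the hire date
  },
};
```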
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
After deleting workflows, you can view them on the **Deleted Workflows (Preview)
## Delete a workflow using Microsoft Graph
-To delete a workflow using API via Microsoft Graph, see: [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta).
+To delete a workflow using API via Microsoft Graph, see: [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta&preserve-view=true).
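As a minimal sketch of that call (the workflow id is a placeholder):

```javascript
// Sketch: delete a lifecycle workflow by id through the Graph beta endpoint.
async function deleteWorkflow(accessToken, workflowId) {
  const response = await fetch(
    `https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/${workflowId}`,
    { method: 'DELETE', headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!response.ok) throw new Error(`Graph request failed: ${response.status}`);
}
```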
To view
active-directory Identity Governance Applications Existing Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-existing-users.md
The first time your organization uses these cmdlets for this scenario, you need
1. If there were users who couldn't be located in Azure AD, or weren't active and able to sign in, but you want to have their access reviewed or their attributes updated in the database, you need to update or create Azure AD users for them. You can create users in bulk by using either: - A CSV file, as described in [Bulk create users in the Azure AD portal](../enterprise-users/users-bulk-add.md)
- - The [New-MgUser](/powershell/module/microsoft.graph.users/new-mguser?view=graph-powershell-1.0#examples) cmdlet
+ - The [New-MgUser](/powershell/module/microsoft.graph.users/new-mguser?view=graph-powershell-1.0&preserve-view=true#examples) cmdlet
Ensure that these new users are populated with the attributes required for Azure AD to later match them to the existing users in the application.
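Users can also be created one at a time through Microsoft Graph. A hedged sketch follows; every value shown is a placeholder, and you would populate whatever attributes your matching step needs:

```javascript
// Sketch: create a user via Microsoft Graph so it can later be matched to
// the application's existing account. All values are placeholders.
async function createUser(accessToken) {
  const response = await fetch('https://graph.microsoft.com/v1.0/users', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      accountEnabled: true,
      displayName: 'Sample User',
      mailNickname: 'sampleuser',
      userPrincipalName: 'sampleuser@contoso.com',
      passwordProfile: {
        forceChangePasswordNextSignIn: true,
        password: 'Replace-with-a-generated-password!1',
      },
    }),
  });
  if (!response.ok) throw new Error(`Graph request failed: ${response.status}`);
  return response.json();
}
```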
active-directory Lifecycle Workflow Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md
For a guide on supplying this information to a custom task extension via Microso
## Next steps -- [customTaskExtension resource type](/graph/api/resources/identitygovernance-customtaskextension?view=graph-rest-beta)
+- [customTaskExtension resource type](/graph/api/resources/identitygovernance-customtaskextension?view=graph-rest-beta&preserve-view=true)
- [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md) - [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md)
active-directory Lifecycle Workflow History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-history.md
Separating processing of the workflow from the tasks is important because, in a
## Next steps -- [userProcessingResult resource type](/graph/api/resources/identitygovernance-userprocessingresult?view=graph-rest-beta)-- [taskReport resource type](/graph/api/resources/identitygovernance-taskreport?view=graph-rest-beta)-- [run resource type](/graph/api/resources/identitygovernance-run?view=graph-rest-beta)-- [taskProcessingResult resource type](/graph/api/resources/identitygovernance-taskprocessingresult?view=graph-rest-beta)
+- [userProcessingResult resource type](/graph/api/resources/identitygovernance-userprocessingresult?view=graph-rest-beta&preserve-view=true)
+- [taskReport resource type](/graph/api/resources/identitygovernance-taskreport?view=graph-rest-beta&preserve-view=true)
+- [run resource type](/graph/api/resources/identitygovernance-run?view=graph-rest-beta&preserve-view=true)
+- [taskProcessingResult resource type](/graph/api/resources/identitygovernance-taskprocessingresult?view=graph-rest-beta&preserve-view=true)
- [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md) - [Lifecycle Workflow templates](lifecycle-workflow-templates.md)
active-directory Lifecycle Workflow Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md
The default specific parameters for the **Post-Offboarding of an employee** temp
## Next steps -- [workflowTemplate resource type](/graph/api/resources/identitygovernance-workflowtemplate?view=graph-rest-beta)
+- [workflowTemplate resource type](/graph/api/resources/identitygovernance-workflowtemplate?view=graph-rest-beta&preserve-view=true)
- [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md) - [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory Lifecycle Workflow Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-versioning.md
Detailed **Version information** is as follows:
## Next steps -- [workflowVersion resource type](/graph/api/resources/identitygovernance-workflowversion?view=graph-rest-beta)
+- [workflowVersion resource type](/graph/api/resources/identitygovernance-workflowversion?view=graph-rest-beta&preserve-view=true)
- [Manage workflow Properties (Preview)](manage-workflow-properties.md) - [Manage workflow versions (Preview)](manage-workflow-tasks.md)
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
Use the following steps to create a pre-hire workflow that will generate a TAP a
:::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png":::
- 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
+ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
:::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png":::
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
Use the following steps to create a scheduled leaver workflow that will configur
7. Next, you will configure the basic information about the workflow. This information includes when the workflow will trigger, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**. :::image type="content" source="media/tutorial-lifecycle-workflows/leaver-basics.png" alt-text="Screenshot of leaver template basics information for a workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-basics.png":::
- 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
+ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
:::image type="content" source="media/tutorial-lifecycle-workflows/leaver-scope.png" alt-text="Screenshot of reviewing scope details for a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-scope.png"::: 9. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you are finished.
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
You can add extra expressions using **And/Or** to create complex conditionals, a
[![Extra expressions.](media/understanding-lifecycle-workflows/workflow-8.png)](media/understanding-lifecycle-workflows/workflow-8.png#lightbox) > [!NOTE]
-> For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
+> For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
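For example, a compound rule that scopes a workflow to sales employees in a single city might look like the following (a sketch; check the property list in the reference above):

```
(department eq 'Sales') and (city eq 'Seattle')
```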
For more information, see [Create a lifecycle workflow.](create-lifecycle-workflow.md)
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
# Configure group claims for applications by using Azure Active Directory
-Azure Active Directory (Azure AD) can provide a user's group membership information in tokens for use within applications. This feature supports two main patterns:
+Azure Active Directory (Azure AD) can provide a user's group membership information in tokens for use within applications. This feature supports three main patterns:
- Groups identified by their Azure AD object identifier (OID) attribute - Groups identified by the `sAMAccountName` or `GroupSID` attribute for Active Directory-synchronized groups and users
+- Groups identified by their Display Name attribute for cloud-only groups (Preview)
> [!IMPORTANT] > The number of groups emitted in a token is limited to 150 for SAML assertions and 200 for JWT, including nested groups. In larger organizations, the number of groups where a user is a member might exceed the limit that Azure AD will add to a token. Exceeding a limit can lead to unpredictable results. For workarounds to these limits, read more in [Important caveats for this functionality](#important-caveats-for-this-functionality).
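For orientation, here's a minimal sketch of how group membership typically surfaces in a decoded JWT payload when groups are emitted by object ID (both GUIDs are illustrative placeholders):

```json
{
  "aud": "11112222-3333-4444-5555-666677778888",
  "groups": [
    "aaaabbbb-1111-2222-3333-444455556666",
    "ccccdddd-7777-8888-9999-000011112222"
  ]
}
```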
Azure Active Directory (Azure AD) can provide a user's group membership informat
## Important caveats for this functionality - Support for use of `sAMAccountName` and security identifier (SID) attributes synced from on-premises is designed to enable moving existing applications from Active Directory Federation Services (AD FS) and other identity providers. Groups managed in Azure AD don't contain the attributes necessary to emit these claims.-- In order to avoid the number of groups limit if your users have large numbers of group memberships, you can restrict the groups emitted in claims to the relevant groups for the application. Read more about emitting groups assigned to the application for [JWT tokens](..\develop\active-directory-optional-claims.md#configuring-groups-optional-claims) and [SAML tokens](#add-group-claims-to-tokens-for-saml-applications-using-sso-configuration). If assigning groups to your applications is not possible, you can also configure a [group filter](#group-filtering) to reduce the number of groups emitted in the claim. Group filtering applies to tokens emitted for apps where group claims and filtering was configured in the **Enterprise apps** blade in the portal.
+- In order to avoid the number of groups limit if your users have large numbers of group memberships, you can restrict the groups emitted in claims to the relevant groups for the application. Read more about emitting groups assigned to the application for [JWT tokens](..\develop\active-directory-optional-claims.md#configuring-groups-optional-claims) and [SAML tokens](#add-group-claims-to-tokens-for-saml-applications-using-sso-configuration). If assigning groups to your applications is not possible, you can also configure a [group filter](#group-filtering) to reduce the number of groups emitted in the claim. Group filtering applies to tokens emitted for apps where group claims and filtering were configured in the **Enterprise apps** blade in the portal.
- Group claims have a five-group limit if the token is issued through the implicit flow. Tokens requested via the implicit flow will have a `"hasgroups":true` claim only if the user is in more than five groups. - We recommend basing in-app authorization on application roles rather than groups when:
To configure group claims for a gallery or non-gallery SAML application via sing
1. Open **Enterprise Applications**, select the application in the list, select **Single Sign On configuration**, and then select **User Attributes & Claims**.
-1. Select **Add a group claim**.
+2. Select **Add a group claim**.
![Screenshot that shows the page for user attributes and claims, with the button for adding a group claim selected.](media/how-to-connect-fed-group-claims/group-claims-ui-1.png)
-1. Use the options to select which groups should be included in the token.
+3. Use the options to select which groups should be included in the token.
![Screenshot that shows the Group Claims window with group options.](media/how-to-connect-fed-group-claims/group-claims-ui-2.png)
To configure group claims for a gallery or non-gallery SAML application via sing
For more information about managing group assignment to applications, see [Assign a user or group to an enterprise app](../../active-directory/manage-apps/assign-user-or-group-access-portal.md).
+## Emit cloud-only group display name in token (Preview)
+
+You can configure the group claim to include the group display name for cloud-only groups.
+
+1. Open **Enterprise Applications**, select the application in the list, select **Single Sign On configuration**, and then select **User Attributes & Claims**.
+
+2. If you already have a group claim configured, select it from the **Additional claims** section. Otherwise, you can add the group claim as described in the previous steps.
+
+3. For the group type emitted in the token, select **Groups assigned to the application**:
+
+ ![Screenshot that shows the Group Claims window, with the option for groups assigned to the application selected.](media/how-to-connect-fed-group-claims/group-claims-ui-4-1.png)
+
+4. To emit the group display name only for cloud groups, select **Cloud-only group display names (Preview)** in the **Source attribute** dropdown:
+
+ ![Screenshot that shows the Group Claims source attribute dropdown, with the option for configuring cloud only group names selected.](media/how-to-connect-fed-group-claims/group-claims-ui-8.png)
+
+5. For a hybrid setup, to emit the on-premises group attribute for synced groups and the display name for cloud groups, select the desired on-premises source attribute and select the **Emit group name for cloud-only groups (Preview)** checkbox:
+
+ ![Screenshot that shows the configuration to emit on-premises group attribute for synced groups and display name for cloud groups.](media/how-to-connect-fed-group-claims/group-claims-ui-9.png)
++ ### Set advanced options #### Customize group claim name
You can also configure group claims in the [optional claims](../../active-direct
| `name` | Must be `"groups"`. | | `source` | Not used. Omit or specify `null`. | | `essential` | Not used. Omit or specify `false`. |
- | `additionalProperties` | List of additional properties. Valid options are `"sam_account_name"`, `"dns_domain_and_sam_account_name"`, `"netbios_domain_and_sam_account_name"`, and `"emit_as_roles"`. |
+ | `additionalProperties` | List of additional properties. Valid options are `"sam_account_name"`, `"dns_domain_and_sam_account_name"`, `"netbios_domain_and_sam_account_name"`, `"cloud_displayname"`, and `"emit_as_roles"`. |
 In `additionalProperties`, only one of `"sam_account_name"`, `"dns_domain_and_sam_account_name"`, or `"netbios_domain_and_sam_account_name"` is required. If more than one is present, the first is used and any others are ignored. Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add `"emit_as_roles"` to `additionalProperties`. The group values will then be emitted in the role claim.
+ To emit the group display name for cloud-only groups, add `"cloud_displayname"` to `additionalProperties`. This option works only when `groupMembershipClaims` is set to `ApplicationGroup`.
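Putting these settings together, here's a minimal application-manifest sketch that emits cloud-only group display names in a SAML token (per the table above; `groupMembershipClaims` must be `ApplicationGroup` for `cloud_displayname` to take effect):

```json
{
  "groupMembershipClaims": "ApplicationGroup",
  "optionalClaims": {
    "saml2Token": [
      {
        "name": "groups",
        "additionalProperties": ["cloud_displayname"]
      }
    ]
  }
}
```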
+ > [!NOTE] > If you use `"emit_as_roles"`, any configured application roles that the user is assigned to will not appear in the role claim.
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Group writeback allows you to write cloud groups back to your on-premises Active Directory instance by using Azure Active Directory (Azure AD) Connect sync. You can use this feature to manage groups in the cloud, while controlling access to on-premises applications and resources. > [!NOTE]
-> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
+> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory instance as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities:
To view the existing writeback settings on Microsoft 365 groups in the portal, g
[![Screenshot of Microsoft 365 group properties.](media/how-to-connect-group-writeback/group-2.png)](media/how-to-connect-group-writeback/group-2.png#lightbox)
-You can also view the writeback state via Microsoft Graph. For more information, see [Get group](/graph/api/group-get?tabs=http&view=graph-rest-beta).
+You can also view the writeback state via Microsoft Graph. For more information, see [Get group](/graph/api/group-get?tabs=http&view=graph-rest-beta&preserve-view=true).
> Example: `GET https://graph.microsoft.com/beta/groups?$filter=groupTypes/any(c:c eq 'Unified')&$select=id,displayName,writebackConfiguration`
Finally, you can view the writeback state via PowerShell by using the [Microsof
For groups that haven't been created yet, you can view whether or not they'll be written back automatically.
-To see the default behavior in your environment for newly created groups, use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta) resource type in Microsoft Graph.
+To see the default behavior in your environment for newly created groups, use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta&preserve-view=true) resource type in Microsoft Graph.
> Example: `GET https://graph.microsoft.com/beta/Settings`
You can also use the PowerShell cmdlet [AzureADDirectorySetting](../enterprise-u
> If `directorySetting` is returned with a `NewUnifiedGroupWritebackDefault` value of `false`, Microsoft 365 groups *won't automatically* be enabled for writeback when they're created. If the value is not specified or is set to `true`, newly created Microsoft 365 groups *will automatically* be written back. ## Discover if Active Directory has been prepared for Exchange
-To verify if Active Directory has been prepared for Exchange, see [Prepare Active Directory and domains for Exchange Server](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019#how-do-you-know-this-worked).
+To verify if Active Directory has been prepared for Exchange, see [Prepare Active Directory and domains for Exchange Server](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019&preserve-view=true#how-do-you-know-this-worked).
## Meet prerequisites for public preview The following are prerequisites for group writeback:
The following are prerequisites for group writeback:
- An Azure AD Premium 1 license - Azure AD Connect version 2.0.89.0 or later
-An optional prerequisite is Exchange Server 2016 CU15 or later. You need it only for configuring cloud groups with an Exchange hybrid. For more information, see [Configure Microsoft 365 Groups with on-premises Exchange hybrid](/exchange/hybrid-deployment/set-up-microsoft-365-groups#prerequisites). If you haven't [prepared Active Directory for Exchange](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019), mail-related attributes of groups won't be written back.
+An optional prerequisite is Exchange Server 2016 CU15 or later. You need it only for configuring cloud groups with an Exchange hybrid. For more information, see [Configure Microsoft 365 Groups with on-premises Exchange hybrid](/exchange/hybrid-deployment/set-up-microsoft-365-groups#prerequisites). If you haven't [prepared Active Directory for Exchange](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019&preserve-view=true), mail-related attributes of groups won't be written back.
## Choose the right approach The right deployment approach for your organization depends on the current state of group writeback in your environment and the desired writeback behavior.
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
- Follow [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to set up alerts that monitor changes to the trust established between your IdP and Azure AD. - Enable multi-factor authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using Azure AD Connect is that if an attacker can get control over the Azure AD Connect server, they can manipulate users in Azure AD. To prevent an attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to, for example, reset a user's password using Azure AD Connect, they still cannot bypass the second factor. - Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud-managed objects to Azure AD Connect, but it comes with certain security risks. If you do not require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch).-- Disable Hard Match Takeover. Hard match takeover allows Azure AD Connect to take control of a cloud-managed object and change the source of authority for the object to Active Directory. Once the source of authority of an object is taken over by Azure AD Connect, changes made to the Active Directory object that is linked to the Azure AD object will overwrite the original Azure AD data - including the password hash, if Password Hash Sync is enabled. An attacker could use this capability to take over control of cloud-managed objects. To mitigate this risk, [disable hard match takeover](https://learn.microsoft.com/powershell/module/msonline/set-msoldirsyncfeature?view=azureadps-1.0#example-3-block-cloud-object-takeover-through-hard-matching-for-the-tenant).
+- Disable Hard Match Takeover. Hard match takeover allows Azure AD Connect to take control of a cloud-managed object and change the source of authority for the object to Active Directory. Once the source of authority of an object is taken over by Azure AD Connect, changes made to the Active Directory object that is linked to the Azure AD object will overwrite the original Azure AD data - including the password hash, if Password Hash Sync is enabled. An attacker could use this capability to take over control of cloud-managed objects. To mitigate this risk, [disable hard match takeover](/powershell/module/msonline/set-msoldirsyncfeature?view=azureadps-1.0&preserve-view=true#example-3-block-cloud-object-takeover-through-hard-matching-for-the-tenant).
### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors). * If you use a different installation of SQL Server, these requirements apply:
- * Azure AD Connect supports all mainstream supported SQL Server versions up to SQL Server 2019. Please refer to the [SQL Server lifecycle article](https://learn.microsoft.com/lifecycle/products/?products=sql-server) to verify the support status of your SQL Server version. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance.
+ * Azure AD Connect supports all mainstream supported SQL Server versions up to SQL Server 2019. Please refer to the [SQL Server lifecycle article](/lifecycle/products/?products=sql-server) to verify the support status of your SQL Server version. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance.
* You must use a case-insensitive SQL collation. These collations are identified with a \_CI_ in their name. Using a case-sensitive collation identified by \_CS_ in their name *isn't supported*. * You can have only one sync engine per SQL instance. Sharing a SQL instance with FIM/MIM Sync, DirSync, or Azure AD Sync *isn't supported*.
active-directory How To Connect Modify Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-modify-group-writeback.md
To configure directory settings to disable automatic writeback of newly created
New-AzureADDirectorySetting -DirectorySetting $Setting ``` -- Microsoft Graph: Use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta) resource type.
+- Microsoft Graph: Use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta&preserve-view=true) resource type.
### Disable writeback for each existing Microsoft 365 group
To configure directory settings to disable automatic writeback of newly created
- PowerShell: Use the [Microsoft Identity Tools PowerShell module](https://www.powershellgallery.com/packages/MSIdentityTools/2.0.16). For example: `Get-mggroup -filter "groupTypes/any(c:c eq 'Unified')" | Update-MsIdGroupWritebackConfiguration -WriteBackEnabled $false` -- Microsoft Graph: Use a [group object](/graph/api/group-update?tabs=http&view=graph-rest-beta).
+- Microsoft Graph: Use a [group object](/graph/api/group-update?tabs=http&view=graph-rest-beta&preserve-view=true).
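As a sketch of that call (the `writebackConfiguration` property is part of the beta `group` resource; the group ID is a placeholder):

```http
PATCH https://graph.microsoft.com/beta/groups/{group-id}
Content-Type: application/json

{
  "writebackConfiguration": {
    "isEnabled": false
  }
}
```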
## Delete groups when they're disabled for writeback or soft deleted
active-directory How To Connect Sync Configure Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-configure-filtering.md
To change domain-based filtering, run the installation wizard: [domain and OU fi
## Organizational unit–based filtering To change OU-based filtering, run the installation wizard: [domain and OU filtering](how-to-connect-install-custom.md#domain-and-ou-filtering). The installation wizard automates all the tasks that are documented in this topic.
+> [!IMPORTANT]
+> If you explicitly select an OU for synchronization, Azure AD Connect will add the DistinguishedName of that OU in the inclusion list for the domain's sync scope. However, if you later rename that OU in Active Directory, the DistinguishedName of the OU is changed, and consequently, Azure AD Connect will no longer consider that OU in sync scope. This will not cause an immediate issue, but upon a full import step, Azure AD Connect will reevaluate the sync scope and delete (that is, obsolete) any objects out of sync scope, which can potentially cause an unexpected mass deletion of objects in Azure AD. To prevent this issue, after renaming an OU, run the Azure AD Connect wizard and re-select the OU so that it's included in the sync scope again.
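After re-selecting the OU, you can trigger the reevaluation yourself with the cmdlet that ships with Azure AD Connect:

```powershell
# Run a full synchronization cycle (full import + full sync) so the
# updated OU scope is evaluated immediately.
Start-ADSyncSyncCycle -PolicyType Initial
```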
## Attribute-based filtering Make sure that you're using the November 2015 ([1.0.9125](reference-connect-version-history.md)) or later build for these steps to work.
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Windows 7 and 8.1 devices are not affected by this issue after UPN changes.
**Known Issues**
-Your organization may use [MAM app protection policies](https://learn.microsoft.com/mem/intune/apps/app-protection-policy) to protect corporate data in apps on end users' devices.
+Your organization may use [MAM app protection policies](/mem/intune/apps/app-protection-policy) to protect corporate data in apps on end users' devices.
MAM app protection policies are currently not resilient to UPN changes. UPN changes can break the connection between existing MAM enrollments and active users in MAM-integrated applications, resulting in undefined behavior. This could leave data in an unprotected state. **Workaround**
-IT admins should [issue a selective wipe](https://learn.microsoft.com/mem/intune/apps/apps-selective-wipe) to impacted users following UPN changes. This will force impacted end users to reauthenticate and reenroll with their new UPNs.
+IT admins should [issue a selective wipe](/mem/intune/apps/apps-selective-wipe) to impacted users following UPN changes. This will force impacted end users to reauthenticate and reenroll with their new UPNs.
## Microsoft Authenticator known issues and workarounds
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
If you want all the latest features and updates, check this page and install wha
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
+## 2.1.20.0
+
+### Release status:
+11/9/2022: Released for download
+
+### Bug fixes
+
 - We fixed a bug where the new employeeLeaveDateTime attribute was not syncing correctly in version 2.1.19.0. If the incorrect attribute was already used in a rule, the rule must be updated with the new attribute, any objects in the AAD connector space that have the incorrect attribute must be removed with the `Remove-ADSyncCSObject` cmdlet, and then a full sync cycle must be run (see the sketch below).
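A minimal sketch of that cleanup, assuming the standard `ADSync` module cmdlets (the connector name and distinguished name are placeholders):

```powershell
# Locate the affected object in the Azure AD connector space and remove it,
# then run a full sync cycle so it's re-created with the correct attribute.
$csObject = Get-ADSyncCSObject -ConnectorName "contoso.onmicrosoft.com - AAD" -DistinguishedName "<DN of the affected object>"
Remove-ADSyncCSObject -CsObject $csObject
Start-ADSyncSyncCycle -PolicyType Initial
```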
+ ## 2.1.19.0 ### Release status:
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-t
### Functional changes
 - We added a new attribute 'employeeLeaveDateTime' for syncing to Azure AD. To learn more about how to use this attribute to manage your users' lifecycles, see [this article](/azure/active-directory/governance/how-to-lifecycle-workflow-sync-attributes).
### Bug fixes
active-directory Application Sign In Unexpected User Consent Prompt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
Further prompts can be expected in various scenarios:
* The user who originally consented to the application was an administrator, but they didn't consent on-behalf of the entire organization.
-* The application is using [incremental and dynamic consent](../azuread-dev/azure-ad-endpoint-comparison.md#incremental-and-dynamic-consent) to request further permissions after consent was initially granted. Incremental and dynamic consent is often used when optional features of an application require permissions beyond those required for baseline functionality.
+* The application is using [incremental and dynamic consent](../develop/permissions-consent-overview.md#consent) to request further permissions after consent was initially granted. Incremental and dynamic consent is often used when optional features of an application require permissions beyond those required for baseline functionality.
* Consent was revoked after being granted initially.
active-directory Datawiza Azure Ad Sso Oracle Peoplesoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-peoplesoft.md
The scenario solution has the following components:
- **Oracle PeopleSoft application**: Legacy application that will be protected by Azure AD and DAB.
-Understand the SP-initiated flow by following the steps in [Datawiza and Azure AD authentication architecture](https://learn.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
+Understand the SP-initiated flow by following the steps in [Datawiza and Azure AD authentication architecture](/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
## Prerequisites
Ensure the following prerequisites are met.
- An Azure AD tenant linked to the Azure subscription.
- - See, [Quickstart: Create a new tenant in Azure Active Directory.](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
+ - See, [Quickstart: Create a new tenant in Azure Active Directory.](/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
- Docker and Docker Compose
Ensure the following prerequisites are met.
- User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory.
- - See, [Azure AD Connect sync: Understand and customize synchronization](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis).
+ - See, [Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis).
- An account with Azure AD and the Application administrator role
- - See, [Azure AD built-in roles, all roles](https://learn.microsoft.com/azure/active-directory/roles/permissions-reference#all-roles).
+ - See, [Azure AD built-in roles, all roles](/azure/active-directory/roles/permissions-reference#all-roles).
- An Oracle PeopleSoft environment
For the Oracle PeopleSoft application to recognize the user correctly, there's a
## Enable Azure AD Multi-Factor Authentication To provide an extra level of security for sign-ins, enforce multi-factor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure
-portal](https://learn.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa).
+portal](/azure/active-directory/authentication/tutorial-enable-azure-mfa).
1. Sign in to the Azure portal as a **Global Administrator**.
To confirm Oracle PeopleSoft application access occurs correctly, a prompt appea
- [Watch the video - Enable SSO/MFA for Oracle PeopleSoft with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90). -- [Configure Datawiza and Azure AD for secure hybrid access](https://learn.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad)
+- [Configure Datawiza and Azure AD for secure hybrid access](/azure/active-directory/manage-apps/datawiza-with-azure-ad)
-- [Configure Datawiza with Azure AD B2C](https://learn.microsoft.com/azure/active-directory-b2c/partner-datawiza)
+- [Configure Datawiza with Azure AD B2C](/azure/active-directory-b2c/partner-datawiza)
- [Datawiza documentation](https://docs.datawiza.com/)
active-directory Debug Saml Sso Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/debug-saml-sso-issues.md
To resolve the error, follow these steps, or watch this [short video about how t
- Claims issued in the token - Certificate used to sign the token.
- For more information on the SAML response, see [Single Sign-on SAML protocol](../develop/single-sign-on-saml-protocol.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json).
+ For more information on the SAML response, see [Single Sign-on SAML protocol](../develop/single-sign-on-saml-protocol.md).
1. Now that you've reviewed the SAML response, see [Error on an application's page after signing in](application-sign-in-problem-application-error.md) for guidance on how to resolve the problem. 1. If you're still not able to sign in successfully, you can ask the application vendor what is missing from the SAML response.
active-directory Tutorial Windows Vm Access Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md
This section shows how to create a contained user in the database that represent
- [Universal Authentication with SQL Database and Azure Synapse Analytics (SSMS support for MFA)](/azure/azure-sql/database/authentication-mfa-ssms-overview) - [Configure and manage Azure Active Directory authentication with SQL Database or Azure Synapse Analytics](/azure/azure-sql/database/authentication-aad-configure)
-SQL DB requires unique Azure AD display names. Because of this, Azure AD accounts such as users, groups, and service principals (applications), and VM names enabled for managed identity, must be uniquely defined in AAD with regard to their display names. SQL DB checks the Azure AD display name during T-SQL creation of such users, and if it isn't unique, the command fails and asks you to provide a unique Azure AD display name for the given account.
+SQL DB requires unique Azure AD display names. Because of this, Azure AD accounts such as users, groups, and service principals (applications), and VM names enabled for managed identity, must be uniquely defined in Azure AD with regard to their display names. SQL DB checks the Azure AD display name during T-SQL creation of such users, and if it isn't unique, the command fails and asks you to provide a unique Azure AD display name for the given account.
**To create a contained user:**
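The contained user is created with T-SQL while connected to the database as an Azure AD administrator. A minimal sketch (for a system-assigned managed identity, the display name is the VM name; `myVM` is a placeholder):

```sql
-- Create a contained database user for the VM's system-assigned managed identity
CREATE USER [myVM] FROM EXTERNAL PROVIDER;
-- Grant it read access (adjust the role to your needs)
ALTER ROLE db_datareader ADD MEMBER [myVM];
```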
Code running in the VM can now get a token using its system-assigned managed ide
## Access data
-This section shows how to get an access token using the VM's system-assigned managed identity and use it to call Azure SQL. Azure SQL natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. You use the **access token** method of creating a connection to SQL. This is part of Azure SQL's integration with Azure AD, and is different from supplying credentials on the connection string.
+This section shows how to get an access token using the VM's system-assigned managed identity and use it to call Azure SQL. Azure SQL natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. This method doesn't require supplying credentials on the connection string.
-Here's a .NET code example of opening a connection to SQL using an access token. The code must run on the VM to be able to access the VM's system-assigned managed identity's endpoint. **.NET Framework 4.6** or higher or **.NET Core 2.2** or higher is required to use the access token method. Replace the values of AZURE-SQL-SERVERNAME and DATABASE accordingly. Note the resource ID for Azure SQL is `https://database.windows.net/`.
+Here's a .NET code example of opening a connection to SQL using Active Directory Managed Identity authentication. The code must run on the VM to be able to access the VM's system-assigned managed identity's endpoint. **.NET Framework 4.6.2** or higher or **.NET Core 3.1** or higher is required to use this method. Replace the values of AZURE-SQL-SERVERNAME and DATABASE accordingly and add a NuGet reference to the Microsoft.Data.SqlClient library.
```csharp
-using System.Net;
-using System.IO;
-using System.Data.SqlClient;
-using System.Web.Script.Serialization;
-
-//
-// Get an access token for SQL.
-//
-HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://database.windows.net/");
-request.Headers["Metadata"] = "true";
-request.Method = "GET";
-string accessToken = null;
+using Microsoft.Data.SqlClient;
try {
- // Call managed identities for Azure resources endpoint.
- HttpWebResponse response = (HttpWebResponse)request.GetResponse();
-
- // Pipe response Stream to a StreamReader and extract access token.
- StreamReader streamResponse = new StreamReader(response.GetResponseStream());
- string stringResponse = streamResponse.ReadToEnd();
- JavaScriptSerializer j = new JavaScriptSerializer();
- Dictionary<string, string> list = (Dictionary<string, string>) j.Deserialize(stringResponse, typeof(Dictionary<string, string>));
- accessToken = list["access_token"];
-}
-catch (Exception e)
-{
- string errorText = String.Format("{0} \n\n{1}", e.Message, e.InnerException != null ? e.InnerException.Message : "Acquire token failed");
-}
- //
-// Open a connection to the server using the access token.
+// Open a connection to the server using Active Directory Managed Identity authentication.
//
-if (accessToken != null) {
- string connectionString = "Data Source=<AZURE-SQL-SERVERNAME>; Initial Catalog=<DATABASE>;";
- SqlConnection conn = new SqlConnection(connectionString);
- conn.AccessToken = accessToken;
- conn.Open();
-}
+string connectionString = "Data Source=<AZURE-SQL-SERVERNAME>; Initial Catalog=<DATABASE>; Authentication=Active Directory Managed Identity; Encrypt=True";
+SqlConnection conn = new SqlConnection(connectionString);
+conn.Open();
``` >[!NOTE]
Alternatively, a quick way to test the end-to-end setup without having to write
```powershell $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
- $SqlConnection.ConnectionString = "Data Source = <AZURE-SQL-SERVERNAME>; Initial Catalog = <DATABASE>"
+ $SqlConnection.ConnectionString = "Data Source = <AZURE-SQL-SERVERNAME>; Initial Catalog = <DATABASE>; Encrypt=True;"
$SqlConnection.AccessToken = $AccessToken $SqlConnection.Open() ```
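The `$AccessToken` used above can be obtained from the VM's instance metadata service (IMDS) endpoint, the same endpoint the earlier C# sample called; a minimal PowerShell sketch:

```powershell
# Request a token for Azure SQL from IMDS (only reachable from inside the VM).
$response = Invoke-WebRequest -UseBasicParsing -Headers @{ Metadata = "true" } `
    -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fdatabase.windows.net%2F'
$AccessToken = ($response.Content | ConvertFrom-Json).access_token
```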
active-directory Amazon Web Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-web-service-tutorial.md
To configure the integration of AWS Single-Account Access into Azure AD, you nee
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for AWS Single-Account Access
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
To configure the integration of Atlassian Cloud into Azure AD, you need to add A
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
To configure the integration of AWS IAM Identity Center into Azure AD, you need
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for AWS IAM Identity Center
active-directory Cisco Anyconnect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-anyconnect.md
To configure the integration of Cisco AnyConnect into Azure AD, you need to add
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for Cisco AnyConnect
active-directory Docusign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-tutorial.md
To configure the integration of DocuSign into Azure AD, you must add DocuSign fr
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for DocuSign
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
To configure the integration of FortiGate SSL VPN into Azure AD, you need to add
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for FortiGate SSL VPN
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md
To configure the integration of Google Cloud / G Suite Connector by Microsoft in
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD single sign-on for Google Cloud / G Suite Connector by Microsoft
active-directory Saml Toolkit Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saml-toolkit-tutorial.md
To configure the integration of Azure AD SAML Toolkit into Azure AD, you need to
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for Azure AD SAML Toolkit
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md
To configure the integration of ServiceNow into Azure AD, you need to add Servic
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for ServiceNow
active-directory Slack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-tutorial.md
To configure the integration of Slack into Azure AD, you need to add Slack from
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for Slack
active-directory Memo 22 09 Enterprise Wide Identity Management System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-enterprise-wide-identity-management-system.md
Devices integrated with Azure AD can be either [hybrid joined devices](../device
* [Azure Linux virtual machines](../devices/howto-vm-sign-in-azure-ad-linux.md)
-* [Azure Virtual Desktop](https://learn.microsoft.com/azure/architecture/example-scenario/wvd/azure-virtual-desktop-azure-active-directory-join)
+* [Azure Virtual Desktop](/azure/architecture/example-scenario/wvd/azure-virtual-desktop-azure-active-directory-join)
* [Virtual desktop infrastructure](../devices/howto-device-identity-virtual-desktop-infrastructure.md)
active-directory Nist Authenticator Assurance Level 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-3.md
Microsoft offers authentication methods that enable you to meet required NIST au
| FIDO2 security key<br>or<br> Smart card (Active Directory Federation Services [AD FS])<br>or<br>Windows Hello for Business with hardware TPM| Multifactor cryptographic hardware | | **Additional methods**| | | Password<br> and<br>(Hybrid Azure AD joined with hardware TPM <br>or <br> Azure AD joined with hardware TPM)| Memorized secret<br>and<br> Single-factor cryptographic hardware |
-| Password <br>and<br>Single-factor one-time password hardware (from an OTP manufacturer) <br>and<br>(Hybrid Azure AD joined with software TPM <br>or <br> Azure AD joined with software TPM <br>or<br> [Compliant managed device](https://learn.microsoft.com/mem/intune/protect/device-compliance-get-started))| Memorized secret <br>and<br>Single-factor one-time password hardware<br> and<br>Single-factor cryptographic software |
+| Password <br>and<br>Single-factor one-time password hardware (from an OTP manufacturer) <br>and<br>(Hybrid Azure AD joined with software TPM <br>or <br> Azure AD joined with software TPM <br>or<br> [Compliant managed device](/mem/intune/protect/device-compliance-get-started))| Memorized secret <br>and<br>Single-factor one-time password hardware<br> and<br>Single-factor cryptographic software |
### Our recommendations
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
Before you can continue with the steps below you need to meet the following requ
## Scenario description
-When onboarding users, you can remove the need for error-prone manual onboarding steps by using Verified ID with AU10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+When onboarding users, you can remove the need for error-prone manual onboarding steps by using Verified ID with AU10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
active-directory Howto Verifiable Credentials Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md
You can use Entra Verified ID with LexisNexis Risk Solutions to enable faster on
## Scenario description
-Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
:::image type="content" source="media/verified-id-partner-au10tix/vc-solution-architecture-diagram.png" alt-text="Diagram of the verifiable credential solution.":::
active-directory Partner Vu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md
To learn more about VU Security and its complete set of solutions, visit
To get started with the VU Identity Card, ensure the following prerequisites are met: -- A tenant [configured](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant)
+- A tenant [configured](/azure/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant)
for Entra Verified ID service. - If you don't have an existing tenant, you can [create an Azure
VU Identity Card works as a link between users who need to access an application
Verifiable credentials can be used to enable faster and easier user onboarding by replacing some human interactions. For example, a user or employee who wants to create or remotely access an account can use a Verified ID through VU Identity Card to verify their identity without using vulnerable or overly complex passwords or the requirement to be on-site.
-Learn more about [account onboarding](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
In this account onboarding scenario, VU plays the Trusted ID proofing issuer role.
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
Last updated 07/07/2022
Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It's important you apply the latest security releases, or upgrade to get the latest features. Before learning about auto-upgrade, make sure you understand upgrade fundamentals by reading [Upgrade an AKS cluster][upgrade-aks-cluster].
+> [!NOTE]
+> Any upgrade operation, whether performed manually or automatically, will upgrade the node image version if not already on the latest. The latest version is contingent on a full AKS release, and can be determined by visiting the [AKS release tracker][release-tracker].
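To check which node image version a node pool is currently running before an upgrade, you can query the node pool resource; a minimal sketch, reusing the illustrative resource names shown elsewhere in this article:

```azurecli
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --query nodeImageVersion
```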
+ ## Why use auto-upgrade Auto-upgrade provides a set once and forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS and upstream Kubernetes.
AKS follows a strict versioning window with regard to supportability. With prope
## Using auto-upgrade
-Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the selected channel.
+Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the selected channel. When making changes to auto-upgrade, allow 24 hours for the changes to take effect.
The following upgrade channels are available:
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrad
## Using auto-upgrade with Planned Maintenance
-If you're using Planned Maintenance and Auto-Upgrade, your upgrade will start during your specified maintenance window. For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
+If you're using Planned Maintenance and Auto-Upgrade, your upgrade will start during your specified maintenance window.
+
+> [!NOTE]
+> To ensure proper functionality, use a maintenance window of four hours or more.
+
+For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
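As a sketch, the following Planned Maintenance configuration allows maintenance all day on a chosen weekday, which comfortably satisfies the four-hour minimum (resource names are illustrative):

```azurecli-interactive
az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSCluster --name default --weekday Sunday
```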
## Best practices for auto-upgrade
The following best practices will help maximize your success when using auto-upg
<!-- EXTERNAL LINKS --> [pdb-best-practices]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
+[release-tracker]: release-tracker.md
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
This article assumes that you have an existing AKS cluster with 1.21 or later ve
If you want to interact with Azure disks on an AKS cluster with 1.20 or previous version, see the [Kubernetes plugin for Azure disks][kubernetes-disks].
+The Azure Disks CSI driver has a limit of 32 volumes per node. The volume limit varies with the size of the node/node pool. Run the following command to determine the number of volumes that can be allocated per node:
+
+```console
+kubectl get CSINode <nodename> -o yaml
+```
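If you only need the number itself, a JSONPath query against the same object can pull out the allocatable count reported by the Azure Disks CSI driver; a minimal sketch:

```console
kubectl get CSINode <nodename> -o jsonpath='{.spec.drivers[?(@.name=="disk.csi.azure.com")].allocatable.count}'
```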
+ ## Storage class static provisioning The following table describes the Storage Class parameters for the Azure disk CSI driver static provisioning:
aks Azure Disks Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disks-dynamic-pv.md
Last updated 07/21/2022
A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to dynamically create persistent volumes with Azure Disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster. > [!NOTE]
-> An Azure Disks can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. If you need to share a persistent volume across multiple nodes, use [Azure Files][azure-files-pvc].
+> An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. If you need to share a persistent volume across multiple nodes, use [Azure Files][azure-files-pvc].
For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
This article assumes that you have an existing AKS cluster with 1.21 or later ve
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+The Azure Disks CSI driver has a limit of 32 volumes per node. The volume limit varies with the size of the node/node pool. Run the following command to determine the number of volumes that can be allocated per node:
+
+```console
+kubectl get CSINode <nodename> -o yaml
+```
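To compare the limit across every node at once, a JSONPath range query can tabulate each node's reported allocatable count; a minimal sketch (it prints the counts of all CSI drivers registered on each node):

```console
kubectl get csinodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.drivers[*].allocatable.count}{"\n"}{end}'
```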
+ ## Built-in storage classes A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes Storage Classes][kubernetes-storage-classes].
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
Either the load balancers and services IP address can be dynamically assigned, o
You can create both *internal* and *external* load balancers. Internal load balancers are only assigned a private IP address, so they can't be accessed from the Internet.
+Learn more about Services in the [Kubernetes docs][k8s-service].
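As an illustrative sketch (the Service name and selector are hypothetical), an internal load balancer is requested by annotating a `LoadBalancer` Service, so it receives a private IP address from the virtual network instead of a public one:

```bash
# Create a Service backed by Azure's internal load balancer; the annotation
# below is what switches the allocation from a public to a private IP.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
EOF
```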
+ ## Azure virtual networks In AKS, you can deploy a cluster that uses one of the following two network models:
For more information on core Kubernetes and AKS concepts, see the following arti
<!-- LINKS - External --> [cni-networking]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md [kubenet]: https://kubernetes.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet
+[k8s-service]: https://kubernetes.io/docs/concepts/services-networking/service/
<!-- LINKS - Internal --> [aks-http-routing]: http-application-routing.md
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Kubernetes typically treats individual pods as ephemeral, disposable resources.
Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use: [Azure Disks][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview].
+> [!NOTE]
+> The Azure Disks CSI driver has a limit of 32 volumes per node. Other Azure Storage services don't have an equivalent limit.
+ ### Azure Disks Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disks types include:
Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disks types includ
* Standard HDDs > [!TIP]
->For most production and development workloads, use Premium SSD.
+> For most production and development workloads, use Premium SSD.
-Since Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single node. For storage volumes that can be accessed by pods on multiple nodes simultaneously, use Azure Files.
+Because Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single node. For storage volumes that can be accessed by pods on multiple nodes simultaneously, use Azure Files.
### Azure Files
-Use *Azure Files* to mount a Server Message Block (SMB) version 3.1.1 share or Network File System (NFS) version 4.1 share backed by an Azure storage accounts to pods. Files let you share data across multiple nodes and pods and can use:
+Use [Azure Files][azure-files-volume] to mount a Server Message Block (SMB) version 3.1.1 share or Network File System (NFS) version 4.1 share backed by an Azure storage account to pods. Azure Files let you share data across multiple nodes and pods and can use:
* Azure Premium storage backed by high-performance SSDs * Azure Standard storage backed by regular HDDs
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
Title: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster recommendations: false description: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
If you navigated away from the **Deployment is in progress** page, the following
1. Save aside the values for **Login server**, **Registry name**, **Username**, and **password**. You may use the copy icon at the right of each field to copy the value of that field to the system clipboard. 1. Navigate again to the resource group into which you deployed the resources. 1. In the **Settings** section, select **Deployments**.
-1. Select the bottom-most deployment in the list. The **Deployment name** will match the publisher ID of the offer. It will contain the string **ibm**.
+1. Select the bottom-most deployment in the list. The **Deployment name** will match the publisher ID of the offer. It will contain the string `ibm`.
1. In the left pane, select **Outputs**. 1. Using the same copy technique as with the preceding values, save aside the values for the following outputs:
- * **cmdToConnectToCluster**
-
+ * `cmdToConnectToCluster`
+ * `appDeploymentTemplateYaml`
+
+1. Paste the value of `appDeploymentTemplateYaml` into a Bash shell, append `| grep secretName`, and execute. This command will output the Ingress TLS secret name, such as `- secretName: secret785e2c`. Save aside the value for `secretName` from the output.
These values will be used later in this article. Note that several other useful commands are listed in the outputs.
java-app
├─ src/main/ │ ├─ aks/ │ │ ├─ db-secret.yaml
-│ │ ├─ openlibertyapplication.yaml
+│ │ ├─ openlibertyapplication-agic.yaml
├─ docker/ │ │ ├─ Dockerfile │ │ ├─ Dockerfile-local
java-app
The directories *java*, *resources*, and *webapp* contain the source code of the sample application. The code declares and uses a data source named `jdbc/JavaEECafeDB`.
-In the *aks* directory, we placed two deployment files. *db-secret.xml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication.yaml* is used to deploy the application image.
+In the *aks* directory, we placed three deployment files. *db-secret.yaml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication-agic.yaml* is used to deploy the application image.
In directory *liberty/config*, the *server.xml* file is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
export DB_SERVER_NAME=<Server name>.database.windows.net
export DB_NAME=<Database name> export DB_USER=<Server admin login>@<Server name> export DB_PASSWORD=<Server admin password>
+export INGRESS_TLS_SECRET=<Ingress TLS secret name>
mvn clean install ```
Use your local IDE, or the `liberty:run` command, to run and test the project locally
cd <path-to-your-repo>/java-app mvn liberty:run ```
-
+ 1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser and verify the application is accessible and all functions are working. 1. Press `Ctrl+C` to stop `liberty:run` mode.
After successfully running the app in the Liberty Docker container, you can run
```bash cd <path-to-your-repo>/java-app/target
-# If you are running with Open Liberty
+# If you're running with Open Liberty
docker build -t javaee-cafe:v1 --pull --file=Dockerfile .
-# If you are running with WebSphere Liberty
+# If you're running with WebSphere Liberty
docker build -t javaee-cafe:v1 --pull --file=Dockerfile-wlp . ```
The following steps deploy and test the application.
1. Connect to the AKS cluster.
- Paste the value of **cmdToConnectToCluster** into a bash shell.
+ Paste the value of **cmdToConnectToCluster** into a Bash shell and execute.
1. Apply the DB secret.
The following steps deploy and test the application.
1. Apply the deployment file. ```bash
- kubectl apply -f openlibertyapplication.yaml
+ kubectl apply -f openlibertyapplication-agic.yaml
``` 1. Wait for the pods to be restarted.
The following steps deploy and test the application.
You should see output similar to the following to indicate that all the pods are running. ```bash
- NAME READY STATUS RESTARTS AGE
- javaee-cafe-cluster-67cdc95bc-2j2gr 1/1 Running 0 29s
- javaee-cafe-cluster-67cdc95bc-fgtt8 1/1 Running 0 29s
- javaee-cafe-cluster-67cdc95bc-h47qm 1/1 Running 0 29s
+ NAME READY STATUS RESTARTS AGE
+ javaee-cafe-cluster-agic-67cdc95bc-2j2gr 1/1 Running 0 29s
+ javaee-cafe-cluster-agic-67cdc95bc-fgtt8 1/1 Running 0 29s
+ javaee-cafe-cluster-agic-67cdc95bc-h47qm 1/1 Running 0 29s
``` 1. Verify the results.
- 1. Get endpoint of the deployed service
+ 1. Get **ADDRESS** of the Ingress resource deployed with the application
```bash
- kubectl get service
+ kubectl get ingress
```
- 1. Go to `http://EXTERNAL-IP` to test the application.
-
+ Copy the value of **ADDRESS** from the output; this is the frontend public IP address of the deployed Azure Application Gateway.
+
+ 1. Go to `https://<ADDRESS>` to test the application.
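To verify from the command line instead of a browser, a quick header check against the gateway's frontend address works as well; a sketch, where `-k` skips certificate validation on the assumption that the sample uses a self-signed certificate:

```bash
# Expect an HTTP 200 status line if the application is reachable.
curl -Iks https://<ADDRESS>
```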
## Clean up resources
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
If you'd like to increase the speed of upgrades, use the `--max-surge` value to
The following command sets the max surge value for performing a node image upgrade: ```azurecli
-az aks nodepool upgrade \
+az aks nodepool update \
--resource-group myResourceGroup \ --cluster-name myAKSCluster \ --name mynodepool \ --max-surge 33% \
- --node-image-only \
--no-wait ```
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
# Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster (preview)
-Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows that will update your control plane as well as your kube-system Pods on a VMSS instance and minimize workload impact. Once scheduled, all your maintenance will occur during the window you selected. You can schedule one or more weekly windows on your cluster by specifying a day or time range on a specific day. Maintenance Windows are configured using the Azure CLI.
+Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows that will update your control plane as well as your kube-system pods on a VMSS instance, and minimize workload impact. Once scheduled, all your maintenance will occur during the window you selected. You can schedule one or more weekly windows on your cluster by specifying a day or time range on a specific day. Maintenance windows are configured using the Azure CLI.
## Before you begin
This article assumes that you have an existing AKS cluster. If you need an AKS c
### Limitations
-When using Planned Maintenance, the following restrictions apply:
+When you use Planned Maintenance, the following restrictions apply:
- AKS reserves the right to break these windows for unplanned/reactive maintenance operations that are urgent or critical. - Currently, maintenance operations are considered *best-effort only* and are not guaranteed to occur within a specified window.
The following example output shows the maintenance window from 1:00am to 2:00am
} ```
-To allow maintenance any time during a day, omit the *start-hour* parameter. For example, the following command sets the maintenance window for the full day every Monday:
+To allow maintenance anytime during a day, omit the *start-hour* parameter. For example, the following command sets the maintenance window for the full day every Monday:
```azurecli-interactive az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSCluster --name default --weekday Monday
az aks maintenanceconfiguration delete -g MyResourceGroup --cluster-name myAKSCl
Planned Maintenance will detect if you are using Cluster Auto-Upgrade and schedule your upgrades during your maintenance window automatically. For more details about Cluster Auto-Upgrade, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
+> [!NOTE]
+> To ensure proper functionality, use a maintenance window of four hours or more.
+ ## Next steps - To get started with upgrading your AKS cluster, see [Upgrade an AKS cluster][aks-upgrade]
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Part of the AKS cluster lifecycle involves performing periodic upgrades to the l
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade]. To upgrade a specific node pool without doing a Kubernetes cluster upgrade, see [Upgrade a specific node pool][specific-nodepool].
+> [!NOTE]
+> Any upgrade operation, whether performed manually or automatically, will upgrade the node image version if not already on the latest. The latest version is contingent on a full AKS release, and can be determined by visiting the [AKS release tracker][release-tracker].
+
+> [!NOTE]
+> Performing upgrade operations requires the `Microsoft.ContainerService/managedClusters/agentPools/write` RBAC role. For more on Azure RBAC roles, see [Azure resource provider operations].
+ ## Before you begin * If you're using Azure CLI, this article requires that you're running the Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
By default, AKS configures upgrades to surge with one extra node. A default valu
For example, a max surge value of 100% provides the fastest possible upgrade process (doubling the node count) but also causes all nodes in the node pool to be drained simultaneously. You may wish to use a higher value such as this for testing environments. For production node pools, we recommend a max_surge setting of 33%.
-AKS accepts both integer values and a percentage value for max surge. An integer such as "5" indicates five extra nodes to surge. A value of "50%" indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of 1% and a maximum of 100%. A percent value is rounded up to the nearest node count. If the max surge value is higher than the current node count at the time of upgrade, the current node count is used for the max surge value.
+AKS accepts both integer values and a percentage value for max surge. An integer such as "5" indicates five extra nodes to surge. A value of "50%" indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of 1% and a maximum of 100%. A percent value is rounded up to the nearest node count. If the max surge value is higher than the required number of nodes to be upgraded, the number of nodes to be upgraded is used for the max surge value.
During an upgrade, the max surge value can be a minimum of 1 and a maximum value equal to the number of nodes in your node pool. You can set larger values, but the maximum number of nodes used for max surge won't be higher than the number of nodes in the pool at the time of upgrade.
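As a worked example of the rounding rule: in a five-node pool, a max surge of 50% computes to 2.5 nodes and rounds up to 3 surge nodes, while 33% computes to 1.65 and rounds up to 2.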
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[upgrade-cluster]: #upgrade-an-aks-cluster [planned-maintenance]: planned-maintenance.md [aks-auto-upgrade]: auto-upgrade-cluster.md
+[release-tracker]: release-tracker.md
[specific-nodepool]: node-image-upgrade.md#upgrade-a-specific-node-pool
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
Last updated 03/25/2021
# Preview - Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) > [!Important]
-> The feature described in this article, pod security policy (preview), will be deprecated starting with Kubernetes version 1.21, and it will be removed in version 1.25. AKS will mark the pod security policy as Deprecated with the AKS API on 04-01-2023. You can migrate pod security policy to pod security admission controller before the deprecation deadline.
+> The feature described in this article, pod security policy (preview), will be deprecated starting with Kubernetes version 1.21, and it will be removed in version 1.25. AKS will mark the pod security policy as Deprecated with the AKS API on 06-01-2023. You can migrate pod security policy to pod security admission controller before the deprecation deadline.
After pod security policy (preview) is deprecated, you must have already migrated to Pod Security Admission controller or disabled the feature on any existing clusters using the deprecated feature to perform future cluster upgrades and stay within Azure support.
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
To use Application Insights, [create an instance of the Application Insights ser
> + A logger for all APIs. > > Specifying *both*:
-> + if they are different loggers, both of them will be used (multiplexing logs).
-> + if they are the same loggers with different settings, the single API logger (more granular level) will override the one for all APIs.
+> - By default, the single API logger (more granular level) will override the one for all APIs.
+> - If the loggers configured at the two levels are different, and you need both loggers to receive telemetry (multiplexing), please contact Microsoft Support.
## What data is added to Application Insights
To improve performance issues, skip:
+ Learn more about [Azure Application Insights](/azure/application-insights/). + Consider [logging with Azure Event Hubs](api-management-howto-log-event-hubs.md).
-+ - Learn about visualizing data from Application Insights using [Azure Managed Grafana](visualize-using-managed-grafana-dashboard.md)
++ Learn about visualizing data from Application Insights using [Azure Managed Grafana](visualize-using-managed-grafana-dashboard.md)
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
You have now configured a native client application that can request access your
### Daemon client application (service-to-service calls)
-Your application can acquire a token to call a Web API hosted in your App Service or Function app on behalf of itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md) grant.
+Your application can acquire a token to call a Web API hosted in your App Service or Function app on behalf of itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) grant.
1. In the [Azure portal], select **Active Directory** > **App registrations** > **New registration**. 1. In the **Register an application** page, enter a **Name** for your daemon app registration.
Your application can acquire a token to call a Web API hosted in your App Servic
1. After the app registration is created, copy the value of **Application (client) ID**. 1. Select **Certificates & secrets** > **New client secret** > **Add**. Copy the client secret value shown in the page. It won't be shown again.
-You can now [request an access token using the client ID and client secret](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#use-the-access-token-to-access-the-secured-resource), and App Service Authentication / Authorization will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated.
+You can now [request an access token using the client ID and client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#use-a-token), and App Service Authentication / Authorization will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated.
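As a minimal sketch of that request against the v2.0 token endpoint (all placeholder values are illustrative; note that the v2.0 endpoint expresses the target resource as a `scope` value ending in `/.default` rather than a `resource` parameter):

```bash
# Request an app-only access token using the client credentials grant.
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
    -d "grant_type=client_credentials" \
    -d "client_id=<daemon-app-client-id>" \
    -d "client_secret=<daemon-app-client-secret>" \
    -d "scope=<application-id-uri>/.default"
```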
At present, this allows _any_ client application in your Azure AD tenant to request an access token and authenticate to the target app. If you also want to enforce _authorization_ to allow only certain client applications, you must perform some additional configuration.
At present, this allows _any_ client application in your Azure AD tenant to requ
1. Select the app registration you created earlier. If you don't see the app registration, make sure that you've [added an App Role](../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md). 1. Under **Application permissions**, select the App Role you created earlier, and then select **Add permissions**. 1. Make sure to click **Grant admin consent** to authorize the client application to request the permission.
-1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
+1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
1. Within the target App Service or Function app code, you can now validate that the expected roles are present in the token (this is not performed by App Service Authentication / Authorization). For more information, see [Access user claims](configure-authentication-user-identities.md#access-user-claims-in-app-code). You have now configured a daemon client application that can access your App Service app using its own identity.
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md
Content-Type: application/json
} ```
-This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#service-to-service-access-token-response). To access Key Vault, you will then add the value of `access_token` to a client connection with the vault.
+This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#successful-response). To access Key Vault, you will then add the value of `access_token` to a client connection with the vault.
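As an illustrative sketch (the vault and secret names are hypothetical), the token can then be presented to the Key Vault REST API as a bearer token:

```bash
# Read a secret using the access token returned by the managed identity endpoint.
curl "https://<vault-name>.vault.azure.net/secrets/<secret-name>?api-version=7.3" \
    -H "Authorization: Bearer <access_token>"
```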
# [.NET](#tab/dotnet)
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
Follow these steps to create an Azure PostgreSQL database in your subscription.
--resource-group $RESOURCE_GROUP \ --name $DB_SERVER_NAME \ --location $LOCATION \
- --admin-user $DB_USERNAME \
- --admin-password $DB_PASSWORD \
+ --admin-user $ADMIN_USERNAME \
+ --admin-password $ADMIN_PASSWORD \
--sku-name GP_Gen5_2 ```
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
Last updated 09/26/2022 + # Tutorial: Connect to a PostgreSQL Database from Java Tomcat App Service without secrets using a managed identity
git clone https://github.com/Azure-Samples/Passwordless-Connections-for-Java-App
cd Passwordless-Connections-for-Java-Apps/Tomcat/ ```
-## Create an Azure Postgres DB
+## Create an Azure Database for PostgreSQL
Follow these steps to create an Azure Database for Postgres in your subscription. The Spring Boot app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
Follow these steps to create an Azure Database for Postgres in your subscription
az group create --name $RESOURCE_GROUP --location $LOCATION ```
-1. Create an Azure Postgres Database server. The server is created with an administrator account, but it won't be used because we'll use the Azure Active Directory (Azure AD) admin account to perform administrative tasks.
+1. Create an Azure Database for PostgreSQL server. The server is created with an administrator account, but it won't be used because we'll use the Azure Active Directory (Azure AD) admin account to perform administrative tasks.
### [Flexible Server](#tab/flexible)
Follow these steps to build a WAR file and deploy to Azure App Service on Tomcat
--type war ```
-## Connect Postgres Database with identity connectivity
-
-Next, connect your app to a Postgres Database with a system-assigned managed identity using Service Connector.
+## Connect the Postgres database with identity connectivity
### [Flexible Server](#tab/flexible)
+> [!NOTE]
+> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
+
+Next, connect your app to a Postgres database with a system-assigned managed identity using Service Connector.
+ To do this, run the [az webapp connection create](/cli/azure/webapp/connection/create#az-webapp-connection-create-postgres-flexible) command. ```azurecli-interactive
application-gateway Custom Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/custom-error.md
Previously updated : 04/12/2022 Last updated : 11/09/2022
Custom error pages are supported for the following two scenarios:
- **Maintenance page** - This custom error page is sent instead of a 502 bad gateway page. It's shown when Application Gateway has no backend to route traffic to. For example, when there's scheduled maintenance or when an unforeseen issue affects backend pool access. - **Unauthorized access page** - This custom error page is sent instead of a 403 unauthorized access page. It's shown when the Application Gateway WAF detects malicious traffic and blocks it.
-If an error originates from the backend servers, then it's passed along unmodified back to the caller. A custom error page isn't displayed. Application gateway can display a custom error page when a request can't reach the backend.
+If an error originates from backend targets of your backend pool, the error is passed along unmodified back to the caller. Custom error pages will only be displayed when a request can't reach the backend or when WAF is in prevention mode and blocks the request.
Custom error pages can be defined at the global level and the listener level:
To create a custom error page, you must have:
- error page should be internet accessible and return a 200 response. - error page should use the \*.htm or \*.html extension. - error page size must be less than 1 MB.
+- error page must be hosted in Azure Blob Storage.
-You may reference either internal or external images/CSS for this HTML file. For externally referenced resources, use absolute URLs that are publicly accessible. Be aware of the HTML file size when using internal images (Base64-encoded inline image) or CSS. Relative links with files in the same location are currently not supported.
+You may reference either internal or external images/CSS for this HTML file. For externally referenced resources, use absolute URLs that are publicly accessible. Be aware of the HTML file size when using base64-encoded inline images, JavaScript, or CSS.
-After you specify an error page, the application gateway downloads it from the defined location and saves it to the local application gateway cache. Then, that HTML page is served by the application gateway, whereas the externally referenced resources are fetched directly by the client. To modify an existing custom error page, you must point to a different blob location in the application gateway configuration. The application gateway doesn't periodically check the blob location to fetch new versions.
+> [!Note]
+> Relative links with files in the same location are not supported.
+
+After you specify an error page, application gateway verifies internet connectivity to the file and saves it to the local application gateway cache. The HTML page is served by the application gateway, whereas externally referenced resources (such as images, JavaScript, and CSS files) are fetched directly by the client. To modify an existing custom error page, you must point to a different blob location in the application gateway configuration. Application gateway doesn't periodically check the blob location to fetch new versions.
## Portal configuration
applied-ai-services Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/disaster-recovery.md
If your app or business depends on the use of a Form Recognizer custom model, we
## Prerequisites 1. Two Form Recognizer Azure resources in different Azure regions. If you don't have them, go to the Azure portal and [create a new Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer).
-1. The key, endpoint URL, and subscription ID for your Form Recognizer resource. You can find these values on the resource's **Overview** tab in the [Azure portal](https://ms.portal.azure.com/#home).
+1. The key, endpoint URL, and subscription ID for your Form Recognizer resource. You can find these values on the resource's **Overview** tab in the [Azure portal](https://portal.azure.com/#home).
::: moniker-end
azure-app-configuration Rest Api Authentication Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-azure-ad.md
The Azure AD authority is the endpoint you use for acquiring an Azure AD token.
### Authentication libraries
-Azure provides a set of libraries, called Azure Active Directory Authentication Libraries, to simplify the process of acquiring an Azure AD token. Azure builds these libraries for multiple languages. For more information, see the [documentation](../active-directory/azuread-dev/active-directory-authentication-libraries.md).
+Microsoft Authentication Library (MSAL) helps to simplify the process of acquiring an Azure AD token. Azure builds these libraries for multiple languages. For more information, see the [documentation](../active-directory/develop/msal-overview.md).
## Errors
azure-arc Diagnose Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md
Title: "Diagnose connection issues for Azure Arc-enabled Kubernetes clusters" Previously updated : 11/04/2022 Last updated : 11/10/2022 description: "Learn how to resolve common issues when connecting Kubernetes clusters to Azure Arc."
If you are experiencing issues connecting a cluster to Azure Arc, it's probably
Review this flowchart in order to diagnose your issue when attempting to connect a cluster to Azure Arc without a proxy server. More details about each step are provided below. ### Does the Azure identity have sufficient permissions?
When you [create your support request](/azure/azure-portal/supportability/how-to
If you are using a proxy server on at least one machine, complete the first five steps of the non-proxy flowchart (through resource provider registration) for basic troubleshooting steps. Then, if you are still encountering issues, review the next flowchart for additional troubleshooting steps. More details about each step are provided below. ### Is the machine executing commands behind a proxy server?
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
Currently, Azure Arc allows you to manage the following resource types hosted ou
* [Servers](servers/overview.md): Manage Windows and Linux physical servers and virtual machines hosted outside of Azure. * [Kubernetes clusters](kubernetes/overview.md): Attach and configure Kubernetes clusters running anywhere, with multiple supported distributions. * [Azure data services](dat): Run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. SQL Managed Instance
-and PostgreSQL server (preview) services are currently available.
+and PostgreSQL (preview) services are currently available.
* [SQL Server](/sql/sql-server/azure-arc/overview): Extend Azure services to SQL Server instances hosted outside of Azure. * Virtual machines (preview): Provision, resize, delete and manage virtual machines based on [VMware vSphere](./vmware-vsphere/overview.md) or [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) and enable VM self-service through role-based access.
Some of the key scenarios that Azure Arc supports are:
* Run [Azure data services](../azure-arc/kubernetes/custom-locations.md) on any Kubernetes environment as if it runs in Azure (specifically Azure SQL Managed Instance and Azure Database for PostgreSQL server, with benefits such as upgrades, updates, security, and monitoring). Use elastic scale and apply updates without any application downtime, even without continuous connection to Azure.
-* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc-enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc-enabled Data Services](./dat).
+* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc-enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc-enabled data services](./dat).
* Perform virtual machine lifecycle and management operations for [VMware vSphere](./vmware-vsphere/overview.md) and [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) environments.
The following Azure Arc control plane functionality is offered at no extra cost:
* Resource organization through Azure management groups and tags * Searching and indexing through Azure Resource Graph
-* Access and security through Azure RBAC and subscriptions
+* Access and security through Azure role-based access control (RBAC)
* Environments and automation through templates and extensions * Update management
For information, see the [Azure pricing page](https://azure.microsoft.com/pricin
* Learn about [Azure Arc-enabled Kubernetes](./kubernetes/overview.md). * Learn about [Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/). * Learn about [Azure Arc-enabled SQL Server](/sql/sql-server/azure-arc/overview).
-* Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) and [Azure Arc-enabled Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines)
-* Learn about [Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md)
-* Experience Azure Arc-enabled services by exploring the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/).
+* Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) and [Azure Arc-enabled Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
+* Learn about [Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md).
+* Experience Azure Arc by exploring the [Azure Arc Jumpstart](https://aka.ms/AzureArcJumpstart).
+* Learn about best practices and design patterns through the various [Azure Arc Landing Zone Accelerators](https://aka.ms/ArcLZAcceleratorReady).
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service. Previously updated : 09/26/2022 Last updated : 11/09/2022
This article provides information on troubleshooting and resolving issues that m
### Logs
-For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the client machine from which you've deployed the Azure Arc resource bridge.
+For issues encountered with Arc resource bridge, collect logs for further investigation using the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the same deployment machine that was used to run commands to deploy the Arc resource bridge. If there is a problem collecting logs, most likely the deployment machine is unable to reach the Appliance VM, and the network administrator needs to allow communication between the deployment machine and the Appliance VM.
-The `az arcappliance logs` command requires SSH to the Azure Arc resource bridge VM. The SSH key is saved to the client machine where the deployment of the appliance was performed from. To use a different client machine to run the Azure CLI command, you need to make sure the following files are copied to the new client machine:
+The `az arcappliance logs` command requires SSH to the Azure Arc resource bridge VM. The SSH key is saved to the deployment machine. To use a different machine to run the logs command, make sure the following files are copied to the machine in the same location:
```azurecli $HOME\.KVA\.ssh\logkey.pub $HOME\.KVA\.ssh\logkey ```
-To run the `az arcappliance logs` command, the path to the kubeconfig must be provided. The kubeconfig is generated after successful completion of the `az arcappliance deploy` command and is placed in the same directory as the CLI command in ./kubeconfig or as specified in `--outfile` (if the parameter was passed).
+To run the `az arcappliance logs` command, the Appliance VM IP, Control Plane IP, or kubeconfig can be passed in the corresponding parameter. If `az arcappliance deploy` was not completed, then the kubeconfig file may be empty, so it can't be used for logs collection. In this case, the Appliance VM IP address can be used to collect logs.
-If `az arcappliance deploy` was not completed, then the kubeconfig file may exist but may be empty or missing data, so it can't be used for logs collection. In this case, the Appliance VM IP address can be used to collect logs instead. The Appliance VM IP is assigned when the `az arcappliance deploy` command is run, after Control Plane Endpoint reconciliation. For example, if the message displayed in the command window reads "Appliance IP is 10.97.176.27", the command to use for logs collection would be:
+The Appliance VM IP is assigned when the `az arcappliance deploy` command is run, after Control Plane endpoint reconciliation. For example, if the message displayed in the command window reads "Appliance IP is 192.168.1.1", the command to use for logs collection would be:
```azurecli
-az arcappliance logs hci --out-dir c:\logs --ip 10.97.176.27
-```
-
-To view the logs, run the following command:
-
-```azurecli
-az arcappliance logs <provider> --kubeconfig <path to kubeconfig>
-```
-
-To save the logs to a destination folder, run the following command:
-
-```azurecli
-az arcappliance logs <provider> --kubeconfig <path to kubeconfig> --out-dir <path to specified output directory>
+az arcappliance logs hci --ip 192.168.1.1 --out-dir c:\logs
``` To specify the IP address of the Azure Arc resource bridge virtual machine, run the following command:
az arcappliance logs <provider> --out-dir <path to specified output directory> -
### Remote PowerShell is not supported
-If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote PowerShell, you may experience various problems. For instance, you might see an [EOF error when using the `logs` command](#logs-command-fails-with-eof-error), or an [authentication handshake failure error when trying to install the resource bridge on an Azure Stack HCI cluster](#authentication-handshake-failure).
+If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote PowerShell, you may experience various problems. For instance, you might see an [authentication handshake failure error when trying to install the resource bridge on an Azure Stack HCI cluster](#authentication-handshake-failure) or another type of error.
Using `az arcappliance` commands from remote PowerShell is not currently supported. Instead, sign in to the node through Remote Desktop Protocol (RDP) or use a console session.
To resolve this error, the .wssd\python and .wssd\kva folders in the user profil
When you run the Azure CLI commands, the following error may be returned: *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign in to Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again by using the `az login` command.
-### `logs` command fails with EOF error
-
-When running the `az arcappliance logs` Azure CLI command, you may see an error: `Appliance logs command failed with error: EOF when reading a line.` This may occur in scenarios similar to the following:
-
-```azurecli
-az arcappliance logs hci --kubeconfig .\kubeconfig --out-dir c:\temp --ip 192.168.200.127
-+ CategoryInfo : NotSpecified: (WARNING: Comman...s/CLI_refstatus:String) [], RemoteException
-+ FullyQualifiedErrorId : NativeCommandError
-
-Please enter cloudservice FQDN/IP: Appliance logs command failed with error: EOF when reading a line[v-Host1]: PS C:\Users\AzureStackAdminD\Documents> az arcappliance logs hci --kubeconfig .\kubeconfig --out-dir c:\temp --ip 192.168.200.127
-+ CategoryInfo : NotSpecified: (WARNING: Comman...s/CLI_refstatus:String) [], RemoteException
-+ FullyQualifiedErrorId : NativeCommandError
-
-Please enter cloudservice FQDN/IP: Appliance logs command failed with error: EOF when reading a line
-```
-
-The `az arcappliance logs` CLI command runs in interactive mode, meaning that it prompts the user for parameters. If the command is run in a scenario where it can't prompt the user for parameters, this error will occur. This is especially common when trying to use remote PowerShell to run the command.
-
-To avoid this error, use Remote Desktop Protocol (RDP) or a console session to sign directly in to the node and locally run the `logs` command (or any `az arcappliance` command). Remote PowerShell is not currently supported by Azure Arc resource bridge.
-
-You can also avoid this error by pre-populating the values that the `logs` command prompts for, thus avoiding the prompt. The example below provides these values into a variable which is then passed to the `logs` command. Be sure to replace `$loginValues` with your cloudservice IP address and the full path to your token credentials.
-
-```azurecli
-$loginValues="192.168.200.2
-C:\kvatoken.tok"
-
-$user_in = ""
-foreach ($val in $loginValues) { $user_in = $user_in + $val + "`n" }
-
-$user_in | az arcappliance logs hci --kubeconfig C:\Users\AzureStackAdminD\.kube\config
-```
### Default host resource pools are unavailable for deployment When using the `az arcappliance createConfig` or `az arcappliance run` command, there's an interactive experience that shows the list of VMware entities from which the user can select where to deploy the virtual appliance. This list shows all user-created resource pools along with default cluster resource pools, but the default host resource pools aren't listed.
When the appliance is deployed to a host resource pool, there is no high availab
### Restricted outbound connectivity
-Make sure the URLs listed below are added to your allowlist.
+Below is the list of firewall and proxy URLs that need to be allowlisted to enable communication from the host machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs.
#### Proxy URLs used by appliance agents and services
Make sure the URLs listed below are added to your allowlist.
|Azure Arc Identity service | 443 | `https://*.his.arc.azure.com` | Appliance VM IP and Control Plane IP need outbound connection. | Manages identity and access control for Azure resources | |Azure Arc configuration service | 443 | `https://*.dp.kubernetesconfiguration.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used for Kubernetes cluster configuration.| |Cluster connect service | 443 | `https://*.servicebus.windows.net` | Appliance VM IP and Control Plane IP need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
-|Guest Notification service| 443 | `https://guestnotificationservice.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used to connect on-prem resources to Azure.|
-|SFS API endpoint | 443 | msk8s.api.cdp.microsoft.com | Host machine, Appliance VM IP and Control Plane IP need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
+|Guest Notification service| 443 | `https://guestnotificationservice.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used to connect on-premises resources to Azure.|
+|SFS API endpoint | 443 | msk8s.api.cdp.microsoft.com | Deployment machine, Appliance VM IP and Control Plane IP need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
|Resource bridge (appliance) Dataplane service| 443 | `https://*.dp.prod.appliances.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Communicate with resource provider in Azure.| |Resource bridge (appliance) container image download| 443 | `*.blob.core.windows.net, https://ecpacr.azurecr.io`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images. |
-|Resource bridge (appliance) image download| 80 | `*.dl.delivery.mp.microsoft.com`| Host machine, Appliance VM IP and Control Plane IP need outbound connection. | Download the Arc Resource Bridge OS images. |
+|Resource bridge (appliance) image download| 80 | `*.dl.delivery.mp.microsoft.com`| Deployment machine, Appliance VM IP and Control Plane IP need outbound connection. | Download the Arc Resource Bridge OS images. |
|Azure Arc for Kubernetes container image download| 443 | `https://azurearcfork8sdev.azurecr.io`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images. | |ADHS telemetry service | 443 | adhs.events.data.microsoft.com| Appliance VM IP and Control Plane IP need outbound connection. | Runs inside the appliance/mariner OS. Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any Kubernetes control plane. | |Microsoft events data service | 443 |v20.events.data.microsoft.com| Appliance VM IP and Control Plane IP need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
There are only two certificates that should be relevant when deploying the Arc r
### KVA timeout error
-Azure Arc resource bridge is a Kubernetes management cluster that is deployed in an appliance VM directly on the on-premises infrastructure. While trying to deploy Azure Arc resource bridge, a "KVA timeout error" may appear if there is a networking problem that doesn't allow communication of the Arc Resource Bridge appliance VM to the host, DNS, network or internet. This error is typically displayed for the following reasons:
+While trying to deploy Arc Resource Bridge, a "KVA timeout error" may appear. The "KVA timeout error" is a generic error that can result from a variety of network misconfigurations that involve the deployment machine, Appliance VM, or Control Plane IP not having communication with each other, with the internet, or with required URLs. This communication failure is often due to issues with DNS resolution, proxy settings, network configuration, or internet access.
+
+For clarity, "deployment machine" refers to the machine where deployment CLI commands are being run. "Appliance VM" is the VM that hosts Arc resource bridge. "Control Plane IP" is the IP of the control plane for the Kubernetes management cluster in the Appliance VM.
+
+#### Top causes of the KVA timeout error
+
+- Deployment machine is unable to communicate with Control Plane IP and Appliance VM IP.
+- Appliance VM is unable to communicate with the deployment machine, vCenter endpoint (for VMware), or MOC cloud agent endpoint (for Azure Stack HCI).
+- Appliance VM does not have internet access.
+- Appliance VM has internet access, but connectivity to one or more required URLs is being blocked, possibly due to a proxy or firewall.
+- Appliance VM is unable to reach a DNS server that can resolve internal names, such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses, such as Azure service addresses and container registry names.
+- Proxy server configuration on the deployment machine or Arc resource bridge configuration files is incorrect. This can impact both the deployment machine and the Appliance VM. When the `az arcappliance prepare` command is run, the deployment machine won't be able to connect and download OS images if the host proxy isn't correctly configured. Internet access on the Appliance VM might be broken by incorrect or missing proxy configuration, which impacts the VM’s ability to pull container images. 
+
+#### Troubleshoot KVA timeout error
+
+To resolve the error, one or more network misconfigurations may need to be fixed. Follow the steps below, which address the most common causes of this error.
+
+1. When there's a problem with deployment, the first step is to collect logs by Appliance VM IP (not by kubeconfig, because the kubeconfig may be empty if the deploy command didn't complete). Problems collecting logs are most likely due to the deployment machine being unable to reach the Appliance VM.
+
+   Once logs are collected, extract the folder and open kva.log, which contains more detail on the failure and can help pinpoint the cause of the KVA timeout error.
+
+1. The deployment machine must be able to communicate with the Appliance VM IP and Control Plane IP. Ping the Control Plane IP and Appliance VM IP from the deployment machine and verify there is a response from both IPs.
+
+   If a request times out, the deployment machine is not able to communicate with the IP(s). This could be caused by a closed port, network misconfiguration, or a firewall block. Work with your network administrator to allow communication between the deployment machine and the Control Plane IP and Appliance VM IP.
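+
+   As a quick check, a PowerShell sketch like the following can verify reachability from the deployment machine (the IP addresses are placeholders for your environment):
+
+   ```powershell
+   # Placeholders: replace with your Appliance VM IP and Control Plane IP
+   $ips = @("<Appliance VM IP>", "<Control Plane IP>")
+
+   foreach ($ip in $ips) {
+       # By default, Test-NetConnection sends an ICMP echo (ping) to the target
+       Test-NetConnection -ComputerName $ip
+   }
+   ```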
+
+1. The Appliance VM IP and Control Plane IP must be able to communicate with the deployment machine and with the vCenter endpoint (for VMware) or MOC cloud agent endpoint (for Azure Stack HCI). Work with your network administrator to ensure the network is configured to permit this. This may require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter, or ports 65000 and 55000 to the Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](/azure/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script) for Arc resource bridge.
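+
+   To spot-check these firewall rules, a `Test-NetConnection` sketch such as the following can test the TCP ports named above (the endpoint names are placeholders; use the ports that apply to your fabric):
+
+   ```powershell
+   # VMware fabric: vCenter endpoint over HTTPS
+   Test-NetConnection -ComputerName "<vCenter endpoint>" -Port 443
+
+   # Azure Stack HCI fabric: MOC cloud agent ports
+   Test-NetConnection -ComputerName "<MOC cloud agent endpoint>" -Port 65000
+   Test-NetConnection -ComputerName "<MOC cloud agent endpoint>" -Port 55000
+   ```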
+
+1. Appliance VM IP and Control Plane IP need internet access to [these required URLs](#restricted-outbound-connectivity). Azure Stack HCI requires [additional URLs](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites). Work with your network administrator to ensure that the IPs can access the required URLs.
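+
+   One rough check is to test TCP reachability to one of the required endpoints, such as the container registry listed in the table above (this confirms only TCP connectivity, not that a proxy permits the full HTTPS exchange):
+
+   ```powershell
+   # Container registry endpoint from the required URLs table
+   Test-NetConnection -ComputerName "azurearcfork8sdev.azurecr.io" -Port 443
+   ```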
+
+1. In a non-proxy environment, the deployment machine must have external and internal DNS resolution. The deployment machine must be able to reach a DNS server that can resolve internal names such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to [resolve external addresses](#restricted-outbound-connectivity), such as Azure URLs and OS image download URLs. Work with your system administrator to ensure that the deployment machine has internal and external DNS resolution. In a proxy environment, the DNS resolution on the proxy server should resolve internal endpoints and [required external addresses](#restricted-outbound-connectivity).
+
+   To test DNS resolution to an internal address from the deployment machine in a non-proxy scenario, open a command prompt and run `nslookup <vCenter endpoint or HCI MOC cloud agent IP>`. If the deployment machine has internal DNS resolution, you should receive an answer.
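+
+   For example, a minimal sketch that checks both internal and external resolution (the internal hostname is a placeholder; `azurearcfork8sdev.azurecr.io` is one of the required external names listed above):
+
+   ```powershell
+   # Internal name: vCenter endpoint or HCI MOC cloud agent endpoint
+   nslookup "<internal endpoint FQDN>"
+
+   # External name: a required container registry address
+   nslookup "azurearcfork8sdev.azurecr.io"
+   ```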
-- The appliance VM IP address doesn't have DNS resolution.
-- The appliance VM IP address doesn't have internet access to download the required image.
-- The host doesn't have routability to the appliance VM IP address.
+1. The Appliance VM needs to be able to reach a DNS server that can resolve internal names, such as the vCenter endpoint for vSphere or the cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to resolve external addresses, such as Azure service addresses and container registry names, so that the Arc resource bridge container images can be downloaded from the cloud.
-To resolve this error, ensure that all IP addresses assigned to the Arc Resource Bridge appliance VM can be resolved by DNS and have access to the internet, and that the host can successfully route to the IP addresses.
+ Verify that the DNS server IP used to create the configuration files has internal and external address resolution. If not, [delete the appliance](/cli/azure/arcappliance/delete), recreate the Arc resource bridge configuration files with the correct DNS server settings, and then deploy Arc resource bridge using the new configuration files.
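+
+   A hedged outline of that sequence with the Azure CLI, shown here for the VMware fabric (subcommands and flags vary by fabric and CLI version; the paths and names are placeholders):
+
+   ```powershell
+   # Delete the existing appliance
+   az arcappliance delete vmware --config-file "<path to appliance yaml>"
+
+   # Recreate the configuration files with the corrected DNS server settings
+   az arcappliance createconfig vmware --resource-group "<resource group>" --name "<appliance name>" --location "<region>"
+
+   # Deploy Arc resource bridge using the new configuration files
+   az arcappliance deploy vmware --config-file "<path to appliance yaml>"
+   ```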
## Azure-Arc enabled VMs on Azure Stack HCI issues
azure-cache-for-redis Cache Redis Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-modules.md
Some popular modules are available for use in the Enterprise tier of Azure Cache
|RediSearch | No | Yes | Yes (preview) |
|RedisBloom | No | Yes | No |
|RedisTimeSeries | No | Yes | No |
-|RedisJSON | No | Yes (preview) | Yes (preview) |
+|RedisJSON | No | Yes | Yes |
Currently, `RediSearch` is the only module that can be used concurrently with active geo-replication.
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Last updated 10/2/2022
# What's New in Azure Cache for Redis
+## November 2022
+
+Support for using the RedisJSON module has now reached General Availability (GA).
+
+For more information, see [Use Redis modules with Azure Cache for Redis](cache-redis-modules.md).
+
## October 2022

### Enhancements for passive geo-replication
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
description: Learn to use the Azure SQL input binding in Azure Functions.
Previously updated : 5/24/2022 Last updated : 11/10/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
This section contains the following examples:
The examples refer to a `ToDoItem` class and a corresponding database table:

:::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7":::
Isolated worker process isn't currently supported.
::: zone-end
-> [!NOTE]
-> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md), [JavaScript functions](functions-reference-node.md), and [Python functions](functions-reference-python.md).
+
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-java).
+
+This section contains the following examples:
+
+* [HTTP trigger, get multiple rows](#http-trigger-get-multiple-items-java)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-java)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-java)
+
+The examples refer to a `ToDoItem` class (in a separate file `ToDoItem.java`) and a corresponding database table:
+
+```java
+package com.function;
+import java.util.UUID;
+
+public class ToDoItem {
+ public UUID Id;
+ public int order;
+ public String title;
+ public String url;
+ public boolean completed;
+
+ public ToDoItem() {
+ }
+
+ public ToDoItem(UUID Id, int order, String title, String url, boolean completed) {
+ this.Id = Id;
+ this.order = order;
+ this.title = title;
+ this.url = url;
+ this.completed = completed;
+ }
+}
+```
++
+<a id="http-trigger-get-multiple-items-java"></a>
+### HTTP trigger, get multiple rows
+
+The following example shows a SQL input binding in a Java function that reads from a query and returns the results in the HTTP response.
+
+```java
+package com.function;
+
+import com.microsoft.azure.functions.HttpMethod;
+import com.microsoft.azure.functions.HttpRequestMessage;
+import com.microsoft.azure.functions.HttpResponseMessage;
+import com.microsoft.azure.functions.HttpStatus;
+import com.microsoft.azure.functions.annotation.AuthorizationLevel;
+import com.microsoft.azure.functions.annotation.FunctionName;
+import com.microsoft.azure.functions.annotation.HttpTrigger;
+import com.microsoft.azure.functions.sql.annotation.SQLInput;
+
+import java.util.Optional;
+
+public class GetToDoItems {
+ @FunctionName("GetToDoItems")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> request,
+ @SQLInput(
+ commandText = "SELECT * FROM dbo.ToDo",
+ commandType = "Text",
+ connectionStringSetting = "SqlConnectionString")
+ ToDoItem[] toDoItems) {
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "application/json").body(toDoItems).build();
+ }
+}
+```
+
+<a id="http-trigger-look-up-id-from-query-string-java"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a SQL input binding in a Java function that reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+
+```java
+public class GetToDoItem {
+ @FunctionName("GetToDoItem")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> request,
+ @SQLInput(
+ commandText = "SELECT * FROM dbo.ToDo",
+ commandType = "Text",
+ parameters = "@Id={Query.id}",
+ connectionStringSetting = "SqlConnectionString")
+ ToDoItem[] toDoItems) {
+ ToDoItem toDoItem = toDoItems[0];
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "application/json").body(toDoItem).build();
+ }
+}
+```
+
+<a id="http-trigger-delete-one-or-multiple-rows-java"></a>
+### HTTP trigger, delete rows
+
+The following example shows a SQL input binding in a Java function that executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `dbo.DeleteToDo` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+++
+```java
+public class DeleteToDo {
+ @FunctionName("DeleteToDo")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> request,
+ @SQLInput(
+ commandText = "dbo.DeleteToDo",
+ commandType = "StoredProcedure",
+ parameters = "@Id={Query.id}",
+ connectionStringSetting = "SqlConnectionString")
+ ToDoItem[] toDoItems) {
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "application/json").body(toDoItems).build();
+ }
+}
+
+```
::: zone-end
module.exports = async function (context, req, todoItems) {
} ``` +++
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-powershell).
+
+This section contains the following examples:
+
+* [HTTP trigger, get multiple rows](#http-trigger-get-multiple-items-powershell)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-powershell)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-powershell)
+
+The examples refer to a database table:
++
+<a id="http-trigger-get-multiple-items-powershell"></a>
+### HTTP trigger, get multiple rows
+
+The following example shows a SQL input binding in a function.json file and a PowerShell function that reads from a query and returns the results in the HTTP response.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo",
+ "commandType": "Text",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
+
+```powershell
+using namespace System.Net
+
+param($Request, $todoItems)
+
+Write-Host "PowerShell function with SQL Input Binding processed a request."
+
+Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+ StatusCode = [System.Net.HttpStatusCode]::OK
+ Body = $todoItems
+})
+```
+
+<a id="http-trigger-look-up-id-from-query-string-powershell"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a SQL input binding in a PowerShell function that reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItem",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ "commandType": "Text",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
++
+```powershell
+using namespace System.Net
+
+param($Request, $todoItem)
+
+Write-Host "PowerShell function with SQL Input Binding processed a request."
+
+Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+ StatusCode = [System.Net.HttpStatusCode]::OK
+ Body = $todoItem
+})
+```
+
+<a id="http-trigger-delete-one-or-multiple-rows-powershell"></a>
+### HTTP trigger, delete rows
+
+The following example shows a SQL input binding in a function.json file and a PowerShell function that executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `dbo.DeleteToDo` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+++
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "DeleteToDo",
+ "commandType": "StoredProcedure",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
++
+```powershell
+using namespace System.Net
+
+param($Request, $todoItems)
+
+Write-Host "PowerShell function with SQL Input Binding processed a request."
+
+Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+ StatusCode = [System.Net.HttpStatusCode]::OK
+ Body = $todoItems
+})
+```
+ ::: zone pivot="programming-language-python"
def main(req: func.HttpRequest, todoItems: func.SqlRowList) -> func.HttpResponse
::: zone-end
-<!### Use these pivots when we get other non-C# languages added. ###
-
--
->
::: zone pivot="programming-language-csharp"
## Attributes
-In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute, which has the following properties:
+The [C# library](functions-dotnet-class-library.md) uses the [SqlAttribute](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute to declare the SQL bindings on the function. The attribute has the following properties:
| Attribute property |Description|
|||
In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https:
| **Parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
::: zone-end
-<!### Use these pivots when we get other non-C# languages added. ###
+
::: zone pivot="programming-language-java"
## Annotations
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@Sql` annotation on parameters whose value would come from Azure SQL. This annotation supports the following elements:
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@SQLInput` annotation (`com.microsoft.azure.functions.sql.annotation.SQLInput`) on parameters whose value would come from Azure SQL. This annotation supports the following elements:
| Element |Description|
|||
| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
-| **connectionStringSetting** | The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. |
-| **commandType** | A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
-| **parameters** | Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. |
+| **commandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is ["Text"](/dotnet/api/system.data.commandtype#fields) for a query and ["StoredProcedure"](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
::: zone-end
>
+
## Configuration

The following table explains the binding configuration properties that you set in the function.json file.
The following table explains the binding configuration properties that you set i
## Usage
-The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
Queries executed by the input binding are [parameterized](/dotnet/api/microsoft.data.sqlclient.sqlparameter) in Microsoft.Data.SqlClient to reduce the risk of [SQL injection](/sql/relational-databases/security/sql-injection) from the parameter values passed into the binding.
Queries executed by the input binding are [parameterized](/dotnet/api/microsoft.
## Next steps - [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
+- [Run a function when data is changed in a SQL table (Trigger)](./functions-bindings-azure-sql-trigger.md)
- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
description: Learn to use the Azure SQL output binding in Azure Functions.
Previously updated : 5/24/2022 Last updated : 11/10/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
This section contains the following examples:
The examples refer to a `ToDoItem` class and a corresponding database table:

:::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7":::
Isolated worker process isn't currently supported.
::: zone-end
-> [!NOTE]
-> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md), [JavaScript functions](functions-reference-node.md), and [Python functions](functions-reference-python.md).
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-java).
+
+This section contains the following examples:
+
+* [HTTP trigger, write a record to a table](#http-trigger-write-record-to-table-java)
+<!-- * [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-java) -->
+
+The examples refer to a `ToDoItem` class (in a separate file `ToDoItem.java`) and a corresponding database table:
+
+```java
+package com.function;
+import java.util.UUID;
+
+public class ToDoItem {
+ public UUID Id;
+ public int order;
+ public String title;
+ public String url;
+ public boolean completed;
+
+ public ToDoItem() {
+ }
+
+ public ToDoItem(UUID Id, int order, String title, String url, boolean completed) {
+ this.Id = Id;
+ this.order = order;
+ this.title = title;
+ this.url = url;
+ this.completed = completed;
+ }
+}
+```
++
+<a id="http-trigger-write-record-to-table-java"></a>
+### HTTP trigger, write a record to a table
+
+The following example shows a SQL output binding in a Java function that adds a record to a table, using data provided in an HTTP POST request as a JSON body. The function takes an additional dependency on the [com.fasterxml.jackson.core](https://github.com/FasterXML/jackson) library to parse the JSON body.
+
+```xml
+<dependency>
+ <groupId>com.fasterxml.jackson.core</groupId>
+ <artifactId>jackson-databind</artifactId>
+ <version>2.13.4.1</version>
+</dependency>
+```
+
+```java
+package com.function;
+
+import java.util.*;
+import com.microsoft.azure.functions.annotation.*;
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.sql.annotation.SQLOutput;
+import com.fasterxml.jackson.core.JsonParseException;
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.JsonMappingException;
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+import java.util.Optional;
+
+public class PostToDo {
+ @FunctionName("PostToDo")
+ public HttpResponseMessage run(
+ @HttpTrigger(name = "req", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
+ @SQLOutput(
+ commandText = "dbo.ToDo",
+ connectionStringSetting = "SqlConnectionString")
+ OutputBinding<ToDoItem> output) throws JsonParseException, JsonMappingException, JsonProcessingException {
+ String json = request.getBody().get();
+ ObjectMapper mapper = new ObjectMapper();
+ ToDoItem newToDo = mapper.readValue(json, ToDoItem.class);
+
+ newToDo.Id = UUID.randomUUID();
+ output.setValue(newToDo);
+
+ return request.createResponseBuilder(HttpStatus.CREATED).header("Content-Type", "application/json").body(output).build();
+ }
+}
+```
+
+<!-- commented out until issue with java library resolved
+
+<a id="http-trigger-write-to-two-tables-java"></a>
+### HTTP trigger, write to two tables
+
+The following example shows a SQL output binding in a JavaS function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings. The function takes an additional dependency on the [com.fasterxml.jackson.core](https://github.com/FasterXML/jackson) library to parse the JSON body.
+
+```xml
+<dependency>
+ <groupId>com.fasterxml.jackson.core</groupId>
+ <artifactId>jackson-databind</artifactId>
+ <version>2.13.4.1</version>
+</dependency>
+```
+
+The second table, `dbo.RequestLog`, corresponds to the following definition:
+
+```sql
+CREATE TABLE dbo.RequestLog (
+ Id int identity(1,1) primary key,
+ RequestTimeStamp datetime2 not null,
+ ItemCount int not null
+)
+```
+
+and Java class in `RequestLog.java`:
+
+```java
+package com.function;
+
+import java.util.Date;
+
+public class RequestLog {
+ public int Id;
+ public Date RequestTimeStamp;
+ public int ItemCount;
+
+ public RequestLog() {
+ }
+
+ public RequestLog(int Id, Date RequestTimeStamp, int ItemCount) {
+ this.Id = Id;
+ this.RequestTimeStamp = RequestTimeStamp;
+ this.ItemCount = ItemCount;
+ }
+}
+```
+
+```java
+module.exports = async function (context, req) {
+ context.log('JavaScript HTTP trigger and SQL output binding function processed a request.');
+ context.log(req.body);
+
+ const newLog = {
+ RequestTimeStamp = Date.now(),
+ ItemCount = 1
+ }
+
+ if (req.body) {
+ context.bindings.todoItems = req.body;
+ context.bindings.requestLog = newLog;
+ context.res = {
+ body: req.body,
+ mimetype: "application/json",
+ status: 201
+ }
+ } else {
+ context.res = {
+ status: 400,
+ body: "Error reading request body"
+ }
+ }
+}
+``` -->
++ ::: zone pivot="programming-language-javascript"
The examples refer to a database table:
<a id="http-trigger-write-records-to-table-javascript"></a> ### HTTP trigger, write records to a table
-The following example shows a SQL input binding in a function.json file and a JavaScript function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+The following example shows a SQL output binding in a function.json file and a JavaScript function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
The following is binding data in the function.json file:
module.exports = async function (context, req) {
<a id="http-trigger-write-to-two-tables-javascript"></a> ### HTTP trigger, write to two tables
-The following example shows a SQL input binding in a function.json file and a JavaScript function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+The following example shows a SQL output binding in a function.json file and a JavaScript function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
The second table, `dbo.RequestLog`, corresponds to the following definition:
module.exports = async function (context, req) {
::: zone-end +++
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-powershell).
+
+This section contains the following examples:
+
+* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-powershell)
+* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-powershell)
+
+The examples refer to a database table:
+++
+<a id="http-trigger-write-records-to-table-powershell"></a>
+### HTTP trigger, write records to a table
+
+The following example shows a SQL output binding in a function.json file and a PowerShell function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
+
+```powershell
+using namespace System.Net
+
+param($Request)
+
+Write-Host "PowerShell function with SQL Output Binding processed a request."
+
+# Update req_body with the body of the request
+$req_body = $Request.Body
+
+# Assign the value we want to pass to the SQL Output binding.
+# The -Name value corresponds to the name property in the function.json for the binding
+Push-OutputBinding -Name todoItems -Value $req_body
+
+Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+ StatusCode = [HttpStatusCode]::OK
+ Body = $req_body
+})
+```
+
+<a id="http-trigger-write-to-two-tables-powershell"></a>
+### HTTP trigger, write to two tables
+
+The following example shows a SQL output binding in a function.json file and a PowerShell function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+
+The second table, `dbo.RequestLog`, corresponds to the following definition:
+
+```sql
+CREATE TABLE dbo.RequestLog (
+ Id int identity(1,1) primary key,
+ RequestTimeStamp datetime2 not null,
+ ItemCount int not null
+)
+```
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+},
+{
+ "name": "requestLog",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.RequestLog",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
+
+```powershell
+using namespace System.Net
+
+param($Request)
+
+Write-Host "PowerShell function with SQL Output Binding processed a request."
+
+# Update req_body with the body of the request
+$req_body = $Request.Body
+$new_log = @{
+ RequestTimeStamp = [DateTime]::Now
+ ItemCount = 1
+}
+
+Push-OutputBinding -Name todoItems -Value $req_body
+Push-OutputBinding -Name requestLog -Value $new_log
+
+Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+ StatusCode = [HttpStatusCode]::OK
+ Body = $req_body
+})
+```
++++
::: zone pivot="programming-language-python"
More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-python).
The examples refer to a database table:
<a id="http-trigger-write-records-to-table-python"></a> ### HTTP trigger, write records to a table
-The following example shows a SQL input binding in a function.json file and a Python function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+The following example shows a SQL output binding in a function.json file and a Python function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
The following is binding data in the function.json file:
def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow]) -> func.HttpRe
<a id="http-trigger-write-to-two-tables-python"></a> ### HTTP trigger, write to two tables
-The following example shows a SQL input binding in a function.json file and a Python function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+The following example shows a SQL output binding in a function.json file and a Python function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
The second table, `dbo.RequestLog`, corresponds to the following definition:
def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow], requestLog: fu
::: zone-end
-<!### Use these pivots when we get other non-C# languages added. ###
-
--
->
::: zone pivot="programming-language-csharp"
## Attributes
-In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute, which has the following properties:
+The [C# library](functions-dotnet-class-library.md) uses the [SqlAttribute](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute to declare the SQL bindings on the function. The attribute has the following properties:
| Attribute property |Description|
|||
In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https:
::: zone-end
-<!### Use these pivots when we get other non-C# languages added. ###
+
::: zone pivot="programming-language-java"
## Annotations
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@Sql` annotation on parameters whose value would come from Azure SQL. This annotation supports the following elements:
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@SQLOutput` annotation (`com.microsoft.azure.functions.sql.annotation.SQLOutput`) on parameters whose value would come from Azure SQL. This annotation supports the following elements:
| Element |Description|
|||
-| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
-| **connectionStringSetting** | The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
+| **commandText** | Required. The name of the table being written to by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
::: zone-end
>
+
## Configuration

The following table explains the binding configuration properties that you set in the *function.json* file.
The following table explains the binding configuration properties that you set i
## Usage
-The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
-The output bindings uses the T-SQL [MERGE](/sql/t-sql/statements/merge-transact-sql) statement which requires [SELECT](/sql/t-sql/statements/merge-transact-sql#permissions) permissions on the target database.
+The output bindings use the T-SQL [MERGE](/sql/t-sql/statements/merge-transact-sql) statement which requires [SELECT](/sql/t-sql/statements/merge-transact-sql#permissions) permissions on the target database.
::: zone-end ## Next steps - [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md)
+- [Run a function when data is changed in a SQL table (Trigger)](./functions-bindings-azure-sql-trigger.md)
- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
+
+ Title: Azure SQL trigger for Functions
+description: Learn to use the Azure SQL trigger in Azure Functions.
++ Last updated : 11/10/2022++
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++
+# Azure SQL trigger for Functions (preview)
+
+The Azure SQL trigger uses [SQL change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server) functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted.
+
+For configuration details for change tracking for use with the Azure SQL trigger, see [Set up change tracking](#set-up-change-tracking-required). For information on setup details of the Azure SQL extension for Azure Functions, see the [SQL binding overview](./functions-bindings-azure-sql.md).
+
+## Example usage
+<a id="example"></a>
++
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
++
+The example refers to a `ToDoItem` class and a corresponding database table:
+++
+[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table:
+
+```sql
+ALTER DATABASE [SampleDatabase]
+SET CHANGE_TRACKING = ON
+(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+
+ALTER TABLE [dbo].[ToDo]
+ENABLE CHANGE_TRACKING;
+```
+
+The SQL trigger binds to an `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects, each with two properties:
+- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class.
+- **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`.
+
+# [In-process](#tab/in-process)
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are changes to the `ToDo` table:
+
+```cs
+using System.Collections.Generic;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Extensions.Logging;
+using Microsoft.Azure.WebJobs.Extensions.Sql;
+
+namespace AzureSQL.ToDo
+{
+ public static class ToDoTrigger
+ {
+ [FunctionName("ToDoTrigger")]
+ public static void Run(
+ [SqlTrigger("[dbo].[ToDo]", ConnectionStringSetting = "SqlConnectionString")]
+ IReadOnlyList<SqlChange<ToDoItem>> changes,
+ ILogger logger)
+ {
+ foreach (SqlChange<ToDoItem> change in changes)
+ {
+ ToDoItem toDoItem = change.Item;
+ logger.LogInformation($"Change operation: {change.Operation}");
+ logger.LogInformation($"Id: {toDoItem.Id}, Title: {toDoItem.title}, Url: {toDoItem.url}, Completed: {toDoItem.completed}");
+ }
+ }
+ }
+}
+```
+
+# [Isolated process](#tab/isolated-process)
+
+Isolated worker process isn't currently supported.
+
+<!-- Uncomment to support C# script examples.
+# [C# Script](#tab/csharp-script)
+
+-->
+++++
+> [!NOTE]
+> In the current preview, Azure SQL triggers are only supported by [C# class library functions](functions-dotnet-class-library.md).
+++
+## Attributes
+
+The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function, which has the following properties:
+
+| Attribute property |Description|
+|||
+| **TableName** | Required. The name of the table being monitored by the trigger. |
+| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database which contains the table being monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
+++
+## Configuration
+
+<!-- ### for another day ###
++
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description|
+++
+In addition to the required ConnectionStringSetting [application setting](./functions-how-to-use-azure-function-app-settings.md#settings), the following optional settings can be configured for the SQL trigger:
+
+| App Setting | Description|
+|||
+|**Sql_Trigger_BatchSize** |This controls the number of changes processed at once before being sent to the triggered function. The default value is 100.|
+|**Sql_Trigger_PollingIntervalMs**|This controls the delay in milliseconds between processing each batch of changes. The default value is 1000 (1 second).|
+|**Sql_Trigger_MaxChangesPerWorker**|This controls the upper limit on the number of pending changes in the user table that are allowed per application-worker. If the count of changes exceeds this limit, it may result in a scale out. The setting only applies for Azure Function Apps with [runtime driven scaling enabled](#enable-runtime-driven-scaling). The default value is 1000.|
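+
+For example, these optional settings could be applied to a function app with the Azure CLI (a sketch; the app and resource group names are placeholders):
+
+```powershell
+az functionapp config appsettings set `
+    --name "<function app name>" `
+    --resource-group "<resource group>" `
+    --settings "Sql_Trigger_BatchSize=100" "Sql_Trigger_PollingIntervalMs=1000"
+```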
+++
+## Set up change tracking (required)
+
+Setting up change tracking for use with the Azure SQL trigger requires two steps. These steps can be completed from any SQL tool that supports running queries, including [VS Code](/sql/tools/visual-studio-code/mssql-extensions), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
+
+1. Enable change tracking on the SQL database, substituting `your database name` with the name of the database where the table to be monitored is located:
+
+ ```sql
+ ALTER DATABASE [your database name]
+ SET CHANGE_TRACKING = ON
+ (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+ ```
+
+   The `CHANGE_RETENTION` option specifies the time period for which change tracking information (change history) is kept. The retention of change history by the SQL database may affect the trigger functionality. For example, if the Azure Function is turned off for several days and then resumed, it will only be able to catch the changes that occurred in the past two days with the above query.
+
+   The `AUTO_CLEANUP` option is used to enable or disable the clean-up task that removes old change tracking information. If a temporary problem prevents the trigger from running, turning off auto cleanup can be useful to pause the removal of information older than the retention period until the problem is resolved.
+
+ More information on change tracking options is available in the [SQL documentation](/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-server).
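+
+   To verify the database-level settings afterward, a query sketch like the following can be run from PowerShell using the `SqlServer` module (an assumption; any SQL query tool works equally well):
+
+   ```powershell
+   # Requires the SqlServer PowerShell module: Install-Module SqlServer
+   Invoke-Sqlcmd -ConnectionString $env:SqlConnectionString -Query "SELECT DB_NAME(database_id) AS [database], retention_period, retention_period_units_desc, is_auto_cleanup_on FROM sys.change_tracking_databases;"
+   ```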
+
+2. Enable change tracking on the table, substituting `your table name` with the name of the table to be monitored (changing the schema if appropriate):
+
+ ```sql
+ ALTER TABLE [dbo].[your table name]
+ ENABLE CHANGE_TRACKING;
+ ```
+
+   The trigger needs to have read access on the table being monitored for changes and to the change tracking system tables. Each function trigger has an associated change tracking table and leases table in the `az_func` schema, which the trigger creates if they don't yet exist. More information on these data structures is available in the Azure SQL binding library [documentation](https://github.com/Azure/azure-functions-sql-extension/blob/triggerbindings/README.md#internal-state-tables).
++
+## Enable runtime-driven scaling
+
+Optionally, your functions can scale automatically based on the number of changes that are pending processing in the user table. To allow your functions to scale properly on the Premium plan when using SQL triggers, you need to enable runtime scale monitoring.
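+
+For example, runtime scale monitoring can be enabled with the Azure CLI by updating the site configuration (substitute your resource group and function app name):
+
+```powershell
+az resource update -g "<RESOURCE_GROUP>" -n "<FUNCTION_APP_NAME>/config/web" `
+    --set properties.functionsRuntimeScaleMonitoringEnabled=1 `
+    --resource-type Microsoft.Web/sites
+```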
+++
+## Next steps
+
+- [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md)
+- [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
description: Understand how to use Azure SQL bindings in Azure Functions.
Previously updated : 6/3/2022 Last updated : 11/10/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL bindings for Azure Functions overview (preview)
-This set of articles explains how to work with [Azure SQL](/azure/azure-sql/index) bindings in Azure Functions. Azure Functions supports input and output bindings for the Azure SQL and SQL Server products.
+This set of articles explains how to work with [Azure SQL](/azure/azure-sql/index) bindings in Azure Functions. Azure Functions supports input bindings, output bindings, and a function trigger for the Azure SQL and SQL Server products.
| Action | Type |
|||
+| Trigger a function when a change is detected on a SQL table | [SQL trigger](./functions-bindings-azure-sql-trigger.md) |
| Read data from a database | [Input binding](./functions-bindings-azure-sql-input.md) |
| Save data to a database |[Output binding](./functions-bindings-azure-sql-output.md) |
You can install this version of the extension in your function app by registerin
::: zone-end

## Install bundle

The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
-# [Preview Bundle v3.x](#tab/extensionv3)
-
-You can add the preview extension bundle by adding or replacing the following code in your `host.json` file:
-
-```json
-{
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
- "version": "[3.*, 4.0.0)"
- }
-}
-```
# [Preview Bundle v4.x](#tab/extensionv4)
You can add the preview extension bundle by adding or replacing the following co
} ```
+# [Preview Bundle v3.x](#tab/extensionv3)
+
+Azure SQL bindings for Azure Functions aren't available for the v3 version of the functions runtime.
+ ::: zone-end
You can add the preview extension bundle by adding or replacing the following co
# [Preview Bundle v3.x](#tab/extensionv3)
-Python support isn't available with the SQL bindings extension in the v3 version of the functions runtime.
+Azure SQL bindings for Azure Functions aren't available for the v3 version of the functions runtime.
Support for Python durable functions with SQL bindings isn't yet available.
::: zone-end
-> [!NOTE]
-> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md), [JavaScript functions](functions-reference-node.md), and [Python functions](functions-reference-python.md).
+
+## Install bundle
+
+The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
+
+# [Preview Bundle v4.x](#tab/extensionv4)
+
+You can add the preview extension bundle by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+
+# [Preview Bundle v3.x](#tab/extensionv3)
+
+Azure SQL bindings for Azure Functions aren't available for the v3 version of the functions runtime.
+++
+## Update packages
+
+Add the Java library for SQL bindings to your functions project with an update to the `pom.xml` file in your Java Azure Functions project, as seen in the following snippet:
+
+```xml
+<dependency>
+ <groupId>com.microsoft.azure.functions</groupId>
+ <artifactId>azure-functions-java-library-sql</artifactId>
+ <version>0.1.0</version>
+</dependency>
+```
::: zone-end ## SQL connection string
-Azure SQL bindings for Azure Functions have a required property for connection string on both [input](./functions-bindings-azure-sql-input.md) and [output](./functions-bindings-azure-sql-output.md) bindings. SQL bindings passes the connection string to the Microsoft.Data.SqlClient library and supports the connection string as defined in the [SqlClient ConnectionString documentation](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true). Notable keywords include:
+Azure SQL bindings for Azure Functions have a required property for connection string on both [input](./functions-bindings-azure-sql-input.md) and [output](./functions-bindings-azure-sql-output.md) bindings. SQL bindings pass the connection string to the Microsoft.Data.SqlClient library and support the connection string as defined in the [SqlClient ConnectionString documentation](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString). Notable keywords include:
- `Authentication` allows a function to connect to Azure SQL with Azure Active Directory, including [Active Directory Managed Identity](./functions-identity-access-azure-sql-with-managed-identity.md)
- `Command Timeout` allows a function to wait for a specified amount of time in seconds before terminating a query (default 30 seconds)
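+
+A minimal sketch of a connection string that uses these keywords, stored in the binding's connection string app setting (the server and database names are placeholders):
+
+```powershell
+# Illustrative only: set the app setting locally for testing
+$env:SqlConnectionString = "Server=tcp:<server>.database.windows.net,1433;Database=<database>;Authentication=Active Directory Managed Identity;Command Timeout=60;"
+```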
Azure SQL bindings for Azure Functions have a required property for connection s
## Considerations

-- Because the Azure SQL bindings doesn't have a trigger, you need to use another supported trigger to start a function that reads from or writes to an Azure SQL database.
-- Azure SQL binding supports version 2.x and later of the Functions runtime.
+- Azure SQL binding supports version 4.x and later of the Functions runtime.
- Source code for the Azure SQL bindings can be found in [this GitHub repository](https://github.com/Azure/azure-functions-sql-extension). - This binding requires connectivity to an Azure SQL or SQL Server database. - Output bindings against tables with columns of data types `NTEXT`, `TEXT`, or `IMAGE` aren't supported and data upserts will fail. These types [will be removed](/sql/t-sql/data-types/ntext-text-and-image-transact-sql) in a future version of SQL Server and aren't compatible with the `OPENJSON` function used by this Azure Functions binding.
Azure SQL bindings for Azure Functions have a required property for connection s
- [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md) - [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
+- [Run a function when data is changed in a SQL table (Trigger)](./functions-bindings-azure-sql-trigger.md)
- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/) - [Learn how to connect Azure Function to Azure SQL with managed identity](./functions-identity-access-azure-sql-with-managed-identity.md) - [Use SQL bindings in Azure Stream Analytics](../stream-analytics/sql-database-upsert.md#option-1-update-by-key-with-the-azure-function-sql-binding)
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
Handling errors in Azure Functions is important to avoid lost data, missed event
This article describes general strategies for error handling and the available retry strategies. > [!IMPORTANT]
-> The retry policy support in the runtime for triggers other than Timer, Kafka, and Event Hubs is being removed after this feature becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs will be removed in October 2022. For more information, see the [Retries section below](#retries).
+> The retry policy support in the runtime for triggers other than Timer, Kafka, and Event Hubs is being removed after this feature becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs will be removed in December 2022. For more information, see the [Retries section below](#retries).
## Handling errors
azure-functions Functions Bindings Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md
The Kafka extension is part of an [extension bundle], which is specified in your
To allow your functions to scale properly on the Premium plan when using Kafka triggers and bindings, you need to enable runtime scale monitoring.
-# [Azure portal](#tab/portal)
-In the Azure portal, in your function app, choose **Configuration** and on the **Function runtime settings** tab turn **Runtime scale monitoring** to **On**.
--
-# [Azure CLI](#tab/azure-cli)
-
-Use the following Azure CLI command to enable runtime scale monitoring:
-
-```azurecli-interactive
-az resource update -g <RESOURCE_GROUP> -n <FUNCTION_APP_NAME>/config/web --set properties.functionsRuntimeScaleMonitoringEnabled=1 --resource-type Microsoft.Web/sites
-```
-- ## host.json settings
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
To learn more about Azure Functions runtime support policy, please refer to this
### Running local on a specific version
-When running locally the Azure Functions runtime defaults to using PowerShell Core 6. To instead use PowerShell 7 when running locally, you need to add the setting `"FUNCTIONS_WORKER_RUNTIME_VERSION" : "~7"` to the `Values` array in the local.setting.json file in the project root. When running locally on PowerShell 7, your local.settings.json file looks like the following example:
+Support for PowerShell 7.0 in Azure Functions is ending on 3 December 2022. To use PowerShell 7.2 when running locally, you need to add the setting `"FUNCTIONS_WORKER_RUNTIME_VERSION" : "7.2"` to the `Values` array in the local.settings.json file in the project root. When running locally on PowerShell 7.2, your local.settings.json file looks like the following example:
```json {
When running locally the Azure Functions runtime defaults to using PowerShell Co
"Values": { "AzureWebJobsStorage": "", "FUNCTIONS_WORKER_RUNTIME": "powershell",
- "FUNCTIONS_WORKER_RUNTIME_VERSION" : "~7"
+ "FUNCTIONS_WORKER_RUNTIME_VERSION" : "7.2"
} } ``` ### Changing the PowerShell version
-Your function app must be running on version 3.x to be able to upgrade from PowerShell Core 6 to PowerShell 7. To learn how to do this, see [View and update the current runtime version](set-runtime-version.md#view-and-update-the-current-runtime-version).
+Support for PowerShell 7.0 in Azure Functions is ending on 3 December 2022. Your function app must be running on version 4.x to be able to upgrade to PowerShell 7.2. To learn how to do this, see [View and update the current runtime version](set-runtime-version.md#view-and-update-the-current-runtime-version).
Use the following steps to change the PowerShell version used by your function app. You can do this either in the Azure portal or by using PowerShell.
Use the following steps to change the PowerShell version used by your function a
1. In the [Azure portal](https://portal.azure.com), browse to your function app.
1. Under **Settings**, choose **Configuration**. In the **General settings** tab, locate the **PowerShell version**.
-
- :::image type="content" source="media/functions-reference-powershell/change-powershell-version-portal.png" alt-text="Choose the PowerShell version used by the function app":::
-
+
+ ![Choose the PowerShell version used by the function app](https://user-images.githubusercontent.com/108835427/199586564-25600629-44c7-439c-91f9-a500ad2989c4.png)
+
1. Choose your desired **PowerShell Core version** and select **Save**. When warned about the pending restart choose **Continue**. The function app restarts on the chosen PowerShell version. # [PowerShell](#tab/powershell)
Set-AzResource -ResourceId "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RES
```
-Replace `<SUBSCRIPTION_ID>`, `<RESOURCE_GROUP>`, and `<FUNCTION_APP>` with the ID of your Azure subscription, the name of your resource group and function app, respectively. Also, replace `<VERSION>` with either `~6` or `~7`. You can verify the updated value of the `powerShellVersion` setting in `Properties` of the returned hash table.
+Replace `<SUBSCRIPTION_ID>`, `<RESOURCE_GROUP>`, and `<FUNCTION_APP>` with the ID of your Azure subscription, the name of your resource group and function app, respectively. Also, replace `<VERSION>` with `7.2`. You can verify the updated value of the `powerShellVersion` setting in `Properties` of the returned hash table.
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
The following table indicates key .NET classes used by Functions that could chan
| | | | | | `FunctionName` (attribute) | `FunctionName` (attribute) | `Function` (attribute) | `Function` (attribute) | | `HttpRequest` | `HttpRequest` | `HttpRequestData` | `HttpRequestData` |
-| `OkObjectResult` | `OkObjectResult` | `HttpResonseData` | `HttpResonseData` |
+| `OkObjectResult` | `OkObjectResult` | `HttpResponseData` | `HttpResponseData` |
There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
The Azure FC allocates infrastructure resources to tenants and manages unidirect
CRP is the front-end service for Azure Compute, exposing consistent compute APIs through Azure Resource Manager, thereby enabling you to create and manage virtual machine resources and extensions via simple templates.
-Communications among various components (for example, Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permissions sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent more actions. Separate communications channels ensure that communications can't bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure cloud for Hypervisor interaction initiated by a user's [OAuth 2.0 authentication to Azure Active Directory](../active-directory/azuread-dev/v1-protocols-oauth-code.md).
+Communications among various components (for example, Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permissions sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent more actions. Separate communications channels ensure that communications can't bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure cloud for Hypervisor interaction initiated by a user's [OAuth 2.0 authentication to Azure Active Directory](../active-directory/develop/v2-oauth2-auth-code-flow.md).
:::image type="content" source="./media/secure-isolation-fig6.png" alt-text="Management Console and Management Plane interaction for secure management flow" border="false"::: **Figure 6.** Management Console and Management Plane interaction for secure management flow
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
The [Get Map Tile V2 API](/rest/api/maps/render-v2/get-map-tile) allows you to r
Maps Creator service is a suite of web services that developers can use to create applications with map features based on indoor map data.
-Maps Creator provides three core
+Maps Creator provides the following
* [Dataset service][Dataset service]. Use the Dataset service to create a dataset from converted drawing package data. For information about Drawing package requirements, see Drawing package requirements.
Maps Creator provides three core
* [WFS service][WFS]. Use the WFS service to query your indoor map data. The WFS service follows the [Open Geospatial Consortium API](http://docs.opengeospatial.org/is/17-069r3/17-069r3.html) standards for querying a single dataset.
-<!-* [Wayfinding service][wayfinding-preview] (preview). Use the [wayfinding API][wayfind] to generate a path between two points within a facility. Use the [routeset API][routeset] to create the data that the wayfinding service needs to generate paths.
->
+* [Wayfinding service][wayfinding-preview] (preview). Use the [wayfinding API][wayfind] to generate a path between two points within a facility. Use the [routeset API][routeset] to create the data that the wayfinding service needs to generate paths.
+ ### Elevation service The Azure Maps Elevation service is a web service that developers can use to retrieve elevation data from anywhere on the Earth's surface.
Stay up to date on Azure Maps:
[style editor]: https://azure.github.io/Azure-Maps-Style-Editor [FeatureState]: creator-indoor-maps.md#feature-statesets [WFS]: creator-indoor-maps.md#web-feature-service-api
-<!--[wayfinding-preview]: creator-indoor-maps.md# -->
+[wayfinding-preview]: creator-indoor-maps.md#wayfinding-preview
+[wayfind]: /rest/api/maps/v20220901preview/wayfinding
+[routeset]: /rest/api/maps/v20220901preview/routeset
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
Title: Facility Ontology in Microsoft Azure Maps Creator
description: Facility Ontology that describes the feature class definitions for Azure Maps Creator Previously updated : 03/02/2022 Last updated : 11/08/2022
zone_pivot_groups: facility-ontology-schema
Facility ontology defines how Azure Maps Creator internally stores facility data in a Creator dataset. In addition to defining internal facility data structure, facility ontology is also exposed externally through the WFS API. When WFS API is used to query facility data in a dataset, the response format is defined by the ontology supplied to that dataset.
-At a high level, facility ontology divides the dataset into feature classes. All feature classes share a common set of properties, such as `ID` and `Geometry`. In addition to the common property set, each feature class defines a set of properties. Each property is defined by its data type and constraints. Some feature classes have properties that are dependent on other feature classes. Dependant properties evaluate to the `ID` of another feature class.
- ## Changes and Revisions :::zone pivot="facility-ontology-v1"
Fixed the following constraint validation checks:
:::zone-end
+## Feature collection
++
+At a high level, the facility ontology consists of feature collections, each containing an array of feature objects. All feature objects have two fields in common, `ID` and `Geometry`. When you import a drawing package into Azure Maps Creator, these fields are generated automatically.
+++
+At a high level, the facility ontology consists of feature collections, each containing an array of feature objects. All feature objects have two fields in common, `ID` and `Geometry`.
+
+# [Drawing package](#tab/dwg)
+
+When you import a drawing package into Azure Maps Creator, these fields are generated automatically.
+
+# [GeoJSON package (preview)](#tab/geojson)
+
+Support for creating a [dataset][datasetv20220901] from a GeoJSON package is now available in preview in Azure Maps Creator.
+
+When you import a GeoJSON package, the `ID` and `Geometry` fields must be supplied for each [feature object][feature object] in each GeoJSON file in the package.
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`Geometry` | object | true | Each Geometry object consists of a `type` and `coordinates` array. While a required field, the value can be set to `null`. For more information, see [Geometry Object][GeometryObject] in the GeoJSON (RFC 7946) format specification. |
+|`ID` | string | true | The value of this field can be alphanumeric characters (0-9, a-z, A-Z), dots (.), hyphens (-) and underscores (_). Maximum length allowed is 1,000 characters.|
++
+For more information, see [Create a dataset using a GeoJSON package](how-to-dataset-geojson.md).
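
To make the two common fields concrete, here's a minimal illustrative feature object as it might appear in one of the GeoJSON files in the package. The `id` value and coordinates are placeholders, and in a standard GeoJSON (RFC 7946) file the fields appear in lowercase as `id` and `geometry`:

```json
{
  "type": "Feature",
  "id": "UNIT01",
  "geometry": {
    "type": "Point",
    "coordinates": [-122.1312, 47.6452]
  },
  "properties": {}
}
```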
++++
+In addition to these common fields, each feature class defines a set of properties. Each property is defined by its data type and constraints. Some feature classes have properties that are dependent on other feature classes. Dependent properties evaluate to the `ID` of another feature class.
+
+The remaining sections in this article define the different feature classes and their properties that make up the facility ontology in Microsoft Azure Maps Creator.
+ ## unit The `unit` feature class defines a physical and non-overlapping area that can be occupied and traversed by a navigating agent. A `unit` can be a hallway, a room, a courtyard, and so on.
The `unit` feature class defines a physical and non-overlapping area that can be
:::zone pivot="facility-ontology-v1"
-| Property | Type | Required | Description |
-|--|--|-|--|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
-|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is assumed to be traversable by any navigating agent. |
-|`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
-|`routeThroughBehavior` | enum ["disallowed", "allowed", "preferred"] | false | Determines if navigating through the unit is allowed. If unspecified, it inherits its value from the category feature referred to in the `categoryId` property. If specified, it overrides the value given in its category feature." |
-|`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
-|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
-|`addressRoomNumber` | [directoryInfo.Id](#directoryinfo) | true | Room/Unit/Apartment/Suite number of the unit.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
+|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is assumed to be traversable by any navigating agent. |
+|`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
+|`routeThroughBehavior` | enum ["disallowed", "allowed", "preferred"] | false | Determines if navigating through the unit is allowed. If unspecified, it inherits its value from the category feature referred to in the `categoryId` property. If specified, it overrides the value given in its category feature. |
+|`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id](#directoryinfo) | false | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+|`addressRoomNumber` | [directoryInfo.Id](#directoryinfo) | true | Room/Unit/Apartment/Suite number of the unit.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end :::zone pivot="facility-ontology-v2"
-| Property | Type | Required | Description |
-|--|--|-|--|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
-|`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
-|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
-|`addressRoomNumber` | [directoryInfo.Id](#directoryinfo) | true | Room/Unit/Apartment/Suite number of the unit.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID.<BR>When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined.<BR>Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
+|`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id](#directoryinfo) | false | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+|`addressRoomNumber` | string | false | Room/Unit/Apartment/Suite number of the unit. Maximum length allowed is 1,000 characters.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
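
As a sketch of how the properties in the preceding tables fit together, a hypothetical `unit` feature might look like the following. The `categoryId` and `levelId` values are placeholders that would reference real `category` and `level` features in the same dataset:

```json
{
  "type": "Feature",
  "id": "UNIT39",
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [-122.13150, 47.64500],
        [-122.13120, 47.64500],
        [-122.13120, 47.64520],
        [-122.13150, 47.64520],
        [-122.13150, 47.64500]
      ]
    ]
  },
  "properties": {
    "categoryId": "CTG8",
    "levelId": "LVL01",
    "name": "Conference Room 39",
    "isOpenArea": false,
    "isRoutable": true
  }
}
```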
The `unit` feature class defines a physical and non-overlapping area that can be
## structure
-The `structure` feature class defines a physical and non-overlapping area that cannot be navigated through. Can be a wall, column, and so on.
+The `structure` feature class defines a physical and non-overlapping area that can't be navigated through. It can be a wall, a column, and so on.
**Geometry Type**: Polygon
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end ## zone
-The `zone` feature class defines a virtual area, like a WiFi zone or emergency assembly area. Zones can be used as destinations but are not meant for through traffic.
+The `zone` feature class defines a virtual area, like a Wi-Fi zone or emergency assembly area. Zones can be used as destinations but aren't meant for through traffic.
**Geometry Type**: Polygon
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It is recommended that the `setId` is a GUID. Maximum length allowed is 1000.|
-| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It's recommended that the `setId` is a GUID. Maximum length allowed is 1,000 characters.|
+| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It's recommended that the `setId` is a GUID. Maximum length allowed is 1,000 characters.|
+| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
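
As an illustration of the `setId` mechanics, a hypothetical assembly area that spans two floors can be modeled as two `zone` features that share one `setId` but carry different geometry and `levelId` values. All IDs below are placeholders:

```json
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "id": "ZONE2.L00",
      "geometry": {
        "type": "Polygon",
        "coordinates": [
          [
            [-122.13150, 47.64500],
            [-122.13120, 47.64500],
            [-122.13120, 47.64520],
            [-122.13150, 47.64520],
            [-122.13150, 47.64500]
          ]
        ]
      },
      "properties": {
        "categoryId": "CTG4",
        "setId": "c0e9b0a2-61d8-4f5f-9a0e-6f3b1c2d4e5a",
        "levelId": "LVL00",
        "name": "Assembly Area A"
      }
    },
    {
      "type": "Feature",
      "id": "ZONE2.L01",
      "geometry": {
        "type": "Polygon",
        "coordinates": [
          [
            [-122.13150, 47.64500],
            [-122.13100, 47.64500],
            [-122.13100, 47.64530],
            [-122.13150, 47.64530],
            [-122.13150, 47.64500]
          ]
        ]
      },
      "properties": {
        "categoryId": "CTG4",
        "setId": "c0e9b0a2-61d8-4f5f-9a0e-6f3b1c2d4e5a",
        "levelId": "LVL01",
        "name": "Assembly Area A"
      }
    }
  ]
}
```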
+ ## level
The `level` class feature defines an area of a building at a set elevation. For
**Geometry Type**: MultiPolygon
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
-| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1000.|
-| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. |
-| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.|
+| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
+| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1,000 characters.|
+| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. |
+| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.|
+| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
+| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1,000 characters.|
+| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. |
+| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
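
As a compact sketch of the `ordinal` convention (0 for the ground floor, +1 per floor going up, -1 per floor going down), the properties of three hypothetical levels of one facility might look like this, with geometry and optional fields omitted for brevity:

```json
[
  { "facilityId": "FCL1", "ordinal": -1, "abbreviatedName": "B1", "name": "Basement" },
  { "facilityId": "FCL1", "ordinal": 0,  "abbreviatedName": "G",  "name": "Ground floor" },
  { "facilityId": "FCL1", "ordinal": 1,  "abbreviatedName": "L1", "name": "Level one" }
]
```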
+ ## facility
The `facility` feature class defines the area of the site, building footprint, a
**Geometry Type**: MultiPolygon
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
-|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
-|`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.|
-|`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.|
+|`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.|
+|`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.|
+ ## verticalPenetration
The `verticalPenetration` class feature defines an area that, when used in a set
:::zone pivot="facility-ontology-v1"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are considered to be the same. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1000.|
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.|
-|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. |
-|`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are considered to be the same. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1,000 characters.|
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.|
+|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. |
+|`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end :::zone pivot="facility-ontology-v2"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are connected. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1000. |
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are connected. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1,000 characters. |
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `opening` class feature defines a traversable boundary between two units, or
:::zone pivot="facility-ontology-v1"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
| `isConnectedToVerticalPenetration` | boolean | false | Whether or not this feature is connected to a `verticalPenetration` feature on one of its sides. Default value is `false`. |
-|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. |
+|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. |
| `accessRightToLeft`| enum [ "prohibited", "digitalKey", "physicalKey", "keyPad", "guard", "ticket", "fingerprint", "retina", "voice", "face", "palm", "iris", "signature", "handGeometry", "time", "ticketChecker", "other"] | false | Method of access when passing through the opening from right to left. Left and right are determined by the vertices in the feature geometry, standing at the first vertex and facing the second vertex. Omitting this property means there are no access restrictions.| | `accessLeftToRight`| enum [ "prohibited", "digitalKey", "physicalKey", "keyPad", "guard", "ticket", "fingerprint", "retina", "voice", "face", "palm", "iris", "signature", "handGeometry", "time", "ticketChecker", "other"] | false | Method of access when passing through the opening from left to right. Left and right are determined by the vertices in the feature geometry, standing at the first vertex and facing the second vertex. Omitting this property means there are no access restrictions.| | `isEmergency` | boolean | false | If `true`, the opening is navigable only during emergencies. Default value is `false` |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) y that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
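
To make the left/right convention concrete: stand at the first vertex of the opening's geometry and face the second vertex; "left" and "right" are defined from that viewpoint. The following sketch describes a hypothetical opening that requires a digital key only when crossed right-to-left; the IDs are placeholders, and a two-vertex LineString geometry is assumed here:

```json
{
  "type": "Feature",
  "id": "OPN5",
  "geometry": {
    "type": "LineString",
    "coordinates": [
      [-122.13140, 47.64500],
      [-122.13140, 47.64502]
    ]
  },
  "properties": {
    "categoryId": "CTG2",
    "levelId": "LVL01",
    "accessRightToLeft": "digitalKey"
  }
}
```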
:::zone-end :::zone pivot="facility-ontology-v2"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) y that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`anchorPoint` |[Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `directoryInfo` object class feature defines the name, address, phone number
**Geometry Type**: None
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1000. |
-|`unit` |string |false |Unit number part of the address. Maximum length allowed is 1000. |
-|`locality`| string| false |The locality of the address. For example: city, municipality, village. Maximum length allowed is 1000.|
-|`adminDivisions`| string| false |Administrative division part of the address, from smallest to largest (County, State, Country). For example: ["King", "Washington", "USA" ] or ["West Godavari", "Andhra Pradesh", "IND" ]. Maximum length allowed is 1000.|
-|`postalCode`| string | false |Postal code part of the address. Maximum length allowed is 1000.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`phoneNumber` | string | false | Phone number. Maximum length allowed is 1000. |
-|`website` | string | false | Website URL. Maximum length allowed is 1000. |
-|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification). Maximum length allowed is 1000. |
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1,000 characters. |
+|`unit` |string |false |Unit number part of the address. Maximum length allowed is 1,000 characters. |
+|`locality`| string| false |The locality of the address. For example: city, municipality, village. Maximum length allowed is 1,000 characters.|
+|`adminDivisions`| array of strings | false |Administrative division part of the address, from smallest to largest (County, State, Country). For example: ["King", "Washington", "USA" ] or ["West Godavari", "Andhra Pradesh", "IND" ]. Maximum length allowed is 1,000 characters.|
+|`postalCode`| string | false |Postal code part of the address. Maximum length allowed is 1,000 characters.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`phoneNumber` | string | false | Phone number. Maximum length allowed is 1,000 characters. |
+|`website` | string | false | Website URL. Maximum length allowed is 1,000 characters. |
+|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification). Maximum length allowed is 1,000 characters. |
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1,000 characters. |
+|`unit` |string |false |Unit number part of the address. Maximum length allowed is 1,000 characters. |
+|`locality`| string| false |The locality of the address. For example: city, municipality, village. Maximum length allowed is 1,000 characters.|
+|`adminDivisions`| array of strings| false |Administrative division part of the address, from smallest to largest (County, State, Country). For example: ["King", "Washington", "USA" ] or ["West Godavari", "Andhra Pradesh", "IND" ]. Maximum length allowed is 1,000 characters.|
+|`postalCode`| string | false |Postal code part of the address. Maximum length allowed is 1,000 characters.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`phoneNumber` | string | false | Phone number. Maximum length allowed is 1,000 characters. |
+|`website` | string | false | Website URL. Maximum length allowed is 1,000 characters. |
+|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification][Open Street Map specification]. Maximum length allowed is 1,000 characters. |
+ ## pointElement
The `pointElement` is a class feature that defines a point feature in a unit, su
**Geometry Type**: MultiPoint
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1000.|
-| `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1,000 characters.|
+| `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1,000 characters.|
+| `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+ ## lineElement
The `lineElement` is a class feature that defines a line feature in a unit, such
**Geometry Type**: LinearMultiString
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1000. |
-| `isObstruction` | boolean (Default value is `null`.)| false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
-|`obstructionArea` | [Polygon](/rest/api/maps/v2/wfs/get-features#featuregeojson)| false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+| `isObstruction` | boolean (Default value is `null`.)| false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+| `isObstruction` | boolean (Default value is `null`.)| false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` |[Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+ ## areaElement
The `areaElement` is a class feature that defines a polygon feature in a unit, s
**Geometry Type**: MultiPolygon
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1000. |
-| `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
-|`obstructionArea` | geometry: ["Polygon","MultiPolygon" ]| false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+| `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+| `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+ ## category
The `category` class feature defines category names. For example: "room.conferen
:::zone pivot="facility-ontology-v1"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The category's original ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1000. |
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | The category's original ID derived from client data. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1,000 characters. |
| `routeThroughBehavior` | boolean | false | Determines whether a feature can be used for through traffic.|
-|`isRoutable` | boolean (Default value is `null`.) | false | Determines if a feature should be part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
+|`isRoutable` | boolean (Default value is `null`.) | false | Determines if a feature should be part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
:::zone-end :::zone pivot="facility-ontology-v2"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The category's original ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1000. |
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1,000 characters. |
:::zone-end+
+[conversion]: /rest/api/maps/v2/conversion
+[geojsonpoint]: /rest/api/maps/v2/wfs/get-features#geojsonpoint
+[GeoJsonPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonpolygon
+[MultiPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonmultipolygon
+[GeometryObject]: https://www.rfc-editor.org/rfc/rfc7946#section-3.1
+[feature object]: https://www.rfc-editor.org/rfc/rfc7946#section-3.2
+[datasetv20220901]: /rest/api/maps/v20220901preview/dataset
+[Open Street Map specification]: https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Creator services create, store, and use various data types that are defined and
- Converted data - Dataset - Tileset-- Custom styles
+- Style
+- Map configuration
- Feature stateset
+- Routeset
## Upload a Drawing package
Azure Maps Creator provides the following services that support map creation:
- [Dataset service](/rest/api/maps/v2/dataset). - [Tileset service](/rest/api/maps/v2/tileset). Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.-- Custom styles. Use the [style service][style] or [visual style editor][style editor] to customize the visual elements of an indoor map.
+- [Custom styling service](#custom-styling-preview). Use the [style service][style] or [visual style editor][style editor] to customize the visual elements of an indoor map.
- [Feature State service](/rest/api/maps/v2/feature-state). Use the Feature State service to support dynamic map styling. Applications can use dynamic map styling to reflect real-time events on spaces provided by the IoT system.
+- [Wayfinding service](#wayfinding-preview). Use the [wayfinding API][wayfind] to generate a path between two points within a facility. Use the [routeset API][routeset] to create the data that the wayfinding service needs to generate paths.
### Datasets
If a tileset becomes outdated and is no longer useful, you can delete the tilese
> >To reflect changes in a dataset, you must create new tilesets. Similarly, if you delete a tileset, the dataset isn't affected.
-### Custom styling (Preview)
+### Custom styling (preview)
A style defines the visual appearance of a map. It defines what data to draw, the order to draw it in, and how to style the data when drawing it. Azure Maps Creator styles support the MapLibre standard for [style layers][style layers] and [sprites][sprites].
An application can use a feature stateset to dynamically render features in a fa
>[!NOTE] >Like tilesets, changing a dataset doesn't affect the existing feature stateset, and deleting a feature stateset doesn't affect the dataset to which it's attached.
+### Wayfinding (preview)
+
+The [Wayfinding service][wayfind] enables you to provide your customers with the shortest path between two points within a facility. Once you've imported your indoor map data and created your dataset, you can use it to create a [routeset][routeset]. The routeset provides the data required to generate paths between two points. The wayfinding service accounts for factors such as the minimum width of openings, and it can optionally exclude elevators or stairs when navigating between levels.
+
+Creator wayfinding is powered by [Havok][havok].
+
+#### Wayfinding paths
+
+When a [wayfinding path][wayfinding path] is successfully generated, it describes the shortest path between two points in the specified facility. Each floor in the journey appears as a separate leg, as do any stairs or elevators used to move between floors.
+
+For example, the first leg of the path might be from the origin to the elevator on that floor. The next leg is the elevator itself, and the final leg is the path from the elevator to the destination. The estimated travel time is also calculated and returned in the HTTP response JSON.
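To make the leg structure concrete, here's a minimal sketch of how an application might walk such a path once it's been parsed from the response. The `legs`, `mode`, and `travelTimeInSeconds` names are hypothetical placeholders for illustration, not the documented response schema.

```python
# A hedged sketch of walking the legs of a wayfinding path.
# Field names (legs, mode, travelTimeInSeconds) are hypothetical
# placeholders, not the documented wayfinding response schema.
path = {
    "travelTimeInSeconds": 95,
    "legs": [
        {"mode": "walk", "travelTimeInSeconds": 40},      # origin to elevator
        {"mode": "elevator", "travelTimeInSeconds": 15},  # between floors
        {"mode": "walk", "travelTimeInSeconds": 40},      # elevator to destination
    ],
}

for i, leg in enumerate(path["legs"], start=1):
    print(f"Leg {i}: {leg['mode']}, about {leg['travelTimeInSeconds']} s")
print(f"Estimated total: {path['travelTimeInSeconds']} s")
```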
+
+##### Structure
+
+For wayfinding to work, the facility data must contain a [structure][structures]. The wayfinding service calculates the shortest path between two selected points in a facility. The service creates the path by navigating around obstructions, such as walls and any other impermeable structures.
+
+##### Vertical penetration
+
+If the selected origin and destination are on different floors, the wayfinding service determines what [vertical penetration][verticalPenetration] objects, such as stairs or elevators, are available as possible pathways for navigating vertically between levels. By default, the option that results in the shortest path is used.
+
+The Wayfinding service includes stairs or elevators in a path based on the value of the vertical penetration's `direction` property. For more information on the direction property, see [verticalPenetration][verticalPenetration] in the Facility Ontology article. See the `avoidFeatures` and `minWidth` properties in the [wayfinding][wayfind] API documentation to learn about other factors that can impact the path selection between floor levels.
+
+For more information, see the [Indoor maps wayfinding service](how-to-creator-wayfinding.md) how-to article.
+ ## Using indoor maps ### Render V2-Get Map Tile API
Creator services such as Conversion, Dataset, Tileset and Feature State return a
### Indoor Maps module
-The [Azure Maps Web SDK](./index.yml) includes the Indoor Maps module. This module offers extended functionalities to the Azure Maps *Map Control* library. The Indoor Maps module renders indoor maps created in Creator. It integrates widgets, such as *floor picker*, that help users to visualize the different floors.
+The [Azure Maps Web SDK](./index.yml) includes the Indoor Maps module. This module offers extended functionalities to the Azure Maps *Map Control* library. The Indoor Maps module renders indoor maps created in Creator. It integrates widgets such as *floor picker* that help users to visualize the different floors.
You can use the Indoor Maps module to create web applications that integrate indoor map data with other [Azure Maps services](./index.yml). The most common application setups include adding knowledge from other maps - such as road, imagery, weather, and transit - to indoor maps.
The following example shows how to update a dataset, create a new tileset, and d
[basemap]: supported-map-styles.md [style]: /rest/api/maps/v20220901preview/style [tileset]: /rest/api/maps/v20220901preview/tileset
+[routeset]: /rest/api/maps/v20220901preview/routeset
+[wayfind]: /rest/api/maps/v20220901preview/wayfinding
+[wayfinding path]: /rest/api/maps/v20220901preview/wayfinding/path
[style-picker-control]: choose-map-style.md#add-the-style-picker-control [style-how-to]: how-to-create-custom-styles.md [map-config-api]: /rest/api/maps/v20220901preview/map-configuration [instantiate-indoor-manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager [style editor]: https://azure.github.io/Azure-Maps-Style-Editor
+[verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration
+[structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure
+[havok]: https://www.havok.com/
azure-maps How To Creator Wayfinding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md
+
+ Title: Indoor Maps wayfinding service
+
+description: How to use the wayfinding service to plot and display routes for indoor maps in Microsoft Azure Maps Creator
++ Last updated : 10/25/2022+++++
+# Indoor maps wayfinding service (preview)
+
+The Azure Maps Creator [wayfinding service][wayfinding service] allows you to navigate from place to place anywhere within your indoor map. The service utilizes stairs and elevators to navigate between floors and provides guidance to help you navigate around physical obstructions. This article describes how to generate a path from a starting point to a destination point in a sample indoor map.
+
+## Prerequisites
+
+- Understanding of [Creator concepts](creator-indoor-maps.md).
+- An Azure Maps Creator [dataset][dataset] and [tileset][tileset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps](tutorial-creator-indoor-maps.md) tutorial helpful.
+
+>[!IMPORTANT]
+>
+> - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services][how to manage access to creator services].
+> - In the URL examples in this article, you'll need to:
+>   - Replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+>   - Replace `{datasetId}` with your `datasetId`. For more information, see the [Check the dataset creation status][check dataset creation status] section of the *Use Creator to create indoor maps* tutorial.
+
+## Create a routeset
+
+A [routeset][routeset] is a collection of indoor map data that is used by the wayfinding service.
+
+A routeset is created from a dataset, but is independent of that dataset. This means that if the dataset is deleted, the routeset continues to exist.
+
+Once you've created a routeset, you can then use the wayfinding API to get a path from the starting point to the destination point within the facility.
+
+To create a routeset:
+
+1. Execute the following **HTTP POST request**:
+
+ ```http
+ https://us.atlas.microsoft.com/routesets?api-version=2022-09-01-preview&datasetID={datasetId}&subscription-key={Azure-Maps-Primary-Subscription-key}
+
+ ```
+
+1. Copy the value of the **Operation-Location** key from the response header.
+
+This is the status URL that you'll use to check the status of the routeset creation in the next section.
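If you prefer to script this step, here's a minimal sketch using Python and the `requests` package (an assumption; any HTTP client works). It issues the same POST request and captures the status URL from the **Operation-Location** header:

```python
import requests

# Placeholders: substitute your own values, as described above.
subscription_key = "{Azure-Maps-Primary-Subscription-key}"
dataset_id = "{datasetId}"

# Create the routeset (HTTP POST with an empty body).
response = requests.post(
    "https://us.atlas.microsoft.com/routesets",
    params={
        "api-version": "2022-09-01-preview",
        "datasetID": dataset_id,
        "subscription-key": subscription_key,
    },
)
response.raise_for_status()

# The Operation-Location response header is the status URL.
status_url = response.headers["Operation-Location"]
print(status_url)
```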
+
+### Check the routeset creation status and retrieve the routesetId
+
+To check the status of the routeset creation process and retrieve the routesetId:
+
+1. Execute the following **HTTP GET request**:
+
+ ```http
+ https://us.atlas.microsoft.com/routesets/operations/{operationId}?api-version=2022-09-01-preview&subscription-key={Azure-Maps-Primary-Subscription-key}
+
+ ```
+
+ > [!NOTE]
+ > Get the `operationId` from the Operation-Location key in the response header when creating a new routeset.
+
+1. Copy the value of the **Resource-Location** key from the response header. This is the resource location URL and contains the `routesetId`, as shown below:
+
+ > https://us.atlas.microsoft.com/routesets/**675ce646-f405-03be-302e-0d22bcfe17e8**?api-version=2022-09-01-preview
+
+Make a note of the `routesetId`. It's a required parameter in all [wayfinding](#get-a-wayfinding-path) requests, and you'll need it again when you [get the facility ID](#get-the-facility-id).
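The status check can also be scripted. A minimal sketch, assuming the status URL already carries its own api-version parameter and that a five-second polling interval is acceptable:

```python
import time
import requests

subscription_key = "{Azure-Maps-Primary-Subscription-key}"
status_url = "{Operation-Location value from the previous step}"

# Poll the status URL until the Resource-Location header appears.
resource_location = None
while resource_location is None:
    response = requests.get(status_url, params={"subscription-key": subscription_key})
    response.raise_for_status()
    resource_location = response.headers.get("Resource-Location")
    if resource_location is None:
        time.sleep(5)  # arbitrary polling interval

# Resource-Location looks like:
#   https://us.atlas.microsoft.com/routesets/{routesetId}?api-version=2022-09-01-preview
routeset_id = resource_location.split("/routesets/")[1].split("?")[0]
print(routeset_id)
```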
+
+### Get the facility ID
+
+The `facilityId`, a property of the routeset, is a required parameter when searching for a wayfinding path. Get the `facilityId` by querying the routeset.
+
+1. Execute the following **HTTP GET request**:
+
+ ```http
+ https://us.atlas.microsoft.com/routesets/{routesetId}?api-version=2022-09-01-preview&subscription-key={Azure-Maps-Primary-Subscription-key}
+
+ ```
+
+1. The `facilityId` is a property of the `facilityDetails` object in the response body of the routeset request. In the following example, the `facilityId` is `FCL43`:
+
+```json
+{
+ "routeSetId": "675ce646-f405-03be-302e-0d22bcfe17e8",
+ "dataSetId": "eec3825c-620f-13e1-b469-85d2767c8a41",
+ "created": "10/10/2022 6:58:32 PM +00:00",
+ "facilityDetails": [
+ {
+ "facilityId": "FCL43",
+ "levelOrdinals": [
+ 0,
+ 1
+ ]
+ }
+ ],
+ "creationMode": "Wall",
+ "ontology": "facility-2.0"
+}
+```
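Given the response body above, a script can read the facility ID and level ordinals directly. A minimal sketch, assuming the sample values shown in this article:

```python
import requests

subscription_key = "{Azure-Maps-Primary-Subscription-key}"
routeset_id = "675ce646-f405-03be-302e-0d22bcfe17e8"  # sample value from above

response = requests.get(
    f"https://us.atlas.microsoft.com/routesets/{routeset_id}",
    params={"api-version": "2022-09-01-preview", "subscription-key": subscription_key},
)
response.raise_for_status()
routeset = response.json()

# facilityDetails is a list; this sample routeset has a single facility.
facility = routeset["facilityDetails"][0]
print(facility["facilityId"])     # FCL43
print(facility["levelOrdinals"])  # [0, 1]
```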
+
+## Get a wayfinding path
+
+In this section, you'll use the [wayfinding API][wayfinding API] to generate a path from the routeset you created in the previous section. The wayfinding API requires a query that contains start and end points in an indoor map, along with floor level ordinal numbers. For more information about Creator wayfinding, see [wayfinding][wayfinding] in the concepts article.
+
+To create a wayfinding query:
+
+1. Execute the following **HTTP GET request** (replace `{routesetId}` with the routeset ID obtained in the [Check the routeset creation status](#check-the-routeset-creation-status-and-retrieve-the-routesetid) section and `{facilityId}` with the facility ID obtained in the [Get the facility ID](#get-the-facility-id) section):
+
+ ```http
+ https://us.atlas.microsoft.com/wayfinding/path?api-version=2022-09-01-preview&subscription-key={Azure-Maps-Primary-Subscription-key}&routesetid={routesetId}&facilityid={facilityId}&fromPoint={lat,lon}&fromLevel={from-level}&toPoint={lat,lon}&toLevel={to-level}&minWidth={minimum-width}
+ ```
+
+ > [!TIP]
+ > The `avoidFeatures` parameter can be used to specify features for the wayfinding service to avoid when determining the path, such as elevators or stairs.
+
+1. The details of the path and legs are displayed in the body of the response.
+
+The summary displays the estimated travel time in seconds for the total journey. In addition, the estimated time for each section of the journey is displayed at the beginning of each leg.
+
+The wayfinding service calculates the path through specific intervening points. Each point is displayed, along with its latitude and longitude details.
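For completeness, here's the same wayfinding query as a script. It's a minimal sketch: the coordinates, level ordinals, and minimum width are placeholders you'd replace with values from your own facility.

```python
import requests

subscription_key = "{Azure-Maps-Primary-Subscription-key}"

params = {
    "api-version": "2022-09-01-preview",
    "subscription-key": subscription_key,
    "routesetid": "{routesetId}",  # from the routeset creation step
    "facilityid": "{facilityId}",  # from the facilityDetails object
    "fromPoint": "47.6397,-122.1311",  # placeholder lat,lon inside the facility
    "fromLevel": "0",
    "toPoint": "47.6399,-122.1308",    # placeholder lat,lon inside the facility
    "toLevel": "1",
    "minWidth": "0.85",  # placeholder; see the wayfinding API docs for units
}

response = requests.get("https://us.atlas.microsoft.com/wayfinding/path", params=params)
response.raise_for_status()
print(response.json())  # legs, intervening points, and estimated travel times
```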
+
+<!-- TODO: ## Implement the wayfinding service in your map (Refer to sample app once completed) -->
+
+[dataset]: creator-indoor-maps.md#datasets
+[tileset]: creator-indoor-maps.md#tilesets
+[routeset]: /rest/api/maps/v20220901preview/routeset
+[wayfinding]: creator-indoor-maps.md#wayfinding-preview
+[wayfinding API]: /rest/api/maps/v20220901preview/wayfinding
+[how to manage access to creator services]: how-to-manage-creator.md#access-to-creator-services
+[check dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status
+[wayfinding service]: creator-indoor-maps.md#wayfinding-preview
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md
Title: Microsoft Azure Maps Weather services coverage
description: Learn about Microsoft Azure Maps Weather services coverage Previously updated : 03/28/2022 Last updated : 11/08/2022
Azure Maps [Severe weather alerts][severe-weather-alerts] service returns severe
## Americas
-| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
-||::|:-:|::|::|
-| Anguilla | ✓ | | | ✓ |
-| Antarctica | ✓ | | | ✓ |
-| Antigua & Barbuda | ✓ | | | ✓ |
-| Argentina | ✓ | | | ✓ |
-| Aruba | ✓ | | | ✓ |
-| Bahamas | ✓ | | | ✓ |
-| Barbados | ✓ | | | ✓ |
-| Belize | ✓ | | | ✓ |
-| Bermuda | ✓ | | | ✓ |
-| Bolivia | ✓ | | | ✓ |
-| Bonaire | ✓ | | | ✓ |
-| Brazil | ✓ | | ✓ | ✓ |
-| British Virgin Islands | ✓ | | | ✓ |
-| Canada | ✓ | ✓ | ✓ | ✓ |
-| Cayman Islands | ✓ | | | ✓ |
-| Chile | ✓ | | | ✓ |
-| Colombia | ✓ | | | ✓ |
-| Costa Rica | ✓ | | | ✓ |
-| Cuba | ✓ | | | ✓ |
-| Curaçao | ✓ | | | ✓ |
-| Dominica | ✓ | | | ✓ |
-| Dominican Republic | ✓ | | | ✓ |
-| Ecuador | ✓ | | | ✓ |
-| El Salvador | ✓ | | | ✓ |
-| Falkland Islands | ✓ | | | ✓ |
-| French Guiana | ✓ | | | ✓ |
-| Greenland | ✓ | | | ✓ |
-| Grenada | ✓ | | | ✓ |
-| Guadeloupe | ✓ | | | ✓ |
-| Guatemala | ✓ | | | ✓ |
-| Guyana | ✓ | | | ✓ |
-| Haiti | ✓ | | | ✓ |
-| Honduras | ✓ | | | ✓ |
-| Jamaica | ✓ | | | ✓ |
-| Martinique | ✓ | | | ✓ |
-| Mexico | ✓ | | | ✓ |
-| Montserrat | ✓ | | | ✓ |
-| Nicaragua | ✓ | | | ✓ |
-| Panama | ✓ | | | ✓ |
-| Paraguay | ✓ | | | ✓ |
-| Peru | ✓ | | | ✓ |
-| Puerto Rico | ✓ | | ✓ | ✓ |
-| Saint Barthélemy | ✓ | | | ✓ |
-| Saint Kitts & Nevis | ✓ | | | ✓ |
-| Saint Lucia | ✓ | | | ✓ |
-| Saint Martin | ✓ | | | ✓ |
-| Saint Pierre & Miquelon | ✓ | | | ✓ |
-| Saint Vincent & the Grenadines | ✓ | | | ✓ |
-| Sint Eustatius | ✓ | | | ✓ |
-| Sint Maarten | ✓ | | | ✓ |
-| South Georgia & South Sandwich Islands | ✓ | | | ✓ |
-| Suriname | ✓ | | | ✓ |
-| Trinidad & Tobago | ✓ | | | ✓ |
-| Turks & Caicos Islands | ✓ | | | ✓ |
-| U.S. Outlying Islands | ✓ | | | ✓ |
-| U.S. Virgin Islands | ✓ | | ✓ | ✓ |
-| United States | ✓ | ✓ | ✓ | ✓ |
-| Uruguay | ✓ | | | ✓ |
-| Venezuela | ✓ | | | ✓ |
+| Country/Region | Infrared satellite & Radar tiles | Minute forecast | Severe weather alerts | Other* |
+||::|:--:|::|::|
+| Anguilla | ✓ | ✓ | | ✓ |
+| Antarctica | ✓ | | | ✓ |
+| Antigua & Barbuda | ✓ | ✓ | | ✓ |
+| Argentina | ✓ | ✓ | | ✓ |
+| Aruba | ✓ | ✓ | | ✓ |
+| Bahamas | ✓ | ✓ | | ✓ |
+| Barbados | ✓ | ✓ | | ✓ |
+| Belize | ✓ | ✓ | | ✓ |
+| Bermuda | ✓ | | | ✓ |
+| Bolivia | ✓ | ✓ | | ✓ |
+| Bonaire | ✓ | ✓ | | ✓ |
+| Brazil | ✓ | ✓ | ✓ | ✓ |
+| British Virgin Islands | ✓ | ✓ | | ✓ |
+| Canada | ✓ | ✓ | ✓ | ✓ |
+| Cayman Islands | ✓ | ✓ | | ✓ |
+| Chile | ✓ | ✓ | | ✓ |
+| Colombia | ✓ | ✓ | | ✓ |
+| Costa Rica | ✓ | ✓ | | ✓ |
+| Cuba | ✓ | ✓ | | ✓ |
+| Curaçao | ✓ | ✓ | | ✓ |
+| Dominica | ✓ | ✓ | | ✓ |
+| Dominican Republic | ✓ | ✓ | | ✓ |
+| Ecuador | ✓ | ✓ | | ✓ |
+| El Salvador | ✓ | ✓ | | ✓ |
+| Falkland Islands | ✓ | ✓ | | ✓ |
+| French Guiana | ✓ | ✓ | | ✓ |
+| Greenland | ✓ | | | ✓ |
+| Grenada | ✓ | ✓ | | ✓ |
+| Guadeloupe | ✓ | ✓ | | ✓ |
+| Guatemala | ✓ | ✓ | | ✓ |
+| Guyana | ✓ | ✓ | | ✓ |
+| Haiti | ✓ | ✓ | | ✓ |
+| Honduras | ✓ | ✓ | | ✓ |
+| Jamaica | ✓ | ✓ | | ✓ |
+| Martinique | ✓ | ✓ | | ✓ |
+| Mexico | ✓ | ✓ | | ✓ |
+| Montserrat | ✓ | ✓ | | ✓ |
+| Nicaragua | ✓ | ✓ | | ✓ |
+| Panama | ✓ | ✓ | | ✓ |
+| Paraguay | ✓ | ✓ | | ✓ |
+| Peru | ✓ | ✓ | | ✓ |
+| Puerto Rico | ✓ | ✓ | ✓ | ✓ |
+| Saint Barthélemy | ✓ | ✓ | | ✓ |
+| Saint Kitts & Nevis | ✓ | ✓ | | ✓ |
+| Saint Lucia | ✓ | ✓ | | ✓ |
+| Saint Martin | ✓ | ✓ | | ✓ |
+| Saint Pierre & Miquelon | ✓ | | | ✓ |
+| Saint Vincent & the Grenadines | ✓ | ✓ | | ✓ |
+| Sint Eustatius | ✓ | | | ✓ |
+| Sint Maarten | ✓ | ✓ | | ✓ |
+| South Georgia & South Sandwich Islands | ✓ | | | ✓ |
+| Suriname | ✓ | ✓ | | ✓ |
+| Trinidad & Tobago | ✓ | ✓ | | ✓ |
+| Turks & Caicos Islands | ✓ | ✓ | | ✓ |
+| U.S. Outlying Islands | ✓ | | | ✓ |
+| U.S. Virgin Islands | ✓ | ✓ | ✓ | ✓ |
+| United States | ✓ | ✓ | ✓ | ✓ |
+| Uruguay | ✓ | ✓ | | ✓ |
+| Venezuela | ✓ | ✓ | | ✓ |
## Asia Pacific
-| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
-|--|::|:-:|::|::|
-| Afghanistan | ✓ | | | ✓ |
-| American Samoa | ✓ | | ✓ | ✓ |
-| Australia | ✓ | ✓ | ✓ | ✓ |
-| Bangladesh | ✓ | | | ✓ |
-| Bhutan | ✓ | | | ✓ |
-| British Indian Ocean Territory | ✓ | | | ✓ |
-| Brunei | ✓ | | | ✓ |
-| Cambodia | ✓ | | | ✓ |
-| China | ✓ | ✓ | ✓ | ✓ |
-| Christmas Island | ✓ | | | ✓ |
-| Cocos (Keeling) Islands | ✓ | | | ✓ |
-| Cook Islands | ✓ | | | ✓ |
-| Fiji | ✓ | | | ✓ |
-| French Polynesia | ✓ | | | ✓ |
-| Guam | ✓ | | ✓ | ✓ |
-| Heard Island & McDonald Islands | ✓ | | | ✓ |
-| Hong Kong SAR | ✓ | | | ✓ |
-| India | ✓ | | | ✓ |
-| Indonesia | ✓ | | | ✓ |
-| Japan | ✓ | ✓ | ✓ | ✓ |
-| Kazakhstan | ✓ | | | ✓ |
-| Kiribati | ✓ | | | ✓ |
-| Korea | ✓ | ✓ | ✓ | ✓ |
-| Kyrgyzstan | ✓ | | | ✓ |
-| Laos | ✓ | | | ✓ |
-| Macao SAR | ✓ | | | ✓ |
-| Malaysia | ✓ | | | ✓ |
-| Maldives | ✓ | | | ✓ |
-| Marshall Islands | ✓ | | ✓ | ✓ |
-| Micronesia | ✓ | | ✓ | ✓ |
-| Mongolia | ✓ | | | ✓ |
-| Myanmar | ✓ | | | ✓ |
-| Nauru | ✓ | | | ✓ |
-| Nepal | ✓ | | | ✓ |
-| New Caledonia | ✓ | | | ✓ |
-| New Zealand | ✓ | | ✓ | ✓ |
-| Niue | ✓ | | | ✓ |
-| Norfolk Island | ✓ | | | ✓ |
-| North Korea | ✓ | | | ✓ |
-| Northern Mariana Islands | ✓ | | ✓ | ✓ |
-| Pakistan | ✓ | | | ✓ |
-| Palau | ✓ | | ✓ | ✓ |
-| Papua New Guinea | ✓ | | | ✓ |
-| Philippines | ✓ | | ✓ | ✓ |
-| Pitcairn Islands | ✓ | | | ✓ |
-| Samoa | ✓ | | | ✓ |
-| Singapore | ✓ | | | ✓ |
-| Solomon Islands | ✓ | | | ✓ |
-| Sri Lanka | ✓ | | | ✓ |
-| Taiwan | ✓ | | | ✓ |
-| Tajikistan | ✓ | | | ✓ |
-| Thailand | ✓ | | | ✓ |
-| Timor-Leste | ✓ | | | ✓ |
-| Tokelau | ✓ | | | ✓ |
-| Tonga | ✓ | | | ✓ |
-| Turkmenistan | ✓ | | | ✓ |
-| Tuvalu | ✓ | | | ✓ |
-| Uzbekistan | ✓ | | | ✓ |
-| Vanuatu | ✓ | | | ✓ |
-| Vietnam | ✓ | | | ✓ |
-| Wallis & Futuna | ✓ | | | ✓ |
+| Country/Region | Infrared satellite & Radar tiles | Minute forecast | Severe weather alerts | Other* |
+|--|::|:--:|::|::|
+| Afghanistan | ✓ | ✓ | | ✓ |
+| American Samoa | ✓ | | ✓ | ✓ |
+| Australia | ✓ | ✓ | ✓ | ✓ |
+| Bangladesh | ✓ | ✓ | | ✓ |
+| Bhutan | ✓ | ✓ | | ✓ |
+| British Indian Ocean Territory | ✓ | | | ✓ |
+| Brunei | ✓ | ✓ | | ✓ |
+| Cambodia | ✓ | ✓ | | ✓ |
+| China | ✓ | ✓ | ✓ | ✓ |
+| Christmas Island | ✓ | | | ✓ |
+| Cocos (Keeling) Islands | ✓ | | | ✓ |
+| Cook Islands | ✓ | | | ✓ |
+| Fiji | ✓ | | | ✓ |
+| French Polynesia | ✓ | | | ✓ |
+| Guam | ✓ | ✓ | ✓ | ✓ |
+| Heard Island & McDonald Islands | ✓ | | | ✓ |
+| Hong Kong SAR | ✓ | ✓ | | ✓ |
+| India | ✓ | ✓ | | ✓ |
+| Indonesia | ✓ | ✓ | | ✓ |
+| Japan | ✓ | ✓ | ✓ | ✓ |
+| Kazakhstan | ✓ | ✓ | | ✓ |
+| Kiribati | ✓ | | | ✓ |
+| Korea | ✓ | ✓ | ✓ | ✓ |
+| Kyrgyzstan | ✓ | ✓ | | ✓ |
+| Laos | ✓ | ✓ | | ✓ |
+| Macao SAR | ✓ | ✓ | | ✓ |
+| Malaysia | ✓ | ✓ | | ✓ |
+| Maldives | ✓ | | | ✓ |
+| Marshall Islands | ✓ | | ✓ | ✓ |
+| Micronesia | ✓ | | ✓ | ✓ |
+| Mongolia | ✓ | | | ✓ |
+| Myanmar | ✓ | | | ✓ |
+| Nauru | ✓ | | | ✓ |
+| Nepal | ✓ | ✓ | | ✓ |
+| New Caledonia | ✓ | | | ✓ |
+| New Zealand | ✓ | ✓ | ✓ | ✓ |
+| Niue | ✓ | | | ✓ |
+| Norfolk Island | ✓ | | | ✓ |
+| North Korea | ✓ | ✓ | | ✓ |
+| Northern Mariana Islands | ✓ | ✓ | ✓ | ✓ |
+| Pakistan | ✓ | ✓ | | ✓ |
+| Palau | ✓ | ✓ | ✓ | ✓ |
+| Papua New Guinea | ✓ | ✓ | | ✓ |
+| Philippines | ✓ | ✓ | ✓ | ✓ |
+| Pitcairn Islands | ✓ | | | ✓ |
+| Samoa | ✓ | | | ✓ |
+| Singapore | ✓ | ✓ | | ✓ |
+| Solomon Islands | ✓ | | | ✓ |
+| Sri Lanka | ✓ | ✓ | | ✓ |
+| Taiwan | ✓ | ✓ | | ✓ |
+| Tajikistan | ✓ | ✓ | | ✓ |
+| Thailand | ✓ | ✓ | | ✓ |
+| Timor-Leste | ✓ | ✓ | | ✓ |
+| Tokelau | ✓ | | | ✓ |
+| Tonga | ✓ | | | ✓ |
+| Turkmenistan | ✓ | ✓ | | ✓ |
+| Tuvalu | ✓ | | | ✓ |
+| Uzbekistan | ✓ | ✓ | | ✓ |
+| Vanuatu | ✓ | | | ✓ |
+| Vietnam | ✓ | ✓ | | ✓ |
+| Wallis & Futuna | ✓ | | | ✓ |
## Europe
-| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
-|-|::|:-:|::|::|
-| Albania | ✓ | | | ✓ |
-| Andorra | ✓ | | ✓ | ✓ |
-| Armenia | ✓ | | | ✓ |
-| Austria | ✓ | ✓ | ✓ | ✓ |
-| Azerbaijan | ✓ | | | ✓ |
-| Belarus | ✓ | | | ✓ |
-| Belgium | ✓ | ✓ | ✓ | ✓ |
-| Bosnia & Herzegovina | ✓ | ✓ | ✓ | ✓ |
-| Bulgaria | ✓ | | ✓ | ✓ |
-| Croatia | ✓ | ✓ | ✓ | ✓ |
-| Cyprus | ✓ | | ✓ | ✓ |
-| Czechia | ✓ | ✓ | ✓ | ✓ |
-| Denmark | ✓ | ✓ | ✓ | ✓ |
-| Estonia | ✓ | ✓ | ✓ | ✓ |
-| Faroe Islands | ✓ | | | ✓ |
-| Finland | ✓ | ✓ | ✓ | ✓ |
-| France | ✓ | ✓ | ✓ | ✓ |
-| Georgia | ✓ | | | ✓ |
-| Germany | ✓ | ✓ | ✓ | ✓ |
-| Gibraltar | ✓ | ✓ | | ✓ |
-| Greece | ✓ | | ✓ | ✓ |
-| Guernsey | ✓ | | | ✓ |
-| Hungary | ✓ | ✓ | ✓ | ✓ |
-| Iceland | ✓ | | ✓ | ✓ |
-| Ireland | ✓ | ✓ | ✓ | ✓ |
-| Isle of Man | ✓ | | | ✓ |
-| Italy | ✓ | | ✓ | ✓ |
-| Jan Mayen | ✓ | | | ✓ |
-| Jersey | ✓ | | | ✓ |
-| Kosovo | ✓ | | ✓ | ✓ |
-| Latvia | ✓ | | ✓ | ✓ |
-| Liechtenstein | ✓ | ✓ | ✓ | ✓ |
-| Lithuania | ✓ | | ✓ | ✓ |
-| Luxembourg | ✓ | ✓ | ✓ | ✓ |
-| North Macedonia | ✓ | | ✓ | ✓ |
-| Malta | ✓ | | ✓ | ✓ |
-| Moldova | ✓ | ✓ | ✓ | ✓ |
-| Monaco | ✓ | ✓ | ✓ | ✓ |
-| Montenegro | ✓ | ✓ | ✓ | ✓ |
-| Netherlands | ✓ | ✓ | ✓ | ✓ |
-| Norway | ✓ | ✓ | ✓ | ✓ |
-| Poland | ✓ | ✓ | ✓ | ✓ |
-| Portugal | ✓ | ✓ | ✓ | ✓ |
-| Romania | ✓ | ✓ | ✓ | ✓ |
-| Russia | ✓ | | ✓ | ✓ |
-| San Marino | ✓ | | ✓ | ✓ |
-| Serbia | ✓ | ✓ | ✓ | ✓ |
-| Slovakia | ✓ | ✓ | ✓ | ✓ |
-| Slovenia | ✓ | ✓ | ✓ | ✓ |
-| Spain | ✓ | ✓ | ✓ | ✓ |
-| Svalbard | ✓ | | | ✓ |
-| Sweden | ✓ | ✓ | ✓ | ✓ |
-| Switzerland | ✓ | ✓ | ✓ | ✓ |
-| Turkey | ✓ | | | ✓ |
-| Ukraine | ✓ | | | ✓ |
-| United Kingdom | ✓ | ✓ | ✓ | ✓ |
-| Vatican City | ✓ | | ✓ | ✓ |
+| Country/Region | Infrared satellite & Radar tiles | Minute forecast | Severe weather alerts | Other* |
+|-|::|:--:|::|::|
+| Albania | ✓ | ✓ | | ✓ |
+| Andorra | ✓ | ✓ | ✓ | ✓ |
+| Armenia | ✓ | ✓ | | ✓ |
+| Austria | ✓ | ✓ | ✓ | ✓ |
+| Azerbaijan | ✓ | ✓ | | ✓ |
+| Belarus | ✓ | ✓ | | ✓ |
+| Belgium | ✓ | ✓ | ✓ | ✓ |
+| Bosnia & Herzegovina | ✓ | ✓ | ✓ | ✓ |
+| Bulgaria | ✓ | ✓ | ✓ | ✓ |
+| Croatia | ✓ | ✓ | ✓ | ✓ |
+| Cyprus | ✓ | ✓ | ✓ | ✓ |
+| Czechia | ✓ | ✓ | ✓ | ✓ |
+| Denmark | ✓ | ✓ | ✓ | ✓ |
+| Estonia | ✓ | ✓ | ✓ | ✓ |
+| Faroe Islands | ✓ | | | ✓ |
+| Finland | ✓ | ✓ | ✓ | ✓ |
+| France | ✓ | ✓ | ✓ | ✓ |
+| Georgia | ✓ | ✓ | | ✓ |
+| Germany | ✓ | ✓ | ✓ | ✓ |
+| Gibraltar | ✓ | ✓ | | ✓ |
+| Greece | ✓ | ✓ | ✓ | ✓ |
+| Guernsey | ✓ | | | ✓ |
+| Hungary | ✓ | ✓ | ✓ | ✓ |
+| Iceland | ✓ | | ✓ | ✓ |
+| Ireland | ✓ | ✓ | ✓ | ✓ |
+| Isle of Man | ✓ | | | ✓ |
+| Italy | ✓ | ✓ | ✓ | ✓ |
+| Jan Mayen | ✓ | | | ✓ |
+| Jersey | ✓ | | | ✓ |
+| Kosovo | ✓ | ✓ | ✓ | ✓ |
+| Latvia | ✓ | | ✓ | ✓ |
+| Liechtenstein | ✓ | ✓ | ✓ | ✓ |
+| Lithuania | ✓ | | ✓ | ✓ |
+| Luxembourg | ✓ | ✓ | ✓ | ✓ |
+| North Macedonia | ✓ | | ✓ | ✓ |
+| Malta | ✓ | | ✓ | ✓ |
+| Moldova | ✓ | ✓ | ✓ | ✓ |
+| Monaco | ✓ | ✓ | ✓ | ✓ |
+| Montenegro | ✓ | ✓ | ✓ | ✓ |
+| Netherlands | ✓ | ✓ | ✓ | ✓ |
+| Norway | ✓ | ✓ | ✓ | ✓ |
+| Poland | ✓ | ✓ | ✓ | ✓ |
+| Portugal | ✓ | ✓ | ✓ | ✓ |
+| Romania | ✓ | ✓ | ✓ | ✓ |
+| Russia | ✓ | 1 | ✓ | ✓ |
+| San Marino | ✓ | | ✓ | ✓ |
+| Serbia | ✓ | ✓ | ✓ | ✓ |
+| Slovakia | ✓ | ✓ | ✓ | ✓ |
+| Slovenia | ✓ | ✓ | ✓ | ✓ |
+| Spain | ✓ | ✓ | ✓ | ✓ |
+| Svalbard | ✓ | | | ✓ |
+| Sweden | ✓ | ✓ | ✓ | ✓ |
+| Switzerland | ✓ | ✓ | ✓ | ✓ |
+| Turkey | ✓ | ✓ | | ✓ |
+| Ukraine | ✓ | ✓ | | ✓ |
+| United Kingdom | ✓ | ✓ | ✓ | ✓ |
+| Vatican City | ✓ | | ✓ | ✓ |
+
+1 Partial coverage includes Moscow and Saint Petersburg
## Middle East & Africa
-| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
-|-|::|:-:|::|::|
-| Algeria | ✓ | | | ✓ |
-| Angola | ✓ | | | ✓ |
-| Bahrain | ✓ | | | ✓ |
-| Benin | ✓ | | | ✓ |
-| Botswana | ✓ | | | ✓ |
-| Bouvet Island | ✓ | | | ✓ |
-| Burkina Faso | ✓ | | | ✓ |
-| Burundi | ✓ | | | ✓ |
-| Cameroon | ✓ | | | ✓ |
-| Cape Verde | ✓ | | | ✓ |
-| Central African Republic | ✓ | | | ✓ |
-| Chad | ✓ | | | ✓ |
-| Comoros | ✓ | | | ✓ |
-| Congo (DRC) | ✓ | | | ✓ |
-| Côte d'Ivoire | ✓ | | | ✓ |
-| Djibouti | ✓ | | | ✓ |
-| Egypt | ✓ | | | ✓ |
-| Equatorial Guinea | ✓ | | | ✓ |
-| Eritrea | ✓ | | | ✓ |
-| eSwatini | ✓ | | | ✓ |
-| Ethiopia | ✓ | | | ✓ |
-| French Southern Territories | ✓ | | | ✓ |
-| Gabon | ✓ | | | ✓ |
-| Gambia | ✓ | | | ✓ |
-| Ghana | ✓ | | | ✓ |
-| Guinea | ✓ | | | ✓ |
-| Guinea-Bissau | ✓ | | | ✓ |
-| Iran | ✓ | | | ✓ |
-| Iraq | ✓ | | | ✓ |
-| Israel | ✓ | | ✓ | ✓ |
-| Jordan | ✓ | | | ✓ |
-| Kenya | ✓ | | | ✓ |
-| Kuwait | ✓ | | | ✓ |
-| Lebanon | ✓ | | | ✓ |
-| Lesotho | ✓ | | | ✓ |
-| Liberia | ✓ | | | ✓ |
-| Libya | ✓ | | | ✓ |
-| Madagascar | ✓ | | | ✓ |
-| Malawi | ✓ | | | ✓ |
-| Mali | ✓ | | | ✓ |
-| Mauritania | ✓ | | | ✓ |
-| Mauritius | ✓ | | | ✓ |
-| Mayotte | ✓ | | | ✓ |
-| Morocco | ✓ | | | ✓ |
-| Mozambique | ✓ | | | ✓ |
-| Namibia | ✓ | | | ✓ |
-| Niger | ✓ | | | ✓ |
-| Nigeria | ✓ | | | ✓ |
-| Oman | ✓ | | | ✓ |
-| Palestinian Authority | ✓ | | | ✓ |
-| Qatar | ✓ | | | ✓ |
-| Réunion | ✓ | | | ✓ |
-| Rwanda | ✓ | | | ✓ |
-| Saint Helena, Ascension, Tristan da Cunha | ✓ | | | ✓ |
-| São Tomé & Príncipe | ✓ | | | ✓ |
-| Saudi Arabia | ✓ | | | ✓ |
-| Senegal | ✓ | | | ✓ |
-| Seychelles | ✓ | | | ✓ |
-| Sierra Leone | ✓ | | | ✓ |
-| Somalia | ✓ | | | ✓ |
-| South Africa | ✓ | | | ✓ |
-| South Sudan | ✓ | | | ✓ |
-| Sudan | ✓ | | | ✓ |
-| Syria | ✓ | | | ✓ |
-| Tanzania | ✓ | | | ✓ |
-| Togo | ✓ | | | ✓ |
-| Tunisia | ✓ | | | ✓ |
-| Uganda | ✓ | | | ✓ |
-| United Arab Emirates | ✓ | | | ✓ |
-| Yemen | ✓ | | | ✓ |
-| Zambia | ✓ | | | ✓ |
-| Zimbabwe | ✓ | | | ✓ |
+| Country/Region | Infrared satellite & Radar tiles | Minute forecast | Severe weather alerts | Other* |
+|-|::|:--:|::|::|
+| Algeria | ✓ | ✓ | | ✓ |
+| Angola | ✓ | ✓ | | ✓ |
+| Bahrain | ✓ | ✓ | | ✓ |
+| Benin | ✓ | ✓ | | ✓ |
+| Botswana | ✓ | ✓ | | ✓ |
+| Bouvet Island | ✓ | | | ✓ |
+| Burkina Faso | ✓ | ✓ | | ✓ |
+| Burundi | ✓ | ✓ | | ✓ |
+| Cameroon | ✓ | ✓ | | ✓ |
+| Cape Verde | ✓ | ✓ | | ✓ |
+| Central African Republic | ✓ | ✓ | | ✓ |
+| Chad | ✓ | ✓ | | ✓ |
+| Comoros | ✓ | ✓ | | ✓ |
+| Congo (DRC) | ✓ | ✓ | | ✓ |
+| Côte d'Ivoire | ✓ | ✓ | | ✓ |
+| Djibouti | ✓ | ✓ | | ✓ |
+| Egypt | ✓ | ✓ | | ✓ |
+| Equatorial Guinea | ✓ | ✓ | | ✓ |
+| Eritrea | ✓ | ✓ | | ✓ |
+| Eswatini | ✓ | ✓ | | ✓ |
+| Ethiopia | ✓ | ✓ | | ✓ |
+| French Southern Territories | ✓ | | | ✓ |
+| Gabon | ✓ | ✓ | | ✓ |
+| Gambia | ✓ | ✓ | | ✓ |
+| Ghana | ✓ | ✓ | | ✓ |
+| Guinea | ✓ | ✓ | | ✓ |
+| Guinea-Bissau | ✓ | ✓ | | ✓ |
+| Iran | ✓ | ✓ | | ✓ |
+| Iraq | ✓ | ✓ | | ✓ |
+| Israel | ✓ | ✓ | ✓ | ✓ |
+| Jordan | ✓ | ✓ | | ✓ |
+| Kenya | ✓ | ✓ | | ✓ |
+| Kuwait | ✓ | ✓ | | ✓ |
+| Lebanon | ✓ | ✓ | | ✓ |
+| Lesotho | ✓ | ✓ | | ✓ |
+| Liberia | ✓ | ✓ | | ✓ |
+| Libya | ✓ | ✓ | | ✓ |
+| Madagascar | ✓ | ✓ | | ✓ |
+| Malawi | ✓ | ✓ | | ✓ |
+| Mali | ✓ | ✓ | | ✓ |
+| Mauritania | ✓ | ✓ | | ✓ |
+| Mauritius | ✓ | ✓ | | ✓ |
+| Mayotte | ✓ | ✓ | | ✓ |
+| Morocco | ✓ | | | ✓ |
+| Mozambique | ✓ | ✓ | | ✓ |
+| Namibia | ✓ | ✓ | | ✓ |
+| Niger | ✓ | ✓ | | ✓ |
+| Nigeria | ✓ | ✓ | | ✓ |
+| Oman | ✓ | ✓ | | ✓ |
+| Palestinian Authority | ✓ | ✓ | | ✓ |
+| Qatar | ✓ | ✓ | | ✓ |
+| Réunion | ✓ | ✓ | | ✓ |
+| Rwanda | ✓ | ✓ | | ✓ |
+| Saint Helena, Ascension, Tristan da Cunha | ✓ | ✓ | | ✓ |
+| São Tomé & Príncipe | ✓ | ✓ | | ✓ |
+| Saudi Arabia | ✓ | ✓ | | ✓ |
+| Senegal | ✓ | ✓ | | ✓ |
+| Seychelles | ✓ | ✓ | | ✓ |
+| Sierra Leone | ✓ | ✓ | | ✓ |
+| Somalia | ✓ | ✓ | | ✓ |
+| South Africa | ✓ | ✓ | | ✓ |
+| South Sudan | ✓ | ✓ | | ✓ |
+| Sudan | ✓ | ✓ | | ✓ |
+| Syria | ✓ | ✓ | | ✓ |
+| Tanzania | ✓ | ✓ | | ✓ |
+| Togo | ✓ | ✓ | | ✓ |
+| Tunisia | ✓ | ✓ | | ✓ |
+| Uganda | ✓ | ✓ | | ✓ |
+| United Arab Emirates | ✓ | ✓ | | ✓ |
+| Yemen | ✓ | ✓ | | ✓ |
+| Zambia | ✓ | ✓ | | ✓ |
+| Zimbabwe | ✓ | ✓ | | ✓ |
## Next steps
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 11/3/2022 Last updated : 11/9/2022
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Agent diagnostics logs | | | X | | **Data sent to** | | | | | | | Azure Monitor Logs | X | X | |
-| | Azure Monitor Metrics<sup>1</sup> | X | | X |
+| | Azure Monitor Metrics<sup>1</sup> | X (Public preview) | | X (Public preview) |
| | Azure Storage | | | X | | | Event Hub | | | X | | **Services and features supported** | | | | |
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | File based logs | X (Public preview) | | | | | **Data sent to** | | | | | | | | Azure Monitor Logs | X | X | | |
-| | Azure Monitor Metrics<sup>1</sup> | X | | | X |
+| | Azure Monitor Metrics<sup>1</sup> | X (Public preview) | | | X (Public preview) |
| | Azure Storage | | | X | | | | Event Hub | | | X | | | **Services and features supported** | | | | | |
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
description: Options for managing Azure Monitor Agent on Azure virtual machines
Previously updated : 09/22/2022 Last updated : 11/9/2022
The following prerequisites must be met prior to installing Azure Monitor Agent.
| Built-in role | Scopes | Reason | |:|:|:| | <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, scale sets,</li><li>Azure Arc-enabled servers</li></ul> | To deploy the agent |
- | Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy Azure Resource Manager templates |
+ | Any role that includes the action *Microsoft.Resources/deployments/** (for example, [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#log-analytics-contributor)) | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy agent extension via Azure Resource Manager templates (also used by Azure Policy) |
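As a rough sketch, assigning one of these roles at resource group scope with Azure CLI might look like the following (the principal object ID and scope values are placeholders):

```azurecli
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```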
- **Non-Azure**: To install the agent on physical servers and virtual machines hosted *outside* of Azure (that is, on-premises) or in other clouds, you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first, at no added cost.
- **Authentication**: [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) must be enabled on Azure virtual machines. Both user-assigned and system-assigned managed identities are supported.
  - **User-assigned**: This managed identity is recommended for large-scale deployments, configurable via [built-in Azure policies](#use-azure-policy). You can create a user-assigned managed identity once and share it across multiple VMs, which means it's more scalable than a system-assigned managed identity. If you use a user-assigned managed identity, you must pass the managed identity details to Azure Monitor Agent via extension settings:
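The settings payload has roughly this shape for a user-assigned identity (a sketch; confirm the field names against the Azure Monitor Agent installation docs, and substitute your identity's resource ID):

```json
{
  "authentication": {
    "managedIdentity": {
      "identifier-name": "mi_res_id",
      "identifier-value": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
    }
  }
}
```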
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Title: Monitor Azure App Service performance in .NET Core | Microsoft Docs description: Application performance monitoring for Azure App Service using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/05/2021 Last updated : 11/09/2022 ms.devlang: csharp
What follows is our step-by-step troubleshooting guide for extension/agent-based
# [Linux](#tab/linux)
-1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of `~2`.
+1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of `~3`.
1. Browse to `https://your site name.scm.azurewebsites.net/ApplicationInsights`.
1. Within this site, confirm:
   * The status source exists and looks like `Status source /var/log/applicationinsights/status_abcde1234567_89_0.json`.
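If `ApplicationInsightsAgent_EXTENSION_VERSION` isn't set as expected, one way to correct it is via the CLI (a sketch; substitute your app and resource group names):

```azurecli
az webapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group-name> \
  --settings ApplicationInsightsAgent_EXTENSION_VERSION=~3
```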
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
When filtering down to a particular resource in the Change Analysis standalone p
1. In the Azure portal, select **All resources**.
1. Select the actual resource you want to view.
1. In that resource's left side menu, select **Diagnose and solve problems**.
-1. Select **Change details**.
+1. In the Change Analysis card, select **View change details**.
+
+ :::image type="content" source="./media/change-analysis/change-details-card.png" alt-text="Screenshot of viewing change details from the Change Analysis card in Diagnose and solve problems tool.":::
From here, you'll be able to view all of the changes for that one resource.
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
In addition to the methods below, you may be given the option to create a new Az
Use the following command to create an Azure Monitor workspace using Azure CLI.

```azurecli
-az resource create --resource-group divyaj-test --namespace microsoft.monitor --resource-type accounts --name testmac0929 --location westus2 --properties {}
+az resource create --resource-group <resource-group-name> --namespace microsoft.monitor --resource-type accounts --name <azure-monitor-workspace-name> --location <location> --properties {}
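# For example, with hypothetical names, a filled-in command might look like:
#   az resource create --resource-group my-monitor-rg --namespace microsoft.monitor --resource-type accounts --name my-amw --location eastus --properties '{}'
# (Quoting the empty --properties object may be needed depending on your shell.)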
```

### [Resource Manager](#tab/resource-manager)
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
Previously updated : 07/10/2022 Last updated : 07/27/2022 #Customer intent: As a dev-ops administrator I want to migrate my retention setting from diagnostic setting retention storage to Azure Storage lifecycle management so that it continues to work after the feature has been deprecated. # Migrate from diagnostic settings storage retention to Azure Storage lifecycle management
-This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal) for retention.
+The Diagnostic Settings Storage Retention feature is being deprecated. To configure retention for logs and metrics, use Azure Storage lifecycle management.
+
+This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](/azure/storage/blobs/lifecycle-management-policy-configure?tabs=azure-portal) for retention.
+
+> [!IMPORTANT]
+> **Deprecation Timeline.**
+> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. If you have configured retention settings, you'll still be able to see and change them.
+> - September 30, 2023 – You will no longer be able to use the API or Azure portal to configure retention settings unless you're changing them to *0*. Existing retention rules will still be respected.
+> - September 30, 2025 – All retention functionality for the Diagnostic Settings Storage Retention feature will be disabled across all environments.
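For reference, a minimal lifecycle management rule that deletes aged diagnostic log blobs might look like the following sketch (the rule name, container prefix, and 90-day window are assumptions; adjust them to your retention needs):

```json
{
  "rules": [
    {
      "name": "delete-old-diagnostic-logs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "appendBlob" ],
          "prefixMatch": [ "insights-logs-" ]
        },
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}
```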
## Prerequisites
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 11/08/2022 Last updated : 11/09/2022 + # Guidelines for Azure NetApp Files network planning
Azure NetApp Files volumes are designed to be contained in a special purpose sub
## Configurable network features
- Register for the [**configurable network features**](configure-network-features.md) to create volumes with standard network features. You can create new volumes choosing *Standard* or *Basic* network features in supported regions. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features.
+ You can create new volumes choosing *Standard* or *Basic* network features in supported regions. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features. For more information, see [Configure network features](configure-network-features.md).
* ***Standard*** Selecting this setting enables higher IP limits and standard VNet features such as [network security groups](../virtual-network/network-security-groups-overview.md) and [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) on delegated subnets, and additional connectivity patterns as indicated in this article.
Azure NetApp Files Standard network features are supported for the following reg
* Australia Central 2 * Australia East * Australia Southeast
+* Brazil South
* Canada Central * Central US * East Asia
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 10/18/2022 Last updated : 11/10/2022 # Solution architectures using Azure NetApp Files
This section provides references for High Performance Computing (HPC) solutions.
### Analytics

* [SAS on Azure architecture guide - Azure Architecture Center | Azure NetApp Files](/azure/architecture/guide/sas/sas-overview#azure-netapp-files-nfs)
+* [Deploy SAS Grid 9.4 on Azure NetApp Files](/azure/architecture/guide/hpc/netapp-files-sas)
+* [Best Practices for Using Microsoft Azure with SAS®](https://communities.sas.com/t5/Administration-and-Deployment/Best-Practices-for-Using-Microsoft-Azure-with-SAS/m-p/676833#M19680)
* [Azure NetApp Files: A shared file system to use with SAS Grid on Microsoft Azure](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/705192)
* [Azure NetApp Files: A shared file system to use with SAS Grid on MS Azure – RHEL8.3/nconnect UPDATE](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/722261#M21648)
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
na Previously updated : 09/29/2022 Last updated : 11/09/2022
Two settings are available for network features:
* Conversion between Basic and Standard networking features in either direction is not currently supported.
-## Register the feature
-
-Follow the registration steps if you're using the feature for the first time.
-
-1. Register the feature by running the following commands:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSDNAppliance
-
- Register-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowPoliciesOnBareMetal
- ```
-
-2. Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSDNAppliance
-
- Get-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowPoliciesOnBareMetal
- ```
-
-You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
## Set the Network Features option

This section shows you how to set the Network Features option.
azure-netapp-files Cross Region Replication Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-manage-disaster-recovery.md
na Previously updated : 04/21/2021 Last updated : 11/09/2022 # Manage disaster recovery using cross-region replication
After disaster recovery, you can reactivate the source volume by performing a re
> [!IMPORTANT] > The reverse resync operation synchronizes the source and destination volumes by incrementally updating the source volume with the latest updates from the destination volume, based on the last available common snapshots. This operation avoids the need to synchronize the entire volume in most cases because only changes to the destination volume *after* the most recent common snapshot will have to be replicated to the source volume. >
-> The reverse resync operation overwrites any newer data (than the most common snapshot) in the source volume with the updated destination volume data. The UI warns you about the potential for data loss. You will be prompted to confirm the resync action before the operation starts.
+> ***The reverse resync operation overwrites any newer data (than the most common snapshot) in the source volume with the updated destination volume data. The UI warns you about the potential for data loss. You will be prompted to confirm the resync action before the operation starts.***
> > In case the source volume did not survive the disaster and therefore no common snapshot exists, all data in the destination will be resynchronized to a newly created source volume.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 11/07/2022 Last updated : 11/09/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Standard network features](configure-network-features.md) are now generally available [in supported regions](azure-netapp-files-network-topologies.md#supported-regions).
- Standard network features now includes Global VNet peering. You must still [register the feature](configure-network-features.md#register-the-feature) before using it.
+ Standard network features now includes Global VNet peering.
Regular billing for Standard network features on Azure NetApp Files began November 1, 2022.
azure-percept Retirement Of Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/retirement-of-azure-percept-dk.md
Previously updated : 10/05/2022 Last updated : 11/10/2022 # Retirement of Azure Percept DK
+**Update November 9, 2022**: A firmware update that enables the Vision SoM and Audio SoM to retain their functionality with the DK beyond the retirement date will be made available before the retirement date.
+ The [Azure Percept](https://azure.microsoft.com/products/azure-percept/) public preview will be evolving to support new edge device platforms and developer experiences. As part of this evolution, the Azure Percept DK and Audio Accessory and associated supporting Azure services for the Percept DK will be retired March 30, 2023.

## How does this change affect me?
If you have questions regarding Azure Percept DK, please refer to the below **FA
| When is this change occurring? | On March 30, 2023. Until this date your DK and Studio will function as-is and updates and customer support will be offered. After this date, all updates and customer support will stop. |
| Will my projects be deleted? | Your projects remain in the underlying Azure Services they were created in (example: Custom Vision, Speech Studio, etc.). They won't be deleted due to this retirement. You can no longer modify or use your project with Percept Studio. |
| Do I need to do anything before March 30, 2023? | Yes, you will need to close the resources and projects associated with the Azure Percept Studio and DK to avoid future billing, as these backend resources and projects will continue to bill after retirement. |
-| Will my device still power on? | The various backend services that allow the DK and Audio Accessory to fully function will be shut down upon retirement, rending the DK and Audio Accessory effectively unusable. The SoMs, such as the camera and Audio Accessory, will no longer be identified by the DK after retirement and thus effectively unusable. |
+
azure-resource-manager Bicep Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-deployment.md
description: Describes the functions to use in a Bicep file to retrieve deployme
Previously updated : 06/27/2022 Last updated : 11/09/2022 # Deployment functions for Bicep
The preceding example returns the following object when deployed to global Azure
"resourceManager": "https://management.azure.com/", "authentication": { "loginEndpoint": "https://login.microsoftonline.com/",
- "audiences": [
- "https://management.core.windows.net/",
- "https://management.azure.com/"
- ],
+ "audiences": [ "https://management.core.windows.net/", "https://management.azure.com/" ],
"tenant": "common", "identityProvider": "AAD" },
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Title: Iterative loops in Bicep description: Use loops to iterate over collections in Bicep Previously updated : 12/02/2021 Last updated : 11/09/2022 # Iterative loops in Bicep This article shows you how to use the `for` syntax to iterate over items in a collection. This functionality is supported starting in v0.3.1 onward. You can use loops to define multiple copies of a resource, module, variable, property, or output. Use loops to avoid repeating syntax in your Bicep file and to dynamically set the number of copies to create during deployment. To go through a quickstart, see [Quickstart: Create multiple instances](./quickstart-loops.md).
+To use loops to create multiple resources or modules, each instance must have a unique value for the name property. You can use the index value or unique values in arrays or collections to create the names.
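For instance, a minimal sketch of index-based naming might look like this (the `stg` prefix and count are hypothetical):

```bicep
param storageCount int = 3

resource stg 'Microsoft.Storage/storageAccounts@2021-06-01' = [for i in range(0, storageCount): {
  // The loop index plus a deterministic suffix keeps each name unique.
  name: 'stg${i}${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}]
```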
### Training resources

If you would rather learn about loops through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/training/modules/build-flexible-bicep-templates-conditions-loops/).
module stgModule './storageAccount.bicep' = [for i in range(0, storageCount): {
## Array elements
-The following example creates one storage account for each name provided in the `storageNames` parameter.
+The following example creates one storage account for each name provided in the `storageNames` parameter. Note that the name property for each resource instance must be unique.
```bicep param location string = resourceGroup().location
resource storageAcct 'Microsoft.Storage/storageAccounts@2021-06-01' = [for name
}] ```
-The next example iterates over an array to define a property. It creates two subnets within a virtual network.
+The next example iterates over an array to define a property. It creates two subnets within a virtual network. Note that the subnet names must be unique.
::: code language="bicep" source="~/azure-docs-bicep-samples/samples/loops/loopproperty.bicep" highlight="23-28" :::
output deployedNSGs array = [for (name, i) in orgNames: {
## Dictionary object
-To iterate over elements in a dictionary object, use the [items function](bicep-functions-object.md#items), which converts the object to an array. Use the `value` property to get properties on the objects.
+To iterate over elements in a dictionary object, use the [items function](bicep-functions-object.md#items), which converts the object to an array. Use the `value` property to get properties on the objects. Note that the NSG resource names must be unique.
```bicep param nsgValues object = {
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
Title: Create & deploy template specs in Bicep
description: Describes how to create template specs in Bicep and share them with other users in your organization. Previously updated : 08/23/2022 Last updated : 11/10/2022 # Azure Resource Manager template specs in Bicep
To learn more about template specs, and for hands-on guidance, see [Publish libr
To create a template spec, you need **write** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`.
-To deploy a template spec, you need **read** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`. You also need **write** access to any resources deployed by the template spec, and access to `Microsoft.Resources/deployments/*`.
+To deploy a template spec, you need **read** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`. You also need the permissions for deploying a Bicep file. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
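For example, creating a template spec from a Bicep file with Azure CLI might look like this sketch (the names are placeholders, and passing a Bicep file directly assumes a recent Azure CLI version):

```azurecli
az ts create \
  --name storageSpec \
  --resource-group templateSpecRG \
  --location westus2 \
  --version "1.0" \
  --template-file ./main.bicep
```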
## Why use template specs?
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
Title: Create & deploy template specs description: Describes how to create template specs and share them with other users in your organization. Previously updated : 01/12/2022 Last updated : 11/10/2022
az ts show \
## Deploy template spec
-After you've created the template spec, users with **read** access to the template spec can deploy it. For information about granting access, see [Tutorial: Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md).
+After you've created the template spec, users with **read** access to the template spec can deploy it. For information about granting access, see [Tutorial: Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md). You also need the permissions for deploying an ARM template. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
Template specs can be deployed through the portal, PowerShell, Azure CLI, or as a linked template in a larger template deployment. Users in an organization can deploy a template spec to any scope in Azure (resource group, subscription, management group, or tenant).
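As a sketch, deploying a specific version of a template spec with Azure CLI might look like this (resource names are placeholders):

```azurecli
id=$(az ts show --name storageSpec --resource-group templateSpecRG --version "1.0" --query "id" --output tsv)

az deployment group create \
  --resource-group demoRg \
  --template-spec $id
```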
azure-resource-manager Template Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-test-cases.md
Title: Template test cases for test toolkit description: Describes the template tests that are run by the Azure Resource Manager template test toolkit. Previously updated : 07/30/2021 Last updated : 11/09/2022
This test finds parameters that aren't used in the template or parameters that a
To reduce confusion in your template, delete any parameters that are defined but not used. Eliminating unused parameters simplifies template deployments because you don't have to provide unnecessary values.
+In Bicep, use [Linter rule - no unused parameters](../bicep/linter-rule-no-unused-parameters.md).
+ The following example **fails** because the expression that references a parameter is missing the leading square bracket (`[`). ```json
You use the types `secureString` or `secureObject` on parameters that contain se
When you provide a default value, that value is discoverable by anyone who can access the template or the deployment history.
+In Bicep, use [Linter rule - secure parameter default](../bicep/linter-rule-secure-parameter-default.md).
+ The following example **fails**. ```json
Test name: **DeploymentTemplate Must Not Contain Hardcoded Uri**
Don't hard-code environment URLs in your template. Instead, use the [environment](template-functions-deployment.md#environment) function to dynamically get these URLs during deployment. For a list of the URL hosts that are blocked, see the [test case](https://github.com/Azure/arm-ttk/blob/master/arm-ttk/testcases/deploymentTemplate/DeploymentTemplate-Must-Not-Contain-Hardcoded-Uri.test.ps1).
+In Bicep, use [Linter rule - no hardcoded environment URL](../bicep/linter-rule-no-hardcoded-environment-urls.md).
+ The following example **fails** because the URL is hard-coded. ```json
Template users may have limited access to regions where they can create resource
By providing a `location` parameter that defaults to the resource group location, users can use the default value when convenient but also specify a different location.
+In Bicep, use [Linter rule - no location expressions outside of parameter default values](../bicep/linter-rule-no-loc-expr-outside-params.md).
+ The following example **fails** because the resource's `location` is set to `resourceGroup().location`. ```json
Test name: **Resources Should Have Location**
The location for a resource should be set to a [template expression](template-expressions.md) or `global`. The template expression would typically use the `location` parameter described in [Location uses parameter](#location-uses-parameter).
+In Bicep, use [Linter rule - no hardcoded locations](../bicep/linter-rule-no-hardcoded-location.md).
+ The following example **fails** because the `location` isn't an expression or `global`. ```json
When you include parameters for `_artifactsLocation` and `_artifactsLocationSasT
- `_artifactsLocationSasToken` can only have an empty string for its default value. - `_artifactsLocationSasToken` can't have a default value in a nested template.
+In Bicep, use [Linter rule - artifacts parameters](../bicep/linter-rule-artifacts-parameters.md).
+ ## Declared variables must be used Test name: **Variables Must Be Referenced**
This test finds variables that aren't used in the template or aren't used in a v
Variables that use the `copy` element to iterate values must be referenced. For more information, see [Variable iteration in ARM templates](copy-variables.md).
+In Bicep, use [Linter rule - no unused variables](../bicep/linter-rule-no-unused-variables.md).
+ The following example **fails** because the variable that uses the `copy` element isn't referenced. ```json
A warning that an API version wasn't found only indicates the version isn't incl
Learn more about the [toolkit cache](https://github.com/Azure/arm-ttk/tree/master/arm-ttk/cache).
+In Bicep, use [Linter rule - use recent API versions](../bicep/linter-rule-use-recent-api-versions.md).
+ The following example **fails** because the API version is more than two years old. ```json
When specifying a resource ID, use one of the resource ID functions. The allowed
- [tenantResourceId](template-functions-resource.md#tenantresourceid) - [extensionResourceId](template-functions-resource.md#extensionresourceid)
-Don't use the concat function to create a resource ID. The following example **fails**.
+Don't use the concat function to create a resource ID.
+
+In Bicep, use [Linter rule - use resource ID functions](../bicep/linter-rule-use-resource-id-functions.md).
+
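A passing pattern builds the ID with the `resourceId` function instead, for example (the parameter name is illustrative):

```json
"networkSecurityGroup": {
  "id": "[resourceId('Microsoft.Network/networkSecurityGroups', parameters('networkSecurityGroupName'))]"
}
```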
+The following example **fails**.
```json "networkSecurityGroup": {
When setting the deployment dependencies, don't use the [if](template-functions-
The `dependsOn` element can't begin with a [concat](template-functions-array.md#concat) function.
+In Bicep, use [Linter rule - no unnecessary dependsOn entries](../bicep/linter-rule-no-unnecessary-dependson.md).
+ The following example **fails** because it contains an `if` function. ```json
Test name: **adminUsername Should Not Be A Literal**
When setting an `adminUserName`, don't use a literal value. Create a parameter for the user name and use an expression to reference the parameter's value.
+In Bicep, use [Linter rule - admin user name should not be literal](../bicep/linter-rule-admin-username-should-not-be-literal.md).
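A passing pattern references a parameter instead, for example:

```json
"osProfile": {
  "adminUsername": "[parameters('adminUsername')]"
}
```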
+ The following example **fails** with a literal value. ```json
This test is disabled, but the output shows that it passed. The best practice is
If your template includes a virtual machine with an image, make sure it's using the latest version of the image.
+In Bicep, use [Linter rule - use stable VM image](../bicep/linter-rule-use-stable-vm-image.md).
+ ## Use stable VM images Test name: **Virtual Machines Should Not Be Preview**
Virtual machines shouldn't use preview images. The test checks the `storageProfi
For more information about the `imageReference` property, see [Microsoft.Compute virtualMachines](/azure/templates/microsoft.compute/virtualmachines#imagereference-object) and [Microsoft.Compute virtualMachineScaleSets](/azure/templates/microsoft.compute/virtualmachinescalesets#imagereference-object).
+In Bicep, use [Linter rule - use stable VM image](../bicep/linter-rule-use-stable-vm-image.md).
+ The following example **fails** because `imageReference` is a string that contains _preview_. ```json
Don't include any values in the `outputs` section that potentially exposes secre
The output from a template is stored in the deployment history, so a malicious user could find that information.
+In Bicep, use [Linter rule - outputs should not contain secrets](../bicep/linter-rule-outputs-should-not-contain-secrets.md).
+ The following example **fails** because it includes a secure parameter in an output value. ```json
For resources with type `CustomScript`, use the encrypted `protectedSettings` wh
Don't use secret data in the `settings` object because it uses clear text. For more information, see [Microsoft.Compute virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions), [Windows]( /azure/virtual-machines/extensions/custom-script-windows), or [Linux](../../virtual-machines/extensions/custom-script-linux.md).
+In Bicep, use [Linter rule - use protectedSettings for commandToExecute secrets](../bicep/linter-rule-protect-commandtoexecute-secrets.md).
+ The following example **fails** because `settings` uses `commandToExecute` with a secure parameter. ```json
Use the nested template's `expressionEvaluationOptions` object with `inner` scop
For more information about nested templates, see [Microsoft.Resources deployments](/azure/templates/microsoft.resources/deployments) and [Expression evaluation scope in nested templates](linked-templates.md#expression-evaluation-scope-in-nested-templates).
+In Bicep, use [Linter rule - secure params in nested deploy](../bicep/linter-rule-secure-params-in-nested-deploy.md).
+ The following example **fails** because `expressionEvaluationOptions` uses `outer` scope to evaluate secure parameters or `list*` functions. ```json
azure-signalr Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-custom-domain.md
Last updated 08/15/2022
-# How to Configure a custom domain for Azure SignalR Service
+# How to configure a custom domain for Azure SignalR Service
In addition to the default domain provided with Azure SignalR Service, you can also add a custom DNS domain to your service. In this article, you'll learn how to add a custom domain to your SignalR Service.
azure-signalr Howto Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints.md
Title: Secure Azure SignalR outbound traffic through Shared Private Endpoints
+ Title: Secure Azure SignalR outbound traffic through shared private endpoints
-description: How to secure outbound traffic through Shared Private Endpoints to avoid traffic go to public network
+description: How to secure outbound traffic through shared private endpoints to avoid traffic going to the public network
Last updated 07/08/2021
-# Secure Azure SignalR outbound traffic through Shared Private Endpoints
+# Secure Azure SignalR outbound traffic through shared private endpoints
When you're using [serverless mode](concept-service-mode.md#serverless-mode) in Azure SignalR Service, you can create outbound [private endpoint connections](../private-link/private-endpoint-overview.md) to an upstream service.
azure-video-indexer Compare Video Indexer With Media Services Presets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compare-video-indexer-with-media-services-presets.md
Title: Comparison of Azure Video Indexer and Azure Media Services v3 presets description: This article compares Azure Video Indexer capabilities and Azure Media Services v3 presets. Previously updated : 02/24/2020 Last updated : 11/10/2022
This article compares the capabilities of **Azure Video Indexer (AVI) APIs** and **Media Services v3 APIs**.
-Currently, there is an overlap between features offered by the [Azure Video Indexer APIs](https://api-portal.videoindexer.ai/) and the [Media Services v3 APIs](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01/Encoding.json). The following table offers the current guideline for understanding the differences and similarities.
+Currently, there is an overlap between features offered by the [Azure Video Indexer APIs](https://api-portal.videoindexer.ai/) and the [Media Services v3 APIs](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01/Encoding.json). Azure Media Services has [announced the deprecation](https://learn.microsoft.com/azure/media-services/latest/release-notes#retirement-of-the-azure-media-redactor-video-analyzer-and-face-detector-on-september-14-2023) of its Video Analysis preset starting September 2023. It's advised to use Azure Video Indexer Video Analysis going forward, which is generally available and offers more functionality.
+
+The following table offers the current guideline for understanding the differences and similarities.
## Compare
azure-video-indexer Compliance Privacy Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compliance-privacy-security.md
- Title: Azure Video Indexer compliance, privacy and security
-description: This article discusses Azure Video Indexer compliance, privacy and security.
- Previously updated : 08/18/2022---
-# Compliance, Privacy and Security
-
-As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
-
-Before uploading any video/image to Azure Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
-
-To learn about compliance, privacy and security in Azure Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
-
-## Next steps
-
-[Azure Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Azure Video Indexer analyzes the video and audio content by running 30+ AI model
To start extracting insights with Azure Video Indexer, see the [how can I get started](#how-can-i-get-started-with-azure-video-indexer) section below.
-## Compliance, Privacy and Security
-
-> [!Important]
-> Before you continue with Azure Video Indexer, read [Compliance, privacy and security](compliance-privacy-security.md).
- ## What can I do with Azure Video Indexer? Azure Video Indexer's insights can be applied to many scenarios, among them are:
Learn how to [get started with Azure Video Indexer](video-indexer-get-started.md
Once you set up, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
+## Compliance, Privacy and Security
+
+As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
+
+Before uploading any video/image to Azure Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
+
+To learn about compliance, privacy and security in Azure Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
+ ## Next steps You're ready to get started with Azure Video Indexer. For more information, see the following articles:
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Title: Platform updates for Azure VMware Solution
+ Title: What's new in Azure VMware Solution
description: Learn about the platform updates to Azure VMware Solution. -+ Previously updated : 09/15/2022 Last updated : 11/09/2022
-# Platform updates for Azure VMware Solution
+# What's new in Azure VMware Solution
Microsoft will regularly apply important updates to the Azure VMware Solution for new features and software lifecycle management. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
-## July 8, 2022
+## November 2022
+AV36P and AV52 node sizes are now available in Azure VMware Solution.
+The new node sizes increase memory and storage options to optimize your workloads. The gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of the new nodes allows for large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
+ For pricing and region availability, see the [Azure VMware Solution pricing page](https://azure.microsoft.com/pricing/details/azure-vmware/) and the [Products available by region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
-HCX cloud manager in Azure VMware Solution can now be accessible over a public IP address. You can pair HCX sites and create a service mesh from on-premises to Azure VMware Solution private cloud using Public IP.
+## July 2022
-HCX with public IP is especially useful in cases where On-premises sites are not connected to Azure via Express Route or VPN. HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections.
+ - HCX cloud manager in Azure VMware Solution can now be accessible over a public IP address. You can pair HCX sites and create a service mesh from on-premises to Azure VMware Solution private cloud using Public IP.
+ HCX with public IP is especially useful in cases where on-premises sites are not connected to Azure via ExpressRoute or VPN. HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections. For more information, see [Enable HCX over the internet](./enable-hcx-access-over-internet.md).
-For more information, please see [Enable HCX over the internet](./enable-hcx-access-over-internet.md)
+ - All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+ Any existing private clouds will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+ You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-## July 7, 2022
-
-All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
-
-Any existing private clouds will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
-
-You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-
-## June 7, 2022
+## June 2022
All new Azure VMware Solution private clouds in regions (East US2, Canada Central, North Europe, and Japan East), are now deployed in with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c. Any existing private clouds in the above mentioned regions will also be upgraded to these versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
-## May 23, 2022
-
-All new Azure VMware Solution private clouds in regions (Germany West Central, Australia East, Central US and UK West), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+## May 2022
-Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+ - All new Azure VMware Solution private clouds in regions (Germany West Central, Australia East, Central US and UK West), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+ Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html). You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
+ - All new Azure VMware Solution private clouds in regions (France Central, Brazil South, Japan West, Australia Southeast, Canada East, East Asia, and Southeast Asia), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+ Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+ You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-## May 9, 2022
-
-All new Azure VMware Solution private clouds in regions (France Central, Brazil South, Japan West, Australia Southeast, Canada East, East Asia, and Southeast Asia), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
-
-Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
-
-You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-
-## February 18, 2022
+## February 2022
Per VMware security advisory [VMSA-2022-0004](https://www.vmware.com/security/advisories/VMSA-2022-0004.html), multiple vulnerabilities in VMware ESXi have been reported to VMware.
For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release E
No further action is required.
-## December 22, 2021
-
-Azure VMware Solution (AVS) has completed maintenance activities to address critical vulnerabilities in Apache Log4j.
-The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these AVS managed VMware products: vCenter Server, NSX-T Data Center, SRM and HCX.
-We strongly encourage customers to apply the fixes to on-premises HCX connector appliances.
-
-We also recommend customers to review the security advisory and apply the fixes for other affected VMware products or workloads.
-
-If you need any assistance or have questions, please [contact us](https://portal.azure.com/#home).
-## December 12, 2021
-
-VMware has announced a security advisory [VMSA-2021-0028](https://www.vmware.com/security/advisories/VMSA-2021-0028.html), addressing a critical vulnerability in Apache Log4j identified by CVE-2021-44228.
+## December 2021
-Azure VMware Solution is actively monitoring this issue. We are addressing this issue by applying VMware recommended workarounds or patches for AVS managed VMware components as they become available.
+ - Azure VMware Solution (AVS) has completed maintenance activities to address critical vulnerabilities in Apache Log4j. The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these AVS managed VMware products: vCenter Server, NSX-T Data Center, SRM and HCX. We strongly encourage customers to apply the fixes to on-premises HCX connector appliances.
+ We also recommend customers to review the security advisory and apply the fixes for other affected VMware products or workloads.
+ If you need any assistance or have questions, please [contact us](https://portal.azure.com/#home).
-Please note that you may experience intermittent connectivity to these components when we apply a fix.
+ - VMware has announced a security advisory [VMSA-2021-0028](https://www.vmware.com/security/advisories/VMSA-2021-0028.html), addressing a critical vulnerability in Apache Log4j identified by CVE-2021-44228. Azure VMware Solution is actively monitoring this issue. We are addressing this issue by applying VMware recommended workarounds or patches for AVS managed VMware components as they become available. Please note that you may experience intermittent connectivity to these components when we apply a fix. We strongly recommend that you read the advisory and patch or apply the recommended workarounds for any additional VMware products that you may have deployed in Azure VMware Solution. If you need any assistance or have questions, please [contact us](https://portal.azure.com).
-We strongly recommend that you read the advisory and patch or apply the recommended workarounds for any additional VMware products that you may have deployed in Azure VMware Solution.
-
-If you need any assistance or have questions, please [contact us](https://portal.azure.com).
-
-## November 23, 2021
+## November 2021
Per VMware security advisory [VMSA-2021-0027](https://www.vmware.com/security/advisories/VMSA-2021-0027.html), multiple vulnerabilities in VMware vCenter Server have been reported to VMware.
For more information, see [VMware vCenter Server 6.7 Update 3p Release Notes](ht
No further action is required.
-## September 21, 2021
-Per VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), multiple vulnerabilities in the VMware vCenter Server have been reported to VMware.
-
-To address the vulnerabilities (CVE-2021-21991, CVE-2021-21992, CVE-2021-21993, CVE-2021-22005, CVE-2021-22006, CVE-2021-22007, CVE-2021-22008, CVE-2021-22009, CVE-2021-22010, CVE-2021-22011, CVE-2021-22012,CVE-2021-22013, CVE-2021-22014, CVE-2021-22015, CVE-2021-22016, CVE-2021-22017, CVE-2021-22018, CVE-2021-22019, CVE-2021-22020) reported in VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), vCenter Server has been updated to 6.7 Update 3o in all Azure VMware Solution private clouds. All new Azure VMware Solution private clouds are deployed with vCenter Server version 6.7 Update 3o.
-
-For more information, see [VMware vCenter Server 6.7 Update 3o Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3o-release-notes.html)
-
-No further action is required.
-
-## September 10, 2021
-
-All new Azure VMware Solution private clouds are now deployed with ESXi version ESXi670-202103001 (Build number: 17700523).
-ESXi hosts in existing private clouds have been patched to this version.
-
-For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release ESXi670-202103001](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202103001.html).
+## September 2021
+ - Per VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), multiple vulnerabilities in the VMware vCenter Server have been reported to VMware. To address the vulnerabilities (CVE-2021-21991, CVE-2021-21992, CVE-2021-21993, CVE-2021-22005, CVE-2021-22006, CVE-2021-22007, CVE-2021-22008, CVE-2021-22009, CVE-2021-22010, CVE-2021-22011, CVE-2021-22012,CVE-2021-22013, CVE-2021-22014, CVE-2021-22015, CVE-2021-22016, CVE-2021-22017, CVE-2021-22018, CVE-2021-22019, CVE-2021-22020) reported in VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), vCenter Server has been updated to 6.7 Update 3o in all Azure VMware Solution private clouds. All new Azure VMware Solution private clouds are deployed with vCenter Server version 6.7 Update 3o. For more information, see [VMware vCenter Server 6.7 Update 3o Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3o-release-notes.html). No further action is required.
+ - All new Azure VMware Solution private clouds are now deployed with ESXi version ESXi670-202103001 (Build number: 17700523). ESXi hosts in existing private clouds have been patched to this version. For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release ESXi670-202103001](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202103001.html).
-## July 23, 2021
+## July 2021
All new Azure VMware Solution private clouds are now deployed with NSX-T Data Center version [!INCLUDE [nsxt-version](includes/nsxt-version.md)]. NSX-T Data Center version in existing private clouds will be upgraded through September, 2021 to NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] release.
You'll receive an email with the planned maintenance date and time. You can resc
For more information on this NSX-T Data Center version, see [VMware NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] Release Notes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html).
+## May 2021
+ - Per VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), multiple vulnerabilities in VMware ESXi and vSphere Client (HTML5) have been reported to VMware. To address the vulnerabilities ([CVE-2021-21985](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21985) and [CVE-2021-21986](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21986)) reported in VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), vCenter Server has been updated in all Azure VMware Solution private clouds. No further action is required.
+ - Azure VMware Solution service will do maintenance work through May 23, 2021, to apply important updates to the vCenter Server in your private cloud. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance for your private cloud. During this time, VMware vCenter Server will be unavailable and you won't be able to manage VMs (stop, start, create, or delete). It's recommended that, during this time, you don't plan any other activities like scaling up private cloud, creating new networks, and so on, in your private cloud. There is no impact to workloads running in your private cloud.
-## May 25, 2021
-Per VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), multiple vulnerabilities in VMware ESXi and vSphere Client (HTML5) have been reported to VMware.
-
-To address the vulnerabilities ([CVE-2021-21985](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21985) and [CVE-2021-21986](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21986)) reported in VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), vCenter Server has been updated in all Azure VMware Solution private clouds.
-
-No further action is required.
-
-## May 21, 2021
-
-Azure VMware Solution service will do maintenance work through May 23, 2021, to apply important updates to the vCenter Server in your private cloud. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance for your private cloud.
-
-During this time, VMware vCenter Server will be unavailable and you won't be able to manage VMs (stop, start, create, or delete). It's recommended that, during this time, you don't plan any other activities like scaling up private cloud, creating new networks, and so on, in your private cloud.
-
-There is no impact to workloads running in your private cloud.
-## April 26, 2021
+## April 2021
All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 6.7U3l and NSX-T Data Center version 2.5.2. We're not using NSX-T Data Center 3.1.1 for new private clouds because of an identified issue in NSX-T Data Center 3.1.1 that impacts customer VM connectivity. The VMware recommended mitigation was applied to all existing private clouds currently running NSX-T Data Center 3.1.1 on Azure VMware Solution. The workaround has been confirmed that there's no impact to customer VM connectivity.
-## March 24, 2021
-All new Azure VMware Solution private clouds are deployed with VMware vCenter Server version 6.7U3l and NSX-T Data Center version 3.1.1. Any existing private clouds will be updated and upgraded **through June 2021** to the releases mentioned above.
+## March 2021
+ - All new Azure VMware Solution private clouds are deployed with VMware vCenter Server version 6.7U3l and NSX-T Data Center version 3.1.1. Any existing private clouds will be updated and upgraded **through June 2021** to the releases mentioned above. You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. An hour before the upgrade, you'll receive a notification and then again when it finishes.
-You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. An hour before the upgrade, you'll receive a notification and then again when it finishes.
+ - Azure VMware Solution service will do maintenance work **through March 19, 2021,** to update the vCenter Server in your private cloud to vCenter Server 6.7 Update 3l version.
+ VMware vCenter Server will be unavailable during this time, so you can't manage your VMs (stop, start, create, delete) or private cloud scaling (adding/removing servers and clusters). However, VMware High Availability (HA) will continue to operate to protect existing VMs.
+ For more information on this vCenter version, see [VMware vCenter Server 6.7 Update 3l Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3l-release-notes.html).
-## March 15, 2021
+ - Azure VMware Solution will apply the [VMware ESXi 6.7, Patch Release ESXi670-202011002](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202011002.html) to existing private clouds **through March 15, 2021**.
-- Azure VMware Solution service will do maintenance work **through March 19, 2021,** to update the vCenter Server in your private cloud to vCenter Server 6.7 Update 3l version.
-
-- VMware vCenter Server will be unavailable during this time, so you can't manage your VMs (stop, start, create, delete) or private cloud scaling (adding/removing servers and clusters). However, VMware High Availability (HA) will continue to operate to protect existing VMs.
+ - Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://www.vmware.com/security/advisories/VMSA-2021-0002.html), will also be applied **through March 15, 2021**.
-For more information on this vCenter version, see [VMware vCenter Server 6.7 Update 3l Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3l-release-notes.html).
-
-## March 4, 2021
-- Azure VMware Solution will apply the [VMware ESXi 6.7, Patch Release ESXi670-202011002](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202011002.html) to existing private clouds **through March 15, 2021**.
-
-- Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://www.vmware.com/security/advisories/VMSA-2021-0002.html), will also be applied **through March 15, 2021**.
->[!NOTE]
->This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter Server and clear automatically as the maintenance progresses.
+ >[!NOTE]
+ >This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter Server and clear automatically as the maintenance progresses.
## Post update

Once complete, newer versions of VMware solution components will appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
Title: Create a simplified node communication pool without public IP addresses (preview) description: Learn how to create an Azure Batch simplified node communication pool without public IP addresses. Previously updated : 05/26/2022 Last updated : 11/08/2022
To restrict access to these nodes and reduce the discoverability of these nodes
- If you plan to use a [private endpoint with Batch accounts](private-connectivity.md), you must disable private endpoint network policies. Run the following Azure CLI command:
- `az network vnet subnet update --vnet-name <vnetname> -n <subnetname> --resource-group <resourcegroup> --disable-private-endpoint-network-policies`
+```azurecli-interactive
+az network vnet subnet update \
+ --vnet-name <vnetname> \
+ -n <subnetname> \
+ --resource-group <resourcegroup> \
+ --disable-private-endpoint-network-policies
+```
- Enable outbound access for Batch node management. A pool with no public IP addresses doesn't have internet outbound access enabled by default. To allow compute nodes to access the Batch node management service (see [Use simplified compute node communication](simplified-compute-node-communication.md)) either:
- - Use [**nodeManagement**](private-connectivity.md) private endpoint with Batch accounts, which provides private access to Batch node management service from the virtual network. This is the preferred method.
+ - Use [**nodeManagement**](private-connectivity.md) private endpoint with Batch accounts, which provides private access to Batch node management service from the virtual network. This solution is the preferred method (a CLI sketch follows this list).
- Alternatively, provide your own internet outbound access support (see [Outbound access to the internet](#outbound-access-to-the-internet)).
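
For the **nodeManagement** option, the private endpoint can also be created with the Azure CLI. The following is a minimal sketch with placeholder names; the group ID flag spelling (`--group-id` versus `--group-ids`) varies by CLI version, so check `az network private-endpoint create --help` before running it:

```azurecli-interactive
# Create a nodeManagement private endpoint for the Batch account in the pool's virtual network.
# All resource names and the subscription ID below are placeholders.
az network private-endpoint create \
    --resource-group <resourcegroup> \
    --name <endpointname> \
    --vnet-name <vnetname> \
    --subnet <subnetname> \
    --private-connection-resource-id "/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.Batch/batchAccounts/<batchaccount>" \
    --group-id nodeManagement \
    --connection-name <connectionname>
```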
To restrict access to these nodes and reduce the discoverability of these nodes
1. In the **Pools** window, select **Add**.
1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown.
1. Select the correct **Publisher/Offer/Sku** of your image.
-1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, as well as any desired optional settings.
+1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, and any desired optional settings.
1. Select a virtual network and subnet you wish to use. This virtual network must be in the same location as the pool you're creating.
1. In **IP address provisioning type**, select **NoPublicIPAddresses**.
If you're familiar with using ARM templates, select the **Deploy to Azure** butt
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.batch%2Fbatch-pool-no-public-ip%2Fazuredeploy.json)

> [!NOTE]
-> If the private endpoint deployment failed due to invalid groupId "nodeManagement", please check if the region is in the supported list, and you've already opted in with [Simplified compute node communication](simplified-compute-node-communication.md). Choose the right region and opt in your Batch account, then retry the deployment.
+> If the private endpoint deployment fails due to an invalid groupId "nodeManagement", check that the region is in the supported list and that your pool is using [Simplified compute node communication](simplified-compute-node-communication.md). Choose the right region, specify `simplified` node communication mode for the pool, and then retry the deployment.
## Outbound access to the internet
-In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). Note that NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
+In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
-Another way to provide outbound connectivity is to use a user-defined route (UDR). This lets you route traffic to a proxy machine that has public internet access, for example [Azure Firewall](../firewall/overview.md).
+Another way to provide outbound connectivity is to use a user-defined route (UDR). This method lets you route traffic to a proxy machine that has public internet access, for example [Azure Firewall](../firewall/overview.md).
> [!IMPORTANT]
> There is no extra network resource (load balancer, network security group) created for simplified node communication pools without public IP addresses. Since the compute nodes in the pool are not bound to any load balancer, Azure may provide [Default Outbound Access](../virtual-network/ip-services/default-outbound-access.md). However, Default Outbound Access is not suitable for production workloads, so it is strongly recommended to bring your own Internet outbound access.
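
As one hedged example of bringing your own outbound access, a NAT gateway can be attached to the pool's subnet with the Azure CLI. All resource names below are placeholders:

```azurecli-interactive
# Create a public IP and a NAT gateway (names are placeholders)
az network public-ip create --resource-group <resourcegroup> --name <publicipname> --sku Standard

az network nat gateway create \
    --resource-group <resourcegroup> \
    --name <natgatewayname> \
    --public-ip-addresses <publicipname>

# Attach the NAT gateway to the subnet used by the Batch pool
az network vnet subnet update \
    --resource-group <resourcegroup> \
    --vnet-name <vnetname> \
    --name <subnetname> \
    --nat-gateway <natgatewayname>
```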
You can follow the guide [Connect to compute nodes](error-handling.md#connect-to
## Migration from previous preview version of No Public IP pools
-For existing pools that use the [previous preview version of Azure Batch No Public IP pool](batch-pool-no-public-ip-address.md), it's only possible to migrate pools created in a [virtual network](batch-virtual-network.md). To migrate the pool, follow the [opt-in process for simplified node communication](simplified-compute-node-communication.md):
+For existing pools that use the [previous preview version of Azure Batch No Public IP pool](batch-pool-no-public-ip-address.md), it's only possible to migrate pools created in a [virtual network](batch-virtual-network.md).
-1. Opt in to use simplified node communication.
1. Create a [private endpoint for Batch node management](private-connectivity.md) in the virtual network.
+1. Update the pool's node communication mode to [simplified](simplified-compute-node-communication.md).
1. Scale down the pool to zero nodes.
1. Scale out the pool again. The pool is then automatically migrated to the new version of the preview. (A CLI sketch of the scaling steps follows.)
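
As a minimal sketch of the scale-down/scale-out steps, assuming you've signed in to the Batch account and `<poolid>` and `<nodecount>` are placeholders:

```azurecli-interactive
# Authenticate CLI commands against the Batch account (names are placeholders)
az batch account login --resource-group <resourcegroup> --name <batchaccount>

# Scale the pool down to zero nodes
az batch pool resize --pool-id <poolid> --target-dedicated-nodes 0 --target-low-priority-nodes 0

# After the resize completes, scale the pool back out
az batch pool resize --pool-id <poolid> --target-dedicated-nodes <nodecount>
```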
cloud-services-extended-support Feature Support Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/feature-support-analysis.md
+
+ Title: Feature Analysis Cloud Services vs Virtual Machine Scale Sets
+description: Learn about the feature set available in Cloud Services and Virtual Machine Scale Sets
+++++ Last updated : 11/8/2022++
+# Feature Analysis: Cloud Services (extended support) and Virtual Machine Scale Sets
+This article provides a feature analysis of Cloud Services (extended support) and Virtual Machine Scale Sets. For more information on Virtual Machine Scale Sets, see the [Virtual Machine Scale Sets overview](https://learn.microsoft.com/azure/virtual-machine-scale-sets/overview).
++
+## Basic setup
+
+| Feature | Cloud Services (extended support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) |
+| --- | --- | --- | --- |
+|Virtual machine type|Basic Azure PaaS VM (Microsoft.compute/cloudServices)|Standard Azure IaaS VM (Microsoft.compute/virtualmachines)|Scale Set specific VMs (Microsoft.compute/virtualmachinescalesets/virtualmachines)|
+|Maximum Instance Count (with FD guarantees)|1100|1000|3000 (1000 per Availability Zone)|
+|SKUs supported|D, Dv2, Dv3, Dav4 series, Ev3, Eav4 series, G series, H series|D series, E series, F series, A series, B series, Intel, AMD; Specialty SKUs (G, H, L, M, N) are not supported|All SKUs|
+|Full control over VM, NICs, Disks|Limited control over NICs and VM via CS-ES APIs. No support for Disks|Yes|Limited control with virtual machine scale sets VM API|
+|RBAC Permissions Required|Compute Virtual Machine Scale Sets Write, Compute VM Write, Network|Compute Virtual Machine Scale Sets Write, Compute VM Write, Network|Compute Virtual Machine Scale Sets Write|
+|Accelerated networking|Yes|Yes|Yes|
+|Spot instances and pricing|No|Yes, you can have both Spot and Regular priority instances|Yes, instances must either be all Spot or all Regular|
+|Mix operating systems|Extremely limited Windows support|Yes, Linux and Windows can reside in the same Flexible scale set|No, instances are the same operating system|
+|Disk Types|No disk support|Managed disks only, all storage types|Managed and unmanaged disks, all storage types|
+|Disk Server Side Encryption with Customer Managed Keys|No|Yes| |
+|Write Accelerator|No|No|Yes|
+|Proximity Placement Groups|No|Yes, read Proximity Placement Groups documentation|Yes|
+|Azure Dedicated Hosts|No|No|Yes|
+|Managed Identity|No|User Assigned Identity Only|System Assigned or User Assigned|
+|Azure Instance Metadata Service|No|Yes|Yes|
+|Add/remove existing VM to the group|No|No|No|
+|Service Fabric|No|No|Yes|
+|Azure Kubernetes Service (AKS) / AKE|No|No|Yes|
+|UserData|No|Yes|Yes|
++
+## Autoscaling and instance orchestration
+
+| Feature | Cloud Services (extended support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) |
+| --- | --- | --- | --- |
+|List VMs in Set|No|Yes|Yes|
+|Automatic Scaling (manual, metrics based, schedule based)|Yes|Yes|Yes|
+|Auto-Remove NICs and Disks when deleting VM instances|Yes|Yes|Yes|
+|Upgrade Policy (VM scale sets)|AutoUD and ManualUD policies. No support for Rolling. See the Cloud Services - Create Or Update REST API reference.|No, upgrade policy must be null or [] during create|Automatic, Rolling, Manual|
+|Automatic OS Updates|Yes|No|Yes|
+|Customer Defined OS Images|No|Yes|Yes|
+|In Guest Security Patching|No|Yes|No|
+|Terminate Notifications (VM scale sets)|No|Yes, read Terminate Notifications documentation|Yes|
+|Monitor Application Health|No|Application health extension|Application health extension or Azure Load balancer probe|
+|Instance Repair (VM scale sets)|No|Yes, read Instance Repair documentation|Yes|
+|Instance Protection|No|No, use Azure resource lock|Yes|
+|Scale In Policy|No|No|Yes|
+|Get Instance View|Yes|No|Yes|
+|VM Batch Operations (Start all, Stop all, delete subset, etc.)|Yes|Partial, batch delete is supported. Other operations can be triggered on each instance using the VM API|Yes|
+
+## High availability
+
+| Feature | Cloud Services (extended support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) |
+| --- | --- | --- | --- |
+|Availability SLA|[SLA](https://azure.microsoft.com/support/legal/sla/cloud-services/v1_5/)|[SLA](https://azure.microsoft.com/support/legal/sla/virtual-machine-scale-sets/v1_1/)|[SLA](https://azure.microsoft.com/support/legal/sla/virtual-machine-scale-sets/v1_1/)|
+|Availability Zones|No|Instances can be spread across 1, 2, or 3 availability zones|Instances can be spread across 1, 2, or 3 availability zones|
+|Assign VM to a Specific Availability Zone|No|Yes|No|
+|Fault Domain - Max Spreading (Azure will maximally spread instances)|Yes|Yes|Yes|
+|Fault Domain - Fixed Spreading|5 update domains|2-3 FDs (depending on regional maximum FD count); 1 for zonal deployments|2, 3, 5 FDs; 1, 5 for zonal deployments|
+|Assign VM to a Specific Fault Domain|No|Yes|No|
+|Update Domains|Yes|Deprecated (platform maintenance performed FD by FD)|5 update domains|
+|Perform Maintenance|No|Trigger maintenance on each instance using VM API|Yes|
+|VM Deallocation|No|Yes|Yes|
+
+## Networking
+
+| Feature | Cloud Services (extended support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) |
+| --- | --- | --- | --- |
+|Default outbound connectivity|Yes|No, must have explicit outbound connectivity|Yes|
+|Azure Load Balancer Standard SKU|No|Yes|Yes|
+|Application Gateway|No|Yes|Yes|
+|Infiniband Networking|No|No|Yes, single placement group only|
+|Azure Load Balancer Basic SKU|Yes|No|Yes|
+|Network Port Forwarding|Yes (NAT Pool for role instance input endpoints)|Yes (NAT Rules for individual instances)|Yes (NAT Pool)|
+|Edge Sites|No|Yes|Yes|
+|IPv6 Support|No|Yes|Yes|
+|Internal Load Balancer|No |Yes|Yes|
+
+## Backup and recovery
+
+| Feature | Cloud Services (extended support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) |
+| --- | --- | --- | --- |
+|Azure Backup|No |Yes|No|
+|Azure Site Recovery|No|Yes (via PowerShell)|No|
+|Azure Alerts|Yes|Yes|Yes|
+|VM Insights|No|Can be installed into individual VMs|Yes|
++
+## Next steps
+- View the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
+- View [frequently asked questions](faq.yml) for Cloud Services (extended support).
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Previously updated : 10/27/2022 Last updated : 11/10/2022
To create a custom neural voice in Speech Studio, follow these steps for one of
1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select doesn't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you don't see your training set in the list.
1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
1. Select **Next**.
-1. Optionally, you can check the box next to **Add my own test script** and select test scripts to upload. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script with up to 100 utterances. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
+1. Optionally, you can check the box next to **Add my own test script** and select test scripts to upload. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script with up to 100 utterances for the default style. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
1. Select **Next**.
To create a custom neural voice in Speech Studio, follow these steps for one of
1. Select one or more preset speaking styles to train.
1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select doesn't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you don't see your training set in the list.
1. Select **Next**.
-1. Optionally, you can add up to 10 custom speaking styles. Select **Add a custom style** and enter a custom style name of your choice. Select style samples as training data.
+1. Optionally, you can add up to 10 custom speaking styles:
+ 1. Select **Add a custom style** and thoughtfully enter a custom style name of your choice. This name will be used by your application within the `style` element of [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md#adjust-speaking-styles). You can also use the custom style name as SSML via the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation).
+ 1. Select style samples as training data.
1. Select **Next**.
1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
1. Select **Next**.
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Styles, style degree, and roles are supported for a subset of neural voices. If
| Attribute | Description | Required or optional |
| - | - | -- |
-| `style` | Specifies the speaking style. Speaking styles are voice specific. | Required if adjusting the speaking style for a neural voice. If you're using `mstts:express-as`, the style must be provided. If an invalid value is provided, this element is ignored. |
+| `style` | Specifies the [prebuilt](language-support.md?tabs=stt-tts#voice-styles-and-roles) or [custom](how-to-custom-voice-create-voice.md?tabs=multistyle#train-your-custom-neural-voice-model) speaking style. Speaking styles are voice specific. | Required if adjusting the speaking style for a neural voice. If you're using `mstts:express-as`, the style must be provided. If an invalid value is provided, this element is ignored.|
| `styledegree` | Specifies the intensity of the speaking style. **Accepted values**: 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. | Optional. If you don't set the `style` attribute, the `styledegree` attribute is ignored. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.|
| `role`| Specifies the speaking role-play. The voice acts as a different age and gender, but the voice name isn't changed. | Optional. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices: `zh-CN-XiaomoNeural`, `zh-CN-XiaoxuanNeural`, `zh-CN-YunxiNeural`, and `zh-CN-YunyeNeural`. |
Styles, style degree, and roles are supported for a subset of neural voices. If
You use the `mstts:express-as` element to express emotions like cheerfulness, empathy, and calm. You can also optimize the voice for different scenarios like customer service, newscast, and voice assistant.
-For a list of supported styles per neural voice, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
+For a list of supported styles for prebuilt neural voices, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
+
+To use your [custom style](how-to-custom-voice-create-voice.md?tabs=multistyle#train-your-custom-neural-voice-model), specify the style name that you entered in Speech Studio.
**Syntax**
All elements from the [MathML 2.0](https://www.w3.org/TR/MathML2/) and [MathML 3
> [!NOTE]
> If an element is not recognized, it will be ignored, and the child elements within it will still be processed.
-The MathML entities are not supported by XML syntax, so you must use the their corresponding [unicode characters](https://www.w3.org/2003/entities/2007/htmlmathml.json) to represent the entities, for example, the entity `&copy;` should be represented by its unicode characters `&#x00A9;`, otherwise an error will occur.
+The MathML entities are not supported by XML syntax, so you must use the corresponding [unicode characters](https://www.w3.org/2003/entities/2007/htmlmathml.json) to represent the entities, for example, the entity `&copy;` should be represented by its unicode characters `&#x00A9;`, otherwise an error will occur.
## Viseme element
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
Containerization is an approach to software distribution in which an application
## Features and benefits

-- **Immutable infrastructure**: Enable DevOps teams' to leverage a consistent and reliable set of known system parameters, while being able to adapt to change. Containers provide the flexibility to pivot within a predictable ecosystem and avoid configuration drift.
+- **Immutable infrastructure**: Enable DevOps teams to leverage a consistent and reliable set of known system parameters, while being able to adapt to change. Containers provide the flexibility to pivot within a predictable ecosystem and avoid configuration drift.
- **Control over data**: Choose where your data gets processed by Cognitive Services. This can be essential if you can't send data to the cloud but need access to Cognitive Services APIs. Support consistency in hybrid environments - across data, management, identity, and security.
- **Control over model updates**: Flexibility in versioning and updating of models deployed in their solutions.
- **Portable architecture**: Enables the creation of a portable application architecture that can be deployed on Azure, on-premises and the edge. Containers can be deployed directly to [Azure Kubernetes Service](../aks/index.yml), [Azure Container Instances](../container-instances/index.yml), or to a [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
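
As an illustrative, hedged sketch of this portability (the image path, port, and resource sizes below are assumptions; each container's documentation lists its own image and requirements), running a Cognitive Services container locally follows this pattern:

```bash
# Run a Cognitive Services container locally, billed against an Azure resource.
# Image path, endpoint, and key are placeholders to replace with your own values.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \
    mcr.microsoft.com/azure-cognitive-services/textanalytics/language:latest \
    Eula=accept \
    Billing=<your-resource-endpoint> \
    ApiKey=<your-resource-key>
```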
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md
To use Azure RBAC, you must enable Azure Active Directory authentication. You ca
## Add role assignment to Language resource

Azure RBAC can be assigned to a Language resource. To grant access to an Azure resource, you add a role assignment.
-1. In the [Azure portal](https://ms.portal.azure.com/), select **All services**.
+1. In the [Azure portal](https://portal.azure.com/), select **All services**.
1. Select **Cognitive Services**, and navigate to your specific Language resource.

   > [!NOTE]
   > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group.
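
The same role assignment can also be scripted. The following is a minimal Azure CLI sketch with placeholder IDs; any built-in Language role can be substituted for the role name shown:

```azurecli-interactive
# Assign a built-in Language role at the scope of a single Language resource.
# The object ID, subscription, and resource names are placeholders.
az role assignment create \
    --assignee <user-or-service-principal-object-id> \
    --role "Cognitive Services Language Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resourcegroup>/providers/Microsoft.CognitiveServices/accounts/<language-resource-name>"
```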
cognitive-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/tag-data.md
Previously updated : 05/05/2022 Last updated : 11/10/2022
Before creating a custom text classification model, you need to have labeled dat
Before you can label data, you need: * [A successfully created project](create-project.md) with a configured Azure blob storage account,
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* Documents containing text data that have [been uploaded](design-schema.md#data-preparation) to your storage account.
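
As a hedged sketch of the upload step (the account, container, and folder names are placeholders), documents can be uploaded to the connected storage account with the Azure CLI:

```azurecli-interactive
# Upload a local folder of .txt documents to the blob container connected to the project.
az storage blob upload-batch \
    --account-name <storageaccount> \
    --destination <containername> \
    --source <local-documents-folder> \
    --pattern "*.txt"
```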
See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
cognitive-services Migrate Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-knowledge-base.md
Last updated 11/02/2021
-# Move projects and question answer sources
+# Move projects and question answer pairs
> [!NOTE]
-> This article deals with the process to move projects and knowledge bases from one Language resource to another.
+> This article deals with the process to export and move projects and sources from one Language resource to another.
-You may want to create a copy of your project for several reasons:
+You may want to create copies of your projects or sources for several reasons:
* To implement a backup and restore process
* To integrate with your CI/CD pipeline
You may want to create a copy of your project for several reasons:
## Prerequisites

* If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-* A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled in the Azure portal. Remember your Azure Active Directory ID, Subscription, and language resource name you selected when you created the resource.
+* A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled in the Azure portal. Remember your Azure Active Directory ID, Subscription, and the Language resource name you selected when you created the resource.
## Export a project
-Exporting a project allows you to move or back up all the sources question answer sources that are contained within a single project.
+Exporting a project allows you to back up all the question answer sources that are contained within a single project.
1. Sign in to the [Language Studio](https://language.azure.com/).
-1. Select the language resource you want to move a project from.
-1. On the **Projects** page, you have the options to export in two formats, Excel or TSV. This will determine the contents of the file. The file itself will be exported as a .zip containing all of your knowledge bases.
+1. Select the Language resource you want to move a project from.
+1. Go to Custom Question Answering service. On the **Projects** page, you have the options to export in two formats, Excel or TSV. This will determine the contents of the file. The file itself will be exported as a .zip containing the contents of your project.
+1. You can export only one project at a time.
## Import a project
-1. Select the language resource, which will be the destination for your previously exported project.
-1. On the **Projects** page, select **Import** and choose the format used when you selected export. Then browse to the local .zip file containing your exported project. Enter a name for your newly imported project and select **Done**.
+1. Select the Language resource, which will be the destination for your previously exported project.
+1. Go to Custom Question Answering service. On the **Projects** page, select **Import** and choose the format used when you selected export. Then browse to the local .zip file containing your exported project. Enter a name for your newly imported project and select **Done**.
-## Export question and answers
+## Export sources
-1. Select the language resource you want to move an individual question answer source from.
-1. Select the project that contains the question and answer source you wish to export.
+1. Select the Language resource you want to move an individual source from.
+1. Go to Custom Question Answering. Select the project that contains the source you wish to export.
1. On the Edit knowledge base page, select the ellipsis (`...`) icon to the right of **Enable rich text** in the toolbar. You have the option to export in either Excel or TSV.

## Import question and answers
-1. Select the language resource, which will be the destination for your previously exported question and answer source.
-1. Select the project where you want to import a question and answer source.
+1. Select the Language resource, which will be the destination for your previously exported source.
+1. Go to Custom Question Answering. Select the project where you want to import a question and answer source.
1. On the Edit knowledge base page, select the ellipsis (`...`) icon to the right of **Enable rich text** in the toolbar. You have the option to import either an Excel or TSV file. 1. Browse to the local location of the file with the **Choose File** option and select **Done**. <!-- TODO: Replace Link--> ### Test
-**Test** the question answer source by selecting the **Test** option from the toolbar in the **Edit knowledge base** page which will launch the test panel. Learn how to [test your knowledge base](../../../qnamaker/How-To/test-knowledge-base.md).
+**Test** the source by selecting the **Test** option from the toolbar in the **Edit knowledge base** page, which will launch the test panel. Learn how to [test your knowledge base](../../../qnamaker/How-To/test-knowledge-base.md).
### Deploy

<!-- TODO: Replace Link-->
-**Deploy** the knowledge base and create a chat bot. Learn how to [deploy your knowledge base](../../../qnamaker/Quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base).
+**Deploy** the project and create a chat bot. Learn how to [deploy your knowledge base](../../../qnamaker/Quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base).
## Chat logs
-There is no way to move chat logs with projects or knowledge bases. If diagnostic logs are enabled, chat logs are stored in the associated Azure Monitor resource.
+There is no way to move chat logs with the projects. If diagnostic logs are enabled, chat logs are stored in the associated Azure Monitor resource.
## Next steps
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value |
|--|--|
| OpenAI resources per region | 2 |
-| Requests per second per deployment | 10 |
+| Requests per second per deployment | 15 |
| Max fine-tuned model deployments | 2 |
| Ability to deploy same model to multiple deployments | Not allowed |
| Total number of training jobs per resource | 100 |
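
To check how many deployments a resource currently has against these limits, a hedged sketch with the Azure CLI (resource and group names are placeholders):

```azurecli-interactive
# List model deployments under an Azure OpenAI resource.
az cognitiveservices account deployment list \
    --name <openai-resource-name> \
    --resource-group <resourcegroup>
```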
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
Last updated 10/25/2022
-# Context and Actions
+# Context and actions
Personalizer works by learning what your application should show to users in a given context. These are the two most important pieces of information that you pass into Personalizer. The **context** represents the information you have about the current user or the state of your system, and the **actions** are the options to be chosen from.
-## Table of Contents
-
-* [Context](#context) Information about the current user or state of the system
-* [Actions](#actions) A list of options to choose from
-* [Features](#features) Attributes describing the Context and Actions
-* [Feature Engineering](#feature-engineering) Tips for constructing impactful features
-* [Namespaces](#namespaces) Grouping Features
-* [Examples](#json-examples) Examples of Context and Action features in JSON format
## Context

Information for the _context_ depends on each application and use case, but it typically may include information such as:
Information for the _context_ depends on each application and use case, but it t
* Information about the current time, such as day of the week, weekend or not, morning or afternoon, holiday season or not, etc.
* Information extracted from mobile applications, such as location, movement, or battery level.
* Historical aggregates of the behavior of users - such as what are the movie genres this user has viewed the most.
-* Information about the state of the system.
+* Information about the state of the system.
Your application is responsible for loading the information about the context from the relevant databases, sensors, and systems you may have. If your context information doesn't change, you can add logic in your application to cache this information, before sending it to the Rank API.

## Actions

Actions represent a list of options. Don't send in more than 50 actions when Ranking actions. These may be the same 50 actions every time, or they may change. For example, if you have a product catalog of 10,000 items for an e-commerce application, you may use a recommendation or filtering engine to determine the top 40 a customer may like, and use Personalizer to find the one that will generate the most reward (for example, the user will add to the basket) for the current context.

### Examples of actions

The actions you send to the Rank API will depend on what you are trying to personalize.
Here are some examples:
|Choose a chat bot's response to clarify user intent or suggest an action.|Each action is an option of how to interpret the response.|
|Choose what to show at the top of a list of search results|Each action is one of the top few search results.|

### Load actions from the client application

Features from actions may typically come from content management systems, catalogs, and recommender systems. Your application is responsible for loading the information about the actions from the relevant databases and systems you have. If your actions don't change or getting them loaded every time has an unnecessary impact on performance, you can add logic in your application to cache this information.

### Prevent actions from being ranked

In some cases, there are actions that you don't want to display to users. The best way to prevent an action from being ranked is by adding it to the [Excluded Actions](https://learn.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.personalizer.models.rankrequest.excludedactions) list, or not passing it to the Rank Request.
-In some cases, you might not want events to be trained on by default, i.e., you only want to train events when a specific condition is met. For example, The personalized part of your webpage is below the fold (users have to scroll before interacting with the personalized content). In this case you will render the entire page, but only want an event to be trained on when the user scrolls and has a chance to interact with the personalized content. For these cases, you should [Defer Event Activation](concept-active-inactive-events.md) to avoid assigning default reward (and training) events which the end user did not have a chance to interact with.
-
+In some cases, you might not want events to be trained on by default. In other words, you only want to train events when a specific condition is met. For example, The personalized part of your webpage is below the fold (users have to scroll before interacting with the personalized content). In this case you will render the entire page, but only want an event to be trained on when the user scrolls and has a chance to interact with the personalized content. For these cases, you should [Defer Event Activation](concept-active-inactive-events.md) to avoid assigning default reward (and training) events which the end user did not have a chance to interact with.
## Features
Personalizer does not prescribe, limit, or fix what features you can send for ac
It's ok and natural for features to change over time. However, keep in mind that Personalizer's machine learning model adapts based on the features it sees. If you send a request containing all new features, Personalizer's model will not be able to leverage past events to select the best action for the current event. Having a 'stable' feature set (with recurring features) will help the performance of Personalizer's machine learning algorithms.
-### Context Features
+### Context features
* Some context features may only be available part of the time. For example, if a user is logged into the online grocery store website, the context will contain features describing purchase history. These features will not be available for a guest user.
* There must be at least one context feature. Personalizer does not support an empty context.
* If the context features are identical for every request, Personalizer will choose the globally best action.
-### Action Features
+### Action features
* Not all actions need to contain the same features. For example, in the online grocery store scenario, microwavable popcorn will have a "cooking time" feature, while a cucumber will not.
-* Features for a certain action ID may be available one day, but later on become unavailable.
+* Features for a certain action ID may be available one day, but later on become unavailable.
Examples:
The following are good examples for action features. These will depend a lot on
Personalizer supports features of string, numeric, and boolean types. It's very likely that your application will mostly use string features, with a few exceptions.
-### How feature types affects the Machine Learning in Personalizer
+### How feature types affect machine learning in Personalizer
-* **Strings**: For string types, every key-value (feature name, feature value) combination is treated as a One-Hot feature (e.g. category:"Produce" and category:"Meat" would internally be represented as different features in the machine learning model.
+* **Strings**: For string types, every key-value (feature name, feature value) combination is treated as a One-Hot feature (for example, category:"Produce" and category:"Meat" would internally be represented as different features in the machine learning model).
* **Numeric**: Only use numeric values when the number is a magnitude that should proportionally affect the personalization result. This is very scenario dependent. Features that are based on numeric units but where the meaning isn't linear - such as Age, Temperature, or Person Height - are best encoded as categorical strings. For example Age could be encoded as "Age":"0-5", "Age":"6-10", etc. Height could be bucketed as "Height": "<5'0", "Height": "5'0-5'4", "Height": "5'5-5'11", "Height":"6'0-6-4", "Height":">6'4".
* **Boolean**
-* **Arrays** ONLY numeric arrays are supported.
-
+* **Arrays** Only numeric arrays are supported.
-## Feature Engineering
+## Feature engineering
-* Use categorical and string types for features that are not a magnitude.
+* Use categorical and string types for features that are not a magnitude.
* Make sure there are enough features to drive personalization. The more precisely targeted the content needs to be, the more features are needed.
* There are features of diverse *densities*. A feature is *dense* if many items are grouped in a few buckets. For example, thousands of videos can be classified as "Long" (over 5 min long) and "Short" (under 5 min long). This is a *very dense* feature. On the other hand, the same thousands of items can have an attribute called "Title", which will almost never have the same value from one item to another. This is a very non-dense or *sparse* feature.
Having features of high density helps Personalizer extrapolate learning from one
* **Sending user IDs** With large numbers of users, it's unlikely that this information is relevant to Personalizer learning to maximize the average reward score. Sending user IDs (even if non-PII) will likely add more noise to the model and is not recommended.
* **Sending unique values that will rarely occur more than a few times**. It's recommended to bucket your features to a higher level-of-detail. For example, having features such as `"Context.TimeStamp.Day":"Monday"` or `"Context.TimeStamp.Hour":13` can be useful as there are only 7 and 24 unique values, respectively. However, `"Context.TimeStamp":"1985-04-12T23:20:50.52Z"` is very precise and has an extremely large number of unique values, which makes it very difficult for Personalizer to learn from it.
-### Improve feature sets
+### Improve feature sets
Analyze the user behavior by running a [Feature Evaluation Job](how-to-feature-evaluation.md). This allows you to look at past data to see what features are heavily contributing to positive rewards versus those that are contributing less. You can see what features are helping, and it will be up to you and your application to find better features to send to Personalizer to improve results even further.

### Expand feature sets with artificial intelligence and cognitive services
-Artificial Intelligence and ready-to-run Cognitive Services can be a very powerful addition to Personalizer.
+Artificial Intelligence and ready-to-run Cognitive Services can be a very powerful addition to Personalizer.
By preprocessing your items using artificial intelligence services, you can automatically extract information that is likely to be relevant for personalization. For example:
-* You can run a movie file via [Video Indexer](https://azure.microsoft.com/services/media-services/video-indexer/) to extract scene elements, text, sentiment, and many other attributes. These attributes can then be made more dense to reflect characteristics that the original item metadata didn't have.
+* You can run a movie file via [Video Indexer](https://azure.microsoft.com/services/media-services/video-indexer/) to extract scene elements, text, sentiment, and many other attributes. These attributes can then be made more dense to reflect characteristics that the original item metadata didn't have.
* Images can be run through object detection, faces through sentiment, etc.
-* Information in text can be augmented by extracting entities, sentiment, expanding entities with Bing knowledge graph, etc.
+* Information in text can be augmented by extracting entities, sentiment, and expanding entities with Bing knowledge graph.
You can use several other [Azure Cognitive Services](https://www.microsoft.com/cognitive-services), like
You can use several other [Azure Cognitive Services](https://www.microsoft.com/c
* [Emotion](../face/overview.md)
* [Computer Vision](../computer-vision/overview.md)
-### Use Embeddings as Features
+### Use embeddings as features
Embeddings from various machine learning models have proven to be effective features for Personalizer:

* Embeddings from large language models
* Embeddings from computer vision models

## Namespaces
-Optionally, features can be organized using namespaces (relevant for both context and action features). Namespaces can be used to group features by topic, by source, or any other grouping that makes sense in your application. You determine if namespaces are used and what they should be. Namespaces organize features into distinct sets, and disambiguate features with similar names. You can think of namespaces as a 'prefix' that is added to feature names. Namespaces should not be nested.
+Optionally, features can be organized using namespaces (relevant for both context and action features). Namespaces can be used to group features by topic, by source, or any other grouping that makes sense in your application. You determine if namespaces are used and what they should be. Namespaces organize features into distinct sets, and disambiguate features with similar names. You can think of namespaces as a 'prefix' that is added to feature names. Namespaces should not be nested.
The following are examples of feature namespaces used by applications:
The following are examples of feature namespaces used by applications:
* The following characters cannot be used: codes < 32 (not printable), 32 (space), 58 (colon), 124 (pipe), and 126-140.
* All namespaces starting with an underscore `_` will be ignored.
-## JSON Examples
+## JSON examples
### Actions

When calling Rank, you will send multiple actions to choose from:
-JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
+JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
```json {
JSON objects can include nested JSON objects and simple property/values. An arra
Context is expressed as a JSON object that is sent to the Rank API:
-JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
+JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
```JSON {
JSON objects can include nested JSON objects and simple property/values. An arra
### Namespaces
-In the following JSON, `user`, `environment`, `device`, and `activity` are namespaces.
+In the following JSON, `user`, `environment`, `device`, and `activity` are namespaces.
> [!Note]
> We strongly recommend using names for feature namespaces that are UTF-8 based and start with different letters. For example, `user`, `environment`, `device`, and `activity` start with `u`, `e`, `d`, and `a`. Currently having namespaces with same first characters could result in collisions.

```JSON
{
    "contextFeatures": [
communication-services Recording Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/recording-logs.md
+
+ Title: Azure Communication Services - Recording Analytics Public Preview
+
+description: About using Log Analytics for recording logs
++++ Last updated : 10/27/2021+++++
+# Call Recording Summary Log
+Call recording summary logs provide details about the call duration, media content (for example, Audio-Video, Unmixed, Transcription), the format types used for the recording (for example, WAV, MP4), and the reason why the recording ended.
+
+A recording file is generated at the end of a call or meeting. The recording can be initiated and stopped by a user or an app (bot), or ended because of a system failure.
+
+> [!IMPORTANT]
+
+> Please note the call recording logs will be published once the call recording is ready to be downloaded. The log will be published within the standard latency time for Azure Monitor resource logs; see [Log data ingestion time in Azure Monitor](../../../azure-monitor/logs/data-ingestion-time.md#azure-metrics-resource-logs-activity-log).
++
+## Properties Description
+
+| Field Name | DataType | Description |
+|- |--|--|
+|timeGenerated|DateTime|The timestamp (UTC) of when the log was generated|
+|operationName| String | The operation associated with log record|
+|correlationId |String |`CallID` is used to correlate events between multiple tables|
+|recordingID| String | The ID given to the recording this log refers to|
+|category| String | The log category of the event. Logs with the same log category and resource type will have the same properties fields|
+|resultType| String| The status of the operation |
+|level |String |The severity level of the operation |
+|chunkCount |Integer|The total number of chunks created for the recording|
+|channelType| String |The recording's channel type, i.e., mixed, unmixed|
+|recordingStartTime| DateTime|The time that the recording started |
+|contentType| String | The recording's content, i.e., Audio Only, Audio - Video, Transcription, etc.|
+|formatType| String | The recording's file format |
+|recordingLength| Double | Duration of the recording in seconds |
+|audioChannelsCount| Integer | Total number of audio channels in the recording|
+|recordingEndReason| String | The reason why the recording ended |
++
+## Call recording and sample data
+```json
+"operationName": "Call Recording Summary",
+"operationVersion": "1.0",
+"category": "RecordingSummaryPUBLICPREVIEW",
+
+```
+A call can have one recording or many recordings depending on how many times a recording event is triggered.
+
+For example, if an agent initiates an outbound call in a recorded line and the call drops due to poor network signal, the `callid` will have one `recordingid`. If the agent calls back the customer, the system will generate a new `callid` as well as a new `recordingid`.
++
+#### Example 1: Call recording for "One call to one recording"
+```json
+"properties"
+{
+ "TimeGenerated":"2022-08-17T23:18:26.4332392Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "zzzzzz-cada-4164-be10-0000000000",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuZHBvaW5xxxxxxxxFmNjkwxxxxxxxxxxxxSZXNvdXJjZVNwZWNpZmljSWQiOiJiZGU5YzE3Ni05M2Q3LTRkMWYtYmYwNS0yMTMwZTRiNWNlOTgifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-16T09:07:54.0000000Z",
+ "RecordingLength": "73872.94",
+ "ChunkCount": 6,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+```
+
+If the agent initiated a recording and then stopped and restarted the recording multiple times while the call was still ongoing, the `callid` will have many `recordingid` values, depending on how many times recording events were triggered.
+
+#### Example 2: Call recording for "One call to many recordings"
+```json
+
+{
+ "TimeGenerated": "2022-08-17T23:55:46.6304762Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "xxxxxxx-cf78-4156-zzzz-0000000fa29cc",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuZHBxxxxxxxxxxxxjkwMC05MmEwLTRlZDYtOTcxYS1kYzZlZTkzNjU0NzciLCJSxxxxxNwZWNpZmljSWQiOiI5ZmY2ZTY2Ny04YmQyLTQ0NzAtYmRkYy00ZTVhMmUwYmNmOTYifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-17T23:55:43.3304762Z",
+ "RecordingLength": 3.34,
+ "ChunkCount": 1,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+{
+ "TimeGenerated": "2022-08-17T23:55:56.7664976Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "xxxxxxx-cf78-4156-zzzz-0000000fa29cc",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuxxxxxxiOiI4NDFmNjkwMC1mMjBiLTQzNmQtYTg0Mi1hODY2YzE4M2Y0YTEiLCJSZXNvdXJjZVNwZWNpZmljSWQiOiI2YzRlZDI4NC0wOGQ1LTQxNjEtOTExMy1jYWIxNTc3YjM1ODYifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-17T23:55:54.0664976Z",
+ "RecordingLength": 2.7,
+ "ChunkCount": 1,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+```
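+
+As a hedged sketch, you can query these logs from the Log Analytics workspace that receives them by using the Azure CLI. The workspace GUID is a placeholder, and the `ACSCallRecordingSummary` table name is an assumption to verify against your workspace schema:
+
+```azurecli-interactive
+# Query recording summary logs from the linked Log Analytics workspace.
+# The workspace GUID is a placeholder; verify the table name in your workspace.
+az monitor log-analytics query \
+    --workspace <workspace-guid> \
+    --analytics-query "ACSCallRecordingSummary | where TimeGenerated > ago(7d) | project TimeGenerated, RecordingId, RecordingLength, RecordingEndReason"
+```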
+For more information about call recording, see the
+[Azure Communication Services Call Recording overview](../../../communication-services/concepts/voice-video-calling/call-recording.md).
+
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
+
+ Title: Call Automation overview
+
+description: Learn about Azure Communication Services Call Automation.
++++ Last updated : 09/06/2022+++
+# Call Automation Overview
++
+Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows, and call recording for voice and PSTN channels. The SDKs, available for .NET and Java, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, start recording, etc.) to steer and control calls based on your business logic.
+
+> [!NOTE]
+> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user, or adding them to a call using Call Automation, aren't supported.
+
+## Common use cases
+
+Some of the common use cases that can be built using Call Automation include:
+
+- Program VoIP or PSTN calls for transactional workflows such as click-to-call and appointment reminders to improve customer service.
+- Build interactive workflows to self-serve customers for use cases like order bookings and updates, using Play (Audio URL) and Recognize (DTMF) actions.
+- Integrate your communication applications with Contact Centers and your private telephony networks using Direct Routing.
+- Protect your customer's identity by building number masking services to connect buyers to sellers or users to partner vendors on your platform.
+- Increase engagement by building automated customer outreach programs for marketing and customer service.
+- Analyze your unmixed audio recordings in a post-call process for quality assurance purposes.
+
+ACS Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture below. You can answer inbound calls or make outbound calls, and execute actions like playing a welcome message and connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
+
+![Diagram of calling flow for a customer service scenario.](./media/call-automation-architecture.png)
+
+## Capabilities
+
+The following table presents the set of features that are currently available in the Azure Communication Services Call Automation SDKs.
+
+| Feature Area | Capability | .NET | Java |
+| --- | --- | --- | --- |
+| Pre-call scenarios | Answer a one-to-one call | ✔️ | ✔️ |
+| | Answer a group call | ✔️ | ✔️ |
+| | Place new outbound call to one or more endpoints | ✔️ | ✔️ |
+| | Redirect (forward) a call to one or more endpoints | ✔️ | ✔️ |
+| | Reject an incoming call | ✔️ | ✔️ |
+| Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ |
+| | Play Audio from an audio file | ✔️ | ✔️ |
+| | Recognize user input through DTMF | ✔️ | ✔️ |
+| | Remove one or more endpoints from an existing call| ✔️ | ✔️ |
+| | Blind Transfer* a call to another endpoint | ✔️ | ✔️ |
+| | Hang up a call (remove the call leg) | ✔️ | ✔️ |
+| | Terminate a call (remove all participants and end call)| ✔️ | ✔️ |
+| Query scenarios | Get the call state | ✔️ | ✔️ |
+| | Get a participant in a call | ✔️ | ✔️ |
+| | List all participants in a call | ✔️ | ✔️ |
+| Call Recording | Start/pause/resume/stop recording | ✔️ | ✔️ |
+
+*Transfer of a VoIP call to a phone number is currently not supported.
+
+## Architecture
+
+Call Automation uses a REST API interface to receive requests and provide responses to all actions performed within the service. Due to the asynchronous nature of calling, most actions will have corresponding events that are triggered when the action completes successfully or fails.
+
+Azure Communication Services uses Event Grid to deliver the [IncomingCall event](./incoming-call-notification.md) and HTTPS Webhooks for all mid-call action callbacks.
+
+![Screenshot of flow for incoming call and actions.](./media/action-architecture.png)
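+
+As a sketch of the callback side, a minimal HTTP endpoint can accept these webhook events and branch on the event type. The route, the array envelope, and the `type` field below are illustrative assumptions rather than the documented schema.
+
+```csharp
+using System.Text.Json;
+
+var app = WebApplication.CreateBuilder(args).Build();
+
+// Hypothetical callback route; you register this URI when answering or placing calls.
+app.MapPost("/api/callbacks", async (HttpRequest request) =>
+{
+    using var events = await JsonDocument.ParseAsync(request.Body);
+    foreach (var callbackEvent in events.RootElement.EnumerateArray())
+    {
+        // Assumes CloudEvents-style payloads carrying a "type" field,
+        // for example "Microsoft.Communication.CallConnected".
+        var eventType = callbackEvent.GetProperty("type").GetString();
+        Console.WriteLine($"Received mid-call event: {eventType}");
+    }
+    return Results.Ok();
+});
+
+app.Run();
+```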
+
+## Call actions
+
+### Pre-call actions
+
+These actions are performed before the destination endpoint listed in the IncomingCall event notification is connected. Webhook callback events are only communicated for the "answer" pre-call action, not for the reject or redirect actions.
+
+**Answer** - Using the IncomingCall event from Event Grid and the Call Automation SDK, a call can be answered by your application. This action allows for IVR scenarios where an inbound PSTN call can be answered programmatically by your application. Other scenarios include answering a call on behalf of a user.
+
+**Reject** - To reject a call means your application can receive the IncomingCall event and prevent the call from being connected to the destination endpoint.
+
+**Redirect** - Using the IncomingCall event from Event Grid, a call can be redirected to one or more endpoints, creating a single or simultaneous ringing (sim-ring) scenario. The call isn't answered by your application; it's simply 'redirected' to another destination endpoint to be answered.
+
+**Make Call** - Make Call action can be used to place outbound calls to phone numbers and to other communication users. Use cases include your application placing outbound calls to proactively inform users about an outage or notify about an order update.
+
+### Mid-call actions
+
+These actions can be performed on the calls that are answered or placed using Call Automation SDKs. Each mid-call action has a corresponding success or failure web hook callback event.
+
+**Add/Remove participant(s)** - One or more participants can be added in a single request, with each participant being a variation of supported destination endpoints. A web hook callback is sent for every participant successfully added to the call.
+
+**Play** - When your application answers a call or places an outbound call, you can play an audio prompt for the caller. This audio can be looped if needed in scenarios like playing hold music. To learn more, view our [concepts](./play-action.md) and how-to guide for [Customizing voice prompts to users with Play action](../../how-tos/call-automation/play-action.md).
+
+**Recognize input** - After your application has played an audio prompt, you can request user input to drive business logic and navigation in your application. To learn more, view our [concepts](./recognize-action.md) and how-to guide for [Gathering user input](../../how-tos/call-automation/recognize-action.md).
+
+**Transfer** - When your application answers a call or places an outbound call to an endpoint, that call can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application's ability to control the call using the Call Automation SDKs.
+
+**Record** - You decide when to start/pause/resume/stop recording based on your application business logic, or you can grant control to the end user to trigger those actions. To learn more, view our [concepts](./../voice-video-calling/call-recording.md) and [quickstart](../../quickstarts/voice-video-calling/get-started-call-recording.md).
+
+**Hang-up** - When your application has answered a one-to-one call, the hang-up action will remove the call leg and terminate the call with the other endpoint. If there are more than two participants in the call (group call), performing a 'hang-up' action will remove your application's endpoint from the group call.
+
+**Terminate** - Whether your application has answered a one-to-one or group call, or placed an outbound call with one or more participants, this action will remove all participants and end the call. This operation is triggered by setting the `forEveryone` property to true in the Hang-Up call action, as shown in the sketch below.
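+
+For illustration, the SDK call shown later in this digest's how-to section, where `callConnection` is the object for the established call:
+
+```csharp
+// Passing true hangs up for everyone (terminate); false would only
+// remove your application's own call leg.
+_ = await callConnection.HangUpAsync(true);
+```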
++
+## Events
+
+The following tables outline the current events emitted by Azure Communication Services: the first shows events emitted by Event Grid, and the second shows events sent by Call Automation as webhook events.
+
+### Event Grid events
+
+Most of the events sent by Event Grid are platform agnostic, meaning they're emitted regardless of the SDK (Calling or Call Automation). While you can create a subscription for any event, we recommend you use the IncomingCall event for all Call Automation use cases where you want to control the call programmatically. Use the other events for reporting and telemetry purposes.
+
+| Event | Description |
+| --- | --- |
+| IncomingCall | Notification of a call to a communication user or phone number |
+| CallStarted | A call is established (inbound or outbound) |
+| CallEnded | A call is terminated and all participants are removed |
+| ParticipantAdded | A participant has been added to a call |
+| ParticipantRemoved| A participant has been removed from a call |
+| RecordingFileStatusUpdated| A recording file is available |
+
+Read more about these events and their payload schema [here](../../../event-grid/communication-services-voice-video-events.md).
+
+### Call Automation webhook events
+
+The Call Automation events are sent to the webhook callback URI specified when you answer or place a new outbound call.
+
+| Event | Description |
+| --- | --- |
+| CallConnected | Your application's call leg is connected (inbound or outbound) |
+| CallDisconnected | Your application's call leg is disconnected |
+| CallTransferAccepted | Your application's call leg has been transferred to another endpoint |
+| CallTransferFailed | The transfer of your application's call leg failed |
+| AddParticipantSucceeded | Your application added a participant |
+| AddParticipantFailed | Your application was unable to add a participant |
+| ParticipantUpdated | The status of a participant changed while your application's call leg was connected to a call |
+| PlayCompleted| Your application successfully played the audio file provided |
+| PlayFailed| Your application failed to play audio |
+| RecognizeCompleted | Recognition of user input was successfully completed |
+| RecognizeFailed | Recognition of user input was unsuccessful <br/>*to learn more about recognize action events view our how-to guide for [gathering user input](../../how-tos/call-automation/recognize-action.md)*|
++
+To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation/actions-for-call-control.md) that provides code samples as well as sequence diagrams for various call control flows.
+
+## Known issues
+
+1. Using the incorrect IdentifierType for endpoints in `Transfer` requests (like using CommunicationUserIdentifier to specify a phone number) returns a 500 error instead of a 400 error code. Solution: use the correct type, CommunicationUserIdentifier for Communication Services users and PhoneNumberIdentifier for phone numbers.
+2. Taking a pre-call action like Answer/Reject on the original call after it has been redirected returns a 200 success code instead of failing with 'call not found'.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with Call Automation](./../../quickstarts/call-automation/Callflows-for-customer-interactions.md)
+
+Here are some articles of interest to you:
+- Understand how your resource will be [charged for various calling use cases](../pricing.md) with examples.
+- Learn how to [manage an inbound phone call](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md).
communication-services Incoming Call Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/incoming-call-notification.md
+
+ Title: Incoming call concepts
+
+description: Learn about Azure Communication Services IncomingCall notification
+ Last updated: 09/26/2022
+# Incoming call concepts
++
+Azure Communication Services Call Automation provides developers the ability to build applications that can make and receive calls. Azure Communication Services relies on Event Grid subscriptions to deliver each `IncomingCall` event, so setting up your environment to receive these notifications is critical to your application being able to redirect or answer a call.
+
+## Calling scenarios
+
+First, we need to define which scenarios can trigger an `IncomingCall` event. The primary concept to remember is that a call to an Azure Communication Services identity or Public Switched Telephone Network (PSTN) number will trigger an `IncomingCall` event. The following are examples of these resources:
+
+1. An Azure Communication Services identity
+2. A PSTN phone number owned by your Azure Communication Services resource
+
+Given the above examples, the following scenarios will trigger an `IncomingCall` event sent to Event Grid:
+
+| Source | Destination | Scenario(s) |
+| --- | --- | --- |
+| Azure Communication Services identity | Azure Communication Services identity | Call, Redirect, Add Participant, Transfer |
+| Azure Communication Services identity | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant, Transfer |
+| Public PSTN | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant, Transfer |
+
+> [!NOTE]
+> An important concept to remember is that an Azure Communication Services identity can be a user or application. Although there is no ability to explicitly assign an identity to a user or application in the platform, this can be done by your own application or supporting infrastructure. Please review the [identity concepts guide](../identity-model.md) for more information on this topic.
+
+## Receiving an incoming call notification from Event Grid
+
+Since Azure Communication Services relies on Event Grid to deliver the `IncomingCall` notification through a subscription, how you choose to handle the notification is up to you. Additionally, since the Call Automation API relies specifically on Webhook callbacks for events, a common Event Grid subscription used would be a 'Webhook'. However, you could choose any one of the available subscription types offered by the service.
+
+This architecture has the following benefits:
+
+- Using Event Grid subscription filters, you can route the `IncomingCall` notification to specific applications.
+- PSTN number assignment and routing logic can exist in your application versus being statically configured online.
+- As identified in the above [calling scenarios](#calling-scenarios) section, your application can be notified even when users make calls between each other. You can then combine this scenario together with the [Call Recording APIs](../voice-video-calling/call-recording.md) to meet compliance needs.
+
+To check out a sample payload for the event and to learn about other calling events published to Event Grid, check out this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall).
+
+## Call routing in Call Automation or Event Grid
+
+You can use [advanced filters](../../../event-grid/event-filtering.md) in your Event Grid subscription to subscribe to an `IncomingCall` notification for a specific source/destination phone number or Azure Communication Services identity, and send it to an endpoint such as a Webhook subscription. That endpoint application can then make a decision to **redirect** the call using the Call Automation SDK to another Azure Communication Services identity or to the PSTN.
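+
+For example, a subscription payload with an advanced filter along these lines would route only calls destined for a particular number. The `data.to.rawId` key path and the `4:` phone-number prefix are assumptions about the event payload; verify them against the published schema.
+
+```json
+{
+  "advancedFilters": [
+    {
+      "operatorType": "StringBeginsWith",
+      "key": "data.to.rawId",
+      "values": [ "4:+14255551212" ]
+    }
+  ]
+}
+```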
+
+## Number assignment
+
+Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number to any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you'll maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, you'll invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime.
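+
+A minimal sketch of that runtime mapping, reusing the `RedirectCallOptions` shape shown in the how-to guide later in this digest (`client` is a CallAutomationClient; the in-memory store and handler are hypothetical):
+
+```csharp
+// Hypothetical in-memory mapping; a production app would use a durable store.
+var numberToUser = new Dictionary<string, string>
+{
+    ["+14255551212"] = "375f0e2f-e8db-4449-9bf7-2054b02e42b4"
+};
+
+// Invoked by your Event Grid handler with values read from the IncomingCall event.
+async Task HandleIncomingCallAsync(string incomingCallContext, string toPhoneNumber)
+{
+    if (numberToUser.TryGetValue(toPhoneNumber, out var userId))
+    {
+        var target = new CommunicationUserIdentifier(userId);
+        var redirectOption = new RedirectCallOptions(incomingCallContext, target);
+        _ = await client.RedirectCallAsync(redirectOption);
+    }
+}
+```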
+
+## Next steps
+- [Build a Call Automation application](../../quickstarts/call-automation/callflows-for-customer-interactions.md) to simulate a customer interaction.
+- [Redirect an inbound PSTN call](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md) to your resource.
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
+
+ Title: Playing audio in call
+
+description: Conceptual information about playing audio in call using Call Automation.
+ Last updated: 09/06/2022
+# Playing audio in call
++
+The play action provided through the Call Automation SDK allows you to play audio prompts to participants in a call. This action can be accessed through the server-side implementation of your application. The play action allows you to give ACS access to your pre-recorded audio files, with support for authentication.
+
+> [!NOTE]
+> ACS currently supports only WAV files formatted as mono channel audio recorded at 16 kHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../cognitive-services/Speech-Service/how-to-audio-content-creation.md).
+
+## Common use cases
+
+The play action can be used in many ways. Below are some examples of how developers may wish to use the play action in their applications.
+
+### Announcements
+Your application might want to play some sort of announcement when a participant joins or leaves the call, to notify other users.
+
+### Self-serve customers
+
+In scenarios with IVRs and virtual assistants, you can use your application or bots to play audio prompts to callers. This prompt can be in the form of a menu to guide the caller through their interaction.
+
+### Hold music
+The play action can also be used to play hold music for callers. This action can be set up in a loop so that the music keeps playing until an agent is available to assist the caller.
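+
+A sketch of a looped play request, assuming the preview SDK's `FileSource`, `PlayOptions`, and `GetCallMedia` surface (names may differ between SDK versions, and the file URI is a placeholder):
+
+```csharp
+// Loop hold music until the play operation is canceled or the call state changes.
+var holdMusic = new FileSource(new Uri("https://<myendpoint>/hold-music.wav"));
+var playOptions = new PlayOptions { Loop = true };
+await callConnection.GetCallMedia().PlayToAllAsync(holdMusic, playOptions);
+```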
+
+### Playing compliance messages
+As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call will be recorded for quality purposes".
+
+## Sample architecture for playing audio in a call
+
+![Screenshot of flow for play action.](./media/play-action.png)
+
+## Known limitations
+- Play action isn't enabled to work with Teams Interoperability.
++
+## What's coming up next for Play action
+As we invest more into this functionality, we recommend developers sign up for our TAP program, which gives you early access to the newest feature releases. Over the coming months, the play action will add new capabilities that use our integration with Azure Cognitive Services to provide AI capabilities such as Text-to-Speech and fine-tuning Text-to-Speech with SSML. With these capabilities, you can improve customer interactions to create more personalized messages.
+
+## Next Steps
+Check out our how-to guide to learn [how-to play custom voice prompts](../../how-tos/call-automation/play-action.md) to users.
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md
+
+ Title: Gathering user input
+description: Conceptual information about using Recognize action to gather user input with Call Automation.
+ Last updated: 09/16/2022
+# Gathering user input
++
+With the Recognize action, developers can enhance their IVR or contact center applications to gather user input. One of the most common scenarios of recognition is to play a message and request user input. This input is received in the form of DTMF (input via the digits on the caller's device), which then allows the application to navigate the user to the next action.
+
+**DTMF**
+Dual-tone multifrequency (DTMF) recognition is the process of understanding tones/sounds generated by a telephone when a number is pressed. Equipment at the receiving end listens for the specific tones and converts them into commands. These commands generally signal user intent when navigating a menu in an IVR scenario, or in some cases can be used to capture important information that the user needs to provide via their phone's keypad.
+
+**DTMF events and their associated tones**
+
+|Event|Tone|
+| |--|
+|0|Zero|
+|1|One|
+|2|Two|
+|3|Three|
+|4|Four|
+|5|Five|
+|6|Six|
+|7|Seven|
+|8|Eight|
+|9|Nine|
+|A|A|
+|B|B|
+|C|C|
+|D|D|
+|*|Asterisk|
+|#|Pound|
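+
+As a sketch of how an application might request DTMF recognition, assuming the preview SDK's `CallMediaRecognizeDtmfOptions` and `StartRecognizingAsync` surface (`callConnection` is the object for an established call; names and parameters may differ between SDK versions):
+
+```csharp
+// Collect up to three digits from a participant, playing a menu prompt first.
+var caller = new PhoneNumberIdentifier("+16041234567"); // hypothetical participant
+var recognizeOptions = new CallMediaRecognizeDtmfOptions(caller, maxTonesToCollect: 3)
+{
+    InterruptPrompt = true, // let the caller key digits over the prompt
+    InterToneTimeout = TimeSpan.FromSeconds(10),
+    Prompt = new FileSource(new Uri("https://<myendpoint>/menu-prompt.wav"))
+};
+await callConnection.GetCallMedia().StartRecognizingAsync(recognizeOptions);
+```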
+
+## Common use cases
+
+The recognize action can be used for many reasons. Below are a few examples of how developers can use the recognize action in their applications.
+
+### Improve user journey with self-service prompts
+
+- **Users can control the call** - By enabling input recognition you allow the caller to navigate your IVR menu and provide information that can be used to resolve their query.
+- **Gather user information** - By enabling input recognition your application can gather input from the callers. This can be information such as account numbers, credit card information, etc.
+
+### Interrupt audio prompts
+
+**Users can exit an IVR menu and speak to a human agent** - With DTMF interruption, your application can allow users to interrupt the flow of the IVR menu and ask to speak to a human agent.
++
+## Sample architecture for gathering user input in a call
+
+![Diagram of flow for recognize action.](./media/recognize-flow.png)
+
+## What's coming up next for Recognize action
+
+As we invest more into this functionality, we recommend developers sign up for our TAP program, which gives you early access to the newest feature releases. Over the coming months, the recognize action will add new capabilities that use our integration with Azure Cognitive Services to provide AI capabilities such as Speech-to-Text. With these, you can improve customer interactions and recognize voice inputs from participants on the call.
+
+## Next steps
+
+- Check out our how-to guide to learn how you can [gather user input](../../how-tos/call-automation/recognize-action.md).
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
# Pricing Scenarios
-Prices for Azure Communication Services are generally based on a pay-as-you-go model. The prices in the following examples are for illustrative purposes and may not reflect the latest Azure pricing.
+Prices for Azure Communication Services are based on a pay-as-you-go model. The prices in the following examples are for illustrative purposes and may not reflect the latest Azure pricing.
## Voice/Video calling and screen sharing
Alice is a Dynamics 365 contact center agent, who makes an outbound call from Om
- One participant on the VoIP leg (Alice) from Omnichannel for Customer Service client application x 10 minutes x $0.004 per participant leg per minute = $0.04 - One participant on the Communication Services direct routing outbound leg (Bob) from Communication Services servers to an SBC x 10 minutes x $0.004 per participant leg per minute = $0.04.-- Omnichannel for Customer Servicebot does not introduce additional ACS charges.
+- Omnichannel for Customer Service bot does not introduce additional ACS charges.
**Total cost for the call**: $0.04 + $0.04 = $0.08
Alice and Bob are on a VOIP Call. Bob escalated the call to Charlie on Charlie's
Note: The USA mixed rate to `+1-425` is $0.013. Refer to the following link for details: https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv
-**Total cost for the VoIP + escalation call**: $0.16 + $0.13 = $.29
+**Total cost for the VoIP + escalation call**: $0.16 + $0.13 = $0.29
+
+### Pricing example: Group call managed by Call Automation SDK
+
+Asha calls your US toll-free number (acquired from Communication Services) from her mobile. Your service application answers the call using the Call Automation SDK and plays an IVR menu using Play and Recognize actions. Your application then adds a human agent, David, who answers the call through a custom application using the Calling SDK.
+
+- Asha was on the call as a PSTN endpoint for a total of 10 minutes.
+- Your application was on the call for the entire 10 minutes of the call.
+- David was on the call for the last 5 minutes of the call using Calling JS SDK.
+
+**Cost calculations**
+
+- Inbound PSTN leg by Asha to toll-free number acquired from Communication Services x 10 minutes x $0.0220 per minute for receiving the call = $0.22
+- One participant on the VOIP leg (David) x 5 minutes x $0.004 per participant leg per minute = $0.02
+
+Note that the service application that uses the Call Automation SDK isn't charged for being part of the call. The additional monthly cost of leasing a US toll-free number isn't included in this calculation.
+
+**Total cost for the call**: $0.22 + $0.02 = $0.24
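+
+The same arithmetic written out as a quick check (rates are the illustrative ones above):
+
+```csharp
+// Decimal math mirrors the worked example.
+var inboundPstnLeg = 10 * 0.0220m; // Asha's PSTN leg: 10 minutes at $0.0220/minute
+var voipLeg = 5 * 0.004m;          // David's VoIP leg: 5 minutes at $0.004/minute
+Console.WriteLine(inboundPstnLeg + voipLeg); // prints 0.2400
+```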
## Call Recording
Bob starts a call with his financial advisor, Charlie.
## Chat
-With Communication Services you can enhance your application with the ability to send and receive chat messages between two or more users. Chat SDKs are available for JavaScript, .NET, Python, and Java. Refer to [this page to learn about SDKs](./sdk-options.md)
+With Communication Services, you can enhance your application with the ability to send and receive chat messages between two or more users. Chat SDKs are available for JavaScript, .NET, Python, and Java. Refer to [this page to learn about SDKs](./sdk-options.md)
### Price
Azure Communication Services allows for adding SMS messaging capabilities to you
### Pricing
-The SMS usage price is a per-message segment charge based on the destination of the message. The carrier surcharge is calculated based on the destination of the message for sent messages and based on the sender of the message for received messages. Please refer to the [SMS Pricing Page](./sms-pricing.md) for pricing details.
+The SMS usage price is a per-message segment charge based on the destination of the message. The carrier surcharge is calculated based on the destination of the message for sent messages and based on the sender of the message for received messages. Refer to the [SMS Pricing Page](./sms-pricing.md) for pricing details.
### Pricing example: 1:1 SMS sending
Contoso is a healthcare company with clinics in US and Canada. Contoso has a Pat
Contoso is a healthcare company with clinics in US and Canada. Contoso has a Patient Appointment Reminder application that sends out SMS appointment reminders to patients regarding upcoming appointments. Patients can respond to the messages with "Reschedule" and include their date/time preference to reschedule their appointments. - The application sends appointment reminders to 20 US patients and 30 Canada patients using a CA toll-free number.-- 6 US patients and 4 CA patients respond back to reschedule their appointments. Contoso receives 10 SMS responses in total.-- Message length of the reschedule messages is less than 1 message segment*. Hence, total messages received are 6 message segments for US and 4 message segments for CA.
+- Six US patients and four CA patients respond back to reschedule their appointments. Contoso receives 10 SMS responses in total.
+- Message length of the reschedule messages is less than one message segment*. Hence, total messages received are six message segments for US and four message segments for CA.
**Cost calculations** -- US - 6 message segments x $0.0075 per received message segment + 6 message segments x $0.0010 carrier surcharge per received message segment = $0.051-- CA - 4 message segments x $0.0075 per received message segment = $0.03
+- US - six message segments x $0.0075 per received message segment + six message segments x $0.0010 carrier surcharge per received message segment = $0.051
+- CA - four message segments x $0.0075 per received message segment = $0.03
**Total cost for receiving patient responses from 6 US patients and 4 CA patients**: $0.051 + $0.03 = $0.081 ## Telephony
-Please refer to the following links for details on Telephony pricing
+Refer to the following links for details on Telephony pricing:
- [PSTN Pricing Details](./pstn-pricing.md)
communication-services Inbound Calling Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/inbound-calling-capabilities.md
Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel. You
**Inbound calling with Dynamics 365 Omnichannel (OC)**
-Supported in General Availability, to set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN) follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling)
-
-**Inbound calling with Power Virtual Agents**
-
-*Coming soon*
+Supported in General Availability. To set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN), follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
**Inbound calling with ACS Call Automation SDK**
-[Available in private preview](../voice-video-calling/call-automation.md)
+Call Automation enables you to build custom calling workflows within your applications to optimize business processes and boost customer satisfaction. The library supports managing incoming calls to the phone numbers acquired from Communication Services and incoming calls via Direct Routing. You can also use Call Automation to place outbound calls from the phone numbers owned by your resource, among other capabilities.
+
+Learn more about [Call Automation](../voice-video-calling/call-automation.md), currently available in public preview.
**Inbound calling with Azure Bot Framework**
-Customers participating in Azure Bot Framework Telephony Channel preview can find the [instructions here](/azure/bot-service/bot-service-channel-connect-telephony)
+Customers participating in Azure Bot Framework Telephony Channel preview can find the [instructions here](/azure/bot-service/bot-service-channel-connect-telephony)
+
+**Inbound calling with Power Virtual Agents**
+
+*Coming soon*
communication-services Plan Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/plan-solution.md
Communication Services offers two types of phone numbers: **local** and **toll-f
Local (Geographic) numbers are 10-digit telephone numbers consisting of the local area codes in the United States. For example, `+1 (206) XXX-XXXX` is a local number with an area code of `206`. This area code is assigned to the city of Seattle. These phone numbers are generally used by individuals and local businesses. Azure Communication Services offers local numbers in the United States. These numbers can be used to place phone calls, but not to send SMS messages. ### Toll-free Numbers
-Toll-free numbers are 10-digit telephone numbers with distinct area codes that can be called from any phone number free of charge. For example, `+1 (800) XXX-XXXX` is a toll-free number in the North America region. These phone numbers are generally used for customer service purposes. Azure Communication Services offers toll-free numbers in the United states. These numbers can be used to place phone calls and to send SMS messages. Toll-free numbers cannot be used by people and can only be assigned to applications.
+Toll-free numbers are 10-digit telephone numbers with distinct area codes that can be called from any phone number free of charge. For example, `+1 (800) XXX-XXXX` is a toll-free number in the North America region. These phone numbers are generally used for customer service purposes. Azure Communication Services offers toll-free numbers in the United States. These numbers can be used to place phone calls and to send SMS messages. Toll-free numbers can't be used by people and can only be assigned to applications.
#### Choosing a phone number type
The following table shows you where you can acquire different types of phone num
| Toll-Free | US | US | US |US | US | *Currently, you can receive calls only to a Microsoft number that is assigned to a Telephony Channel bot. Read more about Telephony Channel [here](/azure/bot-service/bot-service-channel-connect-telephony)
-**For more details about call destinations and pricing, refer to the [pricing page](../pricing.md).
+**For more information about call destinations and pricing, see the [pricing page](../pricing.md).
## Next steps
The following table shows you where you can acquire different types of phone num
### Quickstarts - [Get a phone Number](../../quickstarts/telephony/get-phone-number.md)
+- [Manage inbound and outbound calls](../../quickstarts/voice-video-calling/callflows-for-customer-interactions.md) with Call Automation.
- [Place a call](../../quickstarts/voice-video-calling/getting-started-with-calling.md) - [Send an SMS](../../quickstarts/sms/send.md)
The following table shows you where you can acquire different types of phone num
- [Voice and video concepts](../voice-video-calling/about-call-types.md) - [Telephony concepts](./telephony-concept.md)
+- [Call Automation concepts](../voice-video-calling/call-automation.md)
- [Call Flows](../call-flows.md) - [Pricing](../pricing.md)
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
To help you troubleshoot certain types of issues, you may be asked for any of th
* **Call ID**: This ID is used to identify Communication Services calls. * **SMS message ID**: This ID is used to identify SMS messages. * **Short Code Program Brief ID**: This ID is used to identify a short code program brief application.
+* **Correlation ID**: This ID is used to identify requests made using Call Automation.
* **Call logs**: These logs contain detailed information that can be used to troubleshoot calling and network issues. Also take a look at our [service limits](service-limits.md) documentation for more information on throttling and limitations.
chat_client = ChatClient(
```
-## Access your server call ID
-When troubleshooting issues with the Call Automation SDK, like call recording and call management problems, you'll need to collect the Server Call ID. This ID can be collected using the ```getServerCallId``` method.
+## Access IDs required for Call Automation
+When troubleshooting issues with the Call Automation SDK, like call management or recording problems, you'll need to collect the IDs that help identify the failing call or operation. You can provide either of the two IDs mentioned here.
+- From the header of the API response, locate the field `X-Ms-Skype-Chain-Id`.
+
+ ![Screenshot of response header showing X-Ms-Skype-Chain-Id.](media/troubleshooting/response-header.png)
+- From the callback events your application receives after executing an action, for example `CallConnected` or `PlayFailed`, locate the `correlationId`.
-#### JavaScript
-```
-callAgent.on('callsUpdated', (e: { added: Call[]; removed: Call[] }): void => {
- e.added.forEach((addedCall) => {
- addedCall.on('stateChanged', (): void => {
- if (addedCall.state === 'Connected') {
- addedCall.info.getServerCallId().then(result => {
- dispatch(setServerCallId(result));
- }).catch(err => {
- console.log(err);
- });
- }
- });
- });
-});
-```
+ ![Screenshot of call disconnected event showing correlation ID.](media/troubleshooting/correlation-id-in-callback-event.png)
+In addition to one of these IDs, provide details of the failing use case and the timestamp of when the failure was observed.
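+
+For example, a small helper can pull the correlation ID out of a received callback body. The array envelope and the `correlationId` field casing are assumptions about the payload:
+
+```csharp
+using System.Text.Json;
+
+// Returns the correlation ID from the first event in a callback body.
+string? ExtractCorrelationId(string callbackBodyJson)
+{
+    using var document = JsonDocument.Parse(callbackBodyJson);
+    var data = document.RootElement[0].GetProperty("data");
+    return data.GetProperty("correlationId").GetString();
+}
+```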
## Access your client call ID
The Azure Communication Services Calling SDK uses the following error codes to h
| 500, 503, 504 | Communication Services infrastructure error. | File a support request through the Azure portal. | | 603 | Call globally declined by remote Communication Services participant | Expected behavior. |
+## Call Automation SDK error codes
+The following error codes are exposed by the Call Automation SDK.
+
+| Error Code | Description | Actions to take |
+|--|--|--|
+| 400 | Bad request | The input request is invalid. Look at the error message to determine which input is incorrect. |
+| 401 | Unauthorized | HMAC authentication failed. Verify whether the connection string used to create CallAutomationClient is correct. |
+| 403 | Forbidden | Request is forbidden. Make sure you have access to the resource you're trying to act on. |
+| 404 | Resource not found | The call you're trying to act on doesn't exist. For example, transferring a call that has already disconnected. |
+| 429 | Too many requests | Retry after the delay suggested in the Retry-After header, then back off exponentially. |
+| 500 | Internal server error | Retry after a delay. If it persists, raise a support ticket. |
+| 502 | Bad gateway | Retry after a delay with a fresh HTTP client. |
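+
+A sketch of the suggested 429 handling, honoring Retry-After before falling back to exponential backoff (the endpoint and request body are placeholders):
+
+```csharp
+using var httpClient = new HttpClient();
+var delay = TimeSpan.FromSeconds(2);
+
+for (var attempt = 0; attempt < 5; attempt++)
+{
+    using var content = new StringContent("{}"); // placeholder request body
+    var response = await httpClient.PostAsync("https://<call_automation_endpoint>", content);
+    if ((int)response.StatusCode != 429) break; // only retry on throttling
+
+    // Prefer the server's Retry-After hint when present; otherwise double the delay.
+    await Task.Delay(response.Headers.RetryAfter?.Delta ?? delay);
+    delay *= 2;
+}
+```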
+
+Consider the following tips when troubleshooting certain issues:
+- Your application isn't getting the IncomingCall Event Grid event: Make sure the application endpoint was [validated with Event Grid](../../event-grid/webhook-event-delivery.md) when the event subscription was created. The provisioning status for your event subscription is marked as succeeded if the validation was successful.
+- Getting the error 'The field CallbackUri is invalid': Call Automation doesn't support HTTP endpoints. Make sure the callback URL you provide supports HTTPS.
+- PlayAudio action doesn't play anything: Currently only the WAV file (.wav) format is supported for audio files. The audio content in the WAV file must be mono (single-channel), 16-bit samples with a 16 kHz sampling rate.
+- Actions on PSTN endpoints aren't working: CreateCall, Transfer, AddParticipant, and Redirect to phone numbers require you to set the SourceCallerId in the action request. Unless you're using Direct Routing, the source caller ID should be a phone number owned by your Communication Services resource for the action to succeed.
+
+Refer to [this article](./known-issues.md) to learn about any known issues being tracked by the product team.
+ ## Chat SDK error codes The Azure Communication Services Chat SDK uses the following error codes to help you troubleshoot chat issues. The error codes are exposed through the `error.code` property in the error response.
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md
+
+ Title: Azure Communication Services Call Automation how-to for managing calls with Call Automation
+
+description: Provides a how-to guide on using call actions to steer and manage a call with Call Automation.
+ Last updated: 11/03/2022
+zone_pivot_groups: acs-csharp-java
++
+# How to control and steer calls with Call Automation
++
+Call Automation uses a REST API interface to receive requests for actions and provide responses that notify whether the request was successfully submitted. Due to the asynchronous nature of calling, most actions have corresponding events that are triggered when the action completes successfully or fails. This guide covers the actions available for steering calls, like CreateCall, Transfer, Redirect, and managing participants. Each action is accompanied by sample code showing how to invoke it, and by sequence diagrams describing the events expected after invoking it. These diagrams help you visualize how to program your service application with Call Automation.
+
+Call Automation supports various other actions to manage call media and recording that aren't included in this guide.
+
+> [!NOTE]
+> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user, or adding a Teams user to a call using Call Automation, aren't supported.
+
+As a prerequisite, we recommend you read the following articles to make the most of this guide:
+1. Call Automation [concepts guide](../../concepts/call-automation/call-automation.md#call-actions) that describes the action-event programming model and event callbacks.
+2. Learn about [user identifiers](../../concepts/identifiers.md#the-communicationidentifier-type) like CommunicationUserIdentifier and PhoneNumberIdentifier used in this guide.
+
+For all the code samples, `client` is a CallAutomationClient object that can be created as shown below, and `callConnection` is a CallConnection object obtained from the Answer or CreateCall response. You can also obtain it from callback events received by your application.
+## [csharp](#tab/csharp)
+```csharp
+var client = new CallAutomationClient("<resource_connection_string>");
+```
+## [Java](#tab/java)
+```java
+ CallAutomationClient client = new CallAutomationClientBuilder().connectionString("<resource_connection_string>").buildClient();
+```
+--
+
+## Make an outbound call
+You can place a 1:1 or group call to a communication user or phone number (public or Communication Services owned number). The sample below makes an outbound call from your service application to a phone number.
+callerIdentifier is used by Call Automation as your application's identity when making an outbound call. When calling a PSTN endpoint, you also need to provide a phone number that will be used as the source caller ID and shown in the call notification to the target PSTN endpoint.
+To place a call to a Communication Services user, provide a CommunicationUserIdentifier object instead of PhoneNumberIdentifier.
+### [csharp](#tab/csharp)
+```csharp
+Uri callBackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
+var callerIdentifier = new CommunicationUserIdentifier("<user_id>");
+CallSource callSource = new CallSource(callerIdentifier);
+callSource.CallerId = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+var callThisPerson = new PhoneNumberIdentifier("+16041234567");
+var listOfPersonToBeCalled = new List<CommunicationIdentifier>();
+listOfPersonToBeCalled.Add(callThisPerson);
+var createCallOptions = new CreateCallOptions(callSource, listOfPersonToBeCalled, callBackUri);
+CreateCallResult response = await client.CreateCallAsync(createCallOptions);
+```
+### [Java](#tab/java)
+```java
+String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events
+List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(new PhoneNumberIdentifier("+16471234567")));
+CommunicationUserIdentifier callerIdentifier = new CommunicationUserIdentifier("<user_id>");
+CreateCallOptions createCallOptions = new CreateCallOptions(callerIdentifier, targets, callbackUri)
+ .setSourceCallerId("+18001234567"); // This is the ACS provisioned phone number for the caller
+Response<CreateCallResult> response = client.createCallWithResponse(createCallOptions).block();
+```
+--
+The response provides you with a CallConnection object that you can use to take further actions on this call once it's connected. Once the call is answered, two events will be published to the callback endpoint you provided earlier:
+1. `CallConnected` event notifying that the call has been established with the callee.
+2. `ParticipantsUpdated` event that contains the latest list of participants in the call.
+![Sequence diagram for placing an outbound call.](media/make-call-flow.png)
++
+## Answer an incoming call
+Once you've subscribed to receive [incoming call notifications](../../concepts/call-automation/incoming-call-notification.md) to your resource, below is sample code on how to answer that call. When answering a call, it's necessary to provide a callback URL. Communication Services will post all subsequent events about this call to that URL.
+### [csharp](#tab/csharp)
+
+```csharp
+string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+Uri callBackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
+
+var answerCallOptions = new AnswerCallOptions(incomingCallContext, callBackUri);
+AnswerCallResult answerResponse = await client.AnswerCallAsync(answerCallOptions);
+CallConnection callConnection = answerResponse.CallConnection;
+```
+### [Java](#tab/java)
+
+```java
+String incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+String callbackUri = "https://<myendpoint>/Events";
+
+AnswerCallOptions answerCallOptions = new AnswerCallOptions(incomingCallContext, callbackUri);
+Response<AnswerCallResult> response = client.answerCallWithResponse(answerCallOptions).block();
+```
+--
+The response provides you with a CallConnection object that you can use to take further actions on this call once it's connected. Once the call is answered, two events will be published to the callback endpoint you provided earlier:
+1. `CallConnected` event notifying that the call has been established with the caller.
+2. `ParticipantsUpdated` event that contains the latest list of participants in the call.
+
+![Sequence diagram for answering an incoming call.](media/answer-flow.png)
+
+## Reject a call
+You can choose to reject an incoming call as shown below. You can provide a reject reason: none, busy, or forbidden. If nothing is provided, none is chosen by default.
+# [csharp](#tab/csharp)
+```csharp
+string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+var rejectOption = new RejectCallOptions(incomingCallContext);
+rejectOption.CallRejectReason = CallRejectReason.Forbidden;
+_ = await client.RejectCallAsync(rejectOption);
+```
+# [Java](#tab/java)
+
+```java
+String incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+RejectCallOptions rejectCallOptions = new RejectCallOptions(incomingCallContext)
+ .setCallRejectReason(CallRejectReason.BUSY);
+Response<Void> response = client.rejectCallWithResponse(rejectCallOptions).block();
+```
+--
+No events are published for reject action.
+
+## Redirect a call
+You can choose to redirect an incoming call to one or more endpoints without answering it. Redirecting a call removes your application's ability to control the call using Call Automation.
+# [csharp](#tab/csharp)
+```csharp
+string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+var target = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-...
+var redirectOption = new RedirectCallOptions(incomingCallContext, target);
+_ = await client.RedirectCallAsync(redirectOption);
+```
+# [Java](#tab/java)
+```java
+String incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+CommunicationIdentifier target = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-...
+RedirectCallOptions redirectCallOptions = new RedirectCallOptions(incomingCallContext, target);
+Response<Void> response = client.redirectCallWithResponse(redirectCallOptions).block();
+```
+--
+To redirect the call to a phone number, set the target to be PhoneNumberIdentifier.
+# [csharp](#tab/csharp)
+```csharp
+var target = new PhoneNumberIdentifier("+16041234567");
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier target = new PhoneNumberIdentifier("+18001234567");
+```
+--
+No events are published for redirect. If the target is a Communication Services user or a phone number owned by your resource, a new IncomingCall event is generated with the 'to' field set to the target you specified.
+
+## Transfer a 1:1 call
+When your application answers a call or places an outbound call to an endpoint, that endpoint can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application from the call and hence remove its ability to control the call using Call Automation.
+# [csharp](#tab/csharp)
+```csharp
+var transferDestination = new CommunicationUserIdentifier("<user_id>");
+var transferOption = new TransferToParticipantOptions(transferDestination);
+TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier transferDestination = new CommunicationUserIdentifier("<user_id>");
+TransferToParticipantCallOptions options = new TransferToParticipantCallOptions(transferDestination);
+Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
+```
+--
+When transferring to a phone number, it's mandatory to provide a source caller ID. This ID serves as the identity of your application (the source) for the destination endpoint.
+# [csharp](#tab/csharp)
+```csharp
+var transferDestination = new PhoneNumberIdentifier("+16041234567");
+var transferOption = new TransferToParticipantOptions(transferDestination);
+transferOption.SourceCallerId = new PhoneNumberIdentifier("+16044561234");
+TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier transferDestination = new PhoneNumberIdentifier("+16471234567");
+TransferToParticipantCallOptions options = new TransferToParticipantCallOptions(transferDestination)
+ .setSourceCallerId(new PhoneNumberIdentifier("+18001234567"));
+Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
+```
+--
+The sequence diagram below shows the expected flow when your application places an outbound 1:1 call and then transfers it to another endpoint.
+![Sequence diagram for placing a 1:1 call and then transferring it.](media/transfer-flow.png)
+
+## Add a participant to a call
+You can add one or more participants (Communication Services users or phone numbers) to an existing call. When adding a phone number, it's mandatory to provide a source caller ID. This caller ID is shown on the call notification to the participant being added.
+# [csharp](#tab/csharp)
+```csharp
+var addThisPerson = new PhoneNumberIdentifier("+16041234567");
+var listOfPersonToBeAdded = new List<CommunicationIdentifier>();
+listOfPersonToBeAdded.Add(addThisPerson);
+var addParticipantsOption = new AddParticipantsOptions(listOfPersonToBeAdded);
+addParticipantsOption.SourceCallerId = new PhoneNumberIdentifier("+16044561234");
+AddParticipantsResult result = await callConnection.AddParticipantsAsync(addParticipantsOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier target = new PhoneNumberIdentifier("+16041234567");
+List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(target));
+AddParticipantsOptions addParticipantsOptions = new AddParticipantsOptions(targets)
+ .setSourceCallerId(new PhoneNumberIdentifier("+18001234567"));
+Response<AddParticipantsResult> addParticipantsResultResponse = callConnectionAsync.addParticipantsWithResponse(addParticipantsOptions).block();
+```
+--
+To add a Communication Services user, provide a CommunicationUserIdentifier instead of PhoneNumberIdentifier. Source caller ID isn't mandatory in this case.
+
+AddParticipant publishes an `AddParticipantSucceeded` or `AddParticipantFailed` event, along with a `ParticipantUpdated` event providing the latest list of participants in the call.
+
+![Sequence diagram for adding a participant to the call.](media/add-participant-flow.png)
+
+## Remove a participant from a call
+# [csharp](#tab/csharp)
+```csharp
+var removeThisUser = new CommunicationUserIdentifier("<user_id>");
+var listOfParticipantsToBeRemoved = new List<CommunicationIdentifier>();
+listOfParticipantsToBeRemoved.Add(removeThisUser);
+var removeOption = new RemoveParticipantsOptions(listOfParticipantsToBeRemoved);
+RemoveParticipantsResult result = await callConnection.RemoveParticipantsAsync(removeOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier removeThisUser = new CommunicationUserIdentifier("<user_id>");
+RemoveParticipantsOptions removeParticipantsOptions = new RemoveParticipantsOptions(new ArrayList<>(Arrays.asList(removeThisUser)));
+Response<RemoveParticipantsResult> removeParticipantsResultResponse = callConnectionAsync.removeParticipantsWithResponse(removeParticipantsOptions).block();
+```
+--
+RemoveParticipant only generates a `ParticipantUpdated` event describing the latest list of participants in the call. The removed participant is excluded if the remove operation was successful.
+![Sequence diagram for removing a participant from the call.](media/remove-participant-flow.png)
+
+## Hang up on a call
+The Hang Up action can be used to remove your application from the call, or to terminate a group call by setting the forEveryone parameter to true. For a 1:1 call, hang up terminates the call with the other participant by default.
+
+# [csharp](#tab/csharp)
+```csharp
+_ = await callConnection.HangUpAsync(true);
+```
+# [Java](#tab/java)
+```java
+Response<Void> response1 = callConnectionAsync.hangUpWithResponse(new HangUpOptions(true)).block();
+```
+--
+A CallDisconnected event is published once the hangUp action has completed successfully.
+
+## Get information about a call participant
+# [csharp](#tab/csharp)
+```csharp
+CallParticipant participantInfo = await callConnection.GetParticipantAsync("<user_id>");
+```
+# [Java](#tab/java)
+```java
+CallParticipant participantInfo = callConnection.getParticipant("<user_id>").block();
+```
+--
+
+## Get information about all call participants
+# [csharp](#tab/csharp)
+```csharp
+List<CallParticipant> participantList = (await callConnection.GetParticipantsAsync()).Value.ToList();
+```
+# [Java](#tab/java)
+```java
+List<CallParticipant> participantsInfo = Objects.requireNonNull(callConnection.listParticipants().block()).getValues();
+```
+--
+
+## Get latest info about a call
+# [csharp](#tab/csharp)
+```csharp
+CallConnectionProperties thisCallsProperties = callConnection.GetCallConnectionProperties();
+```
+# [Java](#tab/java)
+```java
+CallConnectionProperties thisCallsProperties = callConnection.getCallProperties().block();
+```
+--
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/play-action.md
+
+ Title: Customize voice prompts to users with Play action
+
+description: Provides a quick start for playing audio to participants as part of a call.
+ Last updated: 09/06/2022
+zone_pivot_groups: acs-csharp-java
++
+# Customize voice prompts to users with Play action
++
+This guide will help you get started with playing audio files to participants by using the play action provided through the Azure Communication Services Call Automation SDK.
+++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md)
+- Learn more about [Gathering user input in a call](../../concepts/call-automation/recognize-action.md)
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/recognize-action.md
+
+ Title: Gather user input
+
+description: Provides a how-to guide for gathering user input from participants on a call.
+ Last updated: 09/16/2022
+zone_pivot_groups: acs-csharp-java
++
+# Gather user input with Recognize action
++
+This guide will help you get started with recognizing DTMF input provided by participants through the Azure Communication Services Call Automation SDK.
+++
+## Event codes
+
+|Status|Code|Subcode|Message|
+|-|--|--|--|
+|RecognizeCompleted|200|8531|Action completed, max digits received.|
+|RecognizeCompleted|200|8514|Action completed as stop tone was detected.|
+|RecognizeCompleted|400|8508|Action failed, the operation was canceled.|
+|RecognizeFailed|400|8510|Action failed, initial silence timeout reached.|
+|RecognizeFailed|400|8532|Action failed, inter-digit silence timeout reached.|
+|RecognizeFailed|500|8511|Action failed, encountered failure while trying to play the prompt.|
+|RecognizeFailed|500|8512|Unknown internal server error.|
++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next Steps
+
+- Learn more about [Gathering user input](../../concepts/call-automation/recognize-action.md)
+- Learn more about [Playing audio in call](../../concepts/call-automation/play-action.md)
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md)
communication-services Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/data-model.md
The UI Library makes it simple for developers to inject that user data model int
- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md) ::: zone pivot="platform-web" ::: zone-end ::: zone pivot="platform-android" ::: zone-end ::: zone pivot="platform-ios" ::: zone-end ## Next steps -- [Learn more about UI Library](../../quickstarts/ui-library/get-started-composites.md)
+- [Learn more about UI Library](../../concepts/ui-library/ui-library-overview.md)
communication-services Localization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/localization.md
Learn how to set up the localization correctly using the UI Library in your appl
## Next steps -- [Learn more about UI Library](../../quickstarts/ui-library/get-started-composites.md)
+- [Learn more about UI Library](../../concepts/ui-library/ui-library-overview.md)
communication-services Theming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/theming.md
ACS UI Library uses components and icons from both [Fluent UI](https://developer
- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md) ::: zone pivot="platform-web" ::: zone-end ::: zone pivot="platform-android" ::: zone-end ::: zone pivot="platform-ios" ::: zone-end ## Next steps -- [Learn more about UI Library](../../quickstarts/ui-library/get-started-composites.md)
+- [Learn more about UI Library](../../concepts/ui-library/ui-library-overview.md)
- [Learn more about UI Library Design Kit](../../quickstarts/ui-library/get-started-ui-kit.md)
communication-services Callflows For Customer Interactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/callflows-for-customer-interactions.md
+
+ Title: Build a customer interaction workflow using Call Automation
+
+description: Quickstart on how to use Call Automation to answer a call, recognize DTMF input, and add a participant to a call.
++++ Last updated : 09/06/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Build a customer interaction workflow using Call Automation
++
+In this quickstart, you'll learn how to build an application that uses the Azure Communication Services Call Automation SDK to handle the following scenario:
+- handling the `IncomingCall` event from Event Grid
+- answering a call
+- playing an audio file and recognizing input (DTMF) from the caller
+- adding a communication user to the call, such as a customer service agent who uses a web application built with the Calling SDKs to connect to Azure Communication Services
+++
+## Subscribe to IncomingCall event
+
+IncomingCall is an Azure Event Grid event that notifies you of incoming calls to your Communication Services resource. To learn more about it, see [this guide](../../concepts/call-automation/incoming-call-notification.md).
+1. Navigate to your resource on Azure portal and select `Events` from the left side menu.
+1. Select `+ Event Subscription` to create a new subscription.
+1. Filter for Incoming Call event.
+1. Choose the endpoint type as webhook and provide the public URL generated for your application by ngrok. Make sure to provide the exact API route that you programmed to receive the event previously. In this case, it would be `<ngrok_url>/api/incomingCall`.
+![Screenshot of portal page to create a new event subscription.](./media/event-susbcription.png)
+
+1. Select **Create** to start creating the subscription and validating your endpoint as mentioned previously. The subscription is ready when the provisioning status is marked as succeeded.
+
+This subscription currently has no filters, so all incoming calls will be sent to your application. To filter for a specific phone number or communication user, use the Filters tab.
+
+## Testing the application
+
+1. Place a call to the number you acquired in the Azure portal.
+2. Your Event Grid subscription to the `IncomingCall` event should execute and call your application, which will request to answer the call.
+3. When the call is connected, a `CallConnected` event will be sent to your application's callback URL. At this point, the application will request audio to be played and to receive input from the caller.
+4. From your phone, press any three number keys, or press one number key and then the # key.
+5. When the input has been received and recognized, the application will make a request to add a participant to the call.
+6. Once the added user answers, you can talk to them.
++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
+- Learn how to [redirect inbound telephony calls](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md) with Call Automation.
+- Learn more about [Play action](../../concepts/call-automation/play-action.md).
+- Learn more about [Recognize action](../../concepts/call-automation/recognize-action.md).
communication-services Redirect Inbound Telephony Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/redirect-inbound-telephony-calls.md
+
+ Title: Azure Communication Services Call Automation how-to for redirecting inbound PSTN calls
+
+description: Provides a how-to for redirecting inbound telephony calls with Call Automation.
++++ Last updated : 09/06/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Redirect inbound telephony calls with Call Automation
++
+Get started with Azure Communication Services by using the Call Automation SDKs to build automated calling workflows that listen for and manage inbound calls placed to a phone number or received via Direct Routing.
+++
+## Subscribe to IncomingCall event
+
+IncomingCall is an Azure Event Grid event that notifies you of incoming calls to your Communication Services resource. To learn more about it, see [this guide](../../concepts/call-automation/incoming-call-notification.md).
+1. Navigate to your resource on Azure portal and select `Events` from the left side menu.
+1. Select `+ Event Subscription` to create a new subscription.
+1. Filter for Incoming Call event.
+1. Choose the endpoint type as webhook and provide the public URL generated for your application by ngrok. Make sure to provide the exact API route that you programmed to receive the event previously. In this case, it would be `<ngrok_url>/api/incomingCall`.
+1. Select **Create** to start creating the subscription and validating your endpoint as mentioned previously. The subscription is ready when the provisioning status is marked as succeeded.
+
+This subscription currently has no filters, so all incoming calls will be sent to your application. To filter for a specific phone number or communication user, use the Filters tab.
+
+## Testing the application
+
+1. Place a call to the number you acquired in the Azure portal (see prerequisites above).
+2. Your Event Grid subscription to the `IncomingCall` event should execute and call your application.
+3. The call will be redirected to the endpoint(s) you specified in your application.
+
+Because this call flow redirects the call instead of answering it, pre-call webhook callbacks that notify your application that the other endpoint accepted the call aren't published.
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
+- Learn about [Play action](../../concepts/call-automation/play-action.md) to play audio in a call.
+- Learn how to build a [call workflow](../../quickstarts/call-automation/callflows-for-customer-interactions.md) for a customer support scenario.
communication-services Get Started Raw Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-raw-media-access.md
Last updated 06/30/2022
-zone_pivot_groups: acs-plat-android-web
+zone_pivot_groups: acs-plat-android-web-ios
[!INCLUDE [Raw media with Android](./includes/raw-medi)] ::: zone-end [!INCLUDE [Raw media with iOS](./includes/raw-medi)] ::: zone-end
communication-services Media Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/media-streaming.md
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps - Learn more about [Media Streaming](../../concepts/voice-video-calling/media-streaming.md).-- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md) and its features. -- Learn more about [Play action](../../concepts/voice-video-calling/play-action.md).-- Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md).
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
+- Learn more about [Play action](../../concepts/call-automation/play-action.md).
+- Learn more about [Recognize action](../../concepts/call-automation/recognize-action.md).
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/play-action.md
- Title: Play Audio-
-description: Provides a quick start for playing audio to participants as part of a call.
--- Previously updated : 09/06/2022---
-zone_pivot_groups: acs-csharp-java
--
-# Quickstart: Play action
-
-> [!IMPORTANT]
-> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
-> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
-
-This quickstart will help you get started with playing audio files to participants by using the play action provided through Azure Communication Services Call Automation SDK.
---
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
-
-## Next steps
--- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md)-- Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md)
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/recognize-action.md
- Title: Recognize Action-
-description: Provides a quick start for recognizing user input from participants on a call.
--- Previously updated : 09/16/2022---
-zone_pivot_groups: acs-csharp-java
--
-# Quickstart: Recognize action
-
-> [!IMPORTANT]
-> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
-> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
-
-This quickstart will help you get started with recognizing DTMF input provided by participants through Azure Communication Services Call Automation SDK.
---
-## Event codes
-
-|Status|Code|Subcode|Message|
-|-|--|--|--|
-|RecognizeCompleted|200|8531|Action completed, max digits received.|
-|RecognizeCompleted|200|8514|Action completed as stop tone was detected.|
-|RecognizeCompleted|400|8508|Action failed, the operation was canceled.|
-|RecognizeFailed|400|8510|Action failed, initial silence timeout reached|
-|RecognizeFailed|400|8532|Action failed, inter-digit silence timeout reached.|
-|RecognizeFailed|500|8511|Action failed, encountered failure while trying to play the prompt.|
-|RecognizeFailed|500|8512|Unknown internal server error.|
--
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
-
-## Next Steps
--- Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md)-- Learn more about [Play action](../../concepts/voice-video-calling/play-action.md)-- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md)
container-apps Authentication Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-azure-active-directory.md
You've now configured a native client application that can request access your c
### Daemon client application (service-to-service calls)
-Your application can acquire a token to call a Web API hosted in your container app on behalf of itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md) grant.
+Your application can acquire a token to call a Web API hosted in your container app on behalf of itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) grant.
1. In the [Azure portal], select **Active Directory** > **App registrations** > **New registration**. 1. In the **Register an application** page, enter a **Name** for your daemon app registration.
Your application can acquire a token to call a Web API hosted in your container
1. After the app registration is created, copy the value of **Application (client) ID**. 1. Select **Certificates & secrets** > **New client secret** > **Add**. Copy the client secret value shown in the page. It won't be shown again.
-You can now [request an access token using the client ID and client secret](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#use-the-access-token-to-access-the-secured-resource), and Container Apps Authentication / Authorization will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated.
+You can now [request an access token using the client ID and client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#use-a-token), and Container Apps Authentication / Authorization will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated.
This process allows _any_ client application in your Azure AD tenant to request an access token and authenticate to the target app. If you also want to enforce _authorization_ to allow only certain client applications, you must adjust the configuration.
This process allows _any_ client application in your Azure AD tenant to request
1. Select the app registration you created earlier. If you don't see the app registration, make sure that you've [added an App Role](../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md). 1. Under **Application permissions**, select the App Role you created earlier, and then select **Add permissions**. 1. Make sure to select **Grant admin consent** to authorize the client application to request the permission.
-1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
+1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
1. Within the target Container Apps code, you can now validate that the expected roles are present in the token. The validation steps aren't performed by the Container Apps auth layer. For more information, see [Access user claims](authentication.md#access-user-claims-in-application-code). You've now configured a daemon client application that can access your container app using its own identity.
container-apps Github Actions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions-cli.md
The first time you attach GitHub Actions to your container app, you need to prov
az ad sp create-for-rbac \ --name <SERVICE_PRINCIPAL_NAME> \ --role "contributor" \
- --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME> \
- --sdk-auth
+ --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>
``` # [PowerShell](#tab/powershell)
az ad sp create-for-rbac \
az ad sp create-for-rbac ` --name <SERVICE_PRINCIPAL_NAME> ` --role "contributor" `
- --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME> `
- --sdk-auth
+ --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>
``` As you interact with this example, replace the placeholders surrounded by `<>` with your values.
-The return value from this command is a JSON payload, which includes the service principal's `tenantId`, `clientId`, and `clientSecret`.
+The return values from this command include the service principal's `appId`, `password`, and `tenant`. You need to pass these values to the `az containerapp github-action add` command.
The following example shows you how to add an integration while using a personal access token.
az containerapp github-action add \
--registry-url <URL_TO_CONTAINER_REGISTRY> \ --registry-username <REGISTRY_USER_NAME> \ --registry-password <REGISTRY_PASSWORD> \
- --service-principal-client-id <CLIENT_ID> \
- --service-principal-client-secret <CLIENT_SECRET> \
- --service-principal-tenant-id <TENANT_ID> \
+ --service-principal-client-id <appId> \
+ --service-principal-client-secret <password> \
+ --service-principal-tenant-id <tenant> \
--token <YOUR_GITHUB_PERSONAL_ACCESS_TOKEN> ```
az containerapp github-action add `
--registry-url <URL_TO_CONTAINER_REGISTRY> ` --registry-username <REGISTRY_USER_NAME> ` --registry-password <REGISTRY_PASSWORD> `
- --service-principal-client-id <CLIENT_ID> `
- --service-principal-client-secret <CLIENT_SECRET> `
- --service-principal-tenant-id <TENANT_ID> `
+ --service-principal-client-id <appId> `
+ --service-principal-client-secret <password> `
+ --service-principal-tenant-id <tenant> `
--token <YOUR_GITHUB_PERSONAL_ACCESS_TOKEN> ```
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
Content-Type: application/json
```
-This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#service-to-service-access-token-response). To access Key Vault, you'll then add the value of `access_token` to a client connection with the vault.
+This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#successful-response). To access Key Vault, you'll then add the value of `access_token` to a client connection with the vault.
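+
+As a minimal sketch (not from this article's sample), assuming the token above was acquired for the `https://vault.azure.net` resource, you could present it to Key Vault's REST API directly; the vault and secret names below are placeholders:
+
+```python
+import requests
+
+# Placeholders for illustration only.
+vault_url = "https://<your-vault-name>.vault.azure.net"
+secret_name = "<your-secret-name>"
+access_token = "<the access_token value from the response above>"
+
+# GET {vaultBaseUrl}/secrets/{secret-name} with the token as a Bearer credential
+response = requests.get(
+    f"{vault_url}/secrets/{secret_name}",
+    params={"api-version": "7.3"},
+    headers={"Authorization": f"Bearer {access_token}"},
+)
+response.raise_for_status()
+print(response.json()["value"])  # the secret's value
+```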
### REST endpoint reference
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
Last updated 09/26/2022+ # Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity
Last updated 09/26/2022
This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you'll have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](../postgresql/index.yml) database with a managed identity running on [Container Apps](overview.md).
+> [!NOTE]
+> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
+ What you will learn: > [!div class="checklist"]
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following example shows you how to create a Container Apps environment in an
[!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)]
-Next, declare a variable to hold the VNET name.
+Register the `Microsoft.ContainerService` provider.
+
+# [Bash](#tab/bash)
+
+```bash
+az provider register --namespace Microsoft.ContainerService
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerService
+```
+++
+Declare a variable to hold the VNET name.
# [Bash](#tab/bash)
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
WITH (num varchar(100)) AS [IntToFloat]
The full fidelity schema representation is designed to handle the full breadth of polymorphic schemas in the schema-agnostic operational data. In this schema representation, no items are dropped from the analytical store even if the well-defined schema constraints (that is, no mixed data type fields nor mixed data type arrays) are violated.
-This is achieved by translating the leaf properties of the operational data into the analytical store with distinct columns based on the data type of values in the property. The leaf property names are extended with data types as a suffix in the analytical store schema such that they can be queries without ambiguity.
+This is achieved by translating the leaf properties of the operational data into the analytical store as JSON `key-value` pairs, where the datatype is the `key` and the property content is the `value`. This JSON object representation allows queries without ambiguity, and you can individually analyze each datatype.
-In the full fidelity schema representation, each datatype of each property will generate a column for that datatype. Each of them count as one of the 1000 maximum properties.
+In other words, in the full fidelity schema representation, each datatype of each property of each document will generate a `key-value` pair in a JSON object for that property. Each of them counts toward the limit of 1,000 maximum properties.
For example, let's take the following sample document in the transactional store:
salary: 1000000
} ```
-The leaf property `streetNo` within the nested object `address` will be represented in the analytical store schema as a column `address.object.streetNo.int32`. The datatype is added as a suffix to the column. This way, if another document is added to the transactional store where the value of leaf property `streetNo` is "123" (note it's a string), the schema of the analytical store automatically evolves without altering the type of a previously written column. A new column added to the analytical store as `address.object.streetNo.string` where this value of "123" is stored.
+The nested object `address` is a property at the root level of the document and will be represented as a column. Each leaf property in the `address` object will be represented as a JSON object: `{"object":{"streetNo":{"int32":15850},"streetName":{"string":"NE 40th St."},"zip":{"int32":98052}}}`.
-##### Data type to suffix map for full fidelity schema
+Unlike the well-defined schema representation, the full fidelity method allows variation in datatypes. If the next document in this collection of the example above has `streetNo` as a string, it will be represented in the analytical store as `"streetNo":{"string":"15850"}`. In the well-defined schema method, it wouldn't be represented.
-Here's a map of all the property data types and their suffix representations in the analytical store in full fidelity schema representation:
+
+##### Datatypes map for full fidelity schema
+
+Here's a map of all the property data types and their representations in the analytical store in full fidelity schema representation:
|Original data type |Suffix |Example | ||||
Here's a map of all the property data types and their suffix representations in
* Spark pools in Azure Synapse will represent these columns as `undefined`. * SQL serverless pools in Azure Synapse will represent these columns as `NULL`.
+##### Using full fidelity schema on Spark
+
+Spark will manage each datatype as a column when loading into a `DataFrame`. Let's assume a collection with the documents below.
+
+```json
+{
+ "_id" : "1" ,
+ "item" : "Pizza",
+ "price" : 3.49,
+ "rating" : 3,
+ "timestamp" : 1604021952.6790195
+},
+{
+ "_id" : "2" ,
+ "item" : "Ice Cream",
+ "price" : 1.59,
+ "rating" : "4" ,
+ "timestamp" : "2022-11-11 10:00 AM"
+}
+```
+
+While the first document has `rating` as a number and `timestamp` in UTC format, the second document has `rating` and `timestamp` as strings. Assuming that this collection was loaded into a `DataFrame` without any data transformation, the output of `df.printSchema()` is:
+
+```text
+root
+ |-- _rid: string (nullable = true)
+ |-- _ts: long (nullable = true)
+ |-- id: string (nullable = true)
+ |-- _etag: string (nullable = true)
+ |-- _id: struct (nullable = true)
+ | |-- objectId: string (nullable = true)
+ |-- item: struct (nullable = true)
+ | |-- string: string (nullable = true)
+ |-- price: struct (nullable = true)
+ | |-- float64: double (nullable = true)
+ |-- rating: struct (nullable = true)
+ | |-- int32: integer (nullable = true)
+ | |-- string: string (nullable = true)
+ |-- timestamp: struct (nullable = true)
+ | |-- float64: double (nullable = true)
+ | |-- string: string (nullable = true)
+ |-- _partitionKey: struct (nullable = true)
+ | |-- string: string (nullable = true)
+```
+
+In the well-defined schema representation, both `rating` and `timestamp` of the second document wouldn't be represented. In the full fidelity schema, you can use the following examples to individually access each value of each datatype.
+
+In the example below, we can use `PySpark` to run an aggregation:
+
+```python
+df.groupBy(df.item.string).sum().show()
+```
+
+In the example below, we can use Spark SQL from PySpark to run another aggregation:
+
+```python
+df.createOrReplaceTempView("Pizza")
+sql_results = spark.sql("SELECT sum(price.float64),count(*) FROM Pizza where timestamp.string is not null and item.string = 'Pizza'")
+sql_results.show()
+```
+
+##### Using full fidelity schema on SQL
+
+Considering the same documents from the Spark example above, you can use the following syntax example:
+
+```SQL
+SELECT rating, timestamp_string, timestamp_utc
+FROM OPENROWSET(PROVIDER = 'CosmosDB',
+ CONNECTION = 'Account=<your-database-account-name>;Database=<your-database-name>',
+ OBJECT = '<your-collection-name>',
+ SERVER_CREDENTIAL = '<your-synapse-sql-server-credential-name>')
+WITH (
+rating integer '$.rating.int32',
+timestamp_string varchar(50) '$.timestamp.string',
+timestamp_utc float '$.timestamp.float64'
+) as HTAP
+WHERE timestamp_string is not null or timestamp_utc is not null
+```
+
+Starting from the query above, you can implement transformations using `cast`, `convert`, or any other T-SQL function to manipulate your data. You can also hide complex datatype structures by using views.
+
+```SQL
+create view MyView as
+SELECT MyRating=rating,MyTimestamp = convert(varchar(50),timestamp_utc)
+FROM OPENROWSET(PROVIDER = 'CosmosDB',
+ CONNECTION = 'Account=<your-database-account-name>;Database=<your-database-name>',
+ OBJECT = '<your-collection-name>',
+ SERVER_CREDENTIAL = '<your-synapse-sql-server-credential-name>')
+WITH (
+rating integer '$.rating.int32',
+timestamp_utc float '$.timestamp.float64'
+) as HTAP
+WHERE timestamp_utc is not null
+union all
+SELECT MyRating=convert(integer,rating_string),MyTimestamp = timestamp_string
+FROM OPENROWSET(PROVIDER = 'CosmosDB',
+ CONNECTION = 'Account=<your-database-account-name>;Database=<your-database-name>',
+ OBJECT = '<your-collection-name>',
+ SERVER_CREDENTIAL = '<your-synapse-sql-server-credential-name>')
+WITH (
+rating_string varchar(50) '$.rating.string',
+timestamp_string varchar(50) '$.timestamp.string'
+) as HTAP
+WHERE timestamp_string is not null
+```
++ ##### Working with the MongoDB `_id` field
-the MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, `Full Fidelity Schema` will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. For correct visualization, you must convert the `_id` datatype as below:
+The MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, full fidelity schema will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. For correct visualization, you must convert the `_id` datatype as below:
-###### Spark
+###### Working with the MongoDB `_id` field in Spark
```Python import org.apache.spark.sql.types._
df = spark.read.format("cosmos.olap")\
df.select("id", "_id.objectId").show() ```
-###### SQL
+###### Working with the MongoDB `_id` field in SQL
```SQL SELECT TOP 100 id=CAST(_id as VARBINARY(1000))
The schema representation type decision must be made at the same time that Synap
> In the command above, replace `create` with `update` for existing accounts. With the PowerShell:
- ```
+ ```PowerShell
New-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount -EnableAnalyticalStorage true -AnalyticalStorageSchemaType "FullFidelity" ```
cosmos-db Burst Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md
After the 10 seconds is over, the burst capacity has been used up. If the worklo
To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page. + Before submitting your request: - Ensure that you have at least one Azure Cosmos DB account in the subscription. This account may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/change-feed.md
Change feed functionality is surfaced as change stream in API for MongoDB and Qu
Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB for Apache Cassandra enhances the ability to query the changes with predicate via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB for Apache Cassandra](cassandr).
-## Measuing change feed request unit consumption
+## Measuring change feed request unit consumption
Use Azure Monitor to measure the request unit (RU) consumption of the change feed. For more information, see [monitor throughput or request unit usage in Azure Cosmos DB](monitor-request-unit-usage.md).
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
To check whether an Azure Cosmos DB account is eligible for the preview, you can
:::image type="content" source="media/merge/throughput-and-scaling-category.png" alt-text="Screenshot of Throughput and Scaling content in Diagnose and solve issues page."::: ### How to identify containers to merge
Containers that meet both of these conditions are likely to benefit from merging
Condition 1 often occurs when you've previously scaled up the RU/s (often for a data ingestion) and now want to scale down in steady state. Condition 2 often occurs when you delete/TTL a large volume of data, leaving unused partitions.
-#### Criteria 1
+#### Condition 1
To determine the current RU/s per physical partition, from your Cosmos account, navigate to **Metrics**. Select the metric **Physical Partition Throughput** and filter to your database and container. Apply splitting by **PhysicalPartitionId**.
In the below example, we have an autoscale container provisioned with 5000 RU/s
:::image type="content" source="media/merge/RU-per-physical-partition-metric.png" alt-text="Screenshot of Azure Monitor metric Physical Partition Throughput in Azure portal.":::
-#### Criteria 2
+#### Condition 2
To determine the current average storage per physical partition, first find the overall storage (data + index) of the container.
Navigate to **Insights** > **Storage** > **Data & Index Usage**. The total stora
:::image type="content" source="media/merge/storage-per-container.png" alt-text="Screenshot of Azure Monitor storage (data + index) metric for container in Azure portal.":::
-Next, find the total number of physical partitions. This metric is the distinct number of **PhysicalPartitionIds** in the **PhysicalPartitionThroughput** chart we saw in Criteria 1. In our example, we have five physical partitions.
+Next, find the total number of physical partitions. This metric is the distinct number of **PhysicalPartitionIds** in the **PhysicalPartitionThroughput** chart we saw in Condition 1. In our example, we have five physical partitions.
Finally, calculate: Total storage in GB / number of physical partitions. In our example, we have an average of (74 GB / five physical partitions) = 14.8 GB per physical partition.
-Based on criteria 1 and 2, our container can potentially benefit from merging partitions.
+Based on conditions 1 and 2, our container can potentially benefit from merging partitions.
### Merging physical partitions
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-python.md
Title: Get started using Azure Cosmos DB for MongoDB and Python
-description: Presents a Python code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.
--
+ Title: Quickstart - Azure Cosmos DB for MongoDB for Python with MongoDB driver
+description: Learn how to build a Python app to manage Azure Cosmos DB for MongoDB account resources in this quickstart.
+++ - Previously updated : 04/26/2022 ms.devlang: python-+ Last updated : 11/08/2022+
-# Quickstart: Get started using Azure Cosmos DB for MongoDB and Python
+# Quickstart: Azure Cosmos DB for MongoDB for Python with MongoDB driver
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-> [!div class="op_single_selector"]
-> * [.NET](create-mongodb-dotnet.md)
-> * [Python](quickstart-python.md)
-> * [Java](quickstart-java.md)
-> * [Node.js](create-mongodb-nodejs.md)
-> * [Golang](quickstart-go.md)
->
+Get started with the PyMongo package to create databases, collections, and documents within your Azure Cosmos DB resource. Follow these steps to install the package and try out example code for basic tasks.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) are available on GitHub as a Python project.
+
+In this quickstart, you'll communicate with Azure Cosmos DB's API for MongoDB by using one of the open-source MongoDB client drivers for Python, [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/). Also, you'll use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands), which are designed to help you create and obtain database resources that are specific to the [Azure Cosmos DB capacity model](/azure/cosmos-db/account-databases-containers-items).
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [Python 3.8+](https://www.python.org/downloads/)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+### Prerequisite check
+
+* In a terminal or command window, run `python --version` to check that you have a recent version of Python.
+* Run ``az --version`` (Azure CLI) or `Get-Module -ListAvailable Az*` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+
+## Setting up
-This [quickstart](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) demonstrates how to:
-1. Create an [Azure Cosmos DB for MongoDB account](introduction.md)
-2. Connect to your account using PyMongo
-3. Create a sample database and collection
-4. Perform CRUD operations in the sample collection
+This section walks you through creating an Azure Cosmos DB account and setting up a project that uses the PyMongo package.
+
+### Create an Azure Cosmos DB account
+
+This quickstart will create a single Azure Cosmos DB account using the API for MongoDB.
+
+#### [Azure CLI](#tab/azure-cli)
++
+#### [PowerShell](#tab/azure-powershell)
++
+#### [Portal](#tab/azure-portal)
+++
-## Prerequisites to run the sample app
+### Get MongoDB connection string
-* [Python](https://www.python.org/downloads/) 3.9+ (It's best to run the [sample code](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) described in this article with this recommended version. Although it may work on older versions of Python 3.)
-* [PyMongo](https://pypi.org/project/pymongo/) installed on your machine
+#### [Azure CLI](#tab/azure-cli)
-<a id="create-account"></a>
-## Create a database account
+#### [PowerShell](#tab/azure-powershell)
-## Learn the object model
-Before you continue building the application, let's look into the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
+#### [Portal](#tab/azure-portal)
-* Azure Cosmos DB for MongoDB account
-* Databases
-* Collections
-* Documents
+++
+### Create a new Python app
+
+1. Create a new empty folder using your preferred terminal and change directory to the folder.
+
+ > [!NOTE]
+ > If you just want the finished code, download or fork and clone the [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) repo that has the full example. You can also `git clone` the repo in Azure Cloud Shell to walk through the steps shown in this quickstart.
+
+2. Create a *requirements.txt* file that lists the [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) and [python-dotenv](https://pypi.org/project/python-dotenv/) packages.
+
+ ```text
+ # requirements.txt
+ pymongo
+ python-dotenv
+ ```
+
+3. Create a virtual environment and install the packages.
+
+ #### [Windows](#tab/venv-windows)
+
+ ```bash
+ # py -3 uses the global python interpreter. You can also use python3 -m venv .venv.
+ py -3 -m venv .venv
+ source .venv/Scripts/activate
+ pip install -r requirements.txt
+ ```
+
+ #### [Linux / macOS](#tab/venv-linux+macos)
+
+ ```bash
+ python3 -m venv .venv
+ source .venv/bin/activate
+ pip install -r requirements.txt
+ ```
+
+
+
+### Configure environment variables
++
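+
+As an illustrative sketch, a minimal *.env* file could look like the following; the variable name `COSMOS_CONNECTION_STRING` is an assumption and must match whatever name your code reads:
+
+```text
+# .env - placeholder value; never commit real connection strings
+COSMOS_CONNECTION_STRING=<your-azure-cosmos-db-for-mongodb-connection-string>
+```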
+## Object model
+
+Let's look at the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
+
+* [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html) - The first step when working with PyMongo is to create a MongoClient to connect to Azure Cosmos DB's API for MongoDB. The client object is used to configure and execute requests against the service.
+
+* [Database](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html) - Azure Cosmos DB's API for MongoDB can support one or more independent databases.
+
+* [Collection](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html) - A database can contain one or more collections. A collection is a group of documents stored in MongoDB, and can be thought of as roughly the equivalent of a table in a relational database.
+
+* [Document](https://pymongo.readthedocs.io/en/stable/tutorial.html#documents) - A document is a set of key-value pairs. Documents have dynamic schema. Dynamic schema means that documents in the same collection don't need to have the same set of fields or structure. And common fields in a collection's documents may hold different types of data.
To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource model](../account-databases-containers-items.md) article.
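+
+As a rough sketch of how those layers map to PyMongo objects (the connection string is a placeholder; the database and collection names follow the sample used below):
+
+```python
+import pymongo
+
+client = pymongo.MongoClient("<your-connection-string>")  # MongoClient
+database = client["adventureworks"]                       # Database
+collection = database["products"]                         # Collection
+document = {"name": "Yamba Surfboard", "category": "gear-surf-surfboards"}  # Document
+```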
-## Get the code
+## Code examples
-Download the sample Python code [from the repository](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) or use git clone:
+* [Authenticate the client](#authenticate-the-client)
+* [Get database](#get-database)
+* [Get collection](#get-collection)
+* [Create an index](#create-an-index)
+* [Create a document](#create-a-document)
+* [Get a document](#get-a-document)
+* [Query documents](#query-documents)
-```shell
-git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started
-```
+The sample code described in this article creates a database named `adventureworks` with a collection named `products`. The `products` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier. The complete sample code is at https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started/tree/main/001-quickstart/.
-## Retrieve your connection string
+For the steps below, the database won't use sharding, and the sample shows a synchronous application using the [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) driver. For asynchronous applications, use the [Motor](https://www.mongodb.com/docs/drivers/motor/) driver.
-When running the sample code, you have to enter your API for MongoDB account's connection string. Use the following steps to find it:
+### Authenticate the client
-1. In the [Azure portal](https://portal.azure.com/), select your Azure Cosmos DB account.
+1. In the project directory, create a *run.py* file. In your editor, add import statements to reference the packages you'll use, including the PyMongo and python-dotenv packages.
-2. In the left navigation select **Connection String**, and then select **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the primary connection string.
+ :::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/001-quickstart/run.py" id="package_dependencies":::
-> [!WARNING]
-> Never check passwords or other sensitive data into source code.
+2. Get the connection information from the environment variable defined in an *.env* file.
+ :::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/001-quickstart/run.py" id="client_credentials":::
-## Run the code
+3. Define constants you'll use in the code.
-```shell
-python run.py
-```
+ :::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/001-quickstart/run.py" id="constant_values":::
-## Understand how it works
+### Connect to Azure Cosmos DB's API for MongoDB
-### Connecting
+Use the [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient) object to connect to your Azure Cosmos DB for MongoDB resource. The connect method returns a reference to the database.
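+
+A minimal connection sketch, adapted from the earlier version of this sample, validates the connection string by forcing a round trip with `server_info`; the environment variable name is an assumption:
+
+```python
+import os
+
+import pymongo
+from dotenv import load_dotenv
+
+load_dotenv()
+CONNECTION_STRING = os.environ["COSMOS_CONNECTION_STRING"]  # assumed variable name
+
+client = pymongo.MongoClient(CONNECTION_STRING)
+try:
+    client.server_info()  # validate the connection string
+except pymongo.errors.ServerSelectionTimeoutError:
+    raise TimeoutError("Invalid API for MongoDB connection string or timed out when attempting to connect")
+```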
-The following code prompts the user for the connection string. It's never a good idea to have your connection string in code since it enables anyone with it to read or write to your database.
-```python
-CONNECTION_STRING = getpass.getpass(prompt='Enter your primary connection string: ') # Prompts user for connection string
-```
+### Get database
-The following code creates a client connection to your API for MongoDB and tests to make sure it's valid.
+Check if the database exists with [list_database_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.list_database_names) method. If the database doesn't exist, use the [create database extension command](/azure/cosmos-db/mongodb/custom-commands#create-database) to create it with a specified provisioned throughput.
-```python
-client = pymongo.MongoClient(CONNECTION_STRING)
-try:
- client.server_info() # validate connection string
-except pymongo.errors.ServerSelectionTimeoutError:
- raise TimeoutError("Invalid API for MongoDB connection string or timed out when attempting to connect")
-```
-### Resource creation
-The following code creates the sample database and collection that will be used to perform CRUD operations. When creating resources programmatically, it's recommended to use the API for MongoDB extension commands (as shown here) because these commands have the ability to set the resource throughput (RU/s) and configure sharding.
+### Get collection
-Implicitly creating resources will work but will default to recommended values for throughput and will not be sharded.
+Check if the collection exists with the [list_collection_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database.list_collection_names) method. If the collection doesn't exist, use the [create collection extension command](/azure/cosmos-db/mongodb/custom-commands#create-collection) to create it.
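+
+A sketch of that check, reusing the create collection extension command from the earlier version of this sample:
+
+```python
+COLLECTION_NAME = "products"
+
+if COLLECTION_NAME not in db.list_collection_names():
+    # Creates an unsharded collection that uses the database's shared throughput
+    db.command({"customAction": "CreateCollection", "collection": COLLECTION_NAME})
+
+collection = db[COLLECTION_NAME]
+```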
-```python
-# Database with 400 RU throughput that can be shared across the DB's collections
-db.command({'customAction': "CreateDatabase", 'offerThroughput': 400})
-```
-```python
- # Creates a unsharded collection that uses the DB s shared throughput
-db.command({'customAction': "CreateCollection", 'collection': UNSHARDED_COLLECTION_NAME})
-```
+### Create an index
+
+Create an index using the [update collection extension command](/azure/cosmos-db/mongodb/custom-commands#update-collection). You can also set the index in the create collection extension command. Set the index on the `name` property in this example so that you can later sort on product name with the cursor class [sort](https://pymongo.readthedocs.io/en/stable/api/pymongo/cursor.html#pymongo.cursor.Cursor.sort) method.
++
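+
+A sketch of that command; the `indexSpecs` shape follows the linked custom-commands reference and is shown here for illustration:
+
+```python
+# Index the name property so later queries can sort on it
+db.command({
+    "customAction": "UpdateCollection",
+    "collection": COLLECTION_NAME,
+    "indexSpecs": [{"name": "name_1", "key": {"name": 1}}],
+})
+```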
+### Create a document
+
+Create a document with the *product* properties for the `adventureworks` database:
+
+* A *category* property. This property can be used as the logical partition key.
+* A *name* property.
+* An inventory *quantity* property.
+* A *sale* property, indicating whether the product is on sale.
++
+Create a document in the collection by calling the collection-level operation [update_one](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html#pymongo.collection.Collection.update_one). In this example, you'll *upsert* instead of *create* a new document. Upsert isn't necessary in this example because the product *name* is random. However, it's a good practice to upsert in case you run the code more than once and the product name is the same.
+
+The result of the `update_one` operation contains the `_id` field value that you can use in subsequent operations. The *_id* property was created automatically.
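+
+A sketch of that upsert; the product values are illustrative:
+
+```python
+from random import randint
+
+product = {
+    "category": "gear-surf-surfboards",  # can serve as the logical partition key
+    "name": f"Yamba Surfboard-{randint(50, 5000)}",
+    "quantity": 1,
+    "sale": False,
+}
+
+# Upsert on the product name so re-running the sample doesn't create duplicates
+result = collection.update_one(
+    {"name": product["name"]}, {"$set": product}, upsert=True
+)
+doc_id = result.upserted_id  # _id of the created document (None if an existing one was updated)
+```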
+
+### Get a document
+
+Use the [find_one](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html#pymongo.collection.Collection.find_one) method to get a document.
++
+In Azure Cosmos DB, you can perform a less-expensive [point read](https://devblogs.microsoft.com/cosmosdb/point-reads-versus-queries/) operation by using both the unique identifier (`_id`) and a partition key.
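+
+For example, a sketch of such a point read on the document upserted above:
+
+```python
+# Filter on both the unique identifier and the partition key (category)
+doc = collection.find_one({"_id": doc_id, "category": "gear-surf-surfboards"})
+print(doc)
+```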
+
+### Query documents
+
+After you insert a doc, you can run a query to get all docs that match a specific filter. This example finds all docs that match a specific category: `gear-surf-surfboards`. Once the query is defined, call [`Collection.find`](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html#pymongo.collection.Collection.find) to get a [`Cursor`](https://pymongo.readthedocs.io/en/stable/api/pymongo/cursor.html#pymongo.cursor.Cursor) result, and then use [sort](https://pymongo.readthedocs.io/en/stable/api/pymongo/cursor.html#pymongo.cursor.Cursor.sort).
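+
+A sketch of that query, sorting on the indexed name property:
+
+```python
+import pymongo  # for pymongo.ASCENDING
+
+cursor = collection.find({"category": "gear-surf-surfboards"}).sort("name", pymongo.ASCENDING)
+for doc in cursor:
+    print(doc)
+```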
-### Writing a document
-The following inserts a sample document we will continue to use throughout the sample. We get its unique _id field value so that we can query it in subsequent operations.
-```python
-"""Insert a sample document and return the contents of its _id field"""
-document_id = collection.insert_one({SAMPLE_FIELD_NAME: randint(50, 500)}).inserted_id
+Troubleshooting:
+
+* If you get an error such as `The index path corresponding to the specified order-by item is excluded.`, make sure you [created the index](#create-an-index).
+
+## Run the code
+
+This app creates an API for MongoDB database and collection, creates a document, and then reads the exact same document back. Finally, the example issues a query that returns documents that match a specified product *category*. With each step, the example outputs information to the console about the steps it has performed.
+
+To run the app, use a terminal to navigate to the application directory and run the application.
+
+```console
+python run.py
```
-### Reading/Updating a document
-The following queries, updates, and again queries for the document that we previously inserted.
+The output of the app should be similar to this example:
++
+## Clean up resources
+
+When you no longer need the Azure Cosmos DB for MongoDB account, you can delete the corresponding resource group.
-```python
-print("Found a document with _id {}: {}".format(document_id, collection.find_one({"_id": document_id})))
+### [Azure CLI](#tab/azure-cli)
-collection.update_one({"_id": document_id}, {"$set":{SAMPLE_FIELD_NAME: "Updated!"}})
-print("Updated document with _id {}: {}".format(document_id, collection.find_one({"_id": document_id})))
+Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
+
+```azurecli-interactive
+az group delete --name $resourceGroupName
```
-### Deleting a document
-Lastly, we delete the document we created from the collection.
-```python
-"""Delete the document containing document_id from the collection"""
-collection.delete_one({"_id": document_id})
+### [PowerShell](#tab/azure-powershell)
+
+Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
+
+```azurepowershell-interactive
+$parameters = @{
+ Name = $RESOURCE_GROUP_NAME
+}
+Remove-AzResourceGroup @parameters
```
+### [Portal](#tab/azure-portal)
+
+1. Navigate to the resource group you previously created in the [Azure portal](https://portal.azure.com).
+
+1. Select **Delete resource group**.
+
+1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
+++ ## Next steps
-In this quickstart, you've learned how to create an API for MongoDB account, create a database and a collection with code, and perform CRUD operations.
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+In this quickstart, you learned how to create an Azure Cosmos DB for MongoDB account, create a database, and create a collection using the PyMongo driver. You can now dive deeper into the Azure Cosmos DB for MongoDB to import more data, perform complex queries, and manage your Azure Cosmos DB MongoDB resources.
> [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
+> [Options to migrate your on-premises or cloud data to Azure Cosmos DB](/azure/cosmos-db/migration-choices)
cosmos-db Distribute Throughput Across Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/distribute-throughput-across-partitions.md
If you aren't seeing 429 responses and your end to end latency is acceptable, th
To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page. + Before submitting your request: - Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to. - Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
The Azure Cosmos DB team will review your request and contact you via email to c
To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Throughput redistribution across partition**. Run the **Check eligibility for throughput redistribution across partitions preview** diagnostic. ## Example scenario
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-dotnet-v3.md
Previously updated : 06/01/2022 Last updated : 11/09/2022 ms.devlang: csharp
The following classes have been replaced on the 3.0 SDK:
* `Microsoft.Azure.Documents.UriFactory`
-* `Microsoft.Azure.Documents.Document`
-
-* `Microsoft.Azure.Documents.Resource`
-
-The Microsoft.Azure.Documents.UriFactory class has been replaced by the fluent design.
+The `Microsoft.Azure.Documents.UriFactory` class has been replaced by the fluent design.
# [.NET SDK v3](#tab/dotnet-v3)
await client.CreateDocumentAsync(
+* `Microsoft.Azure.Documents.Document`
+ Because the .NET v3 SDK allows users to configure a custom serialization engine, there's no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
+* `Microsoft.Azure.Documents.Resource`
+
+There's no direct replacement for `Resource`. In cases where it was used for documents, follow the guidance for `Document`.
+
+* `Microsoft.Azure.Documents.AccessCondition`
+
+`IfNoneMatch` or `IfMatch` are now available on the `Microsoft.Azure.Cosmos.ItemRequestOptions` directly.
+ ### Changes to item ID generation Item ID is no longer auto populated in the .NET v3 SDK. Therefore, the Item ID must specifically include a generated ID. View the following example:
public Guid Id { get; set; }
### Changed default behavior for connection mode
-The SDK v3 now defaults to Direct + TCP connection modes compared to the previous v2 SDK, which defaulted to Gateway + HTTPS connections modes. This change provides enhanced performance and scalability.
+The SDK v3 now defaults to [Direct + TCP connection modes](sdk-connection-modes.md) compared to the previous v2 SDK, which defaulted to Gateway + HTTPS connections modes. This change provides enhanced performance and scalability.
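If Gateway mode is still required (for example, in locked-down networks), it can be opted into explicitly; a minimal sketch with placeholder credentials:

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "<account-endpoint>",
    "<auth-key>",
    new CosmosClientOptions { ConnectionMode = ConnectionMode.Gateway }); // v2-style default
```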
### Changes to FeedOptions (QueryRequestOptions in v3.0 SDK) The `FeedOptions` class in SDK v2 has now been renamed to `QueryRequestOptions` in SDK v3, and within the class, several properties have changed in name and/or default value, or have been removed completely.
-`FeedOptions.MaxDegreeOfParallelism` has been renamed to `QueryRequestOptions.MaxConcurrency` and default value and associated behavior remains the same, operations run client side during parallel query execution will be executed serially with no-parallelism.
-
-`FeedOptions.EnableCrossPartitionQuery` has been removed and the default behavior in SDK 3.0 is that cross-partition queries will be executed without the need to enable the property specifically.
-
-`FeedOptions.PopulateQueryMetrics` is enabled by default with the results being present in the `FeedResponse.Diagnostics` property of the response.
-
-`FeedOptions.RequestContinuation` has now been promoted to the query methods themselves.
-
-The following properties have been removed:
-
-* `FeedOptions.DisableRUPerMinuteUsage`
-
-* `FeedOptions.EnableCrossPartitionQuery`
-
-* `FeedOptions.JsonSerializerSettings`
-
-* `FeedOptions.PartitionKeyRangeId`
-
-* `FeedOptions.PopulateQueryMetrics`
+| .NET v2 SDK | .NET v3 SDK |
+|-|-|
+|`FeedOptions.MaxDegreeOfParallelism`|`QueryRequestOptions.MaxConcurrency` - Default value and associated behavior remain the same: operations run client-side during parallel query execution are executed serially, with no parallelism.|
+|`FeedOptions.PartitionKey`|`QueryRequestOptions.PartitionKey` - Behavior maintained. |
+|`FeedOptions.EnableCrossPartitionQuery`|Removed. Default behavior in SDK 3.0 is that cross-partition queries will be executed without the need to enable the property specifically. |
+|`FeedOptions.PopulateQueryMetrics`|Removed. It is now enabled by default and part of the [diagnostics](troubleshoot-dotnet-sdk.md#capture-diagnostics).|
+|`FeedOptions.RequestContinuation`|Removed. It is now promoted to the query methods themselves. |
+|`FeedOptions.JsonSerializerSettings`|Removed. Serialization can be customized through a [custom serializer](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.serializer) or [serializer options](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.serializeroptions).|
+|`FeedOptions.PartitionKeyRangeId`|Removed. Same outcome can be obtained from using [FeedRange](change-feed-pull-model.md#using-feedrange-for-parallelization) as input to the query method.|
+|`FeedOptions.DisableRUPerMinuteUsage`|Removed.|
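A sketch of how a typical v2 query configuration maps onto `QueryRequestOptions` (the existing `Container container`, the query text, and the `MyItem` type are assumptions):

```csharp
QueryRequestOptions options = new QueryRequestOptions
{
    MaxConcurrency = 4,                         // was FeedOptions.MaxDegreeOfParallelism
    PartitionKey = new PartitionKey("pk-value") // was FeedOptions.PartitionKey
};

// Cross-partition queries no longer need an explicit opt-in.
FeedIterator<MyItem> iterator = container.GetItemQueryIterator<MyItem>(
    "SELECT * FROM c", requestOptions: options);

while (iterator.HasMoreResults)
{
    FeedResponse<MyItem> page = await iterator.ReadNextAsync();
    // page.Diagnostics carries what FeedOptions.PopulateQueryMetrics used to surface
}
```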
### Constructing a client
catch (CosmosException cosmosException) {
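Completing that fragment, a minimal sketch of v3 client construction and error handling (endpoint, key, and database/container names are placeholders):

```csharp
using System;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<account-endpoint>", "<auth-key>");
Container container = client.GetContainer("mydb", "orders");

try
{
    await container.CreateItemAsync(new { id = "1", pk = "a" }, new PartitionKey("a"));
}
catch (CosmosException cosmosException) // v3 surfaces service errors as CosmosException
{
    Console.WriteLine($"Status code: {cosmosException.StatusCode}");
}
```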
### ConnectionPolicy
-Some settings in `ConnectionPolicy` have been renamed or replaced:
+Some settings in `ConnectionPolicy` have been renamed or replaced by `CosmosClientOptions`:
| .NET v2 SDK | .NET v3 SDK | |-|-|
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-request-rate-too-large.md
If there's high percent of rate limited requests and no hot partition:
If there's high percent of rate limited requests and there's an underlying hot partition: - Long-term, for best cost and performance, consider **changing the partition key**. The partition key can't be updated in place, so this requires migrating the data to a new container with a different partition key. Azure Cosmos DB supports a [live data migration tool](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/) for this purpose.-- Short-term, you can temporarily increase the RU/s to allow more throughput to the hot partition. This isn't recommended as a long-term strategy, as it leads to overprovisioning RU/s and higher cost.
+- Short-term, you can temporarily increase the overall RU/s of the resource to allow more throughput to the hot partition. This isn't recommended as a long-term strategy, as it leads to overprovisioning RU/s and higher cost.
+- Short-term, you can use the [**throughput redistribution across partitions feature** (preview)](distribute-throughput-across-partitions.md) to assign more RU/s to the physical partition that is hot. This is recommended only when the hot physical partition is predictable and consistent.
> [!TIP] > When you increase the throughput, the scale-up operation will either complete instantaneously or require up to 5-6 hours to complete, depending on the number of RU/s you want to scale up to. If you want to know the highest number of RU/s you can set without triggering the asynchronous scale-up operation (which requires Azure Cosmos DB to provision more physical partitions), multiply the number of distinct PartitionKeyRangeIds by 10,000 RU/s. For example, if you have 30,000 RU/s provisioned and 5 physical partitions (6000 RU/s allocated per physical partition), you can increase to 50,000 RU/s (10,000 RU/s per physical partition) in an instantaneous scale-up operation. Increasing to >50,000 RU/s would require an asynchronous scale-up operation. Learn more about [best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md).
cosmos-db Scaling Provisioned Throughput Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scaling-provisioned-throughput-best-practices.md
As a result, we see in the following diagram that each physical partition gets 3
In general, if you have a starting number of physical partitions `P`, and want to set a desired RU/s `S`:
-Increase your RU/s to: `10,000 * P * 2 ^ (ROUNDUP(LOG_2 (S/(10,000 * P)))`. This gives the closest RU/s to the desired value that will ensure all partitions are split evenly.
+Increase your RU/s to: `10,000 * P * (2 ^ ROUNDUP(LOG_2(S / (10,000 * P))))`. This gives the closest RU/s to the desired value that will ensure all partitions are split evenly.
> [!NOTE] > When you increase the RU/s of a database or container, this can impact the minimum RU/s you can lower to in the future. Typically, the minimum RU/s is equal to MAX(400 RU/s, Current storage in GB * 10 RU/s, Highest RU/s ever provisioned / 100). For example, if the highest RU/s you've ever scaled to is 100,000 RU/s, the lowest RU/s you can set in the future is 1000 RU/s. Learn more about [minimum RU/s](concepts-limits.md#minimum-throughput-limits). #### Step 2: Lower your RU/s to the desired RU/s
-For example, suppose we have five physical partitions, 50,000 RU/s and want to scale to 150,000 RU/s. We should first set: `10,000 * 5 * 2 ^ (ROUND(LOG_2(150,000/(10,000 * 5)))` = 200,000 RU/s, and then lower to 150,000 RU/s.
+For example, suppose we have five physical partitions and 50,000 RU/s, and want to scale to 150,000 RU/s. We should first set: `10,000 * 5 * (2 ^ ROUNDUP(LOG_2(150,000 / (10,000 * 5))))` = 200,000 RU/s, and then lower to 150,000 RU/s.
When we scaled up to 200,000 RU/s, the lowest manual RU/s we can now set in the future is 2000 RU/s. The [lowest autoscale max RU/s](autoscale-faq.yml#lowering-the-max-ru-s) we can set is 20,000 RU/s (scales between 2000 - 20,000 RU/s). Since our target RU/s is 150,000 RU/s, we are not affected by the minimum RU/s.
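A small sketch of that calculation in code, assuming .NET's `Math.Log2` (the helper name is illustrative):

```csharp
using System;

// Returns the scale-up target from the formula above:
// 10,000 * P * 2 ^ ROUNDUP(LOG_2(S / (10,000 * P)))
static double ScaleUpTarget(int physicalPartitions, double desiredRUs)
{
    double baseline = 10_000d * physicalPartitions;
    double factor = Math.Pow(2, Math.Ceiling(Math.Log2(desiredRUs / baseline)));
    return baseline * factor;
}

// ScaleUpTarget(5, 150_000) => 200,000 RU/s, matching the worked example.
```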
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
The process of key rotation and regeneration is simple. First, make sure that **
1. Select **Keys** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
# [If your application is currently using the secondary key](#tab/using-secondary-key)
The process of key rotation and regeneration is simple. First, make sure that **
1. Select **Keys** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
Azure Cosmos DB RBAC is the ideal access control method in situations where:
See [Configure role-based access control for your Azure Cosmos DB account](how-to-setup-rbac.md) to learn more about Azure Cosmos DB RBAC.
-For information and sample code to configure RBAC for the Azure Cosmso DB for MongoDB, see [Configure role-based access control for your Azure Cosmso DB for MongoDB](mongodb/how-to-setup-rbac.md).
+For information and sample code to configure RBAC for the Azure Cosmos DB for MongoDB, see [Configure role-based access control for your Azure Cosmos DB for MongoDB](mongodb/how-to-setup-rbac.md).
## <a id="resource-tokens"></a> Resource tokens
Resource tokens provide access to the application resources within a database. R
- Are created when a [user](#users) is granted [permissions](#permissions) to a specific resource. - Are recreated when a permission resource is acted upon by a POST, GET, or PUT call. - Use a hash resource token specifically constructed for the user, resource, and permission.-- Are time bound with a customizable validity period. The default valid time span is one hour. Token lifetime, however, may be explicitly specified, up to a maximum of five hours.
+- Are time bound with a customizable validity period. The default valid time span is one hour. Token lifetime, however, may be explicitly specified, up to a maximum of 24 hours.
- Provide a safe alternative to giving out the primary key. - Enable clients to read, write, and delete resources in the Azure Cosmos DB account according to the permissions they've been granted.
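As a hedged sketch of granting a resource token with the v3 .NET SDK (the existing `CosmosClient client` plus the database, container, user, and permission names are illustrative; the 24-hour expiry matches the maximum noted above):

```csharp
Database database = client.GetDatabase("mydb");
Container container = database.GetContainer("orders");

// Create (or update) the user, then grant a read-only permission on the container.
User user = (await database.UpsertUserAsync("app-user")).User;

PermissionProperties permission = new PermissionProperties(
    id: "orders-read",
    permissionMode: PermissionMode.Read,
    container: container);

PermissionResponse created = await user.CreatePermissionAsync(
    permission, tokenExpiryInSeconds: 24 * 60 * 60);

string resourceToken = created.Resource.Token; // hand this to the client app instead of the primary key
```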
As a database service, Azure Cosmos DB enables you to search, select, modify and
- To learn more about Azure Cosmos DB database security, see [Azure Cosmos DB Database security](database-security.md). - To learn how to construct Azure Cosmos DB authorization tokens, see [Access Control on Azure Cosmos DB Resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources). - For user management samples with users and permissions, see [.NET SDK v3 user management samples](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement/UserManagementProgram.cs)-- For information and sample code to configure RBAC for the Azure Cosmso DB for MongoDB, see [Configure role-based access control for your Azure Cosmso DB for MongoDB](mongodb/how-to-setup-rbac.md)
+- For information and sample code to configure RBAC for the Azure Cosmos DB for MongoDB, see [Configure role-based access control for your Azure Cosmos DB for MongoDB](mongodb/how-to-setup-rbac.md)
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/serverless.md
Any container that is created in a serverless account is a serverless container.
- You can't create a shared throughput database in a serverless account and doing so returns an error. - Serverless containers can store a maximum of 50 GB of data and indexes.
-> [!NOTE]
-> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
+### Serverless 1 TB container preview
+Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md). After the request is approved, all existing and future serverless accounts in the subscription will be able to use containers with size up to 1 TB.
## Monitoring your consumption
If you have used Azure Cosmos DB in provisioned throughput mode before, you'll f
When browsing the **Metrics** pane of your account, you'll find a chart named **Request Units consumed** under the **Overview** tab. This chart shows how many Request Units your account has consumed: You can find the same chart when using Azure Monitor, as described [here](monitor-request-unit-usage.md). Azure Monitor enables the ability to configure [alerts](../azure-monitor/alerts/alerts-metric-overview.md), which can be used to notify you when your Request Unit consumption has passed a certain threshold.
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
Title: Try Azure Cosmos DB free
-description: Try Azure Cosmos DB free of charge. No sign-up or credit card required. It's easy to test your apps, deploy, and run small workloads free for 30 days. Upgrade your account at any time during your trial.
+ Title: |
+ Try Azure Cosmos DB free
+description: |
+ Try Azure Cosmos DB free. No credit card required. Test your apps, deploy, and run small workloads free for 30 days. Upgrade your account at any time.
-+ Previously updated : 11/02/2022 Last updated : 11/07/2022 # Try Azure Cosmos DB free [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table, PostgreSQL](includes/appliesto-nosql-mongodb-cassandra-gremlin-table-postgresql.md)]
-[Try Azure Cosmos DB](https://aka.ms/trycosmosdb) makes it easy to try out Azure Cosmos DB for free before you commit. There's no credit card required to get started. Your account is free for 30 days. After expiration, a new sandbox account can be created. You can extend beyond 30 days for 24 hours. You can upgrade your active Try Azure Cosmos DB account at any time during the 30 day trial period. If you're using the API for NoSQL, migrate your Try Azure Cosmos DB data to your upgraded account.
+[Try Azure Cosmos DB](https://aka.ms/trycosmosdb) makes it easy to try out Azure Cosmos DB for free before you commit. There's no credit card required to get started. Your account is free for 30 days. After expiration, a new sandbox account can be created. You can extend beyond 30 days for 24 hours. You can upgrade your active Try Azure Cosmos DB account at any time during the 30 day trial period.
+
+If you're using the API for NoSQL or PostgreSQL, you can also migrate your Try Azure Cosmos DB data to your upgraded account before the trial ends.
This article walks you through how to create your account, limits, and upgrading your account. This article also walks through how to migrate your data from your Try Azure Cosmos DB sandbox to your own account using the API for NoSQL.
-## Try Azure Cosmos DB limits
+## Limits to free account
+
+### [NoSQL / Cassandra/ Gremlin / Table](#tab/nosql+cassandra+gremlin+table)
The following table lists the limits for the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) for Free trial. | Resource | Limit | | | |
-| Duration of the trial | 30 days (a new trial can be requested after expiration) After expiration, the information stored is deleted. Prior to expiration you can upgrade your account and migrate the information stored. |
-| Maximum containers per subscription (API for NoSQL, Gremlin, Table) | 1 |
-| Maximum containers per subscription (API for MongoDB) | 3 |
+| Duration of the trial | 30 days¹² |
+| Maximum containers per subscription | 1 |
| Maximum throughput per container | 5,000 | | Maximum throughput per shared-throughput database | 20,000 | | Maximum total storage per account | 10 GB |
-Try Azure Cosmos DB supports global distribution in only the Central US, North Europe, and Southeast Asia regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+¹ A new trial can be requested after expiration.
+² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored.
+
+> [!NOTE]
+> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+
+### [MongoDB](#tab/mongodb)
+
+The following table lists the limits for the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) for Free trial.
+
+| Resource | Limit |
+| | |
+| Duration of the trial | 30 days¹² |
+| Maximum containers per subscription | 3 |
+| Maximum throughput per container | 5,000 |
+| Maximum throughput per shared-throughput database | 20,000 |
+| Maximum total storage per account | 10 GB |
+
+¹ A new trial can be requested after expiration.
+² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored.
+
+> [!NOTE]
+> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+
+### [PostgreSQL](#tab/postgresql)
+
+The following table lists the limits for the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) for Free trial.
+
+| Resource | Limit |
+| | |
+| Duration of the trial | 30 days¹² |
+
+¹ A new trial can be requested after expiration.
+² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored.
+
+> [!NOTE]
+> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
++ ## Create your Try Azure Cosmos DB account
From the [Try Azure Cosmos DB home page](https://aka.ms/trycosmosdb), select an
Launch the Quickstart in Data Explorer in Azure portal to start using Azure Cosmos DB or get started with our documentation.
-* [API for NoSQL Quickstart](nosql/quickstart-portal.md#create-container-database)
-* [API for PostgreSQL Quickstart](postgresql/quickstart-create-portal.md)
-* [API for MongoDB Quickstart](mongodb/quickstart-python.md#learn-the-object-model)
+* [API for NoSQL](nosql/quickstart-portal.md#create-container-database)
+* [API for PostgreSQL](postgresql/quickstart-create-portal.md)
+* [API for MongoDB](mongodb/quickstart-python.md#object-model)
* [API for Apache Cassandra](cassandr) * [API for Apache Gremlin](gremlin/quickstart-console.md#add-a-graph) * [API for Table](table/quickstart-dotnet.md)
-You can also get started with one of the learning resources in Data Explorer.
+You can also get started with one of the learning resources in the Data Explorer.
:::image type="content" source="media/try-free/data-explorer.png" lightbox="media/try-free/data-explorer.png" alt-text="Screenshot of the Azure Cosmos DB Data Explorer landing page.":::
You can also get started with one of the learning resources in Data Explorer.
Your account is free for 30 days. After expiration, a new sandbox account can be created. You can upgrade your active Try Azure Cosmos DB account at any time during the 30 day trial period. Here are the steps to start an upgrade.
-1. Select the option to upgrade your current account in the Dashboard page or from the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) page.
+### Start upgrade
- :::image type="content" source="media/try-free/upgrade-account.png" lightbox="media/try-free/upgrade-account.png" alt-text="Confirmation page for the account upgrade experience.":::
+1. From either the Azure portal or the Try Azure Cosmos DB free page, select the option to **Upgrade** your account.
-1. Select **Sign up for Azure Account** & create an Azure Cosmos DB account.
+ :::image type="content" source="media/try-free/upgrade-account.png" lightbox="media/try-free/upgrade-account.png" alt-text="Screenshot of the confirmation page for the account upgrade experience.":::
-You can migrate your database from Try Azure Cosmos DB to your new Azure account if you're utilizing the API for NoSQL after you've signed up for an Azure account. Here are the steps to migrate.
+1. Choose to either **Sign up for an Azure account** or **Sign in** and create a new Azure Cosmos DB account following the instructions in the next section.
-### Create an Azure Cosmos DB account
+### Create a new account
-
-Navigate back to the **Upgrade** page and select **Next** to move on to the third step and move your data.
+#### [NoSQL / MongoDB / Cassandra / Gremlin / Table](#tab/nosql+mongodb+cassandra+gremlin+table)
> [!NOTE]
-> You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you do not see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
+> While this example uses API for NoSQL, the steps are similar for the APIs for MongoDB, Cassandra, Gremlin, or Table.
-## Migrate your Try Azure Cosmos DB data
+#### [PostgreSQL](#tab/postgresql)
-If you're using the API for NoSQL, you can migrate your Try Azure Cosmos DB data to your upgraded account. Here's how to migrate your Try Azure Cosmos DB database to your new Azure Cosmos DB API for NoSQL account.
-### Prerequisites
+
-* Must be using the Azure Cosmos DB API for NoSQL.
-* Must have an active Try Azure Cosmos DB account and Azure account.
-* Must have an Azure Cosmos DB account using the API for NoSQL in your Azure subscription.
+### Move data to your new account
-### Migrate your data
+1. Navigate back to the **Upgrade** page from the [Start upgrade](#start-upgrade) section of this guide. Select **Next** to move on to the third step and move your data.
-1. Locate your **Primary Connection string** for the Azure Cosmos DB account you created for your data.
+ :::image type="content" source="media/try-free/account-creation-options.png" lightbox="media/try-free/account-creation-options.png" alt-text="Screenshot of the sign-in/sign-up experience to upgrade your current account.":::
- 1. Go to your Azure Cosmos DB Account in the Azure portal.
+## Migrate your data
- 1. Find the connection string of your new Azure Cosmos DB account within the **Keys** page of your new account.
+### [NoSQL / MongoDB / Cassandra / Gremlin / Table](#tab/nosql+mongodb+cassandra+gremlin+table)
- :::image type="content" source="media/try-free/migrate-data.png" lightbox="media/try-free/migrate-data.png" alt-text="Screenshot of the Keys page for an Azure Cosmos DB account.":::
+> [!NOTE]
+> While this example uses API for NoSQL, the steps are similar for the APIs for MongoDB, Cassandra, Gremlin, or Table.
+
+1. Locate your **Primary Connection string** for the Azure Cosmos DB account you created for your data. This information can be found within the **Keys** page of your new account.
+
+ :::image type="content" source="media/try-free/account-keys.png" lightbox="media/try-free/account-keys.png" alt-text="Screenshot of the Keys page for an Azure Cosmos DB account.":::
+
+1. Back in the **Upgrade** page from the [Start upgrade](#start-upgrade) section of this guide, insert the connection string of the new Azure Cosmos DB account in the **Connection string** field.
+
+ :::image type="content" source="media/try-free/migrate-data.png" lightbox="media/try-free/migrate-data.png" alt-text="Screenshot of the migrate data options in the portal.":::
-1. Insert the connection string of the new Azure Cosmos DB account in the **Upgrade your account** page.
+1. Select **Next** to move the data to your account. Provide your email address to be notified by email once the migration has been completed.
+
+### [PostgreSQL](#tab/postgresql)
+
+1. Locate your **PostgreSQL connection URL** of the Azure Cosmos DB account you created for your data. This information can be found within the **Connection String** page of your new account.
+
+1. Back in the **Upgrade** page from the [Start upgrade](#start-upgrade) section of this guide, insert the connection string of the new Azure Cosmos DB account in the **Connection string** field.
1. Select **Next** to move the data to your account.
-1. Provide your email address to be notified by email once the migration has been completed.
+ ## Delete your account There can only be one free Try Azure Cosmos DB account per Microsoft account. You may want to delete your account, or you'll have to create a new account to try different APIs. Here's how to delete your account.
-1. Go to the [Try AzureAzure Cosmos DB](https://aka.ms/trycosmosdb) page.
+1. Go to the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) page.
-1. Select Delete my account.
+1. Select **Delete my account**.
- :::image type="content" source="media/try-free/upgrade-account.png" lightbox="media/try-free/upgrade-account.png" alt-text="Confirmation page for the account upgrade experience.":::
+ :::image type="content" source="media/try-free/delete-account.png" lightbox="media/try-free/delete-account.png" alt-text="Screenshot of the confirmation page for the account deletion experience.":::
## Next steps
After you create a Try Azure Cosmos DB sandbox account, you can start building a
* Get started with Azure Cosmos DB with one of our quickstarts: * [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-portal.md#create-container-database) * [Get started with Azure Cosmos DB for PostgreSQL](postgresql/quickstart-create-portal.md)
- * [Get started with Azure Cosmos DB for MongoDB](mongodb/quickstart-python.md#learn-the-object-model)
+ * [Get started with Azure Cosmos DB for MongoDB](mongodb/quickstart-python.md#object-model)
* [Get started with Azure Cosmos DB for Cassandra](cassandr) * [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-console.md#add-a-graph) * [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
tags: billing
Previously updated : 11/04/2022 Last updated : 11/08/2022
Users with a Microsoft Customer Agreement must always submit a request to Azure
> * An outstanding invoice is paid by your default payment method. In order to have it paid by check or wire transfer, you must change your default payment method to check or wire transfer after you've been approved. > * Currently, payment by check or wire transfer isn't supported for Global Azure in China. > * For Microsoft Online Services Program accounts, if you switch to pay by check or wire transfer, you can't switch back to paying by credit or debit card.
+> * Currently, only customers in the United States can get automatically approved to change their payment method to check/wire transfer. Support for other regions is being evaluated.
## Request to pay by check or wire transfer
+> [!NOTE]
+> Currently only customers in the United States can get automatically approved to change their payment method to check/wire transfer. Support for other regions is being evaluated. If you are not in the United States, you must [Submit a request to set up pay by check or wire transfer](#submit-a-request-to-set-up-pay-by-check-or-wire-transfer) to change your payment method.
+ 1. Sign in to the Azure portal. 1. Navigate to **Subscriptions** and then select the one that you want to set up check or wire transfer for. 1. In the left menu, select **Payment methods**.
Users with a Microsoft Customer Agreement must always submit a request to Azure
## Submit a request to set up pay by check or wire transfer
+Users in all regions can submit a request to pay by check or wire transfer through support. Currently, only customers in the United States can get automatically approved to change their payment method to check/wire transfer.
+ If you're not automatically approved, you can submit a request to Azure support to approve payment by check or wire transfer. If your request is approved, you can switch to pay by check or wire transfer in the Azure portal. 1. Sign in to the Azure portal to submit a support request. Search for and select **Help + support**.
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-linked-services.md
Linked services can be created in the Azure Data Factory UX via the [management
You can create linked services by using one of these tools or SDKs: [.NET API](quickstart-create-data-factory-dot-net.md), [PowerShell](quickstart-create-data-factory-powershell.md), [REST API](quickstart-create-data-factory-rest-api.md), [Azure Resource Manager Template](quickstart-create-data-factory-resource-manager-template.md), and [Azure portal](quickstart-create-data-factory-portal.md).
+When creating a linked service, the user needs appropriate authorization to the designated service. If sufficient access is not granted, the user will not be able to see the available resources and will need to use the manual entry option.
## Data store linked services
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sharepoint-online-list.md
The SharePoint List Online connector uses service principal authentication to co
1. Open SharePoint Online site link e.g. `https://[your_site_url]/_layouts/15/appinv.aspx` (replace the site URL). 2. Search the application ID you registered, fill the empty fields, and click "Create".
- - App Domain: `localhost.com`
- - Redirect URL: `https://www.localhost.com`
+ - App Domain: `contoso.com`
+ - Redirect URL: `https://www.contoso.com`
- Permission Request XML: ```xml
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
Here are details of the application's actions and arguments:
> [!NOTE] > Release Notes are available on the same [Microsoft integration runtime download page](https://www.microsoft.com/download/details.aspx?id=39717).
-## Service account for Self-hosted integration runtime
+## Service account for self-hosted integration runtime
-The default log on service account of Self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**.
+The default log on service account of the self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**.
Make sure the account has the **Log on as a service** permission. Otherwise, the self-hosted integration runtime can't start successfully. You can check the permission in **Local Security Policy -> Security Settings -> Local Policies -> User Rights Assignment -> Log on as a service**
When the processor and available RAM aren't well utilized, but the execution of
> > Data movement in transit from a self-hosted IR to other data stores always happens within an encrypted channel, regardless of whether or not this certificate is set.
-### Credential Sync
+### Credential sync
If you don't store credentials or secret values in an Azure Key Vault, the credentials or secret values will be stored in the machines where your self-hosted integration runtime locates. Each node will have a copy of credential with certain version. In order to make all nodes work together, the version number should be the same for all nodes. ## Proxy server considerations
If you see error messages like the following ones, the likely reason is improper
```output Unable to connect to the remote server
- A component of Integration Runtime has become unresponsive and restarts automatically. Component name: Integration Runtime (Self-hosted).
+ A component of Integration Runtime has become unresponsive and restarts automatically. Component name: Integration Runtime (self-hosted).
``` ### Enable remote access from an intranet
There are two ways to store the credentials when using self-hosted integration r
This is the recommended way to store your credentials in Azure. The self-hosted integration runtime can get the credentials directly from Azure Key Vault, which avoids potential security issues and credential in-sync problems between self-hosted integration runtime nodes. 2. Store credentials locally. The credentials will be pushed to the machine of your self-hosted integration runtime and encrypted.
-When your self-hosted integration runtime is recovered from crash, you can either recover credential from the one you backup before or edit linked service and let the credential be pushed to self-hosted integration runtime again. Otherwise, the pipeline doesn't work due to the lack of credential when running via self-hosted integration runtime.
+When your self-hosted integration runtime is recovered from a crash, you can either restore the credentials from a backup you made earlier, or edit the linked service and let the credentials be pushed to the self-hosted integration runtime again. Otherwise, the pipeline doesn't work when running via the self-hosted integration runtime, due to the missing credentials.
> [!NOTE] > If you prefer to store the credential locally, you need to put the domain for interactive authoring in the allowlist of your firewall > and open the port. This channel is also for the self-hosted integration runtime to get the credentials.
You can install the self-hosted integration runtime by downloading a Managed Ide
- Regularly back up the credentials associated with the self-hosted integration runtime. - To automate self-hosted IR setup operations, refer to [Set up an existing self hosted IR via PowerShell](#setting-up-a-self-hosted-integration-runtime).
+## Important considerations
+
+When installing a self-hosted integration runtime, consider the following:
+
+- Keep it close to your data source, but not necessarily on the same machine
+- Don't install it on the same machine as the Power BI gateway
+- Use Windows Server only (FIPS-compliant encryption servers might cause jobs to fail)
+- You can share it across multiple data sources
+- You can share it across multiple data factories
+ ## Next steps For step-by-step instructions, see [Tutorial: Copy on-premises data to cloud](tutorial-hybrid-copy-powershell.md).
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 10/14/2022 Last updated : 11/08/2022 # Manage Azure Data Factory studio preview experience
There are two ways to enable preview experiences.
* [Data preview](#data-preview) [**Pipeline experimental view**](#pipeline-experimental-view)
- * [Adding activities](#adding-activities)
- * [Iteration & conditionals container view](#iteration-and-conditionals-container-view)
+ * [Dynamic content flyout](#dynamic-content-flyout)
[**Monitoring experimental view**](#monitoring-experimental-view)
- * [Simplified default monitoring view](#simplified-default-monitoring-view)
* [Error message relocation to Status column](#error-message-relocation-to-status-column)
+ * [Hierarchy view](#hierarchy-view)
+ * [Simplified default monitoring view](#simplified-default-monitoring-view)
### Dataflow data-first experimental view
Columns can be rearranged by dragging a column by its header. You can also sort
UI (user interface) changes have been made to activities in the pipeline editor canvas. These changes were made to simplify and streamline the pipeline creation process.
-#### Adding activities to the canvas
-
-> [!NOTE]
-> This experience is now available in the default ADF settings.
-
-You now have the option to add an activity using the Add button in the bottom right corner of an activity in the pipeline editor canvas. Clicking the button will open a drop-down list of all activities that you can add.
-
-Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas and automatically linked with the previous activity on success.
--
-#### Iteration and conditionals container view
-
-> [!NOTE]
-> This experience is now available in the default ADF settings.
-
-You can now view the activities contained iteration and conditional activities.
-
+#### Dynamic content flyout
-##### Adding Activities
+A new flyout has been added to make it easier to set dynamic content in your pipeline activities without having to use the expression builder. The dynamic content flyout is currently supported in these activities and settings:
-You have two options to add activities to your iteration and conditional activities.
+| **Activity** | **Setting name** |
+| | |
+| Azure Function | Function Name |
+| Databricks-Notebook | Notebook path |
+| Databricks-Jar | Main class name |
+| Databricks-Python | Python file |
+| Fail | Fail message |
+| Fail | Error code |
+| Web | Url |
+| Webhook | Url |
+| Wait | Wait time in seconds |
+| Filter | Items |
+| Filter | Conditions |
+| ForEach | Items |
+| If/Switch/Until | Expression |
+
+In supported activities, you will see an icon next to the setting. Clicking this icon will open the flyout where you can choose your dynamic content.
++
-1. Use the + button in your container to add an activity.
+### Monitoring experimental view
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-12.png" alt-text="Screenshot of new activity container with the add button highlighted on the left side of the center of the screen.":::
-
- Clicking this button will bring up a drop-down list of all activities that you can add.
+UI (user interface) changes have been made to the monitoring page. These changes were made to simplify and streamline your monitoring experience.
+The monitoring experience remains the same as detailed [here](monitor-visually.md), except for items detailed below.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-13.png" alt-text="Screenshot of a drop-down list in the activity container with all the activities listed.":::
-
- Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the container.
+#### Error message relocation to Status column
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-14.png" alt-text="Screenshot of the container with three activities in the center of the container.":::
+To make it easier for you to view errors when you see a **Failed** pipeline run, error messages have been relocated to the **Status** column.
-> [!NOTE]
-> If your container includes more than 5 activities, only the first 4 will be shown in the container preview.
+Find the error icon in the pipeline monitoring page and in the pipeline **Output** tab after debugging your pipeline.
-2. Use the edit button in your container to see everything within the container. You can use the canvas to edit or add to your pipeline.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-15.png" alt-text="Screenshot of the container with the edit button highlighted on the right side of a box in the center of the screen.":::
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-16.png" alt-text="Screenshot of the inside of the container with three activities linked together.":::
-
- Add additional activities by dragging new activities to the canvas or click the add button on the right-most activity to bring up a drop-down list of all activities.
+#### Hierarchy view
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-17.png" alt-text="Screenshot of the Add activity button in the bottom left corner of the right-most activity.":::
-
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-18.png" alt-text="Screenshot of the drop-down list of activities in the right-most activity.":::
-
- Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the container.
+When monitoring your pipeline run, you have the option to enable the hierarchy view, which will provide a consolidated view of the activities that ran.
+This view is available in the output of your pipeline debug run and in the detailed monitoring view found in the monitoring tab.
-##### Adjusting activity size
+##### How to enable the hierarchy view in pipeline debug output
-Your containerized activities can be viewed in two sizes. In the expanded size, you will be able to see all the activities in the container.
+In the **Output** tab in your pipeline, there is a new dropdown to select your monitoring view.
-To save space on your canvas, you can also collapse the containerized view using the **Minimize** arrows found in the top right corner of the activity.
+Select **Hierarchy** to see the new hierarchy view. If you have iteration or conditional activities, the nested activities will be grouped under the parent activity.
-This will shrink the activity size and hide the nested activities.
+Click the button next to the iteration or conditional activity to collapse the nested activities for a more consolidated view.
-If you have multiple container activities, you can save time by collapsing or expanding all activities at once by right clicking on the canvas. This will bring up the option to hide all nested activities.
+##### How to enable the hierarchy view in pipeline monitoring
+In the detailed view of your pipeline run, there is a new dropdown to select your monitoring view next to the Status filter.
-Click **Hide nested activities** to collapse all containerized activities. To expand all the activities, click **Show nested activities**, found in the same list of canvas options.
+Select **Hierarchy** to see the new hierarchy view. If you have iteration or conditional activities, the nested activities will be grouped under the parent activity.
-### Monitoring experimental view
+Click the button next to the iteration or conditional activity to collapse the nested activities for a more consolidated view.
-UI (user interfaces) changes have been made to the monitoring page. These changes were made to simplify and streamline your monitoring experience.
-The monitoring experience remains the same as detailed [here](monitor-visually.md), except for items detailed below.
#### Simplified default monitoring view
The default monitoring view has been simplified with fewer default columns. You
| Error | If the pipeline failed, the run error | | Run ID | ID of the pipeline run | - You can edit your default view by clicking **Edit Columns**. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-21.png" alt-text="Screenshot of the Edit Columns button in the center of the top row.":::
Add columns by clicking **Add column** or remove columns by clicking the trashca
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-22.png" alt-text="Screenshot of the Add column button and trashcan icon to edit column view.":::
-#### Error message relocation to Status column
-
-Error messages have now been relocated to the **Status** column. This will allow you to easily view errors when you see a **Failed** pipeline run.
-
-Find the error icon in the pipeline monitoring page and in the pipeline **Output** tab after debugging your pipeline.
--- ## Provide feedback We want to hear from you! If you see this pop-up, please let us know your thoughts by providing feedback on the updates you've tested.
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md
Last updated 09/22/2022
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This article explains and demonstrates the Azure Data Factory pricing model with detailed examples. You can also refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+This article explains and demonstrates the Azure Data Factory pricing model with detailed examples. You can also refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service. To understand how to estimate pricing for any scenario, not just the examples here, refer to the article [Plan and manage costs for Azure Data Factory](plan-manage-costs.md).
For more details about pricing in Azure Data Factory, refer to the [Data Pipeline Pricing and FAQ](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/).
databox-online Azure Stack Edge Gpu 2207 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2207-release-notes.md
Previously updated : 08/04/2022 Last updated : 11/09/2022
The following release notes identify the critical open issues and the resolved i
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2207** release, which maps to software version number **2.2.2037.5375**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
+This article applies to the **Azure Stack Edge 2207** release, which maps to software version number **2.2.2038.5916**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
## What's new
-The 2207 release has the following features and enhancements:
+The 2207 release has the following features and enhancements:
- **Kubernetes version update** - This release contains a Kubernetes version update from 1.20.9 to v1.22.6.
databox-online Azure Stack Edge Gpu 2209 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2209-release-notes.md
Previously updated : 09/21/2022 Last updated : 11/10/2022
The following release notes identify the critical open issues and the resolved i
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2209** release, which maps to software version **2.2.2088.5593**. This software can be applied to your device if you're running at least **Azure Stack Edge 2207** (2.2.2307.5375).
+This article applies to the **Azure Stack Edge 2209** release, which maps to software version **2.2.2088.5593**. This software can be applied to your device if you're running at least **Azure Stack Edge 2207** (2.2.2038.5916).
> [!IMPORTANT] > Azure Stack Edge 2209 update contains critical security fixes. As with any new release, we strongly encourage customers to apply this update at the earliest opportunity.
databox-online Azure Stack Edge Pro 2 Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-prep.md
Previously updated : 05/03/2022 Last updated : 11/04/2022 # Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro 2 so I can use it to transfer data to Azure.
Ordering through Azure Edge Hardware Center will create an Azure resource that w
[!INCLUDE [Create management resource](../../includes/azure-edge-hardware-center-create-management-resource.md)] - ## Get the activation key After the Azure Stack Edge resource is up and running, you'll need to get the activation key. This key is used to activate and connect your Azure Stack Edge Pro 2 device with the resource. You can get this key now while you are in the Azure portal.
databox-online Azure Stack Edge Pro 2 Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-technical-specifications-compliance.md
Previously updated : 11/03/2022 Last updated : 11/09/2022
The following table lists the dimensions of the shipping package in millimeters
### Enclosure weight
-# [Model 642GT](#tab/sku-a)
+# [Model 64G2T](#tab/sku-a)
| Line # | Hardware | Weight lbs | |--|||
-| 1 | Model 642GT | 21.0 |
+| 1 | Model 64G2T | 21.0 |
| | | | | 2 | Shipping weight, with 4-post mount | 35.3 |
-| 3 | Model 642GT install handling, 4-post (without bezel and with inner rails attached) | 20.4 |
+| 3 | Model 64G2T install handling, 4-post (without bezel and with inner rails attached) | 20.4 |
| | | | | 4 | Shipping weight, with 2-post mount | 32.1 |
-| 5 | Model 642GT install handling, 2-post (without bezel and with inner rails attached) | 20.4 |
+| 5 | Model 64G2T install handling, 2-post (without bezel and with inner rails attached) | 20.4 |
| | | | | 6 | Shipping weight with wall mount | 31.1 |
-| 7 | Model 642GT install handling without bezel | 19.8 |
+| 7 | Model 64G2T install handling without bezel | 19.8 |
| | | |
-| 4 | 4-post in box | 6.28 |
-| 7 | 2-post in box | 3.08 |
+| 8 | 4-post in box | 6.28 |
+| 9 | 2-post in box | 3.08 |
| 10 | Wallmount as packaged | 2.16 | # [Model 128G4T1GPU](#tab/sku-b)
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
To remediate the issues:
1. For further details, and the list of affected machines, select an alert.
- The alerts page shows the more details of the alerts and provides a **Take action** link with recommendations of how to mitigate the threat.
+ The security alerts page shows more details of the alerts and provides a **Take action** link with recommendations of how to mitigate the threat.
:::image type="content" source="media/adaptive-application/adaptive-application-alerts-start-time.png" alt-text="The start time of adaptive application controls alerts is the time that adaptive application controls created the alert."::: > [!NOTE]
- > Adaptive application controls calculates events once every twelve hours. The "activity start time" shown in the alerts page is the time that adaptive application controls created the alert, **not** the time that the suspicious process was active.
+ > Adaptive application controls calculates events once every twelve hours. The "activity start time" shown in the security alerts page is the time that adaptive application controls created the alert, **not** the time that the suspicious process was active.
## Move a machine from one group to another
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
Use sample alerts to:
To create sample alerts:
-1. As a user with the role **Subscription Contributor**, from the toolbar on the alerts page, select **Create sample alerts**.
+1. As a user with the role **Subscription Contributor**, from the toolbar on the security alerts page, select **Sample alerts**.
1. Select the subscription. 1. Select the relevant Microsoft Defender plan/s for which you want to see alerts. 1. Select **Create sample alerts**.
You can simulate alerts for both of the control plane, and workload alerts with
1. Wait 30 minutes.
-1. In the Azure portal, navigate to the Defender for Cloud's alerts page.
+1. In the Azure portal, navigate to the Defender for Cloud's security alerts page.
1. On the relevant Kubernetes cluster, locate the following alert `Microsoft Defender for Cloud test alert for K8S (not a threat)`
You can simulate alerts for both of the control plane, and workload alerts with
1. Wait 10 minutes.
-1. In the Azure portal, navigate to the Defender for Cloud's alerts page.
+1. In the Azure portal, navigate to the Defender for Cloud's security alerts page.
1. On the relevant AKS cluster, locate the following alert `Microsoft Defender for Cloud test alert (not a threat)`.
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Overview of Cloud Security Posture Management (CSPM)
description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 10/30/2022 Last updated : 11/09/2022 # Cloud Security Posture Management (CSPM)
Defender for Cloud continually assesses your resources, subscriptions, and organ
|Aspect|Details| |-|:-| |Release state:| Foundational CSPM capabilities: GA <br> Defender Cloud Security Posture Management (CSPM): Preview |
+| Prerequisites | - **Foundational CSPM capabilities** - None <br> <br> - **Defender Cloud Security Posture Management (CSPM)** - Agentless scanning requires the **Subscription Owner** to enable the plan. Anyone with a lower level of authorization can enable the Defender CSPM plan but the agentless scanner won't be enabled by default due to lack of permissions. Attack path analysis and security explorer won't be populated with vulnerabilities because the agentless scanner is disabled. |
|Clouds:| **Foundational CSPM capabilities** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br> <br> For Connected AWS accounts and GCP projects availability, see the [feature availability](#defender-cspm-plan-options) table. <br> <br> **Defender Cloud Security Posture Management (CSPM)** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br> <br> For Connected AWS accounts and GCP projects availability, see the [feature availability](#defender-cspm-plan-options) table. | ## Defender CSPM plan options
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
zone_pivot_groups: k8s-host Previously updated : 07/25/2022 Last updated : 10/30/2022 # Enable Microsoft Defender for Containers
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
The triggers for an image scan are:
- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image. -- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
+- **On import** - Azure Container Registry has import tools to bring images to your registry from an existing registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
- **Continuous scan**- This trigger has two modes:
defender-for-cloud Defender For Databases Enable Cosmos Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-enable-cosmos-protections.md
You can use sample Microsoft Defender for Azure Cosmos DB alerts to evaluate the
1. Sign in to the [Azure portal](https://portal.azure.com/) as a Subscription Contributor user.
-1. Navigate to the Alerts page.