Updates from: 11/11/2022 02:15:03
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
To use MS Graph API, and interact with resources in your Azure AD B2C tenant, yo
- [Update a user](/graph/api/user-update)
- [Delete a user](/graph/api/user-delete)
-## User phone number management (beta)
+## User phone number management
A phone number can be used by a user to sign in using [SMS or voice calls](sign-in-options.md#phone-sign-in), or for [multifactor authentication](multi-factor-authentication.md). For more information, see [Azure AD authentication methods API](/graph/api/resources/phoneauthenticationmethod).
Note, the [list](/graph/api/authentication-list-phonemethods) operation returns
![Enable phone sign-in](./media/microsoft-graph-operations/enable-phone-sign-in.png)
> [!NOTE]
-> In the current beta version, this API works only if the phone number is stored with a space between the country code and the phone number. The Azure AD B2C service doesn't currently add this space by default.
+> A correctly represented phone number is stored with a space between the country code and the phone number. The Azure AD B2C service doesn't currently add this space by default.
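As a quick illustration (not taken from the article itself), the phone methods API can be exercised with the Microsoft Graph PowerShell SDK; the user and number below are placeholders, and the number shows the space-separated format described in the note:

```powershell
# Sketch: add a mobile phone sign-in method for a user (placeholders).
# Note the space between the country code and the rest of the number.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"
New-MgUserAuthenticationPhoneMethod -UserId "user@contoso.onmicrosoft.com" `
    -PhoneNumber "+1 5555551234" -PhoneType "mobile"
```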
-## Self-service password reset email address (beta)
+## Self-service password reset email address
An email address can be used by a [username sign-in account](sign-in-options.md#username-sign-in) to reset the password. For more information, see [Azure AD authentication methods API](/graph/api/resources/emailauthenticationmethod).
An email address that can be used by a [username sign-in account](sign-in-option
- [Update](/graph/api/emailauthenticationmethod-update)
- [Delete](/graph/api/emailauthenticationmethod-delete)
-## Software OATH token authentication method (beta)
+## Software OATH token authentication method
A software OATH token is a software-based number generator that uses the OATH time-based one-time password (TOTP) standard for multifactor authentication via an authenticator app. Use the Microsoft Graph API to manage a software OATH token registered to a user:
Manage the [identity providers](add-identity-provider.md) available to your user flows in your Azure AD B2C tenant.
-- [List identity providers registered in the Azure AD B2C tenant](/graph/api/identityprovider-list)
-- [Create an identity provider](/graph/api/identityprovider-post-identityproviders)
-- [Get an identity provider](/graph/api/identityprovider-get)
-- [Update identity provider](/graph/api/identityprovider-update)
-- [Delete an identity provider](/graph/api/identityprovider-delete)
+- [List identity providers available in the Azure AD B2C tenant](/graph/api/identityproviderbase-availableprovidertypes)
+- [List identity providers configured in the Azure AD B2C tenant](/graph/api/identitycontainer-list-identityproviders)
+- [Create an identity provider](/graph/api/identitycontainer-post-identityproviders)
+- [Get an identity provider](/graph/api/identityproviderbase-get)
+- [Update identity provider](/graph/api/identityproviderbase-update)
+- [Delete an identity provider](/graph/api/identityproviderbase-delete)
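As a hedged sketch of the list operation above, the same endpoint can be called through the Microsoft Graph PowerShell SDK:

```powershell
# Sketch: list the identity providers configured in the tenant
Connect-MgGraph -Scopes "IdentityProvider.Read.All"
$result = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/identity/identityProviders"
$result.value | ForEach-Object { "$($_.displayName) ($($_.id))" }
```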
-## User flow
+## User flow (beta)
Configure pre-built policies for sign-up, sign-in, combined sign-up and sign-in, password reset, and profile update.
Choose a mechanism for letting users register via local accounts. Local accounts
- [Get](/graph/api/b2cauthenticationmethodspolicy-get)
- [Update](/graph/api/b2cauthenticationmethodspolicy-update)
-## Custom policies
+## Custom policies (beta)
The following operations allow you to manage your Azure AD B2C Trust Framework policies, known as [custom policies](custom-policy-overview.md).
The following operations allow you to manage your Azure AD B2C Trust Framework p
- [Update or create trust framework policy](/graph/api/trustframework-put-trustframeworkpolicy)
- [Delete an existing trust framework policy](/graph/api/trustframeworkpolicy-delete)
-## Policy keys
+## Policy keys (beta)
The Identity Experience Framework stores the secrets referenced in a custom policy to establish trust between components. These secrets can be symmetric or asymmetric keys/values. In the Azure portal, these entities are shown as **Policy keys**.
For more information about accessing Azure AD B2C audit logs, see [Accessing Azu
## Conditional Access
-- [List all of the Conditional Access policies](/graph/api/conditionalaccessroot-list-policies?tabs=http)
+- [List the built-in templates for Conditional Access policy scenarios](/graph/api/conditionalaccessroot-list-templates)
+- [List all of the Conditional Access policies](/graph/api/conditionalaccessroot-list-policies)
- [Read properties and relationships of a Conditional Access policy](/graph/api/conditionalaccesspolicy-get)
- [Create a new Conditional Access policy](/graph/api/conditionalaccessroot-post-policies)
- [Update a Conditional Access policy](/graph/api/conditionalaccesspolicy-update)
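For illustration only (not from the linked pages), the list operation maps to a one-liner in the Microsoft Graph PowerShell SDK:

```powershell
# Sketch: list Conditional Access policies and their state
Connect-MgGraph -Scopes "Policy.Read.All"
Get-MgIdentityConditionalAccessPolicy | Select-Object DisplayName, State
```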
For more information about accessing Azure AD B2C audit logs, see [Accessing Azu
## Retrieve or restore deleted users and applications
-Deleted items can only be restored if they were deleted within the last 30 days.
+Deleted users and apps can only be restored if they were deleted within the last 30 days.
- [List deleted items](/graph/api/directory-deleteditems-list)
- [Get a deleted item](/graph/api/directory-deleteditems-get)
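A minimal sketch of the list-and-restore flow, assuming the Microsoft Graph PowerShell SDK (the `{object-id}` segment is a placeholder):

```powershell
# Sketch: list deleted users, then restore one by its object ID (placeholder)
Connect-MgGraph -Scopes "Directory.ReadWrite.All"
$deleted = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.user"
$deleted.value | ForEach-Object { "$($_.displayName) ($($_.id))" }
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/{object-id}/restore"
```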
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Applications and systems that support customization of the attribute list includ
> [!NOTE]
> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, have first-hand knowledge of how their custom attributes have been defined, or when a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true . You can then navigate to your application to view the attribute list as described [above](#editing-the-list-of-supported-attributes).
+> [!NOTE]
+> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory.
+
When editing the list of supported attributes, the following properties are provided:
- **Name** - The system name of the attribute, as defined in the target object's schema.
active-directory On Premises Migrate Microsoft Identity Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-migrate-microsoft-identity-manager.md
At this point, the MIM Sync server is no longer needed.
## Import a connector configuration
- 1. Install the ECMA Connector host and provisioning agent on a Windows Server, using the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#download-install-and-configure-the-azure-ad-connect-provisioning-agent-package) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#download-install-and-configure-the-azure-ad-connect-provisioning-agent-package) articles.
+ 1. Install the ECMA Connector host and provisioning agent on a Windows Server, using the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#3-install-and-configure-the-azure-ad-connect-provisioning-agent) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#download-install-and-configure-the-azure-ad-connect-provisioning-agent-package) articles.
1. Sign in to the Windows server as the account that the Azure AD ECMA Connector Host runs as.
1. Change to the directory C:\Program Files\Microsoft ECMA2host\Service\ECMA. Ensure there are one or more DLLs already present in that directory. Those DLLs correspond to Microsoft-delivered connectors.
1. Copy the MA DLL for your connector, and any of its prerequisite DLLs, to that same ECMA subdirectory of the Service directory.
1. Change to the directory C:\Program Files\Microsoft ECMA2Host\Wizard. Run the program Microsoft.ECMA2Host.ConfigWizard.exe to set up the ECMA Connector Host configuration.
1. A new window appears with a list of connectors. By default, no connectors will be present. Select **New connector**.
- 1. Specify the management agent XML file that was exported from MIM Sync earlier. Continue with the configuration and schema-mapping instructions from the section "Create a connector" in either the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#create-a-generic-sql-connector) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#configure-a-generic-ldap-connector) articles.
+ 1. Specify the management agent XML file that was exported from MIM Sync earlier. Continue with the configuration and schema-mapping instructions from the section "Create a connector" in either the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#6-create-a-generic-sql-connector) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#configure-a-generic-ldap-connector) articles.
## Next steps
active-directory Application Proxy Configure Native Client Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-native-client-application.md
After you edit the MSAL code with these parameters, your users can authenticate
## Next steps
-For more information about the native application flow, see [Native apps in Azure Active Directory](../azuread-dev/native-app.md).
+For more information about the native application flow, see [mobile](../develop/authentication-flows-app-scenarios.md#mobile-app-that-calls-a-web-api-on-behalf-of-an-interactive-user) and [desktop](../develop/authentication-flows-app-scenarios.md#desktop-app-that-calls-a-web-api-on-behalf-of-a-signed-in-user) apps in Azure Active Directory.
Learn about setting up [Single sign-on to applications in Azure Active Directory](../manage-apps/sso-options.md#choosing-a-single-sign-on-method).
active-directory Concept Certificate Based Authentication Smartcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-smartcard.md
Previously updated : 10/05/2022 Last updated : 11/10/2022
The Windows smart card sign-in works with the latest preview build of Windows 11
## Restrictions and caveats
-- Azure AD CBA is supported on Windows Hybrid or Azure AD Joined.
+- Azure AD CBA is supported on Windows devices that are hybrid or Azure AD joined.
- Users must be in a managed domain or using Staged Rollout and can't use a federated authentication model.
## Next steps
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Now we'll walk through each step:
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/cert-picker.png" alt-text="Screenshot of the certificate picker." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/cert-picker.png":::
1. Azure AD verifies the certificate revocation list to make sure the certificate isn't revoked and is valid. Azure AD identifies the user by using the [username binding configured](how-to-certificate-based-authentication.md#step-4-configure-username-binding-policy) on the tenant to map the certificate field value to the user attribute value.
-1. If a unique user is found with a Conditional Access policy that requires multifactor authentication (MFA), and the [certificate authentication binding rule](how-to-certificate-based-authentication.md#step-3-configure-authentication-binding-policy) satisfies MFA, then Azure AD signs the user in immediately. If the certificate satisfies only a single factor, then it requests the user for a second factor to complete Azure AD Multi-Factor Authentication.
+1. If a unique user is found with a Conditional Access policy that requires multifactor authentication (MFA), and the [certificate authentication binding rule](how-to-certificate-based-authentication.md#step-3-configure-authentication-binding-policy) satisfies MFA, then Azure AD signs the user in immediately. If multifactor authentication is required but the certificate satisfies only a single factor, authentication will fail.
1. Azure AD completes the sign-in process by sending a primary refresh token back to indicate successful sign-in.
1. If the user sign-in is successful, the user can access the application.
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Previously updated : 09/23/2022 Last updated : 11/10/2022
Combined registration supports the authentication methods and actions in the fol
| Email | Yes | Yes | Yes |
| Security questions | Yes | No | Yes |
| App passwords* | Yes | No | Yes |
-| FIDO2 security keys*| Yes | Yes | Yes |
+| FIDO2 security keys*| Yes | No | Yes |
> [!NOTE]
> <b>Office phone</b> can only be registered in *Interrupt mode* if the user's *Business phone* property has been set. Office phone can be added by users in *Managed mode* from the [Security info](https://mysignins.microsoft.com/security-info) page without this requirement. <br />
For both modes, users who have previously registered a method that can be used f
### Interrupt mode
-Combined registration adheres to both multifactor authentication and SSPR policies, if both are enabled for your tenant. These policies control whether a user is interrupted for registration during sign-in and which methods are available for registration. If only an SSPR policy is enabled, then users will be able to skip the registration interruption and complete it at a later time.
+Combined registration adheres to both multifactor authentication and SSPR policies, if both are enabled for your tenant. These policies control whether a user is interrupted for registration during sign-in and which methods are available for registration. If only an SSPR policy is enabled, users can skip the registration interruption indefinitely and complete it at a later time.
The following are sample scenarios where users might be prompted to register or refresh their security info:
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
Previously updated : 05/04/2022 Last updated : 11/10/2022
The following Azure AD password policy options are defined. Unless noted, you ca
| Characters allowed |<ul><li>A – Z</li><li>a - z</li><li>0 – 9</li> <li>@ # $ % ^ & * - _ ! + = [ ] { } &#124; \ : ' , . ? / \` ~ " ( ) ; < ></li> <li>blank space</li></ul> |
| Characters not allowed | Unicode characters. |
| Password restrictions |<ul><li>A minimum of 8 characters and a maximum of 256 characters.</li><li>Requires three out of four of the following:<ul><li>Lowercase characters.</li><li>Uppercase characters.</li><li>Numbers (0-9).</li><li>Symbols (see the previous password restrictions).</li></ul></li></ul> |
-| Password expiry duration (Maximum password age) |<ul><li>Default value: **90** days.</li><li>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure Active Directory Module for Windows PowerShell.</li></ul> |
+| Password expiry duration (Maximum password age) |<ul><li>Default value: **90** days. If the tenant was created after 2021, it has no default expiration value. You can check current policy with [Get-MsolPasswordPolicy](/powershell/module/msonline/get-msolpasswordpolicy).</li><li>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure Active Directory Module for Windows PowerShell.</li></ul> |
| Password expiry notification (When users are notified of password expiration) |<ul><li>Default value: **14** days (before password expires).</li><li>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet.</li></ul> |
| Password expiry (Let passwords never expire) |<ul><li>Default value: **false** (indicates that passwords have an expiration date).</li><li>The value can be configured for individual user accounts by using the `Set-MsolUser` cmdlet.</li></ul> |
| Password change history | The last password *can't* be used again when the user changes a password. |
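A minimal sketch of checking and then setting these values with the MSOnline cmdlets named in the table (the domain name is a placeholder):

```powershell
# Sketch: inspect, then set, the password expiration policy for a domain
Connect-MsolService
Get-MsolPasswordPolicy -DomainName "contoso.com"
Set-MsolPasswordPolicy -DomainName "contoso.com" -ValidityPeriod 90 -NotificationDays 14
```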
active-directory Howto Authentication Sms Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-sms-signin.md
Each user that's enabled in the text message authentication method policy must b
Users are now enabled for SMS-based authentication, but their phone number must be associated with the user profile in Azure AD before they can sign-in. The user can [set this phone number themselves](https://support.microsoft.com/account-billing/set-up-sms-sign-in-as-a-phone-verification-method-0aa5b3b3-a716-4ff2-b0d6-31d2bcfbac42) in *My Account*, or you can assign the phone number using the Azure portal. Phone numbers can be set by *global admins*, *authentication admins*, or *privileged authentication admins*.
-When a phone number is set for SMS-sign, it's also then available for use with [Azure AD Multi-Factor Authentication][tutorial-azure-mfa] and [self-service password reset][tutorial-sspr].
+When a phone number is set for SMS-based sign-in, it's also then available for use with [Azure AD Multi-Factor Authentication][tutorial-azure-mfa] and [self-service password reset][tutorial-sspr].
1. Search for and select **Azure Active Directory**.
1. From the navigation menu on the left-hand side of the Azure Active Directory window, select **Users**.
If you receive an error when you try to set a phone number for a user account in
[m365-licensing]: https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans
[o365-f1]: https://www.microsoft.com/microsoft-365/business/office-365-f1?market=af
[o365-f3]: https://www.microsoft.com/microsoft-365/business/office-365-f3?activetab=pivot%3aoverviewtab
-[azure-ad-pricing]: https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing
+[azure-ad-pricing]: https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Users with a Temporary Access Pass can navigate the setup process on Windows 10
For Azure AD Joined devices:
- During the Azure AD Join setup process, users can authenticate with a TAP (no password required) to join the device and register Windows Hello for Business.
- On already joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
-- If the [Web sign-in](https://learn.microsoft.com/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin) feature on Windows is also enabled, the user can use TAP to sign into the device. This is intended only for completing initial device setup, or recovery when the user does not know or have a password.
+- If the [Web sign-in](/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin) feature on Windows is also enabled, the user can use TAP to sign into the device. This is intended only for completing initial device setup, or recovery when the user does not know or have a password.
For Hybrid Azure AD Joined devices:
- Users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
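As a hedged aside, issuing a TAP for a user can be sketched with the Microsoft Graph PowerShell SDK (the user is a placeholder):

```powershell
# Sketch: issue a one-time-use Temporary Access Pass valid for 60 minutes
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"
New-MgUserAuthenticationTemporaryAccessPassMethod -UserId "user@contoso.onmicrosoft.com" `
    -LifetimeInMinutes 60 -IsUsableOnce
```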
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
With the policy applied, it can take up to 1 hour to propagate and for users to
### PowerShell
> [!NOTE]
-> This configuration option uses HRD policy. For more information, see [homeRealmDiscoveryPolicy resource type](/graph/api/resources/homeRealmDiscoveryPolicy?view=graph-rest-1.0).
+> This configuration option uses HRD policy. For more information, see [homeRealmDiscoveryPolicy resource type](/graph/api/resources/homeRealmDiscoveryPolicy?view=graph-rest-1.0&preserve-view=true).
Once users with the *ProxyAddresses* attribute applied are synchronized to Azure AD using Azure AD Connect, you need to enable the feature for users to sign in with email as an alternate login ID for your tenant. This feature tells the Azure AD login servers to not only check the sign-in identifier against UPN values, but also against *ProxyAddresses* values for the email address.
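A sketch of what enabling the feature looks like through HRD policy, assuming the AzureAD PowerShell module (the display name is arbitrary; see the article for the supported steps):

```powershell
# Sketch: enable email as an alternate login ID tenant-wide via HRD policy
Connect-AzureAD
New-AzureADPolicy -Definition @('{"HomeRealmDiscoveryPolicy":{"AlternateIdLogin":{"Enabled":true}}}') `
    -DisplayName "BasicAutoAccelerationPolicy" -IsOrganizationDefault $true `
    -Type "HomeRealmDiscoveryPolicy"
```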
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md
To change the per-user Azure AD Multi-Factor Authentication state for a user, co
After you enable users, notify them via email. Tell the users that a prompt is displayed to ask them to register the next time they sign in. Also, if your organization uses non-browser apps that don't support modern authentication, they need to create app passwords. For more information, see the [Azure AD Multi-Factor Authentication end-user guide](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) to help them get started.
-### Convert users from per-user MFA to Conditional Access based MFA
+### Convert per-user MFA enabled and enforced users to disabled
If your users were enabled using per-user enabled and enforced Azure AD Multi-Factor Authentication, the following PowerShell can assist you in making the conversion to Conditional Access based Azure AD Multi-Factor Authentication.
Run this PowerShell in an ISE window, or save it as a `.PS1` file to run locally. The operation can only be done by using the [MSOnline module](/powershell/module/msonline#msonline).
```PowerShell
+# Connect to tenant
+Connect-MsolService
+
# Sets the MFA requirement state
function Set-MfaState {
    [CmdletBinding()]
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Smart lockout can be integrated with hybrid deployments that use password hash s
When using [pass-through authentication](../hybrid/how-to-connect-pta.md), the following considerations apply:
* The Azure AD lockout threshold is **less** than the AD DS account lockout threshold. Set the values so that the AD DS account lockout threshold is at least two or three times greater than the Azure AD lockout threshold.
-* The Azure AD lockout duration must be set longer than the AD DS reset account lockout counter after duration. The Azure AD duration is set in seconds, while the AD duration is set in minutes.
+* The Azure AD lockout duration must be set longer than the AD DS account lockout duration. The Azure AD duration is set in seconds, while the AD duration is set in minutes.
For example, if you want your Azure AD smart lockout duration to be higher than AD DS, then Azure AD would be 120 seconds (2 minutes) while your on-premises AD is set to 1 minute (60 seconds). If you want your Azure AD lockout threshold to be 5, then you want your on-premises AD lockout threshold to be 10. This configuration would ensure smart lockout prevents your on-premises AD accounts from being locked out by brute force attacks on your Azure AD accounts.
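The on-premises half of that example can be sketched with the ActiveDirectory module (the domain name is a placeholder):

```powershell
# Sketch: AD DS lockout duration of 1 minute and threshold of 10, so that
# Azure AD smart lockout (120 seconds, threshold 5) trips first
Set-ADDefaultDomainPasswordPolicy -Identity "contoso.com" `
    -LockoutDuration "00:01:00" -LockoutThreshold 10
```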
active-directory Azure Ad Endpoint Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/azure-ad-endpoint-comparison.md
Previously updated : 07/17/2020 Last updated : 11/09/2022 -+
active-directory Tutorial Single Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-single-forest.md
Previously updated : 12/05/2019 Last updated : 11/10/2022
This tutorial walks you through creating a hybrid identity environment using Azure Active Directory (Azure AD) Connect cloud sync.
-![Create](media/tutorial-single-forest/diagram-2.png)
+![Diagram that shows the Azure AD Connect cloud sync flow](media/tutorial-single-forest/diagram-2.png)
You can use the environment you create in this tutorial for testing or for getting more familiar with cloud sync.
## Prerequisites
+
### In the Azure Active Directory admin center
1. Create a cloud-only global administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
You can use the environment you create in this tutorial for testing or for getti
### In your on-premises environment
-1. Identify a domain-joined host server running Windows Server 2016 or greater with minimum of 4 GB RAM and .NET 4.7.1+ runtime
+1. Identify a domain-joined host server running Windows Server 2016 or greater with minimum of 4-GB RAM and .NET 4.7.1+ runtime
-2. If there is a firewall between your servers and Azure AD, configure the following items:
+2. If there's a firewall between your servers and Azure AD, configure the following items:
- Ensure that agents can make *outbound* requests to Azure AD over the following ports:

| Port number | How it's used |
You can use the environment you create in this tutorial for testing or for getti
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.
- If your firewall or proxy allows you to specify safe suffixes, then add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
- Your agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well.
- - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products you may already have these URLs unblocked.
+ - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products, you may already have these URLs unblocked.
## Install the Azure AD Connect provisioning agent
-1. Sign in to the domain joined server. If you are using the [Basic A D and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1.
-2. Sign in to the Azure portal using cloud-only global admin credentials.
-3. On the left, select **Azure Active Directory**, click **Azure AD Connect**, and in the center select **Manage cloud sync**.
- ![Azure portal](media/how-to-install/install-6.png)
+1. Sign in to the domain joined server. If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1.
+
+1. Sign in to the Azure portal using cloud-only global admin credentials.
+
+1. On the left, select **Azure Active Directory**.
+
+1. Select **Azure AD Connect**, and in the center select **Manage Azure AD cloud sync**.
+
+ ![Screenshot that shows how to download the Azure AD cloud sync.](media/how-to-install/install-6.png)
+
+1. Select **Download agent**, and select **Accept terms & download**.
+
+ [![Screenshot that shows how to accept the terms and start the download of Azure AD cloud sync.](media/how-to-install/install-6a.png)](media/how-to-install/install-6a.png#lightbox)
+
+1. Run the **Azure AD Connect Provisioning Agent Package** AADConnectProvisioningAgentSetup.exe in your downloads folder.
+
+1. On the splash screen, select **I agree to the license and conditions**, and select **Install**.
-4. Click **Download agent**.
-5. Run the Azure AD Connect provisioning agent.
-6. On the splash screen, **Accept** the licensing terms and click **Install**.
+ ![Screenshot that shows the "Microsoft Azure AD Connect Provisioning Agent Package" splash screen.](media/how-to-install/install-1.png)
- ![Screenshot that shows the "Microsoft Azure A D Connect Provisioning Agent Package" splash screen.](media/how-to-install/install-1.png)
+1. Once this operation completes, the configuration wizard will launch. Sign in with your Azure AD global administrator account. If you have Internet Explorer enhanced security enabled, it will block the sign-in. If so, close the installation, [disable Internet Explorer enhanced security](/troubleshoot/developer/browsers/security-privacy/enhanced-security-configuration-faq), and restart the **Azure AD Connect Provisioning Agent Package** installation.
-7. Once this operation completes, the configuration wizard will launch. Sign in with your Azure AD global administrator account. Note that if you have IE enhanced security enabled this will block the sign-in. If this is the case, close the installation, disable IE enhanced security in Server Manager, and click the **AAD Connect Provisioning Agent Wizard** to restart the installation.
-8. On the **Connect Active Directory** screen, click **Add directory** and then sign in with your Active Directory domain administrator account. NOTE: The domain administrator account should not have password change requirements. If the password expires or changes, you will need to re-configure the agent with the new credentials. This operation will add your on-premises directory. Click **Next**.
+1. On the **Connect Active Directory** screen, select **Authenticate** and then sign in with your Active Directory domain administrator account. NOTE: The domain administrator account shouldn't have password change requirements. If the password expires or changes, you'll need to reconfigure the agent with the new credentials.
- ![Screenshot of the "Connect Active Directory" screen.](media/how-to-install/install-3a.png)
+ ![Screenshot of the "Connect Active Directory" screen.](media/how-to-install/install-3.png)
-9. On the **Configuration complete** screen, click **Confirm**. This operation will register and restart the agent.
+1. On the **Configure Service Account screen**, select **Create gMSA** and enter the Active Directory domain administrator credentials to create the group Managed Service Account. This account will be used to run the agent service. To continue, select **Next**.
+
+ [![Screenshot that shows create service account.](media/how-to-install/new-install-7.png)](media/how-to-install/new-install-7.png#lightbox)
+
+1. On the **Connect Active Directory** screen, select **Next**. Your current domain has been added automatically.
+
+ [![Screenshot that shows connecting to the Active Directory.](media/how-to-install/new-install-8.png)](media/how-to-install/new-install-8.png#lightbox)
+
+1. On the **Configuration complete** screen, select **Confirm**. This operation will register and restart the agent.
![Screenshot that shows the "Configuration complete" screen.](media/how-to-install/install-4a.png)
-10. Once this operation completes you should see a notice: **Your agent configuration was successfully verified.** You can click **Exit**.</br>
-![Welcome screen](media/how-to-install/install-5.png)</br>
-11. If you still see the initial splash screen, click **Close**.
+1. Once this operation completes, you should see a notice: **Your agent configuration was successfully verified.** You can select **Exit**.
+
+ ![Screenshot that shows the "configuration complete" screen.](media/how-to-install/install-5.png)
+
+1. If you still get the initial splash screen, select **Close**.
## Verify agent installation
+
Agent verification occurs in the Azure portal and on the local server that is running the agent.
### Azure portal agent verification
-To verify the agent is being seen by Azure follow these steps:
+
+To verify the agent is being registered by Azure AD, follow these steps:
1. Sign in to the Azure portal.
-2. On the left, select **Azure Active Directory**, click **Azure AD Connect** and in the center select **Manage cloud sync**.</br>
-![Azure portal](media/how-to-install/install-6.png)</br>
+1. On the left, select **Azure Active Directory**, select **Azure AD Connect** and in the center select **Manage Azure AD cloud sync**.
-3. On the **Azure AD Connect cloud sync** screen click **Review all agents**.
-![Azure A D Provisioning](media/how-to-install/install-7.png)</br>
+ ![Screenshot that shows how to manage the Azure AD could sync.](media/how-to-install/install-6.png)
+
+1. On the **Azure AD Connect cloud sync** screen, select
+**Review all agents**.
+
+ [![Screenshot that shows the Azure AD provisioning agents.](media/how-to-install/install-7.png)](media/how-to-install/install-7.png#lightbox)
-4. On the **On-premises provisioning agents screen** you will see the agents you have installed. Verify that the agent in question is there and is marked **active**.
-![Provisioning agents](media/how-to-install/verify-1.png)</br>
+1. On the **On-premises provisioning agents screen**, you'll see the agents you've installed. Verify that the agent in question is there and is marked **active**.
+
+ [![Screenshot that shows the status of a provisioning agent.](media/how-to-install/verify-1.png)](media/how-to-install/verify-1.png#lightbox)
### On the local server
-To verify that the agent is running follow these steps:
-1. Log on to the server with an administrator account
-2. Open **Services** by either navigating to it or by going to Start/Run/Services.msc.
-3. Under **Services**, make sure **Microsoft Azure AD Connect Agent Updater** and **Microsoft Azure AD Connect Provisioning Agent** are present and the status is **Running**.
-![Services](media/how-to-install/troubleshoot-1.png)
+To verify that the agent is running, follow these steps:
+
+1. Log on to the server with an administrator account.
+
+1. Open **Services** by either navigating to it or by going to Start/Run/Services.msc.
+
+1. Under **Services**, make sure **Microsoft Azure AD Connect Agent Updater** and **Microsoft Azure AD Connect Provisioning Agent** are present and the status is **Running**.
+
+ [![Screenshot that shows the Windows services.](media/how-to-install/troubleshoot-1.png)](media/how-to-install/troubleshoot-1.png#lightbox)
## Configure Azure AD Connect cloud sync
- Use the following steps to configure provisioning
-
-1. Sign in to the Azure AD portal.
-2. Click **Azure Active Directory**
-3. Click **Azure AD Connect**
-4. Select **Manage cloud sync**
-![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-5. Click **New Configuration**
-![Screenshot of Azure A D Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
-7. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and click **Save**.
-![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)
-1. The configuration status should now be **Healthy**.
-![Screenshot of Azure A D Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)
+
+Use the following steps to configure and start the provisioning:
+
+1. Sign in to the Azure AD portal.
+1. Select **Azure Active Directory**
+1. Select **Azure AD Connect**
+1. Select **Manage cloud sync**
+
+ ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
+
+1. Select **New Configuration**
+
+ [![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)](media/tutorial-single-forest/configure-1.png#lightbox)
+
+1. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and select **Save**.
+
+ [![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)](media/how-to-configure/configure-2.png#lightbox)
+
+1. The configuration status should now be **Healthy**.
+
+ [![Screenshot of Azure AD Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)](media/how-to-configure/manage-4.png#lightbox)
## Verify users are created and synchronization is occurring
-You will now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. Be aware that this may take a few hours to complete. To verify users are synchronized do the following.
+
+You'll now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. The sync operation may take a few hours to complete. To verify users are synchronized, follow these steps:
1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
2. On the left, select **Azure Active Directory**.
3. Under **Manage**, select **Users**.
-4. Verify that you see the new users in your tenant</br>
+4. Verify that the new users appear in your tenant
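As an optional, hedged alternative to checking in the portal, synced users can be listed with the Microsoft Graph PowerShell SDK:

```powershell
# Sketch: list users that were synced from on-premises
Connect-MgGraph -Scopes "User.Read.All"
Get-MgUser -Filter "onPremisesSyncEnabled eq true" -All |
    Select-Object DisplayName, UserPrincipalName
```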
## Test signing in with one of your users
1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
-2. Sign in with a user account that was created in your tenant. You will need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.</br>
- ![Verify](media/tutorial-single-forest/verify-1.png)</br>
-You have now successfully configured a hybrid identity environment using Azure AD Connect cloud sync.
+1. Sign in with a user account that was created in your tenant. You'll need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.
+
+ ![Screenshot that shows the My Apps portal with a signed-in user.](media/tutorial-single-forest/verify-1.png)
+You've now successfully configured a hybrid identity environment using Azure AD Connect cloud sync.
## Next steps
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
This section covers the configuration options under optional claims for changing
| **name:** | Must be "groups" |
| **source:** | Not used. Omit or specify null |
| **essential:** | Not used. Omit or specify false |
- | **additionalProperties:** | List of additional properties. Valid options are "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name", "emit_as_roles" |
+ | **additionalProperties:** | List of additional properties. Valid options are "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name", "emit_as_roles" and "cloud_displayname" |
- In additionalProperties only one of "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name" are required. If more than one is present, the first is used and any others ignored.
+ In additionalProperties only one of "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name" is required. If more than one is present, the first is used and any others are ignored. Additionally, you can add "cloud_displayname" to emit the display name of the cloud group. Note that this option works only when `groupMembershipClaims` is set to `ApplicationGroup`.
Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add "emit_as_roles" to additional properties. The group values will be emitted in the role claim.
This section covers the configuration options under optional claims for changing
]
}
```
+3) Emit group names in the format of samAccountName for on-prem synced groups and display name for cloud groups in SAML and OIDC ID Tokens for the groups assigned to the application:
+
+ **Application manifest entry:**
+
+ ```json
+ "groupMembershipClaims": "ApplicationGroup",
+ "optionalClaims": {
+ "saml2Token": [
+ {
+ "name": "groups",
+ "additionalProperties": [
+ "sam_account_name",
+ "cloud_displayname"
+ ]
+ }
+ ],
+ "idToken": [
+ {
+ "name": "groups",
+ "additionalProperties": [
+ "sam_account_name",
+ "cloud_displayname"
+ ]
+ }
+ ]
+ }
+ ```
## Optional claims example
active-directory Delegated And App Perms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-and-app-perms.md
Previously updated : 09/27/2021 Last updated : 11/10/2022
## Recommended documents
- Learn more about how client applications use [delegated and application permission requests](developer-glossary.md#permissions) to access resources.
+- Learn about [delegated and application permissions](permissions-consent-overview.md).
- See step-by-step instructions on how to [configure a client application's permission requests](quickstart-configure-app-access-web-apis.md)
- For more depth, learn how resource applications expose [scopes](developer-glossary.md#scopes) and [application roles](developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal.
active-directory Howto Authenticate Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-authenticate-service-principal-powershell.md
multiple Previously updated : 10/11/2021 Last updated : 11/09/2022
active-directory Msal Logging Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-dotnet.md
The following code snippets are examples of such an implementation. If you use t
#### Log level from configuration file
-It's highly recommended to configure your code to use a configuration file in your environment to set the log level as it will enable your code to change the MSAL logging level without needing to rebuild or restart the application. This is critical for diagnostic purposes, enabling us to quickly gather the required logs from the application that is currently deployed and in production. Verbose logging can be costly so it's best to use the *Information* level by default and enable verbose logging when an issue is encountered. [See JSON configuration provider](https://docs.microsoft.com/aspnet/core/fundamentals/configuration#json-configuration-provider) for an example on how to load data from a configuration file without restarting the application.
+It's highly recommended to configure your code to use a configuration file in your environment to set the log level as it will enable your code to change the MSAL logging level without needing to rebuild or restart the application. This is critical for diagnostic purposes, enabling us to quickly gather the required logs from the application that is currently deployed and in production. Verbose logging can be costly so it's best to use the *Information* level by default and enable verbose logging when an issue is encountered. [See JSON configuration provider](/aspnet/core/fundamentals/configuration#json-configuration-provider) for an example on how to load data from a configuration file without restarting the application.
#### Log Level as Environment Variable
active-directory Msal Net Acquire Token Silently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-acquire-token-silently.md
# Get a token from the token cache using MSAL.NET
-When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should first call the `AcquireTokenSilent` method to verify if an acceptable token is in the cache. In many cases, it's possible to acquire another token with more scopes based on a token in the cache. It's also possible to refresh a token when it's getting close to expiration (as the token cache also contains a refresh token).
+When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should try to fetch it from the cache first.
+
+You can monitor the source of the tokens by inspecting the `AuthenticationResult.AuthenticationResultMetadata.TokenSource` property.
+
+## Websites and web APIs
+
+ASP.NET Core and ASP.NET Classic websites should integrate with [Microsoft.Identity.Web](microsoft-identity-web.md), a wrapper for MSAL.NET. Memory token caching or distributed token caching can be configured as described in [token cache serialization](msal-net-token-cache-serialization.md?tabs=aspnetcore).
+
+Web APIs on ASP.NET Core should use Microsoft.Identity.Web. Web APIs on ASP.NET Classic use MSAL directly by calling `AcquireTokenOnBehalfOf`, and should configure memory or distributed caching. For more information, see [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md?tabs=aspnet). There's no need to call the `AcquireTokenSilent` API, and there's no API to clear the cache. Cache size can be managed by setting eviction policies on the underlying cache store, such as MemoryCache or Redis.
+
+## Web service / Daemon apps
+
+Applications that request tokens for an app identity, with no user involved, by calling `AcquireTokenForClient` can rely on MSAL's internal caching, or define their own memory or distributed token caching. For instructions and more information, see [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md?tabs=aspnet).
+
+Since no user is involved, there's no need to call the `AcquireTokenSilent` API; `AcquireTokenForClient` will look in the cache on its own. There's no API to clear the cache. Cache size is proportional to the number of tenants and resources you need tokens for, and can be managed by setting eviction policies on the underlying cache store, such as MemoryCache or Redis.
+
+## Desktop, command-line, and mobile applications
+
+Desktop, command-line, and mobile applications should first call the AcquireTokenSilent method to verify if an acceptable token is in the cache. In many cases, it's possible to acquire another token with more scopes based on a token in the cache. It's also possible to refresh a token when it's getting close to expiration (as the token cache also contains a refresh token).
For authentication flows that require a user interaction, MSAL caches the access, refresh, and ID tokens, as well as the `IAccount` object, which represents information about a single account. Learn more about [IAccount](/dotnet/api/microsoft.identity.client.iaccount?view=azure-dotnet&preserve-view=true). For application flows, such as [client credentials](msal-authentication-flows.md#client-credentials), only access tokens are cached, because the `IAccount` object and ID token require a user, and the refresh token is not applicable.
if (result != null)
    // Use the token
}
```
+
+### Clearing the cache
+
+In public client applications, clearing the cache is achieved by removing the accounts from the cache. This does not remove the session cookie which is in the browser, though.
+
+```csharp
+var accounts = (await app.GetAccountsAsync()).ToList();
+
+// clear the cache
+while (accounts.Any())
+{
+ await app.RemoveAsync(accounts.First());
+ accounts = (await app.GetAccountsAsync()).ToList();
+}
+```
active-directory Msal Net Clear Token Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-clear-token-cache.md
# Clear the token cache using MSAL.NET
+## Web API and daemon apps
+
+There is no API to remove the tokens from the cache. Cache size should be handled by setting eviction policies on the underlying storage. See [Cache Serialization](msal-net-token-cache-serialization.md?tabs=aspnetcore) for details on how to use a memory cache or distributed cache.
+
+## Desktop, command line and mobile applications
+
When you [acquire an access token](msal-acquire-cache-tokens.md) using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should first call the `AcquireTokenSilent` method to verify if an acceptable token is in the cache.

Clearing the cache is achieved by removing the accounts from the cache. This does not remove the session cookie which is in the browser, though. The following example instantiates a public client application, gets the accounts for the application, and removes the accounts.
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
You can also specify options to limit the size of the in-memory token cache:
#### Distributed caches
-If you use `app.AddDistributedTokenCache`, the token cache is an adapter against the .NET `IDistributedCache` implementation. So you can choose between a SQL Server cache, a Redis cache, an Azure Cosmos DB cache, or any other cache implementing the [IDistributedCache](/dotnet/api/microsoft.extensions.caching.distributed.idistributedcache?view=dotnet-plat-ext-6.0) interface.
+If you use `app.AddDistributedTokenCache`, the token cache is an adapter against the .NET `IDistributedCache` implementation. So you can choose between a SQL Server cache, a Redis cache, an Azure Cosmos DB cache, or any other cache implementing the [IDistributedCache](/dotnet/api/microsoft.extensions.caching.distributed.idistributedcache?view=dotnet-plat-ext-6.0&preserve-view=true) interface.
For testing purposes only, you may want to use `services.AddDistributedMemoryCache()`, an in-memory implementation of `IDistributedCache`.
active-directory Perms For Given Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/perms-for-given-api.md
Previously updated : 07/15/2019 Last updated : 11/10/2022
## Recommended documents
- Learn more about how client applications use [delegated and application permission requests](./developer-glossary.md#permissions) to access resources.
+- Learn about [scopes and permissions in the Microsoft identity platform](scopes-oidc.md)
- See step-by-step instructions on how to [configure a client application's permission requests](./quickstart-configure-app-access-web-apis.md)
- For more depth, learn how resource applications expose [scopes](./developer-glossary.md#scopes) and [application roles](./developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal.
## Next steps
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
+[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Previously updated : 06/01/2021 Last updated : 11/09/2022 -+
# Publisher verification
App developers must meet a few requirements to complete the publisher verificati
- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant.
-- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center.
+- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center, and must have one of the required roles in both Azure AD and Partner Center.
- In Azure AD, this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Admin.
active-directory Reference App Multi Instancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-multi-instancing.md
The IDP initiated feature exposes two settings for each application.  
## Next steps
-- To explore the claims mapping policy in graph see [Claims mapping policy](/graph/api/resources/claimsMappingPolicy?view=graph-rest-1.0)
+- To explore the claims mapping policy in graph see [Claims mapping policy](/graph/api/resources/claimsMappingPolicy?view=graph-rest-1.0&preserve-view=true)
- To learn more about how to configure this policy see [Customize app SAML token claims](active-directory-saml-claims-customization.md)
active-directory Registration Config How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-how-to.md
Previously updated : 09/27/2021 Last updated : 11/09/2022
active-directory Scenario Spa Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-call-api.md
Title: Build single-page app calling a web API description: Learn how to build a single-page application that calls a web API -+
Last updated 09/27/2021-+ #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
active-directory Setup Multi Tenant App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/setup-multi-tenant-app.md
Previously updated : 07/15/2019 Last updated : 11/10/2022
Here is a list of recommended topics to learn more about multi-tenant applications:
- Get a general understanding of [what it means to be a multi-tenant application](./developer-glossary.md#multi-tenant-application)
+- Learn about [tenancy in Azure Active Directory](single-and-multi-tenant-apps.md)
- Get a general understanding of [how to configure an application to be multi-tenant](./howto-convert-app-to-be-multi-tenant.md)
- Get a step-by-step overview of [how the Azure AD consent framework is used to implement consent](./quickstart-register-app.md), which is required for multi-tenant applications
- For more depth, learn [how a multi-tenant application is configured and coded end-to-end](./howto-convert-app-to-be-multi-tenant.md), including how to register, use the "common" endpoint, implement "user" and "admin" consent, and how to implement more advanced multi-tier scenarios
## Next steps
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
+[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Test Throttle Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-throttle-service-limits.md
Previously updated : 09/17/2021 Last updated : 11/09/2022 #Customer intent: As a developer, I want to understand the throttling and service limits I might hit so that I can test my app without interruption.
Throttling behavior can depend on the type and number of requests. For example,
When you exceed a throttling limit, you receive the HTTP status code `429 Too many requests` and your request fails. The response includes a `Retry-After` header value, which specifies the number of seconds your application should wait (or sleep) before sending the next request. Retry the request. If you send a request before the retry value has elapsed, your request isn't processed and a new retry value is returned. If the request fails again with a 429 error code, you are still being throttled. Continue to use the recommended `Retry-After` delay and retry the request until it succeeds. ## Next steps
-Learn how to [setup a test environment](test-setup-environment.md).
+Learn how to [set up a test environment](test-setup-environment.md).
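The retry contract described above is mechanical enough to sketch. The helper below is illustrative only: it assumes a fetch-capable runtime and a hypothetical `maxRetries` budget, and it simply sleeps for the `Retry-After` interval before retrying, exactly as the guidance recommends.

```typescript
// Retry a request while the service responds 429, honoring Retry-After (in seconds).
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxRetries = 5 // hypothetical budget; tune for your workload
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429) return res; // success, or an error that isn't throttling

    // The Retry-After header says how many seconds to sleep before the next attempt.
    const retryAfterSeconds = Number(res.headers.get("Retry-After") ?? "2");
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
  throw new Error("Request was still throttled after all retries");
}
```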
active-directory Tutorial V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md
Fill in these details with the values you obtain from Azure app registration por
## Add code for user sign-in and token acquisition
-1. Create a new file named *auth.js* under the *router* folder and add the following code there:
+1. Create a new file named *auth.js* under the *routes* folder and add the following code there:
:::code language="js" source="~/ms-identity-node/App/routes/auth.js":::
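The include above pulls in the tutorial's complete *auth.js*. For orientation only, a pared-down authorization-code route pair with MSAL Node might look like the sketch below; the client ID, tenant ID, secret, and redirect URI are placeholders, not the tutorial's values.

```typescript
import express from "express";
import { ConfidentialClientApplication } from "@azure/msal-node";

const router = express.Router();

const msalClient = new ConfidentialClientApplication({
  auth: {
    clientId: process.env.CLIENT_ID!,          // placeholder
    authority: `https://login.microsoftonline.com/${process.env.TENANT_ID}`,
    clientSecret: process.env.CLIENT_SECRET!,  // placeholder
  },
});

const REDIRECT_URI = "http://localhost:3000/auth/redirect"; // placeholder

// Step 1: redirect the browser to Azure AD to sign in.
router.get("/signin", async (_req, res) => {
  const authUrl = await msalClient.getAuthCodeUrl({
    scopes: ["user.read"],
    redirectUri: REDIRECT_URI,
  });
  res.redirect(authUrl);
});

// Step 2: redeem the returned authorization code for tokens.
router.get("/redirect", async (req, res) => {
  const result = await msalClient.acquireTokenByCode({
    code: req.query.code as string,
    scopes: ["user.read"],
    redirectUri: REDIRECT_URI,
  });
  res.send(`Signed in as ${result.account?.username}`);
});

export default router;
```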
active-directory Enterprise State Roaming Windows Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-windows-settings-reference.md
The following is a list of the settings that will be roamed or backed up in Wind
## Windows Settings details
-List of settings that can be configured to sync in recent Windows versions. These can be found in Windows 10 under **Settings** > **Accounts** > **Sync your settings** or **Settings** > **Accounts** > **Windows backup** > **Remember my preferences** on Windows 11.
+List of settings that can be configured to sync in recent Windows versions. These can be found under **Settings** > **Accounts** > **Sync your settings** on Windows 10, or under **Settings** > **Accounts** > **Windows backup** > **Remember my preferences** on Windows 11.
| Settings | Windows 10 (21H1 or newer) |
| --- | --- |
active-directory Clean Up Stale Guest Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-stale-guest-accounts.md
As users collaborate with external partners, it's possible that many guest accounts get created in Azure Active Directory (Azure AD) tenants over time. When collaboration ends and the users no longer access your tenant, the guest accounts may become stale. Admins can use Access Reviews to automatically review inactive guest users and block them from signing in, and later, delete them from the directory.
-Learn more about [how to manage inactive user accounts in Azure AD](https://learn.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts).
+Learn more about [how to manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts).
There are a few recommended patterns that are effective at cleaning up stale guest accounts: 1. Create a multi-stage review whereby guests self-attest whether they still need access. A second-stage reviewer assesses results and makes a final decision. Guests with denied access are disabled and later deleted.
-2. Create a review to remove inactive external guests. Admins define inactive as a period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](https://learn.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts).
+2. Create a review to remove inactive external guests. Admins define inactive as a period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts).
Use the following instructions to learn how to create Access Reviews that follow these patterns. Consider the configuration recommendations and then make the needed changes that suit your environment. ## Create a multi-stage review for guests to self-attest continued access
-1. Create a [dynamic group](https://learn.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+1. Create a [dynamic group](/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
`(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
-2. To [create an Access Review](https://learn.microsoft.com/azure/active-directory/governance/create-access-review)
+2. To [create an Access Review](/azure/active-directory/governance/create-access-review)
for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**. 3. Select **New access review**.
Use the following instructions to learn how to create Access Reviews that follow
## Create a review to remove inactive external guests
-1. Create a [dynamic group](https://learn.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+1. Create a [dynamic group](/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review; a scripted sketch of this step follows these instructions. For example,
`(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
-2. To [create an access review](https://learn.microsoft.com/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
+2. To [create an access review](/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
3. Select **New access review**.
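Step 1 of both patterns, creating the dynamic guest group, can also be scripted. The sketch below is an assumption-laden illustration, not part of the article: it uses the Graph v1.0 `/groups` endpoint with the same membership rule shown above and presumes an `accessToken` carrying Group.ReadWrite.All.

```typescript
declare const accessToken: string; // assumed: token with Group.ReadWrite.All

// Create a dynamic security group whose membership rule matches the guests to review.
const res = await fetch("https://graph.microsoft.com/v1.0/groups", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    displayName: "Guests to review", // hypothetical name
    mailEnabled: false,
    mailNickname: "guestsToReview",
    securityEnabled: true,
    groupTypes: ["DynamicMembership"],
    membershipRule:
      '(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)',
    membershipRuleProcessingState: "On", // start evaluating the rule immediately
  }),
});
console.log(res.status, await res.json());
```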
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 10/28/2022 Last updated : 11/09/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on October 28th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information was last updated on November 9th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Windows 10 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | | Windows 10 Enterprise E5 Commercial (GCC Compatible) | WINE5_GCC_COMPAT | 938fd547-d794-42a4-996c-1cc206619580 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118) | | Windows 10/11 Enterprise VDA | E3_VDA_only | d13ef257-988a-46f3-8fce-f47484dd4550 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>DATAVERSE_FOR_POWERAUTOMATE_DESKTOP (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>POWERAUTOMATE_DESKTOP_FOR_WIN (2d589a15-b171-4e61-9b5f-31d15eeb2872) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Dataverse for PAD (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>PAD for Windows (2d589a15-b171-4e61-9b5f-31d15eeb2872) |
-| Windows 365 Business 2 vCPU, 4 GB, 64 GB | CPC_B_2C_4RAM_64GB | 42e6818f-8966-444b-b7ac-0027c83fa8b5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>(CPC_B_2C_4RAM_64GB (a790cd6e-a153-4461-83c7-e127037830b6) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Business 2 vCPU, 4 GB, 64 GB (a790cd6e-a153-4461-83c7-e127037830b6) |
-| Windows 365 Business 4 vCPU, 16 GB, 128 GB (with Windows Hybrid Benefit) | CPC_B_4C_16RAM_128GB_WHB | 439ac253-bfbc-49c7-acc0-6b951407b5ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_B_4C_16RAM_128GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Business 4 vCPU, 16 GB, 128 GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) |
-| Windows 365 Enterprise 2 vCPU, 4 GB, 64 GB | CPC_E_2C_4GB_64GB | 7bb14422-3b90-4389-a7be-f1b745fc037f | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_2C_4GB_64GB (23a25099-1b2f-4e07-84bd-b84606109438) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 64 GB (23a25099-1b2f-4e07-84bd-b84606109438) |
-| Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB | CPC_E_2C_8GB_128GB | e2aebe6c-897d-480f-9d62-fff1381581f7 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_2 (3efff3fe-528a-4fc5-b1ba-845802cc764f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (3efff3fe-528a-4fc5-b1ba-845802cc764f) |
-| Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (Preview) | CPC_LVL_2 | 461cb62c-6db7-41aa-bf3c-ce78236cdb9e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_2 (3efff3fe-528a-4fc5-b1ba-845802cc764f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (3efff3fe-528a-4fc5-b1ba-845802cc764f) |
-| Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (Preview) | CPC_LVL_3 | bbb4bf6e-3e12-4343-84a1-54d160c00f40 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_256GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) |
+| Windows 365 Business 1 vCPU 2 GB 64 GB | CPC_B_1C_2RAM_64GB | 816eacd3-e1e3-46b3-83c8-1ffd37e053d9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_1C_2RAM_64GB (3b98b912-1720-4a1e-9630-c9a41dbb61d8) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 1 vCPU, 2 GB, 64 GB (3b98b912-1720-4a1e-9630-c9a41dbb61d8) |
+| Windows 365 Business 2 vCPU 4 GB 128 GB | CPC_B_2C_4RAM_128GB | 135bee78-485b-4181-ad6e-40286e311850 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_2C_4RAM_128GB (1a13832e-cd79-497d-be76-24186f55c8b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 4 GB, 128 GB (1a13832e-cd79-497d-be76-24186f55c8b0) |
+| Windows 365 Business 2 vCPU 4 GB 256 GB | CPC_B_2C_4RAM_256GB | 805d57c3-a97d-4c12-a1d0-858ffe5015d0 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_2C_4RAM_256GB (a0b1c075-51c9-4a42-b34c-308f3993bb7e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 4 GB, 256 GB (a0b1c075-51c9-4a42-b34c-308f3993bb7e) |
+| Windows 365 Business 2 vCPU 4 GB 64 GB | CPC_B_2C_4RAM_64GB | 42e6818f-8966-444b-b7ac-0027c83fa8b5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_2C_4RAM_64GB (a790cd6e-a153-4461-83c7-e127037830b6) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 4 GB, 64 GB (a790cd6e-a153-4461-83c7-e127037830b6) |
+| Windows 365 Business 2 vCPU 8 GB 128 GB | CPC_B_2C_8RAM_128GB | 71f21848-f89b-4aaa-a2dc-780c8e8aac5b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_SS_2 (9d2eed2c-b0c0-4a89-940c-bc303444a41b) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 8 GB, 128 GB (9d2eed2c-b0c0-4a89-940c-bc303444a41b) |
+| Windows 365 Business 2 vCPU 8 GB 256 GB | CPC_B_2C_8RAM_256GB | 750d9542-a2f8-41c7-8c81-311352173432 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_2C_8RAM_256GB (1a3ef005-2ef6-434b-8be1-faa56c892854) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 2 vCPU, 8 GB, 256 GB (1a3ef005-2ef6-434b-8be1-faa56c892854) |
+| Windows 365 Business 4 vCPU 16 GB 128 GB | CPC_B_4C_16RAM_128GB | ad83ac17-4a5a-4ebb-adb2-079fb277e8b9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_4C_16RAM_128GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 4 vCPU, 16 GB, 128 GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) |
+| Windows 365 Business 4 vCPU 16 GB 128 GB (with Windows Hybrid Benefit) | CPC_B_4C_16RAM_128GB_WHB | 439ac253-bfbc-49c7-acc0-6b951407b5ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_B_4C_16RAM_128GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Business 4 vCPU, 16 GB, 128 GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) |
+| Windows 365 Business 4 vCPU 16 GB 256 GB | CPC_B_4C_16RAM_256GB | b3891a9f-c7d9-463c-a2ec-0b2321bda6f9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_4C_16RAM_256GB (30f6e561-8805-41d0-80ce-f82698b72d7d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 4 vCPU, 16 GB, 256 GB (30f6e561-8805-41d0-80ce-f82698b72d7d) |
+| Windows 365 Business 4 vCPU 16 GB 512 GB | CPC_B_4C_16RAM_512GB | 1b3043ad-dfc6-427e-a2c0-5ca7a6c94a2b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_4C_16RAM_512GB (15499661-b229-4a1f-b0f9-bd5832ef7b3e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 4 vCPU, 16 GB, 512 GB (15499661-b229-4a1f-b0f9-bd5832ef7b3e) |
+| Windows 365 Business 8 vCPU 32 GB 128 GB | CPC_B_8C_32RAM_128GB | 3cb45fab-ae53-4ff6-af40-24c1915ca07b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_8C_32RAM_128GB (648005fc-b330-4bd9-8af6-771f28958ac0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 8 vCPU, 32 GB, 128 GB (648005fc-b330-4bd9-8af6-771f28958ac0) |
+| Windows 365 Business 8 vCPU 32 GB 256 GB | CPC_B_8C_32RAM_256GB | fbc79df2-da01-4c17-8d88-17f8c9493d8f | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_8C_32RAM_256GB (d7a5113a-0276-4dc2-94f8-ca9f2c5ae078) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 8 vCPU, 32 GB, 256 GB (d7a5113a-0276-4dc2-94f8-ca9f2c5ae078) |
+| Windows 365 Business 8 vCPU 32 GB 512 GB | CPC_B_8C_32RAM_512GB | 8ee402cd-e6a8-4b67-a411-54d1f37a2049 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>CPC_B_8C_32RAM_512GB (4229a0b4-7f34-4835-b068-6dc8d10be57c) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Windows 365 Business 8 vCPU, 32 GB, 512 GB (4229a0b4-7f34-4835-b068-6dc8d10be57c) |
+| Windows 365 Enterprise 1 vCPU 2 GB 64 GB | CPC_E_1C_2GB_64GB | 0c278af4-c9c1-45de-9f4b-cd929e747a2c | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_1C_2GB_64GB (86d70dbb-d4c6-4662-ba17-3014204cbb28) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 1 vCPU, 2 GB, 64 GB (86d70dbb-d4c6-4662-ba17-3014204cbb28) |
+| Windows 365 Enterprise 2 vCPU 4 GB 128 GB | CPC_E_2C_4GB_128GB | 226ca751-f0a4-4232-9be5-73c02a92555e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_1 (545e3611-3af8-49a5-9a0a-b7867968f4b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 128 GB (545e3611-3af8-49a5-9a0a-b7867968f4b0) |
+| Windows 365 Enterprise 2 vCPU 4 GB 256 GB | CPC_E_2C_4GB_256GB | 5265a84e-8def-4fa2-ab4b-5dc278df5025 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_2C_4GB_256GB (0d143570-9b92-4f57-adb5-e4efcd23b3bb) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 256 GB (0d143570-9b92-4f57-adb5-e4efcd23b3bb) |
+| Windows 365 Enterprise 2 vCPU 4 GB 64 GB | CPC_E_2C_4GB_64GB | 7bb14422-3b90-4389-a7be-f1b745fc037f | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_2C_4GB_64GB (23a25099-1b2f-4e07-84bd-b84606109438) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 64 GB (23a25099-1b2f-4e07-84bd-b84606109438) |
+| Windows 365 Enterprise 2 vCPU 8 GB 128 GB | CPC_E_2C_8GB_128GB | e2aebe6c-897d-480f-9d62-fff1381581f7 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_2 (3efff3fe-528a-4fc5-b1ba-845802cc764f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (3efff3fe-528a-4fc5-b1ba-845802cc764f) |
+| Windows 365 Enterprise 2 vCPU 8 GB 256 GB | CPC_E_2C_8GB_256GB | 1c79494f-e170-431f-a409-428f6053fa35 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_2C_8GB_256GB (d3468c8c-3545-4f44-a32f-b465934d2498) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 256 GB (d3468c8c-3545-4f44-a32f-b465934d2498) |
+| Windows 365 Enterprise 4 vCPU 16 GB 128 GB | CPC_E_4C_16GB_128GB | d201f153-d3b2-4057-be2f-fe25c8983e6f | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_128GB (2de9c682-ca3f-4f2b-b360-dfc4775db133) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 128 GB (2de9c682-ca3f-4f2b-b360-dfc4775db133) |
+| Windows 365 Enterprise 4 vCPU 16 GB 256 GB | CPC_E_4C_16GB_256GB | 96d2951e-cb42-4481-9d6d-cad3baac177e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_256GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) |
+| Windows 365 Enterprise 4 vCPU 16 GB 512 GB | CPC_E_4C_16GB_512GB | 0da63026-e422-4390-89e8-b14520d7e699 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_512GB (3bba9856-7cf2-4396-904a-00de74fba3a4) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 512 GB (3bba9856-7cf2-4396-904a-00de74fba3a4) |
+| Windows 365 Enterprise 8 vCPU 32 GB 128 GB | CPC_E_8C_32GB_128GB | c97d00e4-0c4c-4ec2-a016-9448c65de986 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_8C_32GB_128GB (2f3cdb12-bcde-4e37-8529-e9e09ec09e23) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 8 vCPU, 32 GB, 128 GB (2f3cdb12-bcde-4e37-8529-e9e09ec09e23) |
+| Windows 365 Enterprise 8 vCPU 32 GB 256 GB | CPC_E_8C_32GB_256GB | 7818ca3e-73c8-4e49-bc34-1276a2d27918 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_8C_32GB_256GB (69dc175c-dcff-4757-8389-d19e76acb45d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 8 vCPU, 32 GB, 256 GB (69dc175c-dcff-4757-8389-d19e76acb45d) |
+| Windows 365 Enterprise 8 vCPU 32 GB 512 GB | CPC_E_8C_32GB_512GB | 9fb0ba5f-4825-4e84-b239-5167a3a5d4dc | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_8C_32GB_512GB (0e837228-8250-4047-8a80-d4a34ba11658) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 8 vCPU, 32 GB, 512 GB (0e837228-8250-4047-8a80-d4a34ba11658) |
+| Windows 365 Enterprise 2 vCPU 4 GB 128 GB (Preview) | CPC_LVL_1 | bce09f38-1800-4a51-8d50-5486380ba84a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_1 (545e3611-3af8-49a5-9a0a-b7867968f4b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 128 GB (545e3611-3af8-49a5-9a0a-b7867968f4b0) |
+| Windows 365 Shared Use 2 vCPU 4 GB 64 GB | Windows_365_S_2vCPU_4GB_64GB | 1f9990ca-45d9-4c8d-8d04-a79241924ce1 | CPC_S_2C_4GB_64GB (64981bdb-a5a6-4a22-869f-a9455366d5bc) | Windows 365 Shared Use 2 vCPU, 4 GB, 64 GB (64981bdb-a5a6-4a22-869f-a9455366d5bc) |
+| Windows 365 Shared Use 2 vCPU 4 GB 128 GB | Windows_365_S_2vCPU_4GB_128GB | 90369797-7141-4e75-8f5e-d13f4b6092c1 | CPC_S_2C_4GB_128GB (51855c77-4d2e-4736-be67-6dca605f2b57) | Windows 365 Shared Use 2 vCPU, 4 GB, 128 GB (51855c77-4d2e-4736-be67-6dca605f2b57) |
+| Windows 365 Shared Use 2 vCPU 4 GB 256 GB | Windows_365_S_2vCPU_4GB_256GB | 8fe96593-34d3-49bb-aeee-fb794fed0800 | CPC_S_2C_4GB_256GB (aa8fbe7b-695c-4c05-8d45-d1dddf6f7616) | Windows 365 Shared Use 2 vCPU, 4 GB, 256 GB (aa8fbe7b-695c-4c05-8d45-d1dddf6f7616) |
+| Windows 365 Shared Use 2 vCPU 8 GB 128 GB | Windows_365_S_2vCPU_8GB_128GB | 2d21fc84-b918-491e-ad84-e24d61ccec94 | CPC_S_2C_8GB_128GB (057efbfe-a95d-4263-acb0-12b4a31fed8d) | Windows 365 for Shared Use 2 vCPU, 8 GB, 128 GB (057efbfe-a95d-4263-acb0-12b4a31fed8d) |
+| Windows 365 Shared Use 2 vCPU 8 GB 256 GB | Windows_365_S_2vCPU_8GB_256GB | 2eaa4058-403e-4434-9da9-ea693f5d96dc | CPC_S_2C_8GB_256GB (50ef7026-6174-40ba-bff7-f0e4fcddbf65) | Windows 365 for Shared Use 2 vCPU, 8 GB, 256 GB (50ef7026-6174-40ba-bff7-f0e4fcddbf65) |
+| Windows 365 Shared Use 4 vCPU 16 GB 128 GB | Windows_365_S_4vCPU_16GB_128GB | 1bf40e76-4065-4530-ac37-f1513f362f50 | CPC_S_4C_16GB_128GB (dd3801e2-4aa1-4b16-a44b-243e55497584) | Windows 365 Shared Use 4 vCPU, 16 GB, 128 GB (dd3801e2-4aa1-4b16-a44b-243e55497584) |
+| Windows 365 Shared Use 4 vCPU 16 GB 256 GB | Windows_365_S_4vCPU_16GB_256GB | a9d1e0df-df6f-48df-9386-76a832119cca | CPC_S_4C_16GB_256GB (2d1d344e-d10c-41bb-953b-b3a47521dca0) | Windows 365 Shared Use 4 vCPU, 16 GB, 256 GB (2d1d344e-d10c-41bb-953b-b3a47521dca0) |
+| Windows 365 Shared Use 4 vCPU 16 GB 512 GB | Windows_365_S_4vCPU_16GB_512GB | 469af4da-121c-4529-8c85-9467bbebaa4b | CPC_S_4C_16GB_512GB (48b82071-99a5-4214-b493-406a637bd68d) | Windows 365 Shared Use 4 vCPU, 16 GB, 512 GB (48b82071-99a5-4214-b493-406a637bd68d) |
+| Windows 365 Shared Use 8 vCPU 32 GB 128 GB | Windows_365_S_8vCPU_32GB_128GB | f319c63a-61a9-42b7-b786-5695bc7edbaf | CPC_S_8C_32GB_128GB (e4dee41f-a5c5-457d-b7d3-c309986fdbb2) | Windows 365 Shared Use 8 vCPU, 32 GB, 128 GB (e4dee41f-a5c5-457d-b7d3-c309986fdbb2) |
+| Windows 365 Shared Use 8 vCPU 32 GB 256 GB | Windows_365_S_8vCPU_32GB_256GB | fb019e88-26a0-4218-bd61-7767d109ac26 | CPC_S_8C_32GB_256GB (1e2321a0-f81c-4d43-a0d5-9895125706b8) | Windows 365 Shared Use 8 vCPU, 32 GB, 256 GB (1e2321a0-f81c-4d43-a0d5-9895125706b8) |
+| Windows 365 Shared Use 8 vCPU 32 GB 512 GB | Windows_365_S_8vCPU_32GB_512GB | f4dc1de8-8c94-4d37-af8a-1fca6675590a | CPC_S_8C_32GB_512GB (fa0b4021-0f60-4d95-bf68-95036285282a) | Windows 365 Shared Use 8 vCPU, 32 GB, 512 GB (fa0b4021-0f60-4d95-bf68-95036285282a) |
| Windows Store for Business | WINDOWS_STORE | 6470687e-a428-4b7a-bef2-8a291ad947c9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS STORE SERVICE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | | Windows Store for Business EDU Faculty | WSFB_EDU_FACULTY | c7e9d9e6-1981-4bf3-bb50-a5bdfaa06fb2 | Windows Store for Business EDU Store_faculty (aaa2cd24-5519-450f-a1a0-160750710ca1) | Windows Store for Business EDU Store_faculty (aaa2cd24-5519-450f-a1a0-160750710ca1) |
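To reconcile this table against a real tenant, the purchased SKUs and their service plans can be read from the Graph v1.0 `subscribedSkus` endpoint. A minimal sketch, assuming an `accessToken` with Organization.Read.All:

```typescript
declare const accessToken: string; // assumed: token with Organization.Read.All

// List every SKU the tenant has purchased, with its service plans.
const res = await fetch("https://graph.microsoft.com/v1.0/subscribedSkus", {
  headers: { Authorization: `Bearer ${accessToken}` },
});
const { value } = await res.json();

for (const sku of value) {
  // skuPartNumber and skuId correspond to the "String ID" and "GUID" columns above.
  console.log(sku.skuPartNumber, sku.skuId);
  for (const plan of sku.servicePlans) {
    console.log("  ", plan.servicePlanName, plan.servicePlanId);
  }
}
```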
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
Last updated 08/30/2022
Developers can use Azure AD business-to-business APIs to customize the invitatio
## Collaborate with any partner using their identities
-With Azure AD B2B, the partner uses their own identity management solution, so there is no external administrative overhead for your organization. Guest users sign in to your apps and services with their own work, school, or social identities.
+With Azure AD B2B, the partner uses their own identity management solution, so there's no external administrative overhead for your organization. Guest users sign in to your apps and services with their own work, school, or social identities.
- The partner uses their own identities and credentials, whether or not they have an Azure AD account. - You don't need to manage external accounts or passwords.
B2B collaboration is enabled by default, but comprehensive admin settings let yo
- Use [external collaboration settings](external-collaboration-settings-configure.md) to define who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory. -- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and Microsoft Azure Government or Microsoft Azure China 21Vianet.
+- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and [Microsoft Azure Government](/azure/azure-government) or [Microsoft Azure China 21Vianet](/azure/china).
## Easily invite guest users from the Azure AD portal
As an administrator, you can easily add guest users to your organization in the
- [Create a new guest user](b2b-quickstart-add-guest-users-portal.md) in Azure AD, similar to how you'd add a new user. - Assign guest users to apps or groups.-- Send an invitation email that contains a redemption link, or send a direct link to an app you want to share.
+- [Send an invitation email](invitation-email-elements.md) that contains a redemption link, or send a direct link to an app you want to share.
-![Screenshot showing the New Guest User invitation entry page.](media/what-is-b2b/add-a-b2b-user-to-azure-portal.png)
- Guest users follow a few simple [redemption steps](redemption-experience.md) to sign in.
-![Screenshot showing the Review permissions page.](media/what-is-b2b/consentscreen.png)
## Allow self-service sign-up
With a self-service sign-up user flow, you can create a sign-up experience for e
You can also use [API connectors](api-connectors-overview.md) to integrate your self-service sign-up user flows with external cloud systems. You can connect with custom approval workflows, perform identity verification, validate user-provided information, and more.
-![Screenshot showing the user flows page.](media/what-is-b2b/self-service-sign-up-user-flow-overview.png)
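An API connector is simply a REST endpoint that the user flow calls and that replies with a documented action. The handler below is a minimal sketch, not a shipped sample: the route name and the `jobTitle` check are hypothetical, and the response shapes follow the connector contract (`Continue`, `ShowBlockPage`, or `ValidationError`).

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical endpoint the sign-up user flow calls before creating the account.
app.post("/api/validate-signup", (req, res) => {
  const jobTitle: string | undefined = req.body?.jobTitle;

  if (jobTitle && jobTitle.length < 2) {
    // Reject the attribute page with a field-level validation error.
    return res.status(400).json({
      version: "1.0.0",
      action: "ValidationError",
      status: 400,
      userMessage: "Please enter a valid job title.",
    });
  }

  // Allow sign-up to continue; returned attributes could override user input here.
  return res.json({ version: "1.0.0", action: "Continue" });
});

app.listen(3000);
```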
## Use policies to securely share your apps and services
You can use authentication and authorization policies to protect your corporate
- At the application level. - For specific guest users to protect corporate apps and data.
-![Screenshot showing the Conditional Access option.](media/what-is-b2b/tutorial-mfa-policy-2.png)
## Let application and group owners manage their own guest users
You can delegate guest user management to application owners so that they can ad
- Administrators set up self-service app and group management. - Non-administrators use their [Access Panel](https://myapps.microsoft.com) to add guest users to applications or groups.
-![Screenshot showing the Access panel for a guest user.](media/what-is-b2b/access-panel-manage-app.png)
## Customize the onboarding experience for B2B guest users
Bring your external partners on board in ways customized to your organization's
## Integrate with Identity providers
-Azure AD supports external identity providers like Facebook, Microsoft accounts, Google, or enterprise identity providers. You can set up federation with identity providers so your external users can sign in with their existing social or enterprise accounts instead of creating a new account just for your application. Learn more about [identity providers for External Identities](identity-providers.md).
+Azure AD supports external identity providers like Facebook, Microsoft accounts, Google, or enterprise identity providers. You can set up federation with identity providers. This way, your external users can sign in with their existing social or enterprise accounts instead of creating a new account just for your application. Learn more about [identity providers for External Identities](identity-providers.md).
-![Screenshot showing the Identity providers page.](media/what-is-b2b/identity-providers.png)
## Integrate with SharePoint and OneDrive
active-directory 10 Secure Local Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/10-secure-local-guest.md
Azure Active Directory B2B (Azure AD B2B) allows external users to collaborate using their own identities. However, it isn't uncommon for organizations to issue local usernames and passwords to external users. This approach isn't recommended: the bring-your-own-identity (BYOI) capabilities provided by Azure AD B2B offer better security, lower cost, and less complexity than local account creation. Learn more
-[here.](https://learn.microsoft.com/azure/active-directory/fundamentals/secure-external-access-resources)
+[here.](/azure/active-directory/fundamentals/secure-external-access-resources)
If your organization currently issues local credentials that external users have to manage and would like to migrate to using Azure AD B2B instead, this document provides a guide to make the transition as seamless as possible.
If your organization currently issues local credentials that external users have
Before migrating local accounts to Azure AD B2B, admins should understand what applications and workloads these external users need to access. For example, if external users need access to an application that is hosted on-premises, admins will need to validate that the application is integrated with Azure AD and that a provisioning process is implemented to provision the user from Azure AD to the application. The existence and use of on-premises applications could be a reason why local accounts are created in the first place. Learn more about [provisioning B2B guests to on-premises
-applications.](https://learn.microsoft.com/azure/active-directory/external-identities/hybrid-cloud-to-on-premises)
+applications.](/azure/active-directory/external-identities/hybrid-cloud-to-on-premises)
All external-facing applications should have single sign-on (SSO) and provisioning integrated with Azure AD for the best end user experience.
External users should be notified that the migration will be taking place and wh
## Migrate local guest accounts to Azure AD B2B
-Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account.](https://learn.microsoft.com/azure/active-directory/external-identities/invite-internal-users)
+Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account.](/azure/active-directory/external-identities/invite-internal-users)
This can be done through the portal UI or programmatically via PowerShell or the Microsoft Graph API. Once complete, the users will no longer authenticate with their local password, but will instead authenticate with their home identity/email that was populated in the user.mail attribute. You've successfully migrated to Azure AD B2B.
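For the programmatic path, the linked article relies on the Graph invitation API pointed at the existing user object. The call below is a sketch under assumptions: an `accessToken` allowed to send invitations and update users, plus placeholder values for the address and object ID.

```typescript
declare const accessToken: string; // assumed: token allowed to send invitations and update users

// Invite the existing local account as a B2B user; the email comes from user.mail.
const res = await fetch("https://graph.microsoft.com/v1.0/invitations", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    invitedUserEmailAddress: "partner-user@fabrikam.com", // placeholder: the user.mail value
    inviteRedirectUrl: "https://myapps.microsoft.com",
    sendInvitationMessage: false,
    invitedUser: { id: "<existing-user-object-id>" },     // placeholder: the local account to convert
  }),
});
console.log(res.status, await res.json());
```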
active-directory Automate Provisioning To Applications Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-solutions.md
The Azure AD provisioning service enables organizations to [bring identities fro
### On-premises HR + joining multiple data sources
-To create a full user profile for an employee identity, organizations often merge information from multiple HR systems, databases, and other user data stores. MIM provides a rich set of [connectors](https://learn.microsoft.com/microsoft-identity-manager/supported-management-agents) and integration solutions interoperating with heterogeneous platforms both on-premises and in the cloud.
+To create a full user profile for an employee identity, organizations often merge information from multiple HR systems, databases, and other user data stores. MIM provides a rich set of [connectors](/microsoft-identity-manager/supported-management-agents) and integration solutions interoperating with heterogeneous platforms both on-premises and in the cloud.
MIM offers [rule extension](/previous-versions/windows/desktop/forefront-2010/ms698810(v=vs.100)?redirectedfrom=MSDN) and [workflow capabilities](https://microsoft.github.io/MIMWAL/) features for advanced scenarios requiring data transformation and consolidation from multiple sources. These connectors, rule extensions, and workflow capabilities enable organizations to aggregate user data in the MIM metaverse to form a single identity for each user. The identity can be [provisioned into downstream systems](/microsoft-identity-manager/microsoft-identity-manager-2016-supported-platforms) such as AD DS.
Use the numbered sections in the next two section to cross reference the followi
As customers transition identity management to the cloud, more users and groups are created directly in Azure AD. However, they still need a presence on-premises in AD DS to access various resources.
-3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](https://learn.microsoft.com/azure/active-directory/external-identities/hybrid-cloud-to-on-premises). Alternatively, customers can user [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises.
+3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](/azure/active-directory/external-identities/hybrid-cloud-to-on-premises). Alternatively, customers can use [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises.
1. When a group is created in Azure AD, it can be automatically synchronized to AD DS using [Azure AD Connect sync](../hybrid/how-to-connect-group-writeback-v2.md).
As customers transition identity management to the cloud, more users and groups
|No.| What | From | To | Technology | | - | - | - | - | - |
-| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](https://learn.microsoft.com/azure/active-directory/cloud-sync/what-is-cloud-sync) |
-| 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](https://learn.microsoft.com/azure/active-directory/hybrid/whatis-azure-ad-connect) |
+| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](/azure/active-directory/cloud-sync/what-is-cloud-sync) |
+| 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](/azure/active-directory/hybrid/whatis-azure-ad-connect) |
| 3 |Groups| Azure AD| AD DS| [Azure AD Connect Sync](../hybrid/how-to-connect-group-writeback-v2.md) | | 4 |Guest accounts| Azure AD| AD DS| [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario), [PowerShell](https://github.com/Azure-Samples/B2B-to-AD-Sync)| | 5 |Users, groups| Azure AD| Managed AD| [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) |
After users are provisioned into Azure AD, use Lifecycle Workflows (LCW) to auto
* **Leaver**: When users leave the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner.
-[Learn more about Azure AD Lifecycle Workflows](https://learn.microsoft.com/azure/active-directory/governance/what-are-lifecycle-workflows)
+[Learn more about Azure AD Lifecycle Workflows](/azure/active-directory/governance/what-are-lifecycle-workflows)
> [!Note] > For scenarios not covered by LCW, customers can leverage the extensibility of [Logic Applications](../..//logic-apps/logic-apps-overview.md).
active-directory Secure With Azure Ad Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-multiple-tenants.md
Another approach could have been to utilize the capabilities of Azure AD Connect
## Multi-tenant resource isolation
-A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Intune or Microsoft Endpoint Manager. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Intune or Microsoft Endpoint Manager. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
This will allow users to continue to use their corporate credentials, while achieving the benefits of separation as described above.
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
Subscriptions that enable [delegated resource management](../../lighthouse/conce
It's worth noting that Azure Lighthouse itself is modeled as an Azure resource provider, which means that aspects of the delegation across a tenant can be targeted through Azure Policies.
-**Microsoft 365 Lighthouse** - [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps Managed Service Providers (MSPs) secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+**Microsoft 365 Lighthouse** - [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps Managed Service Providers (MSPs) secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
## Azure resource management with Azure AD
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
If you are using the Azure portal to create a workflow, you can customize existi
1. On the **configure scope** page select the **Trigger type** and execution conditions to be used for this workflow. For more information on what can be configured, see: [Configure scope](understanding-lifecycle-workflows.md#configure-scope).
-1. Under rules, select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
+1. Under rules, select the **Property** and **Operator**, and enter a **value**. The following picture gives an example of a rule being set up for a sales department; a Microsoft Graph sketch of the same scope rule follows the image below. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
:::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of Lifecycle Workflows template scope configuration options.":::
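The same scope rule can be supplied when a workflow is created through the Graph beta endpoint. The sketch below is illustrative only: the display name is invented, the task definition GUID is a placeholder to be taken from the task catalog, and an `accessToken` with LifecycleWorkflows.ReadWrite.All is assumed.

```typescript
declare const accessToken: string; // assumed: token with LifecycleWorkflows.ReadWrite.All

const res = await fetch(
  "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      category: "joiner",
      displayName: "Onboard Sales department", // hypothetical name
      isEnabled: true,
      isSchedulingEnabled: false,
      executionConditions: {
        "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
        scope: {
          "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
          rule: "(department eq 'Sales')", // the scope rule configured above
        },
        trigger: {
          "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
          timeBasedAttribute: "employeeHireDate",
          offsetInDays: -7, // run seven days before the hire date
        },
      },
      tasks: [
        {
          isEnabled: true,
          taskDefinitionId: "<task-definition-guid>", // placeholder from the task catalog
          displayName: "Example task",
          arguments: [],
        },
      ],
    }),
  }
);
console.log(res.status, await res.json());
```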
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
After deleting workflows, you can view them on the **Deleted Workflows (Preview)
## Delete a workflow using Microsoft Graph
-To delete a workflow using API via Microsoft Graph, see: [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta).
+To delete a workflow using the API via Microsoft Graph, see: [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta&preserve-view=true).
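For reference, the delete call itself is a single request against the beta endpoint. A minimal sketch, assuming an `accessToken` with LifecycleWorkflows.ReadWrite.All and a placeholder workflow ID:

```typescript
declare const accessToken: string; // assumed: token with LifecycleWorkflows.ReadWrite.All

const workflowId = "<workflow-id>"; // placeholder

const res = await fetch(
  `https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/${workflowId}`,
  { method: "DELETE", headers: { Authorization: `Bearer ${accessToken}` } }
);
console.log(res.status); // 204 No Content on success
```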
To view
active-directory Identity Governance Applications Existing Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-existing-users.md
The first time your organization uses these cmdlets for this scenario, you need
1. If there were users who couldn't be located in Azure AD, or weren't active and able to sign in, but you want to have their access reviewed or their attributes updated in the database, you need to update or create Azure AD users for them. You can create users in bulk by using either: - A CSV file, as described in [Bulk create users in the Azure AD portal](../enterprise-users/users-bulk-add.md)
- - The [New-MgUser](/powershell/module/microsoft.graph.users/new-mguser?view=graph-powershell-1.0#examples) cmdlet
+ - The [New-MgUser](/powershell/module/microsoft.graph.users/new-mguser?view=graph-powershell-1.0#examples&preserve-view=true) cmdlet
Ensure that these new users are populated with the attributes required for Azure AD to later match them to the existing users in the application.
active-directory Lifecycle Workflow Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md
For a guide on supplying this information to a custom task extension via Microso
## Next steps -- [customTaskExtension resource type](/graph/api/resources/identitygovernance-customtaskextension?view=graph-rest-beta)
+- [customTaskExtension resource type](/graph/api/resources/identitygovernance-customtaskextension?view=graph-rest-beta&preserve-view=true)
- [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md) - [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md)
active-directory Lifecycle Workflow History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-history.md
Separating processing of the workflow from the tasks is important because, in a
## Next steps -- [userProcessingResult resource type](/graph/api/resources/identitygovernance-userprocessingresult?view=graph-rest-beta)-- [taskReport resource type](/graph/api/resources/identitygovernance-taskreport?view=graph-rest-beta)-- [run resource type](/graph/api/resources/identitygovernance-run?view=graph-rest-beta)-- [taskProcessingResult resource type](/graph/api/resources/identitygovernance-taskprocessingresult?view=graph-rest-beta)
+- [userProcessingResult resource type](/graph/api/resources/identitygovernance-userprocessingresult?view=graph-rest-beta&preserve-view=true)
+- [taskReport resource type](/graph/api/resources/identitygovernance-taskreport?view=graph-rest-beta&preserve-view=true)
+- [run resource type](/graph/api/resources/identitygovernance-run?view=graph-rest-beta&preserve-view=true)
+- [taskProcessingResult resource type](/graph/api/resources/identitygovernance-taskprocessingresult?view=graph-rest-beta&preserve-view=true)
- [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md) - [Lifecycle Workflow templates](lifecycle-workflow-templates.md)
active-directory Lifecycle Workflow Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md
The default specific parameters for the **Post-Offboarding of an employee** temp
## Next steps -- [workflowTemplate resource type](/graph/api/resources/identitygovernance-workflowtemplate?view=graph-rest-beta)
+- [workflowTemplate resource type](/graph/api/resources/identitygovernance-workflowtemplate?view=graph-rest-beta&preserve-view=true)
- [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md) - [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory Lifecycle Workflow Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-versioning.md
Detailed **Version information** is as follows:
## Next steps -- [workflowVersion resource type](/graph/api/resources/identitygovernance-workflowversion?view=graph-rest-beta)
+- [workflowVersion resource type](/graph/api/resources/identitygovernance-workflowversion?view=graph-rest-beta&preserve-view=true)
- [Manage workflow Properties (Preview)](manage-workflow-properties.md) - [Manage workflow versions (Preview)](manage-workflow-tasks.md)
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
Use the following steps to create a pre-hire workflow that will generate a TAP a
:::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png":::
- 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
+ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will target all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
:::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png":::
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
Use the following steps to create a scheduled leaver workflow that will configur
7. Next, you will configure the basic information about the workflow. This information includes when the workflow will trigger, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**. :::image type="content" source="media/tutorial-lifecycle-workflows/leaver-basics.png" alt-text="Screenshot of leaver template basics information for a workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-basics.png":::
- 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
+ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will target all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
:::image type="content" source="media/tutorial-lifecycle-workflows/leaver-scope.png" alt-text="Screenshot of reviewing scope details for a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-scope.png"::: 9. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you are finished.
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
You can add extra expressions using **And/Or** to create complex conditionals, a
[![Extra expressions.](media/understanding-lifecycle-workflows/workflow-8.png)](media/understanding-lifecycle-workflows/workflow-8.png#lightbox) > [!NOTE]
-> For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters)
+> For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
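For example, a scope that combines two conditions with **And** might compile to a rule expression like the following sketch (this assumes both `department` and `city` appear in the supported-properties list linked in the note above):

```
(department eq 'Sales') and (city eq 'Seattle')
```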
For more information, see [Create a lifecycle workflow.](create-lifecycle-workflow.md)
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
# Configure group claims for applications by using Azure Active Directory
-Azure Active Directory (Azure AD) can provide a user's group membership information in tokens for use within applications. This feature supports two main patterns:
+Azure Active Directory (Azure AD) can provide a user's group membership information in tokens for use within applications. This feature supports three main patterns:
- Groups identified by their Azure AD object identifier (OID) attribute - Groups identified by the `sAMAccountName` or `GroupSID` attribute for Active Directory-synchronized groups and users
+- Groups identified by their Display Name attribute for cloud-only groups (Preview)
> [!IMPORTANT] > The number of groups emitted in a token is limited to 150 for SAML assertions and 200 for JWT, including nested groups. In larger organizations, the number of groups where a user is a member might exceed the limit that Azure AD will add to a token. Exceeding a limit can lead to unpredictable results. For workarounds to these limits, read more in [Important caveats for this functionality](#important-caveats-for-this-functionality).
Azure Active Directory (Azure AD) can provide a user's group membership informat
## Important caveats for this functionality - Support for use of `sAMAccountName` and security identifier (SID) attributes synced from on-premises is designed to enable moving existing applications from Active Directory Federation Services (AD FS) and other identity providers. Groups managed in Azure AD don't contain the attributes necessary to emit these claims.-- In order to avoid the number of groups limit if your users have large numbers of group memberships, you can restrict the groups emitted in claims to the relevant groups for the application. Read more about emitting groups assigned to the application for [JWT tokens](..\develop\active-directory-optional-claims.md#configuring-groups-optional-claims) and [SAML tokens](#add-group-claims-to-tokens-for-saml-applications-using-sso-configuration). If assigning groups to your applications is not possible, you can also configure a [group filter](#group-filtering) to reduce the number of groups emitted in the claim. Group filtering applies to tokens emitted for apps where group claims and filtering was configured in the **Enterprise apps** blade in the portal.
+- To avoid exceeding the group limit when your users have large numbers of group memberships, you can restrict the groups emitted in claims to the relevant groups for the application. Read more about emitting groups assigned to the application for [JWT tokens](..\develop\active-directory-optional-claims.md#configuring-groups-optional-claims) and [SAML tokens](#add-group-claims-to-tokens-for-saml-applications-using-sso-configuration). If assigning groups to your applications is not possible, you can also configure a [group filter](#group-filtering) to reduce the number of groups emitted in the claim. Group filtering applies to tokens emitted for apps where group claims and filtering were configured in the **Enterprise apps** blade in the portal.
- Group claims have a five-group limit if the token is issued through the implicit flow. Tokens requested via the implicit flow will have a `"hasgroups":true` claim only if the user is in more than five groups. - We recommend basing in-app authorization on application roles rather than groups when:
To configure group claims for a gallery or non-gallery SAML application via sing
1. Open **Enterprise Applications**, select the application in the list, select **Single Sign On configuration**, and then select **User Attributes & Claims**.
-1. Select **Add a group claim**.
+2. Select **Add a group claim**.
![Screenshot that shows the page for user attributes and claims, with the button for adding a group claim selected.](media/how-to-connect-fed-group-claims/group-claims-ui-1.png)
-1. Use the options to select which groups should be included in the token.
+3. Use the options to select which groups should be included in the token.
![Screenshot that shows the Group Claims window with group options.](media/how-to-connect-fed-group-claims/group-claims-ui-2.png)
To configure group claims for a gallery or non-gallery SAML application via sing
For more information about managing group assignment to applications, see [Assign a user or group to an enterprise app](../../active-directory/manage-apps/assign-user-or-group-access-portal.md).
+## Emit cloud-only group display name in token (Preview)
+
+You can configure the group claim to include the group display name for cloud-only groups.
+
+1. Open **Enterprise Applications**, select the application in the list, select **Single Sign On configuration**, and then select **User Attributes & Claims**.
+
+2. If you already have a group claim configured, select it in the **Additional claims** section. Otherwise, you can add the group claim as described in the previous steps.
+
+3. For the group type emitted in the token, select **Groups assigned to the application**:
+
+ ![Screenshot that shows the Group Claims window, with the option for groups assigned to the application selected.](media/how-to-connect-fed-group-claims/group-claims-ui-4-1.png)
+
+4. To emit the group display name only for cloud groups, select **Cloud-only group display names (Preview)** in the **Source attribute** dropdown:
+
+ ![Screenshot that shows the Group Claims source attribute dropdown, with the option for configuring cloud only group names selected.](media/how-to-connect-fed-group-claims/group-claims-ui-8.png)
+
+5. For a hybrid setup, to emit the on-premises group attribute for synced groups and the display name for cloud groups, select the desired on-premises source attribute and select the **Emit group name for cloud-only groups (Preview)** checkbox:
+
+ ![Screenshot that shows the configuration to emit on-premises group attribute for synced groups and display name for cloud groups.](media/how-to-connect-fed-group-claims/group-claims-ui-9.png)
++ ### Set advanced options #### Customize group claim name
You can also configure group claims in the [optional claims](../../active-direct
| `name` | Must be `"groups"`. | | `source` | Not used. Omit or specify `null`. | | `essential` | Not used. Omit or specify `false`. |
- | `additionalProperties` | List of additional properties. Valid options are `"sam_account_name"`, `"dns_domain_and_sam_account_name"`, `"netbios_domain_and_sam_account_name"`, and `"emit_as_roles"`. |
+ | `additionalProperties` | List of additional properties. Valid options are `"sam_account_name"`, `"dns_domain_and_sam_account_name"`, `"netbios_domain_and_sam_account_name"`, `"cloud_displayname"`, and `"emit_as_roles"`. |
In `additionalProperties`, only one of `"sam_account_name"`, `"dns_domain_and_sam_account_name"`, or `"netbios_domain_and_sam_account_name"` is required. If more than one is present, the first is used and any others are ignored. Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add `"emit_as_roles"` to `additionalProperties`. The group values will be emitted in the role claim.
+ To emit the group display name for cloud-only groups, add `"cloud_displayname"` to `additionalProperties`. This option works only when `groupMembershipClaims` is set to `ApplicationGroup`.
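Putting the table and the note together, a minimal application manifest sketch for emitting cloud-only group display names might look like the following (a hedged example, not the only valid shape; `saml2Token` could equally be `idToken` or `accessToken`, depending on which token type your app consumes):

```json
{
  "groupMembershipClaims": "ApplicationGroup",
  "optionalClaims": {
    "saml2Token": [
      {
        "name": "groups",
        "additionalProperties": ["cloud_displayname"]
      }
    ]
  }
}
```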
+ > [!NOTE] > If you use `"emit_as_roles"`, any configured application roles that the user is assigned to will not appear in the role claim.
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Group writeback allows you to write cloud groups back to your on-premises Active Directory instance by using Azure Active Directory (Azure AD) Connect sync. You can use this feature to manage groups in the cloud, while controlling access to on-premises applications and resources. > [!NOTE]
-> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
+> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](/azure/active-directory/hybrid/how-to-connect-group-writeback-v2#understand-limitations-of-public-preview) before you enable this functionality.
There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory instance as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities:
To view the existing writeback settings on Microsoft 365 groups in the portal, g
[![Screenshot of Microsoft 365 group properties.](media/how-to-connect-group-writeback/group-2.png)](media/how-to-connect-group-writeback/group-2.png#lightbox)
-You can also view the writeback state via Microsoft Graph. For more information, see [Get group](/graph/api/group-get?tabs=http&view=graph-rest-beta).
+You can also view the writeback state via Microsoft Graph. For more information, see [Get group](/graph/api/group-get?tabs=http&view=graph-rest-beta&preserve-view=true).
> Example: `GET https://graph.microsoft.com/beta/groups?$filter=groupTypes/any(c:c eq 'Unified')&$select=id,displayName,writebackConfiguration`
Finally, you can view the writeback state via PowerShell by using the [Microsof
For groups that haven't been created yet, you can view whether or not they'll be written back automatically.
-To see the default behavior in your environment for newly created groups, use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta) resource type in Microsoft Graph.
+To see the default behavior in your environment for newly created groups, use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta&preserve-view=true) resource type in Microsoft Graph.
> Example: `GET https://graph.microsoft.com/beta/Settings`
You can also use the PowerShell cmdlet [AzureADDirectorySetting](../enterprise-u
> If `directorySetting` is returned with a `NewUnifiedGroupWritebackDefault` value of `false`, Microsoft 365 groups *won't automatically* be enabled for writeback when they're created. If the value is not specified or is set to `true`, newly created Microsoft 365 groups *will automatically* be written back. ## Discover if Active Directory has been prepared for Exchange
-To verify if Active Directory has been prepared for Exchange, see [Prepare Active Directory and domains for Exchange Server](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019#how-do-you-know-this-worked).
+To verify if Active Directory has been prepared for Exchange, see [Prepare Active Directory and domains for Exchange Server](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019&preserve-view=true#how-do-you-know-this-worked).
## Meet prerequisites for public preview The following are prerequisites for group writeback:
The following are prerequisites for group writeback:
- An Azure AD Premium 1 license - Azure AD Connect version 2.0.89.0 or later
-An optional prerequisite is Exchange Server 2016 CU15 or later. You need it only for configuring cloud groups with an Exchange hybrid. For more information, see [Configure Microsoft 365 Groups with on-premises Exchange hybrid](/exchange/hybrid-deployment/set-up-microsoft-365-groups#prerequisites). If you haven't [prepared Active Directory for Exchange](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019), mail-related attributes of groups won't be written back.
+An optional prerequisite is Exchange Server 2016 CU15 or later. You need it only for configuring cloud groups with an Exchange hybrid. For more information, see [Configure Microsoft 365 Groups with on-premises Exchange hybrid](/exchange/hybrid-deployment/set-up-microsoft-365-groups#prerequisites). If you haven't [prepared Active Directory for Exchange](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019&preserve-view=true), mail-related attributes of groups won't be written back.
## Choose the right approach The right deployment approach for your organization depends on the current state of group writeback in your environment and the desired writeback behavior.
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
- Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) guidance to set up alerts to monitor changes to the trust established between your IdP and Azure AD. - Enable multifactor authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using Azure AD Connect is that if an attacker can get control over the Azure AD Connect server they can manipulate users in Azure AD. To prevent an attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to e.g. reset a user's password using Azure AD Connect they still cannot bypass the second factor. - Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud managed objects to Azure AD Connect, but it comes with certain security risks. If you do not require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch).-- Disable Hard Match Takeover. Hard match takeover allows Azure AD Connect to take control of a cloud managed object and changing the source of authority for the object to Active Directory. Once the source of authority of an object is taken over by Azure AD Connect, changes made to the Active Directory object that is linked to the Azure AD object will overwrite the original Azure AD data - including the password hash, if Password Hash Sync is enabled. An attacker could use this capability to take over control of cloud managed objects. To mitigate this risk, [disable hard match takeover](https://learn.microsoft.com/powershell/module/msonline/set-msoldirsyncfeature?view=azureadps-1.0#example-3-block-cloud-object-takeover-through-hard-matching-for-the-tenant).
+- Disable Hard Match Takeover. Hard match takeover allows Azure AD Connect to take control of a cloud managed object and change the source of authority for the object to Active Directory. Once the source of authority of an object is taken over by Azure AD Connect, changes made to the Active Directory object that is linked to the Azure AD object will overwrite the original Azure AD data - including the password hash, if Password Hash Sync is enabled. An attacker could use this capability to take over control of cloud managed objects. To mitigate this risk, [disable hard match takeover](/powershell/module/msonline/set-msoldirsyncfeature?view=azureadps-1.0&preserve-view=true#example-3-block-cloud-object-takeover-through-hard-matching-for-the-tenant).
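Both hardening steps above can be applied with the MSOnline PowerShell module. A minimal sketch, with feature names as documented for the `Set-MsolDirSyncFeature` cmdlet linked above:

```powershell
# Connect with a Global Administrator account first.
Connect-MsolService

# Disable Soft Matching for the tenant.
Set-MsolDirSyncFeature -Feature BlockSoftMatch -Enable $true

# Block cloud object takeover through hard matching.
Set-MsolDirSyncFeature -Feature BlockCloudObjectTakeoverThroughHardMatch -Enable $true
```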
### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors). * If you use a different installation of SQL Server, these requirements apply:
- * Azure AD Connect support all mainstream supported SQL Server versions up to SQL Server 2019. Please refer to the [SQL Server lifecycle article](https://learn.microsoft.com/lifecycle/products/?products=sql-server) to verify the support status of your SQL Server version. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance.
+ * Azure AD Connect supports all mainstream supported SQL Server versions up to SQL Server 2019. Please refer to the [SQL Server lifecycle article](/lifecycle/products/?products=sql-server) to verify the support status of your SQL Server version. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance.
* You must use a case-insensitive SQL collation. These collations are identified with a \_CI_ in their name. Using a case-sensitive collation identified by \_CS_ in their name *isn't supported*. * You can have only one sync engine per SQL instance. Sharing a SQL instance with FIM/MIM Sync, DirSync, or Azure AD Sync *isn't supported*.
active-directory How To Connect Modify Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-modify-group-writeback.md
To configure directory settings to disable automatic writeback of newly created
New-AzureADDirectorySetting -DirectorySetting $Setting ``` -- Microsoft Graph: Use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta) resource type.
+- Microsoft Graph: Use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta&preserve-view=true) resource type.
### Disable writeback for each existing Microsoft 365 group
To configure directory settings to disable automatic writeback of newly created
- PowerShell: Use the [Microsoft Identity Tools PowerShell module](https://www.powershellgallery.com/packages/MSIdentityTools/2.0.16). For example: `Get-mggroup -filter "groupTypes/any(c:c eq 'Unified')" | Update-MsIdGroupWritebackConfiguration -WriteBackEnabled $false` -- Microsoft Graph: Use a [group object](/graph/api/group-update?tabs=http&view=graph-rest-beta).
+- Microsoft Graph: Use a [group object](/graph/api/group-update?tabs=http&view=graph-rest-beta&preserve-view=true).
## Delete groups when they're disabled for writeback or soft deleted
active-directory How To Connect Sync Configure Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-configure-filtering.md
To change domain-based filtering, run the installation wizard: [domain and OU fi
## Organizational unit–based filtering To change OU-based filtering, run the installation wizard: [domain and OU filtering](how-to-connect-install-custom.md#domain-and-ou-filtering). The installation wizard automates all the tasks that are documented in this topic.
+> [!IMPORTANT]
+> If you explicitly select an OU for synchronization, Azure AD Connect adds the DistinguishedName of that OU to the inclusion list for the domain's sync scope. However, if you later rename that OU in Active Directory, the DistinguishedName of the OU changes, and consequently Azure AD Connect will no longer consider that OU in sync scope. This doesn't cause an immediate issue, but upon a full import step, Azure AD Connect reevaluates the sync scope and deletes (that is, obsoletes) any objects out of sync scope, which can potentially cause an unexpected mass deletion of objects in Azure AD. To prevent this issue, after renaming an OU, run the Azure AD Connect wizard and reselect the OU so that it's included in sync scope again.
## Attribute-based filtering Make sure that you're using the November 2015 ([1.0.9125](reference-connect-version-history.md)) or later build for these steps to work.
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Windows 7 and 8.1 devices are not affected by this issue after UPN changes.
**Known Issues**
-Your organization may use [MAM app protection policies](https://learn.microsoft.com/mem/intune/apps/app-protection-policy) to protect corporate data in apps on end users' devices.
+Your organization may use [MAM app protection policies](/mem/intune/apps/app-protection-policy) to protect corporate data in apps on end users' devices.
MAM app protection policies are currently not resilient to UPN changes. UPN changes can break the connection between existing MAM enrollments and active users in MAM-integrated applications, resulting in undefined behavior. This could leave data in an unprotected state. **Workaround**
-IT admins should [issue a selective wipe](https://learn.microsoft.com/mem/intune/apps/apps-selective-wipe) to impacted users following UPN changes. This will force impacted end users to reauthenticate and reenroll with their new UPNs.
+IT admins should [issue a selective wipe](/mem/intune/apps/apps-selective-wipe) to impacted users following UPN changes. This will force impacted end users to reauthenticate and reenroll with their new UPNs.
## Microsoft Authenticator known issues and workarounds
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
If you want all the latest features and updates, check this page and install wha
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
+## 2.1.20.0
+
+### Release status:
+11/9/2022: Released for download
+
+### Bug fixes
+
 - We fixed a bug where the new employeeLeaveDateTime attribute was not syncing correctly in version 2.1.19.0. If the incorrect attribute was already used in a rule, update the rule with the new attribute, remove any objects in the Azure AD connector space that have the incorrect attribute by using the `Remove-ADSyncCSObject` cmdlet, and then run a full sync cycle.
+ ## 2.1.19.0 ### Release status:
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-t
### Functional changes
+ - We added a new attribute `employeeLeaveDateTime` for syncing to Azure AD. To learn more about how to use this attribute to manage your users' life cycles, please refer to [this article](/azure/active-directory/governance/how-to-lifecycle-workflow-sync-attributes).
### Bug fixes
active-directory Application Sign In Unexpected User Consent Prompt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
Further prompts can be expected in various scenarios:
* The user who originally consented to the application was an administrator, but they didn't consent on-behalf of the entire organization.
-* The application is using [incremental and dynamic consent](../azuread-dev/azure-ad-endpoint-comparison.md#incremental-and-dynamic-consent) to request further permissions after consent was initially granted. Incremental and dynamic consent is often used when optional features of an application require permissions beyond those required for baseline functionality.
+* The application is using [incremental and dynamic consent](../develop/permissions-consent-overview.md#consent) to request further permissions after consent was initially granted. Incremental and dynamic consent is often used when optional features of an application require permissions beyond those required for baseline functionality.
* Consent was revoked after being granted initially.
active-directory Datawiza Azure Ad Sso Oracle Peoplesoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-peoplesoft.md
The scenario solution has the following components:
- **Oracle PeopleSoft application**: Legacy application that will be protected by Azure AD and DAB.
-Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](https://learn.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
+Understand the SP-initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
## Prerequisites
Ensure the following prerequisites are met.
- An Azure AD tenant linked to the Azure subscription.
- - See, [Quickstart: Create a new tenant in Azure Active Directory.](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
+ - See [Quickstart: Create a new tenant in Azure Active Directory](/azure/active-directory/fundamentals/active-directory-access-create-new-tenant).
- Docker and Docker Compose
Ensure the following prerequisites are met.
- User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory.
- - See, [Azure AD Connect sync: Understand and customize synchronization](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis).
+ - See [Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis).
- An account with Azure AD and the Application administrator role
- - See, [Azure AD built-in roles, all roles](https://learn.microsoft.com/azure/active-directory/roles/permissions-reference#all-roles).
+ - See [Azure AD built-in roles, all roles](/azure/active-directory/roles/permissions-reference#all-roles).
- An Oracle PeopleSoft environment
For the Oracle PeopleSoft application to recognize the user correctly, there's a
## Enable Azure AD Multi-Factor Authentication To provide an extra level of security for sign-ins, enforce multi-factor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure
-portal](https://learn.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa).
+portal](/azure/active-directory/authentication/tutorial-enable-azure-mfa).
1. Sign in to the Azure portal as a **Global Administrator**.
To confirm Oracle PeopleSoft application access occurs correctly, a prompt appea
- [Watch the video - Enable SSO/MFA for Oracle PeopleSoft with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90). -- [Configure Datawiza and Azure AD for secure hybrid access](https://learn.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad)
+- [Configure Datawiza and Azure AD for secure hybrid access](/azure/active-directory/manage-apps/datawiza-with-azure-ad)
-- [Configure Datawiza with Azure AD B2C](https://learn.microsoft.com/azure/active-directory-b2c/partner-datawiza)
+- [Configure Datawiza with Azure AD B2C](/azure/active-directory-b2c/partner-datawiza)
- [Datawiza documentation](https://docs.datawiza.com/)
active-directory Debug Saml Sso Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/debug-saml-sso-issues.md
To resolve the error, follow these steps, or watch this [short video about how t
- Claims issued in the token - Certificate used to sign the token.
- For more information on the SAML response, see [Single Sign-on SAML protocol](../develop/single-sign-on-saml-protocol.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json).
+ For more information on the SAML response, see [Single Sign-on SAML protocol](../develop/single-sign-on-saml-protocol.md).
1. Now that you've reviewed the SAML response, see [Error on an application's page after signing in](application-sign-in-problem-application-error.md) for guidance on how to resolve the problem. 1. If you're still not able to sign in successfully, you can ask the application vendor what is missing from the SAML response.
active-directory Tutorial Windows Vm Access Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md
This section shows how to create a contained user in the database that represent
- [Universal Authentication with SQL Database and Azure Synapse Analytics (SSMS support for MFA)](/azure/azure-sql/database/authentication-mfa-ssms-overview) - [Configure and manage Azure Active Directory authentication with SQL Database or Azure Synapse Analytics](/azure/azure-sql/database/authentication-aad-configure)
-SQL DB requires unique Azure AD display names. With this, the Azure AD accounts such as users, groups and Service Principals (applications), and VM names enabled for managed identity must be uniquely defined in AAD regarding their display names. SQL DB checks the Azure AD display name during T-SQL creation of such users and if it is not unique, the command fails requesting to provide a unique Azure AD display name for a given account.
+SQL DB requires unique Azure AD display names. Accordingly, Azure AD accounts such as users, groups, and service principals (applications), as well as VM names enabled for managed identity, must have unique display names in Azure AD. SQL DB checks the Azure AD display name during T-SQL creation of such users; if it isn't unique, the command fails and requests that a unique Azure AD display name be provided for the given account.
**To create a contained user:**
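As a minimal T-SQL sketch (run against the target database while signed in with an Azure AD admin account; `myVM` is a hypothetical VM name standing in for your VM's display name), the contained user is created from the external provider and then granted a role:

```sql
-- Create a contained database user for the VM's system-assigned managed identity.
CREATE USER [myVM] FROM EXTERNAL PROVIDER;

-- Grant read access; adjust role membership to the permissions your app needs.
ALTER ROLE db_datareader ADD MEMBER [myVM];
```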
Code running in the VM can now get a token using its system-assigned managed ide
## Access data
-This section shows how to get an access token using the VM's system-assigned managed identity and use it to call Azure SQL. Azure SQL natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. You use the **access token** method of creating a connection to SQL. This is part of Azure SQL's integration with Azure AD, and is different from supplying credentials on the connection string.
+This section shows how to get an access token using the VM's system-assigned managed identity and use it to call Azure SQL. Azure SQL natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. This method doesn't require supplying credentials on the connection string.
-Here's a .NET code example of opening a connection to SQL using an access token. The code must run on the VM to be able to access the VM's system-assigned managed identity's endpoint. **.NET Framework 4.6** or higher or **.NET Core 2.2** or higher is required to use the access token method. Replace the values of AZURE-SQL-SERVERNAME and DATABASE accordingly. Note the resource ID for Azure SQL is `https://database.windows.net/`.
+Here's a .NET code example of opening a connection to SQL using Active Directory Managed Identity authentication. The code must run on the VM to be able to access the VM's system-assigned managed identity's endpoint. **.NET Framework 4.6.2** or higher or **.NET Core 3.1** or higher is required to use this method. Replace the values of AZURE-SQL-SERVERNAME and DATABASE accordingly and add a NuGet reference to the Microsoft.Data.SqlClient library.
```csharp
-using System.Net;
-using System.IO;
-using System.Data.SqlClient;
-using System.Web.Script.Serialization;
-
-//
-// Get an access token for SQL.
-//
-HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://database.windows.net/");
-request.Headers["Metadata"] = "true";
-request.Method = "GET";
-string accessToken = null;
+using Microsoft.Data.SqlClient;
try {
- // Call managed identities for Azure resources endpoint.
- HttpWebResponse response = (HttpWebResponse)request.GetResponse();
-
- // Pipe response Stream to a StreamReader and extract access token.
- StreamReader streamResponse = new StreamReader(response.GetResponseStream());
- string stringResponse = streamResponse.ReadToEnd();
- JavaScriptSerializer j = new JavaScriptSerializer();
- Dictionary<string, string> list = (Dictionary<string, string>) j.Deserialize(stringResponse, typeof(Dictionary<string, string>));
- accessToken = list["access_token"];
-}
-catch (Exception e)
-{
- string errorText = String.Format("{0} \n\n{1}", e.Message, e.InnerException != null ? e.InnerException.Message : "Acquire token failed");
-}
- //
-// Open a connection to the server using the access token.
+// Open a connection to the server using Active Directory Managed Identity authentication.
//
-if (accessToken != null) {
- string connectionString = "Data Source=<AZURE-SQL-SERVERNAME>; Initial Catalog=<DATABASE>;";
- SqlConnection conn = new SqlConnection(connectionString);
- conn.AccessToken = accessToken;
- conn.Open();
-}
+string connectionString = "Data Source=<AZURE-SQL-SERVERNAME>; Initial Catalog=<DATABASE>; Authentication=Active Directory Managed Identity; Encrypt=True";
+SqlConnection conn = new SqlConnection(connectionString);
+conn.Open();
``` >[!NOTE]
Alternatively, a quick way to test the end-to-end setup without having to write
```powershell $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
- $SqlConnection.ConnectionString = "Data Source = <AZURE-SQL-SERVERNAME>; Initial Catalog = <DATABASE>"
+ $SqlConnection.ConnectionString = "Data Source = <AZURE-SQL-SERVERNAME>; Initial Catalog = <DATABASE>; Encrypt=True;"
$SqlConnection.AccessToken = $AccessToken $SqlConnection.Open() ```
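The snippet above assumes `$AccessToken` has already been populated. A minimal sketch for obtaining it from the VM's Azure Instance Metadata Service endpoint (run on the VM itself; the endpoint address and API version are the standard IMDS values):

```powershell
# Request a token for Azure SQL from the managed identity endpoint (IMDS).
$response = Invoke-WebRequest -UseBasicParsing -Headers @{Metadata = "true"} `
    -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fdatabase.windows.net%2F'

# Extract the access token from the JSON response.
$AccessToken = ($response.Content | ConvertFrom-Json).access_token
```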
active-directory Amazon Web Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-web-service-tutorial.md
To configure the integration of AWS Single-Account Access into Azure AD, you nee
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for AWS Single-Account Access
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
To configure the integration of Atlassian Cloud into Azure AD, you need to add A
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
To configure the integration of AWS IAM Identity Center into Azure AD, you need
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for AWS IAM Identity Center
active-directory Cisco Anyconnect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-anyconnect.md
To configure the integration of Cisco AnyConnect into Azure AD, you need to add
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for Cisco AnyConnect
active-directory Docusign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-tutorial.md
To configure the integration of DocuSign into Azure AD, you must add DocuSign fr
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for DocuSign
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
To configure the integration of FortiGate SSL VPN into Azure AD, you need to add
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for FortiGate SSL VPN
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md
To configure the integration of Google Cloud / G Suite Connector by Microsoft in
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD single sign-on for Google Cloud / G Suite Connector by Microsoft
active-directory Saml Toolkit Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saml-toolkit-tutorial.md
To configure the integration of Azure AD SAML Toolkit into Azure AD, you need to
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for Azure AD SAML Toolkit
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md
To configure the integration of ServiceNow into Azure AD, you need to add Servic
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for ServiceNow
active-directory Slack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-tutorial.md
To configure the integration of Slack into Azure AD, you need to add Slack from
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO for Slack
active-directory Memo 22 09 Enterprise Wide Identity Management System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-enterprise-wide-identity-management-system.md
Devices integrated with Azure AD can be either [hybrid joined devices](../device
* [Azure Linux virtual machines](../devices/howto-vm-sign-in-azure-ad-linux.md)
-* [Azure Virtual Desktop](https://learn.microsoft.com/azure/architecture/example-scenario/wvd/azure-virtual-desktop-azure-active-directory-join)
+* [Azure Virtual Desktop](/azure/architecture/example-scenario/wvd/azure-virtual-desktop-azure-active-directory-join)
* [Virtual desktop infrastructure](../devices/howto-device-identity-virtual-desktop-infrastructure.md)
active-directory Nist Authenticator Assurance Level 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-3.md
Microsoft offers authentication methods that enable you to meet required NIST au
| FIDO2 security key<br>or<br> Smart card (Active Directory Federation Services [AD FS])<br>or<br>Windows Hello for Business with hardware TPM| Multifactor cryptographic hardware | | **Additional methods**| | | Password<br> and<br>(Hybrid Azure AD joined with hardware TPM <br>or <br> Azure AD joined with hardware TPM)| Memorized secret<br>and<br> Single-factor cryptographic hardware |
-| Password <br>and<br>Single-factor one-time password hardware (from an OTP manufacturer) <br>and<br>(Hybrid Azure AD joined with software TPM <br>or <br> Azure AD joined with software TPM <br>or<br> [Compliant managed device](https://learn.microsoft.com/mem/intune/protect/device-compliance-get-started))| Memorized secret <br>and<br>Single-factor one-time password hardware<br> and<br>Single-factor cryptographic software |
+| Password <br>and<br>Single-factor one-time password hardware (from an OTP manufacturer) <br>and<br>(Hybrid Azure AD joined with software TPM <br>or <br> Azure AD joined with software TPM <br>or<br> [Compliant managed device](/mem/intune/protect/device-compliance-get-started))| Memorized secret <br>and<br>Single-factor one-time password hardware<br> and<br>Single-factor cryptographic software |
### Our recommendations
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
Before you can continue with the steps below you need to meet the following requ
## Scenario description
-When onboarding users you can remove the need for error prone manual onboarding steps by using Verified ID with A10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+When onboarding users, you can remove the need for error-prone manual onboarding steps by using Verified ID with AU10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
active-directory Howto Verifiable Credentials Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md
You can use Entra Verified ID with LexisNexis Risk Solutions to enable faster on
## Scenario description
-Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
:::image type="content" source="media/verified-id-partner-au10tix/vc-solution-architecture-diagram.png" alt-text="Diagram of the verifiable credential solution.":::
active-directory Partner Vu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md
To learn more about VU Security and its complete set of solutions, visit
To get started with the VU Identity Card, ensure the following prerequisites are met: -- A tenant [configured](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/verifiablee-credentials-configure-tenant)
+- A tenant [configured](/azure/active-directory/verifiable-credentials/verifiablee-credentials-configure-tenant)
for Entra Verified ID service. - If you don't have an existing tenant, you can [create an Azure
VU Identity Card works as a link between users who need to access an application
Verifiable credentials can be used to enable faster and easier user onboarding by replacing some human interactions. For example, a user or employee who wants to create or remotely access an account can use a Verified ID through VU Identity Card to verify their identity without using vulnerable or overly complex passwords or the requirement to be on-site.
-Learn more about [account onboarding](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+Learn more about [account onboarding](/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
In this account onboarding scenario, VU plays the Trusted ID proofing issuer role.
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
Last updated 07/07/2022
Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It's important you apply the latest security releases, or upgrade to get the latest features. Before learning about auto-upgrade, make sure you understand upgrade fundamentals by reading [Upgrade an AKS cluster][upgrade-aks-cluster].
+> [!NOTE]
+> Any upgrade operation, whether performed manually or automatically, will upgrade the node image version if not already on the latest. The latest version is contingent on a full AKS release, and can be determined by visiting the [AKS release tracker][release-tracker].
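To check which node image version a node pool is currently running before or after such an operation, a query like the following can help (a sketch; the resource group, cluster, and node pool names are illustrative):

```azurecli
# Show the current node image version for a node pool (illustrative names).
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --query nodeImageVersion
```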
+ ## Why use auto-upgrade Auto-upgrade provides a set-once-and-forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest features or patches from AKS and upstream Kubernetes.
AKS follows a strict versioning window with regard to supportability. With prope
## Using auto-upgrade
-Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the selected channel.
+Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the selected channel. When making changes to auto-upgrade, allow 24 hours for the changes to take effect.
The following upgrade channels are available:
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrad
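For reference, enabling auto-upgrade on an existing cluster follows this general shape (a sketch; `stable` is just one illustrative channel choice):

```azurecli
# Set the auto-upgrade channel on an existing cluster (illustrative names and channel).
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --auto-upgrade-channel stable
```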
## Using auto-upgrade with Planned Maintenance
-If you're using Planned Maintenance and Auto-Upgrade, your upgrade will start during your specified maintenance window. For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
+If you're using Planned Maintenance and Auto-Upgrade, your upgrade will start during your specified maintenance window.
+
+> [!NOTE]
+> To ensure proper functionality, use a maintenance window of four hours or more.
+
+For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
## Best practices for auto-upgrade
The following best practices will help maximize your success when using auto-upg
<!-- EXTERNAL LINKS --> [pdb-best-practices]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
+[release-tracker]: release-tracker.md
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
This article assumes that you have an existing AKS cluster with 1.21 or later ve
If you want to interact with Azure disks on an AKS cluster running version 1.20 or earlier, see the [Kubernetes plugin for Azure disks][kubernetes-disks].
+The Azure Disks CSI driver has a limit of 32 volumes per node. The exact volume count varies with the size of the nodes in the node pool. Run the following command to determine the number of volumes that can be allocated per node:
+
+```console
+kubectl get CSINode <nodename> -o yaml
+```
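If only the allocatable volume count is needed rather than the full object, a jsonpath filter can narrow the output (a sketch, assuming the driver is registered on the node as `disk.csi.azure.com`):

```console
kubectl get CSINode <nodename> -o jsonpath='{.spec.drivers[?(@.name=="disk.csi.azure.com")].allocatable.count}'
```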
+ ## Storage class static provisioning The following table describes the Storage Class parameters for the Azure disk CSI driver static provisioning:
aks Azure Disks Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disks-dynamic-pv.md
Last updated 07/21/2022
A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to dynamically create persistent volumes with Azure Disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster. > [!NOTE]
-> An Azure Disks can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. If you need to share a persistent volume across multiple nodes, use [Azure Files][azure-files-pvc].
+> An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. If you need to share a persistent volume across multiple nodes, use [Azure Files][azure-files-pvc].
For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
This article assumes that you have an existing AKS cluster with 1.21 or later ve
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+The Azure Disks CSI driver has a limit of 32 volumes per node. The exact volume count varies with the size of the nodes in the node pool. Run the following command to determine the number of volumes that can be allocated per node:
+
+```console
+kubectl get CSINode <nodename> -o yaml
+```
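As a lighter-weight alternative to reading the full YAML, the allocatable count can be filtered from the output (a sketch, assuming the Azure Disks CSI driver is registered on the node):

```console
kubectl get CSINode <nodename> -o yaml | grep -A1 allocatable
```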
+ ## Built-in storage classes A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes Storage Classes][kubernetes-storage-classes].
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
Either the load balancers and services IP address can be dynamically assigned, o
You can create both *internal* and *external* load balancers. Internal load balancers are only assigned a private IP address, so they can't be accessed from the Internet.
+Learn more about Services in the [Kubernetes docs][k8s-service].
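To illustrate the internal/external distinction, the sketch below requests an internal load balancer by adding the Azure-specific annotation to an otherwise ordinary Service manifest (the Service and app names are placeholders; without the annotation, AKS provisions an external load balancer with a public IP):

```bash
# Apply a Service that requests an internal Azure load balancer (placeholder names).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
EOF
```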
+ ## Azure virtual networks In AKS, you can deploy a cluster that uses one of the following two network models:
For more information on core Kubernetes and AKS concepts, see the following arti
<!-- LINKS - External --> [cni-networking]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md [kubenet]: https://kubernetes.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet
+[k8s-service]: https://kubernetes.io/docs/concepts/services-networking/service/
<!-- LINKS - Internal --> [aks-http-routing]: http-application-routing.md
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Kubernetes typically treats individual pods as ephemeral, disposable resources.
Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use: [Azure Disks][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview].
+> [!NOTE]
+> The Azure Disks CSI driver has a limit of 32 volumes per node. Other Azure Storage services don't have an equivalent limit.
+ ### Azure Disks Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disk types include:
Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disks types includ
* Standard HDDs > [!TIP]
->For most production and development workloads, use Premium SSD.
+> For most production and development workloads, use Premium SSD.
-Since Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single node. For storage volumes that can be accessed by pods on multiple nodes simultaneously, use Azure Files.
+Because Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single node. For storage volumes that can be accessed by pods on multiple nodes simultaneously, use Azure Files.
### Azure Files
-Use *Azure Files* to mount a Server Message Block (SMB) version 3.1.1 share or Network File System (NFS) version 4.1 share backed by an Azure storage accounts to pods. Files let you share data across multiple nodes and pods and can use:
+Use [Azure Files][azure-files-volume] to mount a Server Message Block (SMB) version 3.1.1 share or Network File System (NFS) version 4.1 share backed by an Azure storage account to pods. Azure Files lets you share data across multiple nodes and pods and can use:
* Azure Premium storage backed by high-performance SSDs * Azure Standard storage backed by regular HDDs
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
Title: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster recommendations: false description: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster-+
If you navigated away from the **Deployment is in progress** page, the following
1. Save aside the values for **Login server**, **Registry name**, **Username**, and **password**. You may use the copy icon at the right of each field to copy the value of that field to the system clipboard. 1. Navigate again to the resource group into which you deployed the resources. 1. In the **Settings** section, select **Deployments**.
-1. Select the bottom-most deployment in the list. The **Deployment name** will match the publisher ID of the offer. It will contain the string **ibm**.
+1. Select the bottom-most deployment in the list. The **Deployment name** will match the publisher ID of the offer. It will contain the string `ibm`.
1. In the left pane, select **Outputs**. 1. Using the same copy technique as with the preceding values, save aside the values for the following outputs:
- * **cmdToConnectToCluster**
-
+ * `cmdToConnectToCluster`
+ * `appDeploymentTemplateYaml`
+
+1. Paste the value of `appDeploymentTemplateYaml` into a Bash shell, append `| grep secretName`, and execute. This command will output the Ingress TLS secret name, such as `- secretName: secret785e2c`. Save aside the value for `secretName` from the output.
These values will be used later in this article. Note that several other useful commands are listed in the outputs.
java-app
├─ src/main/ │ ├─ aks/ │ │ ├─ db-secret.yaml
-│ │ ├─ openlibertyapplication.yaml
+│ │ ├─ openlibertyapplication-agic.yaml
├─ docker/ │ │ ├─ Dockerfile │ │ ├─ Dockerfile-local
java-app
The directories *java*, *resources*, and *webapp* contain the source code of the sample application. The code declares and uses a data source named `jdbc/JavaEECafeDB`.
-In the *aks* directory, we placed two deployment files. *db-secret.xml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication.yaml* is used to deploy the application image.
+In the *aks* directory, we placed three deployment files. *db-secret.yaml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication-agic.yaml* is used to deploy the application image.
In directory *liberty/config*, the *server.xml* file is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
export DB_SERVER_NAME=<Server name>.database.windows.net
export DB_NAME=<Database name> export DB_USER=<Server admin login>@<Server name> export DB_PASSWORD=<Server admin password>
+export INGRESS_TLS_SECRET=<Ingress TLS secret name>
mvn clean install ```
Use your local IDE, or the `liberty:run` command, to run and test the project locally
cd <path-to-your-repo>/java-app mvn liberty:run ```
-
+ 1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser and verify the application is accessible and all functions are working. 1. Press `Ctrl+C` to stop `liberty:run` mode.
After successfully running the app in the Liberty Docker container, you can run
```bash cd <path-to-your-repo>/java-app/target
-# If you are running with Open Liberty
+# If you're running with Open Liberty
docker build -t javaee-cafe:v1 --pull --file=Dockerfile .
-# If you are running with WebSphere Liberty
+# If you're running with WebSphere Liberty
docker build -t javaee-cafe:v1 --pull --file=Dockerfile-wlp . ```
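Before pushing the image anywhere, a quick local smoke test can confirm it starts (a sketch, assuming Liberty's default HTTP port 9080 used earlier in this article; the DB_* variables exported earlier may also need to be passed with `-e` for full functionality):

```bash
# Run the freshly built image locally and expose Liberty's default HTTP port.
docker run -it --rm -p 9080:9080 javaee-cafe:v1
```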
The following steps deploy and test the application.
1. Connect to the AKS cluster.
- Paste the value of **cmdToConnectToCluster** into a bash shell.
+ Paste the value of **cmdToConnectToCluster** into a Bash shell and execute.
1. Apply the DB secret.
The following steps deploy and test the application.
1. Apply the deployment file. ```bash
- kubectl apply -f openlibertyapplication.yaml
+ kubectl apply -f openlibertyapplication-agic.yaml
``` 1. Wait for the pods to be restarted.
The following steps deploy and test the application.
You should see output similar to the following to indicate that all the pods are running. ```bash
- NAME READY STATUS RESTARTS AGE
- javaee-cafe-cluster-67cdc95bc-2j2gr 1/1 Running 0 29s
- javaee-cafe-cluster-67cdc95bc-fgtt8 1/1 Running 0 29s
- javaee-cafe-cluster-67cdc95bc-h47qm 1/1 Running 0 29s
+ NAME READY STATUS RESTARTS AGE
+ javaee-cafe-cluster-agic-67cdc95bc-2j2gr 1/1 Running 0 29s
+ javaee-cafe-cluster-agic-67cdc95bc-fgtt8 1/1 Running 0 29s
+ javaee-cafe-cluster-agic-67cdc95bc-h47qm 1/1 Running 0 29s
``` 1. Verify the results.
- 1. Get endpoint of the deployed service
+ 1. Get the **ADDRESS** of the Ingress resource deployed with the application
```bash
- kubectl get service
+ kubectl get ingress
```
- 1. Go to `http://EXTERNAL-IP` to test the application.
-
+ Copy the value of **ADDRESS** from the output; this is the frontend public IP address of the deployed Azure Application Gateway. (A jsonpath shortcut for retrieving the address is sketched after these steps.)
+
+ 1. Go to `https://<ADDRESS>` to test the application.
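The jsonpath shortcut mentioned above can retrieve the address directly (a sketch; `<ingress-name>` is whatever name `kubectl get ingress` listed):

```bash
kubectl get ingress <ingress-name> -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```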
## Clean up resources
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
If you'd like to increase the speed of upgrades, use the `--max-surge` value to
The following command sets the max surge value for performing a node image upgrade: ```azurecli
-az aks nodepool upgrade \
+az aks nodepool update \
--resource-group myResourceGroup \ --cluster-name myAKSCluster \ --name mynodepool \ --max-surge 33% \
- --node-image-only \
--no-wait ```
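With the surge value in place, the node image upgrade itself is still triggered separately; the general shape is sketched below (illustrative names):

```azurecli
# Upgrade only the node image, leaving the Kubernetes version unchanged.
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-image-only
```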
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
# Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster (preview)
-Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows that will update your control plane as well as your kube-system Pods on a VMSS instance and minimize workload impact. Once scheduled, all your maintenance will occur during the window you selected. You can schedule one or more weekly windows on your cluster by specifying a day or time range on a specific day. Maintenance Windows are configured using the Azure CLI.
+Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows that will update your control plane as well as your kube-system pods on a VMSS instance, and minimize workload impact. Once scheduled, all your maintenance will occur during the window you selected. You can schedule one or more weekly windows on your cluster by specifying a day or time range on a specific day. Maintenance windows are configured using the Azure CLI.
## Before you begin
This article assumes that you have an existing AKS cluster. If you need an AKS c
### Limitations
-When using Planned Maintenance, the following restrictions apply:
+When you use Planned Maintenance, the following restrictions apply:
- AKS reserves the right to break these windows for unplanned/reactive maintenance operations that are urgent or critical. - Currently, maintenance operations are considered *best-effort only* and are not guaranteed to occur within a specified window.
The following example output shows the maintenance window from 1:00am to 2:00am
} ```
-To allow maintenance any time during a day, omit the *start-hour* parameter. For example, the following command sets the maintenance window for the full day every Monday:
+To allow maintenance anytime during a day, omit the *start-hour* parameter. For example, the following command sets the maintenance window for the full day every Monday:
```azurecli-interactive az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSCluster --name default --weekday Monday
az aks maintenanceconfiguration delete -g MyResourceGroup --cluster-name myAKSCl
Planned Maintenance will detect if you are using Cluster Auto-Upgrade and schedule your upgrades during your maintenance window automatically. For more details about Cluster Auto-Upgrade, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
+> [!NOTE]
+> To ensure proper functionality, use a maintenance window of four hours or more.
+ ## Next steps - To get started with upgrading your AKS cluster, see [Upgrade an AKS cluster][aks-upgrade]
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Part of the AKS cluster lifecycle involves performing periodic upgrades to the l
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade]. To upgrade a specific node pool without doing a Kubernetes cluster upgrade, see [Upgrade a specific node pool][specific-nodepool].
+> [!NOTE]
+> Any upgrade operation, whether performed manually or automatically, will upgrade the node image version if not already on the latest. The latest version is contingent on a full AKS release, and can be determined by visiting the [AKS release tracker][release-tracker].
+
+> [!NOTE]
+> Performing upgrade operations requires the `Microsoft.ContainerService/managedClusters/agentPools/write` RBAC role. For more on Azure RBAC roles, see the [Azure resource provider operations]
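To check whether an identity already holds a role containing this action, listing its assignments at the cluster scope is one option (a sketch; the assignee and scope values are placeholders):

```azurecli
# List role assignments for an identity at the cluster scope (placeholder values).
az role assignment list \
    --assignee user@contoso.com \
    --scope /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster \
    --output table
```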
+ ## Before you begin * If you're using Azure CLI, this article requires that you're running the Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
By default, AKS configures upgrades to surge with one extra node. A default valu
For example, a max surge value of 100% provides the fastest possible upgrade process (doubling the node count) but also causes all nodes in the node pool to be drained simultaneously. You may wish to use a higher value such as this for testing environments. For production node pools, we recommend a max_surge setting of 33%.
-AKS accepts both integer values and a percentage value for max surge. An integer such as "5" indicates five extra nodes to surge. A value of "50%" indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of 1% and a maximum of 100%. A percent value is rounded up to the nearest node count. If the max surge value is higher than the current node count at the time of upgrade, the current node count is used for the max surge value.
+AKS accepts both integer values and a percentage value for max surge. An integer such as "5" indicates five extra nodes to surge. A value of "50%" indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of 1% and a maximum of 100%. A percent value is rounded up to the nearest node count. If the max surge value is higher than the required number of nodes to be upgraded, the number of nodes to be upgraded is used for the max surge value.
During an upgrade, the max surge value can be a minimum of 1 and a maximum value equal to the number of nodes in your node pool. You can set larger values, but the maximum number of nodes used for max surge won't be higher than the number of nodes in the pool at the time of upgrade.
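As a worked example of the rounding rule: in a 10-node pool, a max surge of 33% yields 10 * 0.33 = 3.3, which rounds up to 4 surge nodes. Setting the value looks roughly like this (a sketch with illustrative names):

```azurecli
# Set max surge to 33% on an existing node pool (illustrative names).
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --max-surge 33%
```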
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[upgrade-cluster]: #upgrade-an-aks-cluster [planned-maintenance]: planned-maintenance.md [aks-auto-upgrade]: auto-upgrade-cluster.md
+[release-tracker]: release-tracker.md
[specific-nodepool]: node-image-upgrade.md#upgrade-a-specific-node-pool
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
Last updated 03/25/2021
# Preview - Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) > [!Important]
-> The feature described in this article, pod security policy (preview), will be deprecated starting with Kubernetes version 1.21, and it will be removed in version 1.25. AKS will mark the pod security policy as Deprecated with the AKS API on 04-01-2023. You can migrate pod security policy to pod security admission controller before the deprecation deadline.
+> The feature described in this article, pod security policy (preview), will be deprecated starting with Kubernetes version 1.21, and it will be removed in version 1.25. AKS will mark the pod security policy as Deprecated with the AKS API on 06-01-2023. You can migrate pod security policy to pod security admission controller before the deprecation deadline.
After pod security policy (preview) is deprecated, you must have already migrated to the Pod Security Admission controller, or disabled the feature on any existing clusters that use it, in order to perform future cluster upgrades and stay within Azure support.
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
To use Application Insights, [create an instance of the Application Insights ser
> + A logger for all APIs. > > Specifying *both*:
-> + if they are different loggers, both of them will be used (multiplexing logs).
-> + if they are the same loggers with different settings, the single API logger (more granular level) will override the one for all APIs.
+> - By default, the single API logger (more granular level) will override the one for all APIs.
+> - If the loggers configured at the two levels are different, and you need both loggers to receive telemetry (multiplexing), please contact Microsoft Support.
## What data is added to Application Insights
To improve performance issues, skip:
+ Learn more about [Azure Application Insights](/azure/application-insights/). + Consider [logging with Azure Event Hubs](api-management-howto-log-event-hubs.md).
-+ - Learn about visualizing data from Application Insights using [Azure Managed Grafana](visualize-using-managed-grafana-dashboard.md)
++ Learn about visualizing data from Application Insights using [Azure Managed Grafana](visualize-using-managed-grafana-dashboard.md)
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
You have now configured a native client application that can request access your
### Daemon client application (service-to-service calls)
-Your application can acquire a token to call a Web API hosted in your App Service or Function app on behalf of itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md) grant.
+Your application can acquire a token to call a Web API hosted in your App Service or Function app on behalf of itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) grant.
1. In the [Azure portal], select **Active Directory** > **App registrations** > **New registration**. 1. In the **Register an application** page, enter a **Name** for your daemon app registration.
Your application can acquire a token to call a Web API hosted in your App Servic
1. After the app registration is created, copy the value of **Application (client) ID**. 1. Select **Certificates & secrets** > **New client secret** > **Add**. Copy the client secret value shown in the page. It won't be shown again.
-You can now [request an access token using the client ID and client secret](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#use-the-access-token-to-access-the-secured-resource), and App Service Authentication / Authorization will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated.
+You can now [request an access token using the client ID and client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#use-a-token), and App Service Authentication / Authorization will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated.
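For illustration, the raw token request against the Microsoft identity platform v2.0 endpoint looks roughly like the following (a sketch; the tenant ID, client credentials, and target Application ID URI are placeholders, and on the v2.0 endpoint the target is expressed as a `scope` ending in `/.default` rather than a `resource` parameter):

```bash
# Request a token with the client credentials grant (placeholder values throughout).
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<daemon-client-id>" \
  -d "client_secret=<daemon-client-secret>" \
  -d "scope=api://<target-app-client-id>/.default"
```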
At present, this allows _any_ client application in your Azure AD tenant to request an access token and authenticate to the target app. If you also want to enforce _authorization_ to allow only certain client applications, you must perform some additional configuration.
At present, this allows _any_ client application in your Azure AD tenant to requ
1. Select the app registration you created earlier. If you don't see the app registration, make sure that you've [added an App Role](../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md). 1. Under **Application permissions**, select the App Role you created earlier, and then select **Add permissions**. 1. Make sure to click **Grant admin consent** to authorize the client application to request the permission.
-1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
+1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
1. Within the target App Service or Function app code, you can now validate that the expected roles are present in the token (this is not performed by App Service Authentication / Authorization). For more information, see [Access user claims](configure-authentication-user-identities.md#access-user-claims-in-app-code). You have now configured a daemon client application that can access your App Service app using its own identity.
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md
Content-Type: application/json
} ```
-This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#service-to-service-access-token-response). To access Key Vault, you will then add the value of `access_token` to a client connection with the vault.
+This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#successful-response). To access Key Vault, you will then add the value of `access_token` to a client connection with the vault.
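For example, a raw REST call to the vault carries the token as a bearer header, roughly as sketched here (the vault and secret names are placeholders):

```bash
# Read a secret using the acquired token (placeholder vault and secret names).
curl "https://<vault-name>.vault.azure.net/secrets/<secret-name>?api-version=7.3" \
  -H "Authorization: Bearer <access_token>"
```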
# [.NET](#tab/dotnet)
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
Follow these steps to create an Azure PostgreSQL database in your subscription.
--resource-group $RESOURCE_GROUP \ --name $DB_SERVER_NAME \ --location $LOCATION \
- --admin-user $DB_USERNAME \
- --admin-password $DB_PASSWORD \
+ --admin-user $ADMIN_USERNAME \
+ --admin-password $ADMIN_PASSWORD \
--sku-name GP_Gen5_2 ```
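The `ADMIN_USERNAME` and `ADMIN_PASSWORD` variables are assumed to be exported earlier in the tutorial; if you run the snippet standalone, something like the following is needed first (illustrative values only):

```bash
# Define the admin credentials the create command references (illustrative values).
export ADMIN_USERNAME=myadmin
export ADMIN_PASSWORD='<a-strong-password>'
```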
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
Last updated 09/26/2022 + # Tutorial: Connect to a PostgreSQL Database from Java Tomcat App Service without secrets using a managed identity
git clone https://github.com/Azure-Samples/Passwordless-Connections-for-Java-App
cd Passwordless-Connections-for-Java-Apps/Tomcat/ ```
-## Create an Azure Postgres DB
+## Create an Azure Database for PostgreSQL
Follow these steps to create an Azure Database for Postgres in your subscription. The sample app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
Follow these steps to create an Azure Database for Postgres in your subscription
az group create --name $RESOURCE_GROUP --location $LOCATION ```
-1. Create an Azure Postgres Database server. The server is created with an administrator account, but it won't be used because we'll use the Azure Active Directory (Azure AD) admin account to perform administrative tasks.
+1. Create an Azure Database for PostgreSQL server. The server is created with an administrator account, but it won't be used because we'll use the Azure Active Directory (Azure AD) admin account to perform administrative tasks.
### [Flexible Server](#tab/flexible)
Follow these steps to build a WAR file and deploy to Azure App Service on Tomcat
--type war ```
-## Connect Postgres Database with identity connectivity
-
-Next, connect your app to a Postgres Database with a system-assigned managed identity using Service Connector.
+## Connect the Postgres database with identity connectivity
### [Flexible Server](#tab/flexible)
+> [!NOTE]
+> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
+
+Next, connect your app to a Postgres database with a system-assigned managed identity using Service Connector.
+ To do this, run the [az webapp connection create](/cli/azure/webapp/connection/create#az-webapp-connection-create-postgres-flexible) command. ```azurecli-interactive
application-gateway Custom Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/custom-error.md
Previously updated : 04/12/2022 Last updated : 11/09/2022
Custom error pages are supported for the following two scenarios:
- **Maintenance page** - This custom error page is sent instead of a 502 bad gateway page. It's shown when Application Gateway has no backend to route traffic to. For example, when there's scheduled maintenance or when an unforeseen issue affects backend pool access. - **Unauthorized access page** - This custom error page is sent instead of a 403 unauthorized access page. It's shown when the Application Gateway WAF detects malicious traffic and blocks it.
-If an error originates from the backend servers, then it's passed along unmodified back to the caller. A custom error page isn't displayed. Application gateway can display a custom error page when a request can't reach the backend.
+If an error originates from backend targets of your backend pool, the error is passed along unmodified back to the caller. Custom error pages will only be displayed when a request can't reach the backend or when WAF is in prevention mode and blocks the request.
Custom error pages can be defined at the global level and the listener level:
To create a custom error page, you must have:
- error page should be internet accessible and return 200 response. - error page should be in \*.htm or \*.html extension type. - error page size must be less than 1 MB.
+- error page must be hosted in Azure blob storage.
-You may reference either internal or external images/CSS for this HTML file. For externally referenced resources, use absolute URLs that are publicly accessible. Be aware of the HTML file size when using internal images (Base64-encoded inline image) or CSS. Relative links with files in the same location are currently not supported.
+You may reference either internal or external images/CSS for this HTML file. For externally referenced resources, use absolute URLs that are publicly accessible. Be aware of the HTML file size when using base64-encoded inline images, JavaScript, or CSS.
-After you specify an error page, the application gateway downloads it from the defined location and saves it to the local application gateway cache. Then, that HTML page is served by the application gateway, whereas the externally referenced resources are fetched directly by the client. To modify an existing custom error page, you must point to a different blob location in the application gateway configuration. The application gateway doesn't periodically check the blob location to fetch new versions.
+> [!Note]
+> Relative links with files in the same location are not supported.
+
+After you specify an error page, application gateway verifies internet connectivity to the file and will save the file to the local application gateway cache. The HTML page will be served by the application gateway, whereas externally referenced resources (such as images, JavaScript, and CSS files) are fetched directly by the client. To modify an existing custom error page, you must point to a different blob location in the application gateway configuration. Application gateway doesn't periodically check the blob location to fetch new versions.
## Portal configuration
applied-ai-services Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/disaster-recovery.md
If your app or business depends on the use of a Form Recognizer custom model, we
## Prerequisites 1. Two Form Recognizer Azure resources in different Azure regions. If you don't have them, go to the Azure portal and [create a new Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer).
-1. The key, endpoint URL, and subscription ID for your Form Recognizer resource. You can find these values on the resource's **Overview** tab in the [Azure portal](https://ms.portal.azure.com/#home).
+1. The key, endpoint URL, and subscription ID for your Form Recognizer resource. You can find these values on the resource's **Overview** tab in the [Azure portal](https://portal.azure.com/#home).
::: moniker-end
azure-app-configuration Rest Api Authentication Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-azure-ad.md
The Azure AD authority is the endpoint you use for acquiring an Azure AD token.
### Authentication libraries
-Azure provides a set of libraries, called Azure Active Directory Authentication Libraries, to simplify the process of acquiring an Azure AD token. Azure builds these libraries for multiple languages. For more information, see the [documentation](../active-directory/azuread-dev/active-directory-authentication-libraries.md).
+The Microsoft Authentication Library (MSAL) simplifies the process of acquiring an Azure AD token. MSAL is available for multiple languages. For more information, see the [documentation](../active-directory/develop/msal-overview.md).
## Errors
azure-arc Diagnose Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md
Title: "Diagnose connection issues for Azure Arc-enabled Kubernetes clusters" Previously updated : 11/04/2022 Last updated : 11/10/2022 description: "Learn how to resolve common issues when connecting Kubernetes clusters to Azure Arc."
If you are experiencing issues connecting a cluster to Azure Arc, it's probably
Review this flowchart in order to diagnose your issue when attempting to connect a cluster to Azure Arc without a proxy server. More details about each step are provided below. ### Does the Azure identity have sufficient permissions?
When you [create your support request](/azure/azure-portal/supportability/how-to
If you are using a proxy server on at least one machine, complete the first five steps of the non-proxy flowchart (through resource provider registration) for basic troubleshooting steps. Then, if you are still encountering issues, review the next flowchart for additional troubleshooting steps. More details about each step are provided below. ### Is the machine executing commands behind a proxy server?
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
Currently, Azure Arc allows you to manage the following resource types hosted ou
* [Servers](servers/overview.md): Manage Windows and Linux physical servers and virtual machines hosted outside of Azure. * [Kubernetes clusters](kubernetes/overview.md): Attach and configure Kubernetes clusters running anywhere, with multiple supported distributions. * [Azure data services](dat): Run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. SQL Managed Instance
-and PostgreSQL server (preview) services are currently available.
+and PostgreSQL (preview) services are currently available.
* [SQL Server](/sql/sql-server/azure-arc/overview): Extend Azure services to SQL Server instances hosted outside of Azure. * Virtual machines (preview): Provision, resize, delete and manage virtual machines based on [VMware vSphere](./vmware-vsphere/overview.md) or [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) and enable VM self-service through role-based access.
Some of the key scenarios that Azure Arc supports are:
* Run [Azure data services](../azure-arc/kubernetes/custom-locations.md) on any Kubernetes environment as if it runs in Azure (specifically Azure SQL Managed Instance and Azure Database for PostgreSQL server, with benefits such as upgrades, updates, security, and monitoring). Use elastic scale and apply updates without any application downtime, even without continuous connection to Azure.
-* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc-enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc-enabled Data Services](./dat).
+* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc-enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc-enabled data services](./dat).
* Perform virtual machine lifecycle and management operations for [VMware vSphere](./vmware-vsphere/overview.md) and [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) environments.
The following Azure Arc control plane functionality is offered at no extra cost:
* Resource organization through Azure management groups and tags * Searching and indexing through Azure Resource Graph
-* Access and security through Azure RBAC and subscriptions
+* Access and security through Azure role-based access control (RBAC)
* Environments and automation through templates and extensions * Update management
For information, see the [Azure pricing page](https://azure.microsoft.com/pricin
* Learn about [Azure Arc-enabled Kubernetes](./kubernetes/overview.md). * Learn about [Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/). * Learn about [Azure Arc-enabled SQL Server](/sql/sql-server/azure-arc/overview).
-* Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) and [Azure Arc-enabled Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines)
-* Learn about [Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md)
-* Experience Azure Arc-enabled services by exploring the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/).
+* Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) and [Azure Arc-enabled Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
+* Learn about [Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md).
+* Experience Azure Arc by exploring the [Azure Arc Jumpstart](https://aka.ms/AzureArcJumpstart).
+* Learn about best practices and design patterns through the various [Azure Arc Landing Zone Accelerators](https://aka.ms/ArcLZAcceleratorReady).
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service. Previously updated : 09/26/2022 Last updated : 11/09/2022
This article provides information on troubleshooting and resolving issues that m
### Logs
-For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the client machine from which you've deployed the Azure Arc resource bridge.
+For issues encountered with Arc resource bridge, collect logs for further investigation using the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the same deployment machine that was used to run commands to deploy the Arc resource bridge. If there is a problem collecting logs, most likely the deployment machine is unable to reach the Appliance VM, and the network administrator needs to allow communication between the deployment machine and the Appliance VM.
-The `az arcappliance logs` command requires SSH to the Azure Arc resource bridge VM. The SSH key is saved to the client machine where the deployment of the appliance was performed from. To use a different client machine to run the Azure CLI command, you need to make sure the following files are copied to the new client machine:
+The `az arcappliance logs` command requires SSH to the Azure Arc resource bridge VM. The SSH key is saved to the deployment machine. To use a different machine to run the logs command, make sure the following files are copied to the machine in the same location:
```azurecli $HOME\.KVA\.ssh\logkey.pub $HOME\.KVA\.ssh\logkey ```
-To run the `az arcappliance logs` command, the path to the kubeconfig must be provided. The kubeconfig is generated after successful completion of the `az arcappliance deploy` command and is placed in the same directory as the CLI command in ./kubeconfig or as specified in `--outfile` (if the parameter was passed).
+To run the `az arcappliance logs` command, the Appliance VM IP, Control Plane IP, or kubeconfig can be passed in the corresponding parameter. If `az arcappliance deploy` was not completed, then the kubeconfig file may be empty, so it can't be used for logs collection. In this case, the Appliance VM IP address can be used to collect logs.
-If `az arcappliance deploy` was not completed, then the kubeconfig file may exist but may be empty or missing data, so it can't be used for logs collection. In this case, the Appliance VM IP address can be used to collect logs instead. The Appliance VM IP is assigned when the `az arcappliance deploy` command is run, after Control Plane Endpoint reconciliation. For example, if the message displayed in the command window reads "Appliance IP is 10.97.176.27", the command to use for logs collection would be:
+The Appliance VM IP is assigned when the `az arcappliance deploy` command is run, after Control Plane endpoint reconciliation. For example, if the message displayed in the command window reads "Appliance IP is 192.168.1.1", the command to use for logs collection would be:
```azurecli
-az arcappliance logs hci --out-dir c:\logs --ip 10.97.176.27
-```
-
-To view the logs, run the following command:
-
-```azurecli
-az arcappliance logs <provider> --kubeconfig <path to kubeconfig>
-```
-
-To save the logs to a destination folder, run the following command:
-
-```azurecli
-az arcappliance logs <provider> --kubeconfig <path to kubeconfig> --out-dir <path to specified output directory>
+az arcappliance logs hci --ip 192.168.1.1 --out-dir c:\logs
``` To specify the IP address of the Azure Arc resource bridge virtual machine, run the following command:
az arcappliance logs <provider> --out-dir <path to specified output directory> -
### Remote PowerShell is not supported
-If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote PowerShell, you may experience various problems. For instance, you might see an [EOF error when using the `logs` command](#logs-command-fails-with-eof-error), or an [authentication handshake failure error when trying to install the resource bridge on an Azure Stack HCI cluster](#authentication-handshake-failure).
+If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote PowerShell, you may experience various problems. For instance, you might see an [authentication handshake failure error when trying to install the resource bridge on an Azure Stack HCI cluster](#authentication-handshake-failure) or another type of error.
Using `az arcappliance` commands from remote PowerShell is not currently supported. Instead, sign in to the node through Remote Desktop Protocol (RDP) or use a console session.
To resolve this error, the .wssd\python and .wssd\kva folders in the user profil
When you run the Azure CLI commands, the following error may be returned: *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign in to Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again by using the `az login` command.
-### `logs` command fails with EOF error
-
-When running the `az arcappliance logs` Azure CLI command, you may see an error: `Appliance logs command failed with error: EOF when reading a line.` This may occur in scenarios similar to the following:
-
-```azurecli
-az arcappliance logs hci --kubeconfig .\kubeconfig --out-dir c:\temp --ip 192.168.200.127
-+ CategoryInfo : NotSpecified: (WARNING: Comman...s/CLI_refstatus:String) [], RemoteException
-+ FullyQualifiedErrorId : NativeCommandError
-
-Please enter cloudservice FQDN/IP: Appliance logs command failed with error: EOF when reading a line[v-Host1]: PS C:\Users\AzureStackAdminD\Documents> az arcappliance logs hci --kubeconfig .\kubeconfig --out-dir c:\temp --ip 192.168.200.127
-+ CategoryInfo : NotSpecified: (WARNING: Comman...s/CLI_refstatus:String) [], RemoteException
-+ FullyQualifiedErrorId : NativeCommandError
-
-Please enter cloudservice FQDN/IP: Appliance logs command failed with error: EOF when reading a line
-```
-
-The `az arcappliance logs` CLI command runs in interactive mode, meaning that it prompts the user for parameters. If the command is run in a scenario where it can't prompt the user for parameters, this error will occur. This is especially common when trying to use remote PowerShell to run the command.
-
-To avoid this error, use Remote Desktop Protocol (RDP) or a console session to sign directly in to the node and locally run the `logs` command (or any `az arcappliance` command). Remote PowerShell is not currently supported by Azure Arc resource bridge.
-
-You can also avoid this error by pre-populating the values that the `logs` command prompts for, thus avoiding the prompt. The example below provides these values into a variable which is then passed to the `logs` command. Be sure to replace `$loginValues` with your cloudservice IP address and the full path to your token credentials.
-
-```azurecli
-$loginValues="192.168.200.2
-C:\kvatoken.tok"
-
-$user_in = ""
-foreach ($val in $loginValues) { $user_in = $user_in + $val + "`n" }
-
-$user_in | az arcappliance logs hci --kubeconfig C:\Users\AzureStackAdminD\.kube\config
-```
- ### Default host resource pools are unavailable for deployment When using the `az arcappliance createConfig` or `az arcappliance run` command, there will be an interactive experience which shows the list of the VMware entities where user can select to deploy the virtual appliance. This list will show all user-created resource pools along with default cluster resource pools, but the default host resource pools aren't listed.
When the appliance is deployed to a host resource pool, there is no high availab
### Restricted outbound connectivity
-Make sure the URLs listed below are added to your allowlist.
+Below is the list of firewall and proxy URLs that need to be allowlisted to enable communication from the host machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs.
#### Proxy URLs used by appliance agents and services
Make sure the URLs listed below are added to your allowlist.
|Azure Arc Identity service | 443 | `https://*.his.arc.azure.com` | Appliance VM IP and Control Plane IP need outbound connection. | Manages identity and access control for Azure resources | |Azure Arc configuration service | 443 | `https://*.dp.kubernetesconfiguration.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used for Kubernetes cluster configuration.| |Cluster connect service | 443 | `https://*.servicebus.windows.net` | Appliance VM IP and Control Plane IP need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
-|Guest Notification service| 443 | `https://guestnotificationservice.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used to connect on-prem resources to Azure.|
-|SFS API endpoint | 443 | msk8s.api.cdp.microsoft.com | Host machine, Appliance VM IP and Control Plane IP need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
+|Guest Notification service| 443 | `https://guestnotificationservice.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used to connect on-premises resources to Azure.|
+|SFS API endpoint | 443 | msk8s.api.cdp.microsoft.com | Deployment machine, Appliance VM IP and Control Plane IP need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
|Resource bridge (appliance) Dataplane service| 443 | `https://*.dp.prod.appliances.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Communicate with resource provider in Azure.| |Resource bridge (appliance) container image download| 443 | `*.blob.core.windows.net, https://ecpacr.azurecr.io`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images. |
-|Resource bridge (appliance) image download| 80 | `*.dl.delivery.mp.microsoft.com`| Host machine, Appliance VM IP and Control Plane IP need outbound connection. | Download the Arc Resource Bridge OS images. |
+|Resource bridge (appliance) image download| 80 | `*.dl.delivery.mp.microsoft.com`| Deployment machine, Appliance VM IP and Control Plane IP need outbound connection. | Download the Arc Resource Bridge OS images. |
|Azure Arc for Kubernetes container image download| 443 | `https://azurearcfork8sdev.azurecr.io`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images. | |ADHS telemetry service | 443 | adhs.events.data.microsoft.com| Appliance VM IP and Control Plane IP need outbound connection. | Runs inside the appliance/mariner OS. Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any Kubernetes control plane. | |Microsoft events data service | 443 |v20.events.data.microsoft.com| Appliance VM IP and Control Plane IP need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
There are only two certificates that should be relevant when deploying the Arc r
### KVA timeout error
-Azure Arc resource bridge is a Kubernetes management cluster that is deployed in an appliance VM directly on the on-premises infrastructure. While trying to deploy Azure Arc resource bridge, a "KVA timeout error" may appear if there is a networking problem that doesn't allow communication of the Arc Resource Bridge appliance VM to the host, DNS, network or internet. This error is typically displayed for the following reasons:
+While trying to deploy Arc Resource Bridge, a "KVA timeout error" may appear. The "KVA timeout error" is a generic error that can result from a variety of network misconfigurations that prevent the deployment machine, Appliance VM, or Control Plane IP from communicating with each other, with the internet, or with required URLs. This communication failure is often due to issues with DNS resolution, proxy settings, network configuration, or internet access.
+
+For clarity, "deployment machine" refers to the machine where deployment CLI commands are being run. "Appliance VM" is the VM that hosts Arc resource bridge. "Control Plane IP" is the IP of the control plane for the Kubernetes management cluster in the Appliance VM.
+
+#### Top causes of the KVA timeout error
+
+- Deployment machine is unable to communicate with Control Plane IP and Appliance VM IP.
+- Appliance VM is unable to communicate with the deployment machine, vCenter endpoint (for VMware), or MOC cloud agent endpoint (for Azure Stack HCI).
+- Appliance VM does not have internet access.
+- Appliance VM has internet access, but connectivity to one or more required URLs is being blocked, possibly due to a proxy or firewall.
+- Appliance VM is unable to reach a DNS server that can resolve internal names, such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses, such as Azure service addresses and container registry names.
+- Proxy server configuration on the deployment machine or Arc resource bridge configuration files is incorrect. This can impact both the deployment machine and the Appliance VM. When the `az arcappliance prepare` command is run, the deployment machine won't be able to connect and download OS images if the host proxy isn't correctly configured. Internet access on the Appliance VM might be broken by incorrect or missing proxy configuration, which impacts the VM’s ability to pull container images. 
+
+#### Troubleshoot KVA timeout error
+
+To resolve the error, one or more network misconfigurations may need to be addressed. Follow the steps below to address the most common reasons for this error.
+
+1. When there is a problem with deployment, the first step is to collect logs by Appliance VM IP (not by kubeconfig, as the kubeconfig may be empty if the deploy command didn't complete). Problems collecting logs are most likely due to the deployment machine being unable to reach the Appliance VM.
+
+ Once logs are collected, extract the folder and open kva.log. Review kva.log for more information on the failure to help pinpoint the cause of the KVA timeout error.
+
+1. The deployment machine must be able to communicate with the Appliance VM IP and Control Plane IP. Ping the Control Plane IP and Appliance VM IP from the deployment machine and verify there is a response from both IPs.
+
+ If a request times out, the deployment machine can't communicate with the IP(s). This could be caused by a closed port, a network misconfiguration, or a firewall block. Work with your network administrator to allow communication from the deployment machine to the Control Plane IP and Appliance VM IP.
+
+1. Appliance VM IP and Control Plane IP must be able to communicate with the deployment machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. This may require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter, or ports 55000 and 65000 to the Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](/azure/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script) for Arc resource bridge.
+
+1. Appliance VM IP and Control Plane IP need internet access to [these required URLs](#restricted-outbound-connectivity). Azure Stack HCI requires [additional URLs](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites). Work with your network administrator to ensure that the IPs can access the required URLs.
+
+1. In a non-proxy environment, the deployment machine must have external and internal DNS resolution. The deployment machine must be able to reach a DNS server that can resolve internal names such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to [resolve external addresses](#restricted-outbound-connectivity), such as Azure URLs and OS image download URLs. Work with your system administrator to ensure that the deployment machine has internal and external DNS resolution. In a proxy environment, the DNS resolution on the proxy server should resolve internal endpoints and [required external addresses](#restricted-outbound-connectivity).
+
+ To test DNS resolution to an internal address from the deployment machine in a non-proxy scenario, open a command prompt and run `nslookup <vCenter endpoint or HCI MOC cloud agent IP>`. You should receive an answer if the deployment machine has internal DNS resolution.
-- The appliance VM IP address doesn't have DNS resolution.-- The appliance VM IP address doesn't have internet access to download the required image.-- The host doesn't have routability to the appliance VM IP address.
+1. Appliance VM needs to be able to reach a DNS server that can resolve internal names such as the vCenter endpoint for vSphere or the cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to resolve external addresses, such as Azure service addresses and container registry names, for download of the Arc resource bridge container images from the cloud.
-To resolve this error, ensure that all IP addresses assigned to the Arc Resource Bridge appliance VM can be resolved by DNS and have access to the internet, and that the host can successfully route to the IP addresses.
+ Verify that the DNS server IP used to create the configuration files has internal and external address resolution. If not, [delete the appliance](/cli/azure/arcappliance/delete), recreate the Arc resource bridge configuration files with the correct DNS server settings, and then deploy Arc resource bridge using the new configuration files.
## Azure-Arc enabled VMs on Azure Stack HCI issues
azure-cache-for-redis Cache Redis Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-modules.md
Some popular modules are available for use in the Enterprise tier of Azure Cache
|RediSearch | No | Yes | Yes (preview) |
|RedisBloom | No | Yes | No |
|RedisTimeSeries | No | Yes | No |
-|RedisJSON | No | Yes (preview) | Yes (preview) |
+|RedisJSON | No | Yes | Yes |
Currently, `RediSearch` is the only module that can be used concurrently with active geo-replication.
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Last updated 10/2/2022
# What's New in Azure Cache for Redis
+## November 2022
+
+Support for using the RedisJSON module has now reached General Availability (GA).
+
+For more information, see [Use Redis modules with Azure Cache for Redis](cache-redis-modules.md).
+
## October 2022

### Enhancements for passive geo-replication
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
description: Learn to use the Azure SQL input binding in Azure Functions.
Previously updated : 5/24/2022 Last updated : 11/10/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
This section contains the following examples:
The examples refer to a `ToDoItem` class and a corresponding database table: :::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7":::
Isolated worker process isn't currently supported.
::: zone-end
-> [!NOTE]
-> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md), [JavaScript functions](functions-reference-node.md), and [Python functions](functions-reference-python.md).
+
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-java).
+
+This section contains the following examples:
+
+* [HTTP trigger, get multiple rows](#http-trigger-get-multiple-items-java)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-java)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-java)
+
+The examples refer to a `ToDoItem` class (in a separate file `ToDoItem.java`) and a corresponding database table:
+
+```java
+package com.function;
+import java.util.UUID;
+
+public class ToDoItem {
+ public UUID Id;
+ public int order;
+ public String title;
+ public String url;
+ public boolean completed;
+
+ public ToDoItem() {
+ }
+
+ public ToDoItem(UUID Id, int order, String title, String url, boolean completed) {
+ this.Id = Id;
+ this.order = order;
+ this.title = title;
+ this.url = url;
+ this.completed = completed;
+ }
+}
+```
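+
+The table definition itself is part of the sample's create.sql. As a reference point only, a minimal table matching the fields of the `ToDoItem` class above might look like the following sketch (column types are inferred from the class, not copied from the sample):
+
+```sql
+-- Sketch of a table matching the ToDoItem class; the authoritative
+-- definition ships in the sample repository's create.sql.
+CREATE TABLE dbo.ToDo (
+    [Id] UNIQUEIDENTIFIER PRIMARY KEY,
+    [order] INT NULL,
+    [title] NVARCHAR(200) NOT NULL,
+    [url] NVARCHAR(200) NOT NULL,
+    [completed] BIT NOT NULL
+);
+```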
++
+<a id="http-trigger-get-multiple-items-java"></a>
+### HTTP trigger, get multiple rows
+
+The following example shows a SQL input binding in a Java function that reads from a query and returns the results in the HTTP response.
+
+```java
+package com.function;
+
+import com.microsoft.azure.functions.HttpMethod;
+import com.microsoft.azure.functions.HttpRequestMessage;
+import com.microsoft.azure.functions.HttpResponseMessage;
+import com.microsoft.azure.functions.HttpStatus;
+import com.microsoft.azure.functions.annotation.AuthorizationLevel;
+import com.microsoft.azure.functions.annotation.FunctionName;
+import com.microsoft.azure.functions.annotation.HttpTrigger;
+import com.microsoft.azure.functions.sql.annotation.SQLInput;
+
+import java.util.Optional;
+
+public class GetToDoItems {
+ @FunctionName("GetToDoItems")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> request,
+ @SQLInput(
+ commandText = "SELECT * FROM dbo.ToDo",
+ commandType = "Text",
+ connectionStringSetting = "SqlConnectionString")
+ ToDoItem[] toDoItems) {
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "application/json").body(toDoItems).build();
+ }
+}
+```
+
+<a id="http-trigger-look-up-id-from-query-string-java"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a SQL input binding in a Java function that reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+
+```java
+public class GetToDoItem {
+ @FunctionName("GetToDoItem")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> request,
+ @SQLInput(
+ commandText = "SELECT * FROM dbo.ToDo",
+ commandType = "Text",
+ parameters = "@Id={Query.id}",
+ connectionStringSetting = "SqlConnectionString")
+ ToDoItem[] toDoItems) {
+ ToDoItem toDoItem = toDoItems[0];
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "application/json").body(toDoItem).build();
+ }
+}
+```
+
+<a id="http-trigger-delete-one-or-multiple-rows-java"></a>
+### HTTP trigger, delete rows
+
+The following example shows a SQL input binding in a Java function that executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `dbo.DeleteToDo` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+++
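+As a sketch of what such a procedure could look like (an illustration, not the sample's authoritative definition), the parameter can select between one row and all rows:
+
+```sql
+-- Sketch: delete one row by ID, or every row when @Id is 'all'.
+CREATE PROCEDURE dbo.DeleteToDo
+    @Id NVARCHAR(100)
+AS
+BEGIN
+    IF (@Id = 'all')
+        DELETE FROM dbo.ToDo;   -- delete all records
+    ELSE
+        DELETE FROM dbo.ToDo
+        WHERE Id = TRY_CONVERT(UNIQUEIDENTIFIER, @Id);   -- delete one record
+END
+```
+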
+```java
+public class DeleteToDo {
+ @FunctionName("DeleteToDo")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> request,
+ @SQLInput(
+ commandText = "dbo.DeleteToDo",
+ commandType = "StoredProcedure",
+ parameters = "@Id={Query.id}",
+ connectionStringSetting = "SqlConnectionString")
+ ToDoItem[] toDoItems) {
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "application/json").body(toDoItems).build();
+ }
+}
+
+```
::: zone-end
module.exports = async function (context, req, todoItems) {
} ``` +++
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-powershell).
+
+This section contains the following examples:
+
+* [HTTP trigger, get multiple rows](#http-trigger-get-multiple-items-powershell)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-powershell)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-powershell)
+
+The examples refer to a database table:
++
+<a id="http-trigger-get-multiple-items-powershell"></a>
+### HTTP trigger, get multiple rows
+
+The following example shows a SQL input binding in a function.json file and a PowerShell function that reads from a query and returns the results in the HTTP response.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo",
+ "commandType": "Text",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
+
+```powershell
+using namespace System.Net
+
+param($Request, $todoItems)
+
+Write-Host "PowerShell function with SQL Input Binding processed a request."
+
+Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+ StatusCode = [System.Net.HttpStatusCode]::OK
+ Body = $todoItems
+})
+```
+
+<a id="http-trigger-look-up-id-from-query-string-powershell"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a SQL input binding in a PowerShell function that reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItem",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ "commandType": "Text",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
++
+```powershell
+using namespace System.Net
+
+param($Request, $todoItem)
+
+Write-Host "PowerShell function with SQL Input Binding processed a request."
+
+Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+ StatusCode = [System.Net.HttpStatusCode]::OK
+ Body = $todoItem
+})
+```
+
+<a id="http-trigger-delete-one-or-multiple-rows-powershell"></a>
+### HTTP trigger, delete rows
+
+The following example shows a SQL input binding in a function.json file and a PowerShell function that executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `dbo.DeleteToDo` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+++
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "DeleteToDo",
+ "commandType": "StoredProcedure",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
++
+```powershell
+using namespace System.Net
+
+param($Request, $todoItems)
+
+Write-Host "PowerShell function with SQL Input Binding processed a request."
+
+Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+ StatusCode = [System.Net.HttpStatusCode]::OK
+ Body = $todoItems
+})
+```
+ ::: zone pivot="programming-language-python"
def main(req: func.HttpRequest, todoItems: func.SqlRowList) -> func.HttpResponse
::: zone-end
-<!### Use these pivots when we get other non-C# languages added. ###
-
--
->
::: zone pivot="programming-language-csharp" ## Attributes
-In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute, which has the following properties:
+The [C# library](functions-dotnet-class-library.md) uses the [SqlAttribute](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute to declare the SQL bindings on the function. The attribute has the following properties:
| Attribute property |Description|
|||
In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https:
| **Parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |

::: zone-end
-<!### Use these pivots when we get other non-C# languages added. ###
+ ::: zone pivot="programming-language-java" ## Annotations
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@Sql` annotation on parameters whose value would come from Azure SQL. This annotation supports the following elements:
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@SQLInput` annotation (`com.microsoft.azure.functions.sql.annotation.SQLInput`) on parameters whose value would come from Azure SQL. This annotation supports the following elements:
| Element |Description|
|||
| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
-| **connectionStringSetting** | The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. |
-| **commandType** | A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
-| **parameters** | Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. |
+| **commandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is ["Text"](/dotnet/api/system.data.commandtype#fields) for a query and ["StoredProcedure"](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
::: zone-end >
+
+
## Configuration

The following table explains the binding configuration properties that you set in the function.json file.
The following table explains the binding configuration properties that you set i
## Usage
-The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
Queries executed by the input binding are [parameterized](/dotnet/api/microsoft.data.sqlclient.sqlparameter) in Microsoft.Data.SqlClient to reduce the risk of [SQL injection](/sql/relational-databases/security/sql-injection) from the parameter values passed into the binding.
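
Conceptually, parameterization keeps the value out of the command text so it can't change the query's structure. The following T-SQL sketch illustrates the idea (for illustration only; the binding performs this through SqlClient, and `dbo.ToDo` is the sample table):

```sql
-- The parameter value travels separately from the command text,
-- so it is treated strictly as data, never as SQL.
EXEC sp_executesql
    N'SELECT * FROM dbo.ToDo WHERE Id = @Id',
    N'@Id UNIQUEIDENTIFIER',
    @Id = '00000000-0000-0000-0000-000000000000';
```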
Queries executed by the input binding are [parameterized](/dotnet/api/microsoft.
## Next steps
- [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
+- [Run a function when data is changed in a SQL table (Trigger)](./functions-bindings-azure-sql-trigger.md)
- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
description: Learn to use the Azure SQL output binding in Azure Functions.
Previously updated : 5/24/2022 Last updated : 11/10/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
This section contains the following examples:
The examples refer to a `ToDoItem` class and a corresponding database table: :::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7":::
Isolated worker process isn't currently supported.
::: zone-end
-> [!NOTE]
-> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md), [JavaScript functions](functions-reference-node.md), and [Python functions](functions-reference-python.md).
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-java).
+
+This section contains the following examples:
+
+* [HTTP trigger, write a record to a table](#http-trigger-write-record-to-table-java)
+<!-- * [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-java) -->
+
+The examples refer to a `ToDoItem` class (in a separate file `ToDoItem.java`) and a corresponding database table:
+
+```java
+package com.function;
+import java.util.UUID;
+
+public class ToDoItem {
+ public UUID Id;
+ public int order;
+ public String title;
+ public String url;
+ public boolean completed;
+
+ public ToDoItem() {
+ }
+
+ public ToDoItem(UUID Id, int order, String title, String url, boolean completed) {
+ this.Id = Id;
+ this.order = order;
+ this.title = title;
+ this.url = url;
+ this.completed = completed;
+ }
+}
+```
++
+<a id="http-trigger-write-record-to-table-java"></a>
+### HTTP trigger, write a record to a table
+
+The following example shows a SQL output binding in a Java function that adds a record to a table, using data provided in an HTTP POST request as a JSON body. The function takes an additional dependency on the [com.fasterxml.jackson.core](https://github.com/FasterXML/jackson) library to parse the JSON body.
+
+```xml
+<dependency>
+ <groupId>com.fasterxml.jackson.core</groupId>
+ <artifactId>jackson-databind</artifactId>
+ <version>2.13.4.1</version>
+</dependency>
+```
+
+```java
+package com.function;
+
+import java.util.*;
+import com.microsoft.azure.functions.annotation.*;
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.sql.annotation.SQLOutput;
+import com.fasterxml.jackson.core.JsonParseException;
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.JsonMappingException;
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+import java.util.Optional;
+
+public class PostToDo {
+ @FunctionName("PostToDo")
+ public HttpResponseMessage run(
+ @HttpTrigger(name = "req", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
+ @SQLOutput(
+ commandText = "dbo.ToDo",
+ connectionStringSetting = "SqlConnectionString")
+ OutputBinding<ToDoItem> output) throws JsonParseException, JsonMappingException, JsonProcessingException {
+ String json = request.getBody().get();
+ ObjectMapper mapper = new ObjectMapper();
+ ToDoItem newToDo = mapper.readValue(json, ToDoItem.class);
+
+ newToDo.Id = UUID.randomUUID();
+ output.setValue(newToDo);
+
+ return request.createResponseBuilder(HttpStatus.CREATED).header("Content-Type", "application/json").body(output).build();
+ }
+}
+```
+
+<!-- commented out until issue with java library resolved
+
+<a id="http-trigger-write-to-two-tables-java"></a>
+### HTTP trigger, write to two tables
+
+The following example shows a SQL output binding in a JavaS function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings. The function takes an additional dependency on the [com.fasterxml.jackson.core](https://github.com/FasterXML/jackson) library to parse the JSON body.
+
+```xml
+<dependency>
+ <groupId>com.fasterxml.jackson.core</groupId>
+ <artifactId>jackson-databind</artifactId>
+ <version>2.13.4.1</version>
+</dependency>
+```
+
+The second table, `dbo.RequestLog`, corresponds to the following definition:
+
+```sql
+CREATE TABLE dbo.RequestLog (
+ Id int identity(1,1) primary key,
+ RequestTimeStamp datetime2 not null,
+ ItemCount int not null
+)
+```
+
+and Java class in `RequestLog.java`:
+
+```java
+package com.function;
+
+import java.util.Date;
+
+public class RequestLog {
+ public int Id;
+ public Date RequestTimeStamp;
+ public int ItemCount;
+
+ public RequestLog() {
+ }
+
+ public RequestLog(int Id, Date RequestTimeStamp, int ItemCount) {
+ this.Id = Id;
+ this.RequestTimeStamp = RequestTimeStamp;
+ this.ItemCount = ItemCount;
+ }
+}
+```
+
+```java
+module.exports = async function (context, req) {
+ context.log('JavaScript HTTP trigger and SQL output binding function processed a request.');
+ context.log(req.body);
+
+ const newLog = {
+ RequestTimeStamp = Date.now(),
+ ItemCount = 1
+ }
+
+ if (req.body) {
+ context.bindings.todoItems = req.body;
+ context.bindings.requestLog = newLog;
+ context.res = {
+ body: req.body,
+ mimetype: "application/json",
+ status: 201
+ }
+ } else {
+ context.res = {
+ status: 400,
+ body: "Error reading request body"
+ }
+ }
+}
+``` -->
++ ::: zone pivot="programming-language-javascript"
The examples refer to a database table:
<a id="http-trigger-write-records-to-table-javascript"></a> ### HTTP trigger, write records to a table
-The following example shows a SQL input binding in a function.json file and a JavaScript function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+The following example shows a SQL output binding in a function.json file and a JavaScript function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
The following is binding data in the function.json file:
module.exports = async function (context, req) {
<a id="http-trigger-write-to-two-tables-javascript"></a> ### HTTP trigger, write to two tables
-The following example shows a SQL input binding in a function.json file and a JavaScript function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+The following example shows a SQL output binding in a function.json file and a JavaScript function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
The second table, `dbo.RequestLog`, corresponds to the following definition:
module.exports = async function (context, req) {
::: zone-end +++
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-powershell).
+
+This section contains the following examples:
+
+* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-powershell)
+* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-powershell)
+
+The examples refer to a database table:
+++
+<a id="http-trigger-write-records-to-table-powershell"></a>
+### HTTP trigger, write records to a table
+
+The following example shows a SQL output binding in a function.json file and a PowerShell function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
+
+```powershell
+using namespace System.Net
+
+param($Request)
+
+Write-Host "PowerShell function with SQL Output Binding processed a request."
+
+# Update req_body with the body of the request
+$req_body = $Request.Body
+
+# Assign the value we want to pass to the SQL Output binding.
+# The -Name value corresponds to the name property in the function.json for the binding
+Push-OutputBinding -Name todoItems -Value $req_body
+
+Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+ StatusCode = [HttpStatusCode]::OK
+ Body = $req_body
+})
+```
+
+<a id="http-trigger-write-to-two-tables-powershell"></a>
+### HTTP trigger, write to two tables
+
+The following example shows a SQL output binding in a function.json file and a PowerShell function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+
+The second table, `dbo.RequestLog`, corresponds to the following definition:
+
+```sql
+CREATE TABLE dbo.RequestLog (
+ Id int identity(1,1) primary key,
+ RequestTimeStamp datetime2 not null,
+ ItemCount int not null
+)
+```
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+},
+{
+ "name": "requestLog",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.RequestLog",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
+
+```powershell
+using namespace System.Net
+
+param($Request)
+
+Write-Host "PowerShell function with SQL Output Binding processed a request."
+
+# Update req_body with the body of the request
+$req_body = $Request.Body
+$new_log = @{
+ RequestTimeStamp = [DateTime]::Now
+ ItemCount = 1
+}
+
+Push-OutputBinding -Name todoItems -Value $req_body
+Push-OutputBinding -Name requestLog -Value $new_log
+
+Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+ StatusCode = [HttpStatusCode]::OK
+ Body = $req_body
+})
+```
++++

::: zone pivot="programming-language-python"

More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-python).
The examples refer to a database table:
<a id="http-trigger-write-records-to-table-python"></a> ### HTTP trigger, write records to a table
-The following example shows a SQL input binding in a function.json file and a Python function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+The following example shows a SQL output binding in a function.json file and a Python function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
The following is binding data in the function.json file:
def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow]) -> func.HttpRe
<a id="http-trigger-write-to-two-tables-python"></a> ### HTTP trigger, write to two tables
-The following example shows a SQL input binding in a function.json file and a Python function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+The following example shows a SQL output binding in a function.json file and a Python function that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
The second table, `dbo.RequestLog`, corresponds to the following definition:
def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow], requestLog: fu
::: zone-end
-<!### Use these pivots when we get other non-C# languages added. ###
-
--
->
::: zone pivot="programming-language-csharp" ## Attributes
-In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute, which has the following properties:
+The [C# library](functions-dotnet-class-library.md) uses the [SqlAttribute](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute to declare the SQL bindings on the function. The attribute has the following properties:
| Attribute property |Description|
|||
In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https:
::: zone-end
-<!### Use these pivots when we get other non-C# languages added. ###
+ ::: zone pivot="programming-language-java" ## Annotations
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@Sql` annotation on parameters whose value would come from Azure SQL. This annotation supports the following elements:
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@SQLOutput` annotation (`com.microsoft.azure.functions.sql.annotation.SQLOutput`) on parameters whose values you want written to Azure SQL. This annotation supports the following elements:
| Element |Description|
|||
-| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
-| **connectionStringSetting** | The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
+| **commandText** | Required. The name of the table being written to by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
::: zone-end
>
+
## Configuration

The following table explains the binding configuration properties that you set in the *function.json* file.
The following table explains the binding configuration properties that you set i
## Usage
-The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
-The output bindings uses the T-SQL [MERGE](/sql/t-sql/statements/merge-transact-sql) statement which requires [SELECT](/sql/t-sql/statements/merge-transact-sql#permissions) permissions on the target database.
+The output bindings use the T-SQL [MERGE](/sql/t-sql/statements/merge-transact-sql) statement which requires [SELECT](/sql/t-sql/statements/merge-transact-sql#permissions) permissions on the target database.
::: zone-end

## Next steps
- [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md)
+- [Run a function when data is changed in a SQL table (Trigger)](./functions-bindings-azure-sql-trigger.md)
- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
+
+ Title: Azure SQL trigger for Functions
+description: Learn to use the Azure SQL trigger in Azure Functions.
++ Last updated : 11/10/2022++
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++
+# Azure SQL trigger for Functions (preview)
+
+The Azure SQL trigger uses [SQL change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server) functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted.
+
+For configuration details for change tracking for use with the Azure SQL trigger, see [Set up change tracking](#set-up-change-tracking-required). For information on setup details of the Azure SQL extension for Azure Functions, see the [SQL binding overview](./functions-bindings-azure-sql.md).
+
+## Example usage
+<a id="example"></a>
++
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
++
+The example refers to a `ToDoItem` class and a corresponding database table:
+++
+[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table:
+
+```sql
+ALTER DATABASE [SampleDatabase]
+SET CHANGE_TRACKING = ON
+(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+
+ALTER TABLE [dbo].[ToDo]
+ENABLE CHANGE_TRACKING;
+```
+
+The SQL trigger binds to an `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects, each with two properties:
+- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class.
+- **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`.
+
+# [In-process](#tab/in-process)
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are changes to the `ToDo` table:
+
+```cs
+using System.Collections.Generic;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Extensions.Logging;
+using Microsoft.Azure.WebJobs.Extensions.Sql;
+
+namespace AzureSQL.ToDo
+{
+ public static class ToDoTrigger
+ {
+ [FunctionName("ToDoTrigger")]
+ public static void Run(
+ [SqlTrigger("[dbo].[ToDo]", ConnectionStringSetting = "SqlConnectionString")]
+ IReadOnlyList<SqlChange<ToDoItem>> changes,
+ ILogger logger)
+ {
+ foreach (SqlChange<ToDoItem> change in changes)
+ {
+ ToDoItem toDoItem = change.Item;
+ logger.LogInformation($"Change operation: {change.Operation}");
+ logger.LogInformation($"Id: {toDoItem.Id}, Title: {toDoItem.title}, Url: {toDoItem.url}, Completed: {toDoItem.completed}");
+ }
+ }
+ }
+}
+```
+
+# [Isolated process](#tab/isolated-process)
+
+Isolated worker process isn't currently supported.
+
+<!-- Uncomment to support C# script examples.
+# [C# Script](#tab/csharp-script)
+
+-->
+++++
+> [!NOTE]
+> In the current preview, Azure SQL triggers are only supported by [C# class library functions](functions-dotnet-class-library.md).
+++
+## Attributes
+
+The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function. The attribute has the following properties:
+
+| Attribute property |Description|
+|||
+| **TableName** | Required. The name of the table being monitored by the trigger. |
+| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database which contains the table being monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
+++
+## Configuration
+
+<!-- ### for another day ###
++
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description|
+++
+In addition to the required ConnectionStringSetting [application setting](./functions-how-to-use-azure-function-app-settings.md#settings), the following optional settings can be configured for the SQL trigger:
+
+| App Setting | Description|
+|||
+|**Sql_Trigger_BatchSize** |This controls the number of changes processed at once before being sent to the triggered function. The default value is 100.|
+|**Sql_Trigger_PollingIntervalMs**|This controls the delay in milliseconds between processing each batch of changes. The default value is 1000 (1 second).|
+|**Sql_Trigger_MaxChangesPerWorker**|This controls the upper limit on the number of pending changes in the user table that are allowed per application-worker. If the count of changes exceeds this limit, it may result in a scale out. The setting only applies for Azure Function Apps with [runtime driven scaling enabled](#enable-runtime-driven-scaling). The default value is 1000.|
+++
+## Set up change tracking (required)
+
+Setting up change tracking for use with the Azure SQL trigger requires two steps. These steps can be completed from any SQL tool that supports running queries, including [VS Code](/sql/tools/visual-studio-code/mssql-extensions), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
+
+1. Enable change tracking on the SQL database, substituting `your database name` with the name of the database where the table to be monitored is located:
+
+ ```sql
+ ALTER DATABASE [your database name]
+ SET CHANGE_TRACKING = ON
+ (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+ ```
+
+ The `CHANGE_RETENTION` option specifies the time period for which change tracking information (change history) is kept. The retention of change history by the SQL database may affect the trigger functionality. For example, if the Azure Function is turned off for several days and then resumed, with the above query it will only be able to catch the changes that occurred in the past two days.
+
+ The `AUTO_CLEANUP` option is used to enable or disable the clean-up task that removes old change tracking information. If a temporary problem prevents the trigger from running, turning off auto cleanup can be useful to pause the removal of information older than the retention period until the problem is resolved.
+
+ More information on change tracking options is available in the [SQL documentation](/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-server).
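+
+ To verify the database-level settings afterward, you can query the standard change tracking catalog view (plain T-SQL, not specific to this extension):
+
+ ```sql
+ -- Shows retention and auto-cleanup settings for change-tracked databases
+ SELECT DB_NAME(database_id) AS [database],
+        is_auto_cleanup_on,
+        retention_period,
+        retention_period_units_desc
+ FROM sys.change_tracking_databases;
+ ```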
+
+2. Enable change tracking on the table, substituting `your table name` with the name of the table to be monitored (changing the schema if appropriate):
+
+ ```sql
+ ALTER TABLE [dbo].[your table name]
+ ENABLE CHANGE_TRACKING;
+ ```
+
+ The trigger needs to have read access on the table being monitored for changes and to the change tracking system tables. Each function trigger has an associated change tracking table and leases table in a schema `az_func`, which are created by the trigger if they don't yet exist. More information on these data structures is available in the Azure SQL binding library [documentation](https://github.com/Azure/azure-functions-sql-extension/blob/triggerbindings/README.md#internal-state-tables).
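+
+ To inspect the raw change data that change tracking records for the table (a diagnostic query only; the trigger consumes and tracks this for you), you can use the built-in `CHANGETABLE` function:
+
+ ```sql
+ -- Lists all changes to dbo.ToDo since version 0, including the primary
+ -- key and the operation: I = insert, U = update, D = delete
+ SELECT ct.Id, ct.SYS_CHANGE_VERSION, ct.SYS_CHANGE_OPERATION
+ FROM CHANGETABLE(CHANGES dbo.ToDo, 0) AS ct;
+ ```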
++
+## Enable runtime-driven scaling
+
+Optionally, your functions can scale automatically based on the number of changes that are pending to be processed in the user table. To allow your functions to scale properly on the Premium plan when using SQL triggers, you need to enable runtime scale monitoring.
+++
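+Runtime scale monitoring can be enabled with the Azure CLI; this is the same setting used for other trigger types, such as Kafka:
+
+```azurecli-interactive
+az resource update -g <RESOURCE_GROUP> -n <FUNCTION_APP_NAME>/config/web --set properties.functionsRuntimeScaleMonitoringEnabled=1 --resource-type Microsoft.Web/sites
+```
+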
+## Next steps
+
+- [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md)
+- [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
description: Understand how to use Azure SQL bindings in Azure Functions.
Previously updated : 6/3/2022 Last updated : 11/10/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL bindings for Azure Functions overview (preview)
-This set of articles explains how to work with [Azure SQL](/azure/azure-sql/index) bindings in Azure Functions. Azure Functions supports input and output bindings for the Azure SQL and SQL Server products.
+This set of articles explains how to work with [Azure SQL](/azure/azure-sql/index) bindings in Azure Functions. Azure Functions supports input bindings, output bindings, and a function trigger for the Azure SQL and SQL Server products.
| Action | Type |
|||
+| Trigger a function when a change is detected on a SQL table | [SQL trigger](./functions-bindings-azure-sql-trigger.md) |
| Read data from a database | [Input binding](./functions-bindings-azure-sql-input.md) |
| Save data to a database |[Output binding](./functions-bindings-azure-sql-output.md) |
You can install this version of the extension in your function app by registerin
::: zone-end

## Install bundle

The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
-# [Preview Bundle v3.x](#tab/extensionv3)
-
-You can add the preview extension bundle by adding or replacing the following code in your `host.json` file:
-
-```json
-{
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
- "version": "[3.*, 4.0.0)"
- }
-}
-```
# [Preview Bundle v4.x](#tab/extensionv4)
You can add the preview extension bundle by adding or replacing the following co
} ```
+# [Preview Bundle v3.x](#tab/extensionv3)
+
+Azure SQL bindings for Azure Functions aren't available for the v3 version of the functions runtime.
+ ::: zone-end
You can add the preview extension bundle by adding or replacing the following co
# [Preview Bundle v3.x](#tab/extensionv3)
-Python support isn't available with the SQL bindings extension in the v3 version of the functions runtime.
+Azure SQL bindings for Azure Functions aren't available for the v3 version of the functions runtime.
Support for Python durable functions with SQL bindings isn't yet available.
::: zone-end
-> [!NOTE]
-> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md), [JavaScript functions](functions-reference-node.md), and [Python functions](functions-reference-python.md).
+
+## Install bundle
+
+The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
+
+# [Preview Bundle v4.x](#tab/extensionv4)
+
+You can add the preview extension bundle by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+
+# [Preview Bundle v3.x](#tab/extensionv3)
+
+Azure SQL bindings for Azure Functions aren't available for the v3 version of the functions runtime.
+++
+## Update packages
+
+Add the Java library for SQL bindings to your functions project with an update to the `pom.xml` file in your Java Azure Functions project as seen in the following snippet:
+
+```xml
+<dependency>
+ <groupId>com.microsoft.azure.functions</groupId>
+ <artifactId>azure-functions-java-library-sql</artifactId>
+ <version>0.1.0</version>
+</dependency>
+```
::: zone-end

## SQL connection string
-Azure SQL bindings for Azure Functions have a required property for connection string on both [input](./functions-bindings-azure-sql-input.md) and [output](./functions-bindings-azure-sql-output.md) bindings. SQL bindings passes the connection string to the Microsoft.Data.SqlClient library and supports the connection string as defined in the [SqlClient ConnectionString documentation](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true). Notable keywords include:
+Azure SQL bindings for Azure Functions have a required property for connection string on both [input](./functions-bindings-azure-sql-input.md) and [output](./functions-bindings-azure-sql-output.md) bindings. SQL bindings passes the connection string to the Microsoft.Data.SqlClient library and supports the connection string as defined in the [SqlClient ConnectionString documentation](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString). Notable keywords include:
- `Authentication` allows a function to connect to Azure SQL with Azure Active Directory, including [Active Directory Managed Identity](./functions-identity-access-azure-sql-with-managed-identity.md) - `Command Timeout` allows a function to wait for specified amount of time in seconds before terminating a query (default 30 seconds)
Azure SQL bindings for Azure Functions have a required property for connection s
## Considerations
-- Because the Azure SQL bindings doesn't have a trigger, you need to use another supported trigger to start a function that reads from or writes to an Azure SQL database.
-- Azure SQL binding supports version 2.x and later of the Functions runtime.
+- Azure SQL binding supports version 4.x and later of the Functions runtime.
- Source code for the Azure SQL bindings can be found in [this GitHub repository](https://github.com/Azure/azure-functions-sql-extension).
- This binding requires connectivity to an Azure SQL or SQL Server database.
- Output bindings against tables with columns of data types `NTEXT`, `TEXT`, or `IMAGE` aren't supported and data upserts will fail (a diagnostic query for spotting these columns follows this list). These types [will be removed](/sql/t-sql/data-types/ntext-text-and-image-transact-sql) in a future version of SQL Server and aren't compatible with the `OPENJSON` function used by this Azure Functions binding.
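
To check an existing target table for these column types before wiring up an output binding, you can query the catalog views (a diagnostic sketch; substitute your own table name for `dbo.ToDo`):

```sql
-- Finds columns that use the deprecated ntext/text/image types
SELECT c.name AS column_name, t.name AS type_name
FROM sys.columns AS c
JOIN sys.types AS t ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.ToDo')
  AND t.name IN ('ntext', 'text', 'image');
```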
Azure SQL bindings for Azure Functions have a required property for connection s
- [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md) - [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
+- [Run a function when data is changed in a SQL table (Trigger)](./functions-bindings-azure-sql-trigger.md)
- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/) - [Learn how to connect Azure Function to Azure SQL with managed identity](./functions-identity-access-azure-sql-with-managed-identity.md) - [Use SQL bindings in Azure Stream Analytics](../stream-analytics/sql-database-upsert.md#option-1-update-by-key-with-the-azure-function-sql-binding)
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
Handling errors in Azure Functions is important to avoid lost data, missed event
This article describes general strategies for error handling and the available retry strategies.

> [!IMPORTANT]
-> The retry policy support in the runtime for triggers other than Timer, Kafka, and Event Hubs is being removed after this feature becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs will be removed in October 2022. For more information, see the [Retries section below](#retries).
+> The retry policy support in the runtime for triggers other than Timer, Kafka, and Event Hubs is being removed after this feature becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs will be removed in December 2022. For more information, see the [Retries section below](#retries).
## Handling errors
azure-functions Functions Bindings Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md
The Kafka extension is part of an [extension bundle], which is specified in your
To allow your functions to scale properly on the Premium plan when using Kafka triggers and bindings, you need to enable runtime scale monitoring.
-# [Azure portal](#tab/portal)
-In the Azure portal, in your function app, choose **Configuration** and on the **Function runtime settings** tab turn **Runtime scale monitoring** to **On**.
--
-# [Azure CLI](#tab/azure-cli)
-
-Use the following Azure CLI command to enable runtime scale monitoring:
-
-```azurecli-interactive
-az resource update -g <RESOURCE_GROUP> -n <FUNCTION_APP_NAME>/config/web --set properties.functionsRuntimeScaleMonitoringEnabled=1 --resource-type Microsoft.Web/sites
-```
-- ## host.json settings
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
To learn more about Azure Functions runtime support policy, please refer to this
### Running local on a specific version
-When running locally the Azure Functions runtime defaults to using PowerShell Core 6. To instead use PowerShell 7 when running locally, you need to add the setting `"FUNCTIONS_WORKER_RUNTIME_VERSION" : "~7"` to the `Values` array in the local.setting.json file in the project root. When running locally on PowerShell 7, your local.settings.json file looks like the following example:
+Support for PowerShell 7.0 in Azure Functions is ending on 3 December 2022. To use PowerShell 7.2 when running locally, you need to add the setting `"FUNCTIONS_WORKER_RUNTIME_VERSION" : "7.2"` to the `Values` array in the local.settings.json file in the project root. When running locally on PowerShell 7.2, your local.settings.json file looks like the following example:
```json {
When running locally the Azure Functions runtime defaults to using PowerShell Co
"Values": { "AzureWebJobsStorage": "", "FUNCTIONS_WORKER_RUNTIME": "powershell",
- "FUNCTIONS_WORKER_RUNTIME_VERSION" : "~7"
+ "FUNCTIONS_WORKER_RUNTIME_VERSION" : "7.2"
  }
}
```

### Changing the PowerShell version
-Your function app must be running on version 3.x to be able to upgrade from PowerShell Core 6 to PowerShell 7. To learn how to do this, see [View and update the current runtime version](set-runtime-version.md#view-and-update-the-current-runtime-version).
+Support for PowerShell 7.0 in Azure Functions is ending on 3 December 2022. Your function app must be running on version 4.x to be able to upgrade to PowerShell 7.2. To learn how to do this, see [View and update the current runtime version](set-runtime-version.md#view-and-update-the-current-runtime-version).
Use the following steps to change the PowerShell version used by your function app. You can do this either in the Azure portal or by using PowerShell.
Use the following steps to change the PowerShell version used by your function a
1. In the [Azure portal](https://portal.azure.com), browse to your function app. 1. Under **Settings**, choose **Configuration**. In the **General settings** tab, locate the **PowerShell version**. -
- :::image type="content" source="media/functions-reference-powershell/change-powershell-version-portal.png" alt-text="Choose the PowerShell version used by the function app":::
-
+
+ ![image](https://user-images.githubusercontent.com/108835427/199586564-25600629-44c7-439c-91f9-a500ad2989c4.png)
+
1. Choose your desired **PowerShell Core version** and select **Save**. When warned about the pending restart, choose **Continue**. The function app restarts on the chosen PowerShell version.

# [PowerShell](#tab/powershell)
Set-AzResource -ResourceId "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RES
```
-Replace `<SUBSCRIPTION_ID>`, `<RESOURCE_GROUP>`, and `<FUNCTION_APP>` with the ID of your Azure subscription, the name of your resource group and function app, respectively. Also, replace `<VERSION>` with either `~6` or `~7`. You can verify the updated value of the `powerShellVersion` setting in `Properties` of the returned hash table.
+Replace `<SUBSCRIPTION_ID>`, `<RESOURCE_GROUP>`, and `<FUNCTION_APP>` with the ID of your Azure subscription, the name of your resource group and function app, respectively. Also, replace `<VERSION>` with `7.2`. You can verify the updated value of the `powerShellVersion` setting in `Properties` of the returned hash table.
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
The following table indicates key .NET classes used by Functions that could chan
| | | | | | `FunctionName` (attribute) | `FunctionName` (attribute) | `Function` (attribute) | `Function` (attribute) | | `HttpRequest` | `HttpRequest` | `HttpRequestData` | `HttpRequestData` |
-| `OkObjectResult` | `OkObjectResult` | `HttpResonseData` | `HttpResonseData` |
+| `OkObjectResult` | `OkObjectResult` | `HttpResponseData` | `HttpResponseData` |
There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
The Azure FC allocates infrastructure resources to tenants and manages unidirect
CRP is the front-end service for Azure Compute, exposing consistent compute APIs through Azure Resource Manager, thereby enabling you to create and manage virtual machine resources and extensions via simple templates.
-Communications among various components (for example, Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permissions sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent more actions. Separate communications channels ensure that communications can't bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure cloud for Hypervisor interaction initiated by a user's [OAuth 2.0 authentication to Azure Active Directory](../active-directory/azuread-dev/v1-protocols-oauth-code.md).
+Communications among various components (for example, Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permissions sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent more actions. Separate communications channels ensure that communications can't bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure cloud for Hypervisor interaction initiated by a user's [OAuth 2.0 authentication to Azure Active Directory](../active-directory/develop/v2-oauth2-auth-code-flow.md).
:::image type="content" source="./media/secure-isolation-fig6.png" alt-text="Management Console and Management Plane interaction for secure management flow" border="false"::: **Figure 6.** Management Console and Management Plane interaction for secure management flow
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
The [Get Map Tile V2 API](/rest/api/maps/render-v2/get-map-tile) allows you to r
Maps Creator service is a suite of web services that developers can use to create applications with map features based on indoor map data.
-Maps Creator provides three core
+Maps Creator provides the following
* [Dataset service][Dataset service]. Use the Dataset service to create a dataset from a converted Drawing package data. For information about Drawing package requirements, see Drawing package requirements.
Maps Creator provides three core
* [WFS service][WFS]. Use the WFS service to query your indoor map data. The WFS service follows the [Open Geospatial Consortium API](http://docs.opengeospatial.org/is/17-069r3/17-069r3.html) standards for querying a single dataset.
-<!-* [Wayfinding service][wayfinding-preview] (preview). Use the [wayfinding API][wayfind] to generate a path between two points within a facility. Use the [routeset API][routeset] to create the data that the wayfinding service needs to generate paths.
->
+* [Wayfinding service][wayfinding-preview] (preview). Use the [wayfinding API][wayfind] to generate a path between two points within a facility. Use the [routeset API][routeset] to create the data that the wayfinding service needs to generate paths.
+
### Elevation service

The Azure Maps Elevation service is a web service that developers can use to retrieve elevation data from anywhere on the Earth's surface.
Stay up to date on Azure Maps:
[style editor]: https://azure.github.io/Azure-Maps-Style-Editor
[FeatureState]: creator-indoor-maps.md#feature-statesets
[WFS]: creator-indoor-maps.md#web-feature-service-api
-<!--[wayfinding-preview]: creator-indoor-maps.md# -->
+[wayfinding-preview]: creator-indoor-maps.md#wayfinding-preview
+[wayfind]: /rest/api/maps/v20220901preview/wayfinding
+[routeset]: /rest/api/maps/v20220901preview/routeset
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
Title: Facility Ontology in Microsoft Azure Maps Creator
description: Facility Ontology that describes the feature class definitions for Azure Maps Creator Previously updated : 03/02/2022 Last updated : 11/08/2022
zone_pivot_groups: facility-ontology-schema
Facility ontology defines how Azure Maps Creator internally stores facility data in a Creator dataset. In addition to defining internal facility data structure, facility ontology is also exposed externally through the WFS API. When WFS API is used to query facility data in a dataset, the response format is defined by the ontology supplied to that dataset.
-At a high level, facility ontology divides the dataset into feature classes. All feature classes share a common set of properties, such as `ID` and `Geometry`. In addition to the common property set, each feature class defines a set of properties. Each property is defined by its data type and constraints. Some feature classes have properties that are dependent on other feature classes. Dependant properties evaluate to the `ID` of another feature class.
-
## Changes and Revisions

:::zone pivot="facility-ontology-v1"
Fixed the following constraint validation checks:
:::zone-end
+## Feature collection
++
+At a high level, the facility ontology consists of feature collections, each containing an array of feature objects. All feature objects have two fields in common, `ID` and `Geometry`. When importing a drawing package into Azure Maps Creator, these fields are automatically generated.
+++
+At a high level, the facility ontology consists of feature collections, each containing an array of feature objects. All feature objects have two fields in common, `ID` and `Geometry`.
+
+# [Drawing package](#tab/dwg)
+
+When importing a drawing package into Azure Maps Creator, these fields are automatically generated.
+
+# [GeoJSON package (preview)](#tab/geojson)
+
+Support for creating a [dataset][datasetv20220901] from a GeoJSON package is now available in preview in Azure Maps Creator.
+
+When importing a GeoJSON package, the `ID` and `Geometry` fields must be supplied with each [feature object][feature object] in each GeoJSON file in the package.
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`Geometry` | object | true | Each Geometry object consists of a `type` and a `coordinates` array. While the field is required, its value can be set to `null`. For more information, see [Geometry Object][GeometryObject] in the GeoJSON (RFC 7946) format specification. |
+|`ID` | string | true | The value of this field can be alphanumeric characters (0-9, a-z, A-Z), dots (.), hyphens (-) and underscores (_). Maximum length allowed is 1,000 characters.|
++
+For more information, see [Create a dataset using a GeoJson package](how-to-dataset-geojson.md).
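As a concrete illustration, a minimal feature object that satisfies these two requirements might look like the following sketch. The `id` value and the coordinates are hypothetical placeholders, not values from a real dataset:

```json
{
  "type": "Feature",
  "id": "UNIT-01.001",
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [-122.13595, 47.64675],
        [-122.13585, 47.64675],
        [-122.13585, 47.64665],
        [-122.13595, 47.64665],
        [-122.13595, 47.64675]
      ]
    ]
  },
  "properties": {}
}
```

Per the GeoJSON (RFC 7946) specification, the polygon ring is closed (its first and last positions are identical), and the `id` uses only the characters permitted by the table above.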
++++
+In addition to these common fields, each feature class defines a set of properties. Each property is defined by its data type and constraints. Some feature classes have properties that are dependent on other feature classes. Dependent properties evaluate to the `ID` of another feature class.
+
+The remaining sections in this article define the different feature classes and their properties that make up the facility ontology in Microsoft Azure Maps Creator.
+
## unit

The `unit` feature class defines a physical and non-overlapping area that can be occupied and traversed by a navigating agent. A `unit` can be a hallway, a room, a courtyard, and so on.
The `unit` feature class defines a physical and non-overlapping area that can be
:::zone pivot="facility-ontology-v1"
-| Property | Type | Required | Description |
-|--|--|-|--|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
-|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is assumed to be traversable by any navigating agent. |
-|`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
-|`routeThroughBehavior` | enum ["disallowed", "allowed", "preferred"] | false | Determines if navigating through the unit is allowed. If unspecified, it inherits its value from the category feature referred to in the `categoryId` property. If specified, it overrides the value given in its category feature." |
-|`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
-|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
-|`addressRoomNumber` | [directoryInfo.Id](#directoryinfo) | true | Room/Unit/Apartment/Suite number of the unit.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
+|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is assumed to be traversable by any navigating agent. |
+|`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
+|`routeThroughBehavior` | enum ["disallowed", "allowed", "preferred"] | false | Determines if navigating through the unit is allowed. If unspecified, it inherits its value from the category feature referred to in the `categoryId` property. If specified, it overrides the value given in its category feature. |
+|`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id](#directoryinfo) | false | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+|`addressRoomNumber` | [directoryInfo.Id](#directoryinfo) | true | Room/Unit/Apartment/Suite number of the unit.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end

:::zone pivot="facility-ontology-v2"
-| Property | Type | Required | Description |
-|--|--|-|--|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
-|`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
-|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
-|`addressRoomNumber` | [directoryInfo.Id](#directoryinfo) | true | Room/Unit/Apartment/Suite number of the unit.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID.<BR>When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined.<BR>Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
+|`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id](#directoryinfo) | false | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+|`addressRoomNumber` | string | false | Room/Unit/Apartment/Suite number of the unit. Maximum length allowed is 1,000 characters.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
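To make the table concrete, here's a hedged sketch of how a `unit` feature might appear in a GeoJSON package. Every ID, name, and coordinate below is a hypothetical placeholder, and only a few of the optional properties are shown:

```json
{
  "type": "Feature",
  "id": "UNIT-CONF-ROOM-1",
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [-122.13600, 47.64670],
        [-122.13580, 47.64670],
        [-122.13580, 47.64655],
        [-122.13600, 47.64655],
        [-122.13600, 47.64670]
      ]
    ]
  },
  "properties": {
    "categoryId": "CTG-ROOM-MEETING",
    "levelId": "LVL-01",
    "name": "Conference Room 1",
    "isRoutable": true
  }
}
```

The required `categoryId` and `levelId` values must resolve to the `ID` of an existing `category` and `level` feature, respectively.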
The `unit` feature class defines a physical and non-overlapping area that can be
## structure
-The `structure` feature class defines a physical and non-overlapping area that cannot be navigated through. Can be a wall, column, and so on.
+The `structure` feature class defines a physical and non-overlapping area that can't be navigated through. Can be a wall, column, and so on.
**Geometry Type**: Polygon
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end

## zone
-The `zone` feature class defines a virtual area, like a WiFi zone or emergency assembly area. Zones can be used as destinations but are not meant for through traffic.
+The `zone` feature class defines a virtual area, like a WiFi zone or emergency assembly area. Zones can be used as destinations but aren't meant for through traffic.
**Geometry Type**: Polygon
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It is recommended that the `setId` is a GUID. Maximum length allowed is 1000.|
-| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It's recommended that the `setId` is a GUID. Maximum length allowed is 1,000 characters.|
+| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It's recommended that the `setId` is a GUID. Maximum length allowed is 1,000 characters.|
+| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+ ## level
The `level` class feature defines an area of a building at a set elevation. For
**Geometry Type**: MultiPolygon
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
-| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1000.|
-| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. |
-| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.|
+| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
+| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1,000 characters.|
+| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. |
+| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.|
+| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
+| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1,000 characters.|
+| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. |
+| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
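For example, a ground-floor `level` feature might be sketched as follows; all IDs, names, and coordinates are hypothetical placeholders. Note the `ordinal` of 0 for the ground floor, following the convention described in the table above:

```json
{
  "type": "Feature",
  "id": "LVL-01",
  "geometry": {
    "type": "MultiPolygon",
    "coordinates": [
      [
        [
          [-122.13610, 47.64680],
          [-122.13570, 47.64680],
          [-122.13570, 47.64650],
          [-122.13610, 47.64650],
          [-122.13610, 47.64680]
        ]
      ]
    ]
  },
  "properties": {
    "facilityId": "FCL-MAIN",
    "ordinal": 0,
    "abbreviatedName": "L1",
    "name": "Ground floor"
  }
}
```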
+ ## facility
The `facility` feature class defines the area of the site, building footprint, a
**Geometry Type**: MultiPolygon
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
-|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
-|`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.|
-|`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.|
+|`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.|
+|`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.|
+ ## verticalPenetration
The `verticalPenetration` class feature defines an area that, when used in a set
:::zone pivot="facility-ontology-v1"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are considered to be the same. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1000.|
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.|
-|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. |
-|`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are considered to be the same. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1,000 characters.|
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.|
+|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. |
+|`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end

:::zone pivot="facility-ontology-v2"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are connected. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1000. |
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are connected. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1,000 characters. |
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
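The `setId` pairing is easiest to see with two features. In the hedged sketch below, an elevator is modeled once per level; because both features share the same `setId`, the service treats them as the same shaft connecting the two floors. All IDs, the GUID, and the coordinates are hypothetical placeholders:

```json
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "id": "VRT-ELEV-L1",
      "geometry": {
        "type": "Polygon",
        "coordinates": [[[-122.13590, 47.64668], [-122.13588, 47.64668], [-122.13588, 47.64666], [-122.13590, 47.64666], [-122.13590, 47.64668]]]
      },
      "properties": {
        "categoryId": "CTG-ELEVATOR",
        "levelId": "LVL-01",
        "setId": "0f1d3c2a-5b2e-4c8a-9f1d-0e7b6a5c4d3e",
        "direction": "both"
      }
    },
    {
      "type": "Feature",
      "id": "VRT-ELEV-L2",
      "geometry": {
        "type": "Polygon",
        "coordinates": [[[-122.13590, 47.64668], [-122.13588, 47.64668], [-122.13588, 47.64666], [-122.13590, 47.64666], [-122.13590, 47.64668]]]
      },
      "properties": {
        "categoryId": "CTG-ELEVATOR",
        "levelId": "LVL-02",
        "setId": "0f1d3c2a-5b2e-4c8a-9f1d-0e7b6a5c4d3e",
        "direction": "both"
      }
    }
  ]
}
```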
The `opening` class feature defines a traversable boundary between two units, or
:::zone pivot="facility-ontology-v1"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
| `isConnectedToVerticalPenetration` | boolean | false | Whether or not this feature is connected to a `verticalPenetration` feature on one of its sides. Default value is `false`. |
-|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. |
+|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. |
| `accessRightToLeft`| enum [ "prohibited", "digitalKey", "physicalKey", "keyPad", "guard", "ticket", "fingerprint", "retina", "voice", "face", "palm", "iris", "signature", "handGeometry", "time", "ticketChecker", "other"] | false | Method of access when passing through the opening from right to left. Left and right are determined by the vertices in the feature geometry, standing at the first vertex and facing the second vertex. Omitting this property means there are no access restrictions.|
| `accessLeftToRight`| enum [ "prohibited", "digitalKey", "physicalKey", "keyPad", "guard", "ticket", "fingerprint", "retina", "voice", "face", "palm", "iris", "signature", "handGeometry", "time", "ticketChecker", "other"] | false | Method of access when passing through the opening from left to right. Left and right are determined by the vertices in the feature geometry, standing at the first vertex and facing the second vertex. Omitting this property means there are no access restrictions.|
| `isEmergency` | boolean | false | If `true`, the opening is navigable only during emergencies. Default value is `false` |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) y that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end

:::zone pivot="facility-ontology-v2"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) y that represents the feature as a point. Can be used to position the label of the feature.|
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
+| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`anchorPoint` |[Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
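As a final sketch, an `opening` is typically drawn as a short LineString along the traversable boundary. In the v1 schema, the order of the two vertices is what gives `accessLeftToRight` and `accessRightToLeft` their meaning: standing at the first vertex and facing the second determines which side is left and which is right. All IDs and coordinates below are hypothetical placeholders:

```json
{
  "type": "Feature",
  "id": "OPN-DOOR-101",
  "geometry": {
    "type": "LineString",
    "coordinates": [
      [-122.13588, 47.64660],
      [-122.13586, 47.64660]
    ]
  },
  "properties": {
    "categoryId": "CTG-DOOR",
    "levelId": "LVL-01"
  }
}
```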
The `directoryInfo` object class feature defines the name, address, phone number
**Geometry Type**: None
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1000. |
-|`unit` |string |false |Unit number part of the address. Maximum length allowed is 1000. |
-|`locality`| string| false |The locality of the address. For example: city, municipality, village. Maximum length allowed is 1000.|
-|`adminDivisions`| string| false |Administrative division part of the address, from smallest to largest (County, State, Country). For example: ["King", "Washington", "USA" ] or ["West Godavari", "Andhra Pradesh", "IND" ]. Maximum length allowed is 1000.|
-|`postalCode`| string | false |Postal code part of the address. Maximum length allowed is 1000.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`phoneNumber` | string | false | Phone number. Maximum length allowed is 1000. |
-|`website` | string | false | Website URL. Maximum length allowed is 1000. |
-|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification). Maximum length allowed is 1000. |
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1,000 characters. |
+|`unit` |string |false |Unit number part of the address. Maximum length allowed is 1,000 characters. |
+|`locality`| string| false |The locality of the address. For example: city, municipality, village. Maximum length allowed is 1,000 characters.|
+|`adminDivisions`| array of strings | false |Administrative division part of the address, from smallest to largest (County, State, Country). For example: ["King", "Washington", "USA" ] or ["West Godavari", "Andhra Pradesh", "IND" ]. Maximum length allowed is 1,000 characters.|
+|`postalCode`| string | false |Postal code part of the address. Maximum length allowed is 1,000 characters.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`phoneNumber` | string | false | Phone number. Maximum length allowed is 1,000 characters. |
+|`website` | string | false | Website URL. Maximum length allowed is 1,000 characters. |
+|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification). Maximum length allowed is 1,000 characters. |
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1,000 characters. |
+|`unit` |string |false |Unit number part of the address. Maximum length allowed is 1,000 characters. |
+|`locality`| string| false |The locality of the address. For example: city, municipality, village. Maximum length allowed is 1,000 characters.|
+|`adminDivisions`| array of strings| false |Administrative division part of the address, from smallest to largest (County, State, Country). For example: ["King", "Washington", "USA" ] or ["West Godavari", "Andhra Pradesh", "IND" ]. Maximum length allowed is 1,000 characters.|
+|`postalCode`| string | false |Postal code part of the address. Maximum length allowed is 1,000 characters.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`phoneNumber` | string | false | Phone number. Maximum length allowed is 1,000 characters. |
+|`website` | string | false | Website URL. Maximum length allowed is 1,000 characters. |
+|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification][Open Street Map specification]. Maximum length allowed is 1,000 characters. |
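
For illustration, a minimal, hypothetical `directoryInfo` feature is sketched below as a Python dict following the GeoJSON [feature object][feature object] shape. All values are invented; only property names from the tables above are used.

```python
# A hypothetical directoryInfo feature. directoryInfo carries no geometry
# ("Geometry Type: None"), so the GeoJSON geometry member is null/None.
directory_info = {
    "type": "Feature",
    "geometry": None,
    "properties": {
        "originalId": "directoryInfo-1",                  # invented ID
        "streetAddress": "1 Example Way",
        "unit": "Suite 400",
        "locality": "Redmond",
        "adminDivisions": ["King", "Washington", "USA"],  # smallest to largest
        "postalCode": "98052",
        "name": "Contoso Building 40",
        "phoneNumber": "+1 555 555 0100",
        "website": "https://contoso.example.com",
        "hoursOfOperation": "Mo-Fr 08:00-18:00",          # OSM opening_hours syntax
    },
}
```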
+ ## pointElement
The `pointElement` is a class feature that defines a point feature in a unit, su
**Geometry Type**: MultiPoint
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1000.|
-| `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1,000 characters.|
+| `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1,000 characters.|
+| `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+ ## lineElement
The `lineElement` is a class feature that defines a line feature in a unit, such
**Geometry Type**: LinearMultiString
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1000. |
-| `isObstruction` | boolean (Default value is `null`.)| false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
-|`obstructionArea` | [Polygon](/rest/api/maps/v2/wfs/get-features#featuregeojson)| false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+| `isObstruction` | boolean (Default value is `null`.)| false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+| `isObstruction` | boolean (Default value is `null`.)| false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
+|`anchorPoint` |[Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
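
As a sketch of how these properties fit together, a hypothetical `lineElement` with an obstruction might look like the following Python dict; the IDs and coordinates are invented, and `MultiLineString` is used as the GeoJSON spelling of the multi-line geometry listed above.

```python
# A hypothetical lineElement feature, such as a glass partition that
# routing should avoid. Coordinates are GeoJSON [longitude, latitude].
line_element = {
    "type": "Feature",
    "geometry": {
        "type": "MultiLineString",
        "coordinates": [[[-122.1300, 47.6400], [-122.1290, 47.6410]]],
    },
    "properties": {
        "originalId": "lineElement-7",   # invented ID
        "categoryId": "CTG8",            # ID of a category feature
        "unitId": "UNIT39",              # ID of the containing unit feature
        "isObstruction": True,
        # Simplified area to avoid during routing (requires isObstruction).
        "obstructionArea": {
            "type": "Polygon",
            "coordinates": [[
                [-122.1301, 47.6399], [-122.1289, 47.6399],
                [-122.1289, 47.6411], [-122.1301, 47.6411],
                [-122.1301, 47.6399],
            ]],
        },
        "name": "Glass partition",
        "anchorPoint": {"type": "Point", "coordinates": [-122.1295, 47.6405]},
    },
}
```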
+ ## areaElement
The `areaElement` is a class feature that defines a polygon feature in a unit, s
**Geometry Type**: MultiPolygon
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1000. |
-| `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
-|`obstructionArea` | geometry: ["Polygon","MultiPolygon" ]| false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
-|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
-|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
-|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+| `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+++
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+| `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
+|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. |
+|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.|
+|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
+|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+ ## category
The `category` class feature defines category names. For example: "room.conferen
:::zone pivot="facility-ontology-v1"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The category's original ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1000. |
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | The category's original ID derived from client data. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1,000 characters. |
| `routeThroughBehavior` | boolean | false | Determines whether a feature can be used for through traffic.|
-|`isRoutable` | boolean (Default value is `null`.) | false | Determines if a feature should be part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
+|`isRoutable` | boolean (Default value is `null`.) | false | Determines if a feature should be part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
:::zone-end :::zone pivot="facility-ontology-v2"
-| Property | Type | Required | Description |
-|--||-|-|
-|`originalId` | string |false | The category's original ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
-|`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1000. |
+| Property | Type | Required | Description |
+|-|--|-|-|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
+|`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1,000 characters. |
:::zone-end+
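
For illustration, a minimal hypothetical `category` feature might look like the following Python dict; the ID is invented, and the "." naming convention expresses the category hierarchy described above.

```python
# A hypothetical category feature for conference rooms.
category = {
    "type": "Feature",
    "geometry": None,  # categories carry no geometry in this sketch
    "properties": {
        "originalId": "category-12",    # invented ID
        "name": "room.conference",
        "routeThroughBehavior": False,  # not usable for through traffic
        "isRoutable": True,             # include in the routing graph
    },
}
```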
+[conversion]: /rest/api/maps/v2/conversion
+[geojsonpoint]: /rest/api/maps/v2/wfs/get-features#geojsonpoint
+[GeoJsonPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonpolygon
+[MultiPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonmultipolygon
+[GeometryObject]: https://www.rfc-editor.org/rfc/rfc7946#section-3.1
+[feature object]: https://www.rfc-editor.org/rfc/rfc7946#section-3.2
+[datasetv20220901]: /rest/api/maps/v20220901preview/dataset
+[Open Street Map specification]: https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Creator services create, store, and use various data types that are defined and
- Converted data
- Dataset
- Tileset
-- Custom styles
+- Style
+- Map configuration
- Feature stateset
+- Routeset
## Upload a Drawing package
Azure Maps Creator provides the following services that support map creation:
- [Dataset service](/rest/api/maps/v2/dataset).
- [Tileset service](/rest/api/maps/v2/tileset). Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.
-- Custom styles. Use the [style service][style] or [visual style editor][style editor] to customize the visual elements of an indoor map.
+- [Custom styling service](#custom-styling-preview). Use the [style service][style] or [visual style editor][style editor] to customize the visual elements of an indoor map.
- [Feature State service](/rest/api/maps/v2/feature-state). Use the Feature State service to support dynamic map styling. Applications can use dynamic map styling to reflect real-time events on spaces provided by the IoT system.
+- [Wayfinding service](#wayfinding-preview). Use the [wayfinding API][wayfind] to generate a path between two points within a facility. Use the [routeset API][routeset] to create the data that the wayfinding service needs to generate paths.
### Datasets
If a tileset becomes outdated and is no longer useful, you can delete the tilese
>
> To reflect changes in a dataset, you must create new tilesets. Similarly, if you delete a tileset, the dataset isn't affected.
-### Custom styling (Preview)
+### Custom styling (preview)
A style defines the visual appearance of a map. It defines what data to draw, the order to draw it in, and how to style the data when drawing it. Azure Maps Creator styles support the MapLibre standard for [style layers][style layers] and [sprites][sprites].
An application can use a feature stateset to dynamically render features in a fa
>[!NOTE] >Like tilesets, changing a dataset doesn't affect the existing feature stateset, and deleting a feature stateset doesn't affect the dataset to which it's attached.
+### Wayfinding (preview)
+
+The [Wayfinding service][wayfind] enables you to provide your customers with the shortest path between two points within a facility. Once you've imported your indoor map data and created your dataset, you can use it to create a [routeset][routeset]. The routeset provides the data required to generate paths between two points. The wayfinding service accounts for factors such as the minimum width of openings, and can optionally exclude elevators or stairs when navigating between levels.
+
+Creator wayfinding is powered by [Havok][havok].
+
+#### Wayfinding paths
+
+When a [wayfinding path][wayfinding path] is successfully generated, it finds the shortest path between two points in the specified facility. Each floor in the journey is represented as a separate leg, as are any stairs or elevators used to move between floors.
+
+For example, the first leg of the path might be from the origin to the elevator on that floor. The next leg will be the elevator, and then the final leg will be the path from the elevator to the destination. The estimated travel time is also calculated and returned in the HTTP response JSON.
+
+##### Structure
+
+For wayfinding to work, the facility data must contain a [structure][structures]. The wayfinding service calculates the shortest path between two selected points in a facility. The service creates the path by navigating around structures, such as walls and any other impermeable structures.
+
+##### Vertical penetration
+
+If the selected origin and destination are on different floors, the wayfinding service determines which [vertical penetration][verticalPenetration] objects, such as stairs or elevators, are available as possible pathways for navigating vertically between levels. By default, the option that results in the shortest path is used.
+
+The Wayfinding service includes stairs or elevators in a path based on the value of the vertical penetration's `direction` property. For more information on the direction property, see [verticalPenetration][verticalPenetration] in the Facility Ontology article. See the `avoidFeatures` and `minWidth` properties in the [wayfinding][wayfind] API documentation to learn about other factors that can impact the path selection between floor levels.
+
+For more information, see the [Indoor maps wayfinding service](how-to-creator-wayfinding.md) how-to article.
+ ## Using indoor maps ### Render V2-Get Map Tile API
Creator services such as Conversion, Dataset, Tileset and Feature State return a
### Indoor Maps module
-The [Azure Maps Web SDK](./index.yml) includes the Indoor Maps module. This module offers extended functionalities to the Azure Maps *Map Control* library. The Indoor Maps module renders indoor maps created in Creator. It integrates widgets, such as *floor picker*, that help users to visualize the different floors.
+The [Azure Maps Web SDK](./index.yml) includes the Indoor Maps module. This module offers extended functionalities to the Azure Maps *Map Control* library. The Indoor Maps module renders indoor maps created in Creator. It integrates widgets such as *floor picker* that help users to visualize the different floors.
You can use the Indoor Maps module to create web applications that integrate indoor map data with other [Azure Maps services](./index.yml). The most common application setups include adding knowledge from other maps - such as road, imagery, weather, and transit - to indoor maps.
The following example shows how to update a dataset, create a new tileset, and d
[basemap]: supported-map-styles.md
[style]: /rest/api/maps/v20220901preview/style
[tileset]: /rest/api/maps/v20220901preview/tileset
+[routeset]: /rest/api/maps/v20220901preview/routeset
+[wayfind]: /rest/api/maps/v20220901preview/wayfinding
+[wayfinding path]: /rest/api/maps/v20220901preview/wayfinding/path
[style-picker-control]: choose-map-style.md#add-the-style-picker-control
[style-how-to]: how-to-create-custom-styles.md
[map-config-api]: /rest/api/maps/v20220901preview/map-configuration
[instantiate-indoor-manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager
[style editor]: https://azure.github.io/Azure-Maps-Style-Editor
+[verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration
+[structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure
+[havok]: https://www.havok.com/
azure-maps How To Creator Wayfinding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md
+
+ Title: Indoor Maps wayfinding service
+
+description: How to use the wayfinding service to plot and display routes for indoor maps in Microsoft Azure Maps Creator
+Last updated : 10/25/2022
+# Indoor maps wayfinding service (preview)
+
+The Azure Maps Creator [wayfinding service][wayfinding service] allows you to navigate from place to place anywhere within your indoor map. The service uses stairs and elevators to move between floors and provides guidance to help you navigate around physical obstructions. This article describes how to generate a path from a starting point to a destination point in a sample indoor map.
+
+## Prerequisites
+
+- Understanding of [Creator concepts](creator-indoor-maps.md).
+- An Azure Maps Creator [dataset][dataset] and [tileset][tileset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps](tutorial-creator-indoor-maps.md) tutorial helpful.
+
+>[!IMPORTANT]
+>
+> - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services][how to manage access to creator services].
+> - In the URL examples in this article you will need to:
+> - Replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+> - Replace `{datasetId}` with your `datasetId`. For more information, see the [Check the dataset creation status][check dataset creation status] section of the *Use Creator to create indoor maps* tutorial.
+
+## Create a routeset
+
+A [routeset][routeset] is a collection of indoor map data that is used by the wayfinding service.
+
+A routeset is created from a dataset but is independent of that dataset: if the dataset is deleted, the routeset continues to exist.
+
+Once you've created a routeset, you can then use the wayfinding API to get a path from the starting point to the destination point within the facility.
+
+To create a routeset:
+
+1. Execute the following **HTTP POST request**:
+
+ ```http
+ https://us.atlas.microsoft.com/routesets?api-version=2022-09-01-preview&datasetID={datasetId}&subscription-key={Azure-Maps-Primary-Subscription-key}
+
+ ```
+
+1. Copy the value of the **Operation-Location** key from the response header.
+
+This is the status URL that you'll use to check the status of the routeset creation in the next section.
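+
If you prefer to script this step, the following is a minimal Python sketch of the same request, assuming the third-party `requests` package; the subscription key and dataset ID are placeholders.

```python
import requests

SUBSCRIPTION_KEY = "<Azure-Maps-Primary-Subscription-key>"  # placeholder
DATASET_ID = "<datasetId>"                                  # placeholder

# Create the routeset from the dataset.
response = requests.post(
    "https://us.atlas.microsoft.com/routesets",
    params={
        "api-version": "2022-09-01-preview",
        "datasetID": DATASET_ID,
        "subscription-key": SUBSCRIPTION_KEY,
    },
)
response.raise_for_status()

# The Operation-Location header holds the status URL used in the next section.
status_url = response.headers["Operation-Location"]
print(status_url)
```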
+
+### Check the routeset creation status and retrieve the routesetId
+
+To check the status of the routeset creation process and retrieve the routesetId:
+
+1. Execute the following **HTTP GET request**:
+
+ ```http
+    https://us.atlas.microsoft.com/routesets/operations/{operationId}?api-version=2022-09-01-preview&subscription-key={Azure-Maps-Primary-Subscription-key}
+
+ ```
+
+ > [!NOTE]
+ > Get the `operationId` from the Operation-Location key in the response header when creating a new routeset.
+
+1. Copy the value of the **Resource-Location** key from the response header. This is the resource location URL, which contains the `routesetId`, as shown below:
+
+ > https://us.atlas.microsoft.com/routesets/**675ce646-f405-03be-302e-0d22bcfe17e8**?api-version=2022-09-01-preview
+
+Make a note of the `routesetId`; it's a required parameter in all [wayfinding](#get-a-wayfinding-path) requests and when you [get the facility ID](#get-the-facility-id).
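+
Continuing the Python sketch from the previous section, polling the status URL and extracting the `routesetId` might look like this; the header names come from the steps above, and the string parsing is one possible approach.

```python
import requests

# status_url and SUBSCRIPTION_KEY come from the earlier sketch.
response = requests.get(status_url, params={"subscription-key": SUBSCRIPTION_KEY})
response.raise_for_status()

# Resource-Location is present once the routeset has been created.
resource_location = response.headers.get("Resource-Location")
if resource_location:
    # The routesetId is the last path segment of the resource location URL,
    # for example .../routesets/675ce646-f405-03be-302e-0d22bcfe17e8?api-version=...
    routeset_id = resource_location.split("?")[0].rstrip("/").split("/")[-1]
    print(routeset_id)
```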
+
+### Get the facility ID
+
+The `facilityId`, a property of the routeset, is a required parameter when searching for a wayfinding path. Get the `facilityId` by querying the routeset.
+
+1. Execute the following **HTTP GET request**:
+
+ ```http
+    https://us.atlas.microsoft.com/routesets/{routesetId}?api-version=2022-09-01-preview&subscription-key={Azure-Maps-Primary-Subscription-key}
+
+ ```
+
+1. The `facilityId` is a property of the `facilityDetails` object, found in the response body of the routeset request. In the following example, the `facilityId` is `FCL43`:
+
+```json
+{
+ "routeSetId": "675ce646-f405-03be-302e-0d22bcfe17e8",
+ "dataSetId": "eec3825c-620f-13e1-b469-85d2767c8a41",
+ "created": "10/10/2022 6:58:32 PM +00:00",
+ "facilityDetails": [
+ {
+ "facilityId": "FCL43",
+ "levelOrdinals": [
+ 0,
+ 1
+ ]
+ }
+ ],
+ "creationMode": "Wall",
+ "ontology": "facility-2.0"
+}
+```
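
Continuing the sketch, the `facilityId` and level ordinals can be read from that response body as follows.

```python
import requests

# routeset_id and SUBSCRIPTION_KEY come from the earlier sketch.
response = requests.get(
    f"https://us.atlas.microsoft.com/routesets/{routeset_id}",
    params={
        "api-version": "2022-09-01-preview",
        "subscription-key": SUBSCRIPTION_KEY,
    },
)
response.raise_for_status()

facility = response.json()["facilityDetails"][0]
facility_id = facility["facilityId"]        # "FCL43" in the example above
level_ordinals = facility["levelOrdinals"]  # [0, 1] in the example above
```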
+
+## Get a wayfinding path
+
+In this section, you'll use the [wayfinding API][wayfinding API] to generate a path from the routeset you created in the previous section. The wayfinding API requires a query that contains start and end points in an indoor map, along with floor level ordinal numbers. For more information about Creator wayfinding, see [wayfinding][wayfinding] in the concepts article.
+
+To create a wayfinding query:
+
+1. Execute the following **HTTP GET request** (replace {routesetId} with the routesetId obtained in the [Check the routeset creation status](#check-the-routeset-creation-status-and-retrieve-the-routesetid) section and the {facilityId} with the facilityId obtained in the [Get the facility ID](#get-the-facility-id) section):
+
+ ```http
+    https://us.atlas.microsoft.com/wayfinding/path?api-version=2022-09-01-preview&subscription-key={Azure-Maps-Primary-Subscription-key}&routesetid={routesetId}&facilityid={facilityId}&fromPoint={lat,lon}&fromLevel={from-level}&toPoint={lat,lon}&toLevel={to-level}&minWidth={minimum-width}
+ ```
+
+ > [!TIP]
+    > The `avoidFeatures` parameter can be used to specify features for the wayfinding service to avoid when determining the path, such as elevators or stairs.
+
+1. The details of the path and legs are displayed in the body of the response.
+
+The summary displays the estimated travel time in seconds for the total journey. In addition, the estimated time for each section of the journey is displayed at the beginning of each leg.
+
+The wayfinding service calculates the path through specific intervening points. Each point is displayed, along with its latitude and longitude details.
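+
Continuing the Python sketch, the same wayfinding query might look like the following; the coordinates and levels are invented placeholders, and the IDs come from the earlier steps.

```python
import requests

# routeset_id, facility_id, and SUBSCRIPTION_KEY come from the earlier sketches.
response = requests.get(
    "https://us.atlas.microsoft.com/wayfinding/path",
    params={
        "api-version": "2022-09-01-preview",
        "subscription-key": SUBSCRIPTION_KEY,
        "routesetid": routeset_id,
        "facilityid": facility_id,
        "fromPoint": "47.6400,-122.1300",  # {lat,lon}
        "fromLevel": 0,
        "toPoint": "47.6405,-122.1305",    # {lat,lon}
        "toLevel": 1,
    },
)
response.raise_for_status()

# Inspect the body for the summary, legs, and intervening points.
print(response.json())
```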
+
+<!-- TODO: ## Implement the wayfinding service in your map (Refer to sample app once completed) -->
+
+[dataset]: creator-indoor-maps.md#datasets
+[tileset]: creator-indoor-maps.md#tilesets
+[routeset]: /rest/api/maps/v20220901preview/routeset
+[wayfinding]: creator-indoor-maps.md#wayfinding-preview
+[wayfinding API]: /rest/api/maps/v20220901preview/wayfinding
+[how to manage access to creator services]: how-to-manage-creator.md#access-to-creator-services
+[check dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status
+[wayfinding service]: creator-indoor-maps.md#wayfinding-preview
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md
Title: Microsoft Azure Maps Weather services coverage
description: Learn about Microsoft Azure Maps Weather services coverage Previously updated : 03/28/2022 Last updated : 11/08/2022
Azure Maps [Severe weather alerts][severe-weather-alerts] service returns severe
## Americas
-| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
-||::|:-:|::|::|
-| Anguilla | ✓ | | | ✓ |
-| Antarctica | ✓ | | | ✓ |
-| Antigua & Barbuda | ✓ | | | ✓ |
-| Argentina | ✓ | | | ✓ |
-| Aruba | ✓ | | | ✓ |
-| Bahamas | ✓ | | | ✓ |
-| Barbados | ✓ | | | ✓ |
-| Belize | ✓ | | | ✓ |
-| Bermuda | ✓ | | | ✓ |
-| Bolivia | ✓ | | | ✓ |
-| Bonaire | ✓ | | | ✓ |
-| Brazil | ✓ | | ✓ | ✓ |
-| British Virgin Islands | ✓ | | | ✓ |
-| Canada | ✓ | ✓ | ✓ | ✓ |
-| Cayman Islands | ✓ | | | ✓ |
-| Chile | ✓ | | | ✓ |
-| Colombia | ✓ | | | ✓ |
-| Costa Rica | ✓ | | | ✓ |
-| Cuba | ✓ | | | ✓ |
-| Curaçao | ✓ | | | ✓ |
-| Dominica | ✓ | | | ✓ |
-| Dominican Republic | ✓ | | | ✓ |
-| Ecuador | ✓ | | | ✓ |
-| El Salvador | ✓ | | | ✓ |
-| Falkland Islands | ✓ | | | ✓ |
-| French Guiana | ✓ | | | ✓ |
-| Greenland | ✓ | | | ✓ |
-| Grenada | ✓ | | | ✓ |
-| Guadeloupe | ✓ | | | ✓ |
-| Guatemala | ✓ | | | ✓ |
-| Guyana | ✓ | | | ✓ |
-| Haiti | ✓ | | | ✓ |
-| Honduras | ✓ | | | ✓ |
-| Jamaica | ✓ | | | ✓ |
-| Martinique | ✓ | | | ✓ |
-| Mexico | ✓ | | | ✓ |
-| Montserrat | ✓ | | | ✓ |
-| Nicaragua | ✓ | | | ✓ |
-| Panama | ✓ | | | ✓ |
-| Paraguay | ✓ | | | ✓ |
-| Peru | ✓ | | | ✓ |
-| Puerto Rico | ✓ | | ✓ | ✓ |
-| Saint Barthélemy | ✓ | | | ✓ |
-| Saint Kitts & Nevis | ✓ | | | ✓ |
-| Saint Lucia | ✓ | | | ✓ |
-| Saint Martin | ✓ | | | ✓ |
-| Saint Pierre & Miquelon | ✓ | | | ✓ |
-| Saint Vincent & the Grenadines | ✓ | | | ✓ |
-| Sint Eustatius | ✓ | | | ✓ |
-| Sint Maarten | ✓ | | | ✓ |
-| South Georgia & South Sandwich Islands | ✓ | | | ✓ |
-| Suriname | ✓ | | | ✓ |
-| Trinidad & Tobago | ✓ | | | ✓ |
-| Turks & Caicos Islands | ✓ | | | ✓ |
-| U.S. Outlying Islands | ✓ | | | ✓ |
-| U.S. Virgin Islands | ✓ | | ✓ | ✓ |
-| United States | ✓ | ✓ | ✓ | ✓ |
-| Uruguay | ✓ | | | ✓ |
-| Venezuela | ✓ | | | ✓ |
+| Country/Region | Infrared satellite & Radar tiles | Minute forecast | Severe weather alerts | Other* |
+||::|:--:|::|::|
+| Anguilla | ✓ | ✓ | | ✓ |
+| Antarctica | ✓ | | | ✓ |
+| Antigua & Barbuda | ✓ | ✓ | | ✓ |
+| Argentina | ✓ | ✓ | | ✓ |
+| Aruba | ✓ | ✓ | | ✓ |
+| Bahamas | ✓ | ✓ | | ✓ |
+| Barbados | ✓ | ✓ | | ✓ |
+| Belize | ✓ | ✓ | | ✓ |
+| Bermuda | ✓ | | | ✓ |
+| Bolivia | ✓ | ✓ | | ✓ |
+| Bonaire | ✓ | ✓ | | ✓ |
+| Brazil | ✓ | ✓ | ✓ | ✓ |
+| British Virgin Islands | ✓ | ✓ | | ✓ |
+| Canada | ✓ | ✓ | ✓ | ✓ |
+| Cayman Islands | ✓ | ✓ | | ✓ |
+| Chile | ✓ | ✓ | | ✓ |
+| Colombia | ✓ | ✓ | | ✓ |
+| Costa Rica | ✓ | ✓ | | ✓ |
+| Cuba | ✓ | ✓ | | ✓ |
+| Curaçao | ✓ | ✓ | | ✓ |
+| Dominica | ✓ | ✓ | | ✓ |
+| Dominican Republic | ✓ | ✓ | | ✓ |
+| Ecuador | ✓ | ✓ | | ✓ |
+| El Salvador | ✓ | ✓ | | ✓ |
+| Falkland Islands | ✓ | ✓ | | ✓ |
+| French Guiana | ✓ | ✓ | | ✓ |
+| Greenland | ✓ | | | ✓ |
+| Grenada | ✓ | ✓ | | ✓ |
+| Guadeloupe | ✓ | ✓ | | ✓ |
+| Guatemala | ✓ | ✓ | | ✓ |
+| Guyana | ✓ | ✓ | | ✓ |
+| Haiti | ✓ | ✓ | | ✓ |
+| Honduras | ✓ | ✓ | | ✓ |
+| Jamaica | ✓ | ✓ | | ✓ |
+| Martinique | ✓ | ✓ | | ✓ |
+| Mexico | ✓ | ✓ | | ✓ |
+| Montserrat | ✓ | ✓ | | ✓ |
+| Nicaragua | ✓ | ✓ | | ✓ |
+| Panama | ✓ | ✓ | | ✓ |
+| Paraguay | ✓ | ✓ | | ✓ |
+| Peru | ✓ | ✓ | | ✓ |
+| Puerto Rico | ✓ | ✓ | ✓ | ✓ |
+| Saint Barthélemy | ✓ | ✓ | | ✓ |
+| Saint Kitts & Nevis | ✓ | ✓ | | ✓ |
+| Saint Lucia | ✓ | ✓ | | ✓ |
+| Saint Martin | ✓ | ✓ | | ✓ |
+| Saint Pierre & Miquelon | ✓ | | | ✓ |
+| Saint Vincent & the Grenadines | ✓ | ✓ | | ✓ |
+| Sint Eustatius | ✓ | | | ✓ |
+| Sint Maarten | ✓ | ✓ | | ✓ |
+| South Georgia & South Sandwich Islands | ✓ | | | ✓ |
+| Suriname | ✓ | ✓ | | ✓ |
+| Trinidad & Tobago | ✓ | ✓ | | ✓ |
+| Turks & Caicos Islands | ✓ | ✓ | | ✓ |
+| U.S. Outlying Islands | ✓ | | | ✓ |
+| U.S. Virgin Islands | ✓ | ✓ | ✓ | ✓ |
+| United States | ✓ | ✓ | ✓ | ✓ |
+| Uruguay | ✓ | ✓ | | ✓ |
+| Venezuela | ✓ | ✓ | | ✓ |
## Asia Pacific
-| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
-|--|::|:-:|::|::|
-| Afghanistan | ✓ | | | ✓ |
-| American Samoa | ✓ | | ✓ | ✓ |
-| Australia | ✓ | ✓ | ✓ | ✓ |
-| Bangladesh | ✓ | | | ✓ |
-| Bhutan | ✓ | | | ✓ |
-| British Indian Ocean Territory | ✓ | | | ✓ |
-| Brunei | ✓ | | | ✓ |
-| Cambodia | ✓ | | | ✓ |
-| China | ✓ | ✓ | ✓ | ✓ |
-| Christmas Island | ✓ | | | ✓ |
-| Cocos (Keeling) Islands | ✓ | | | ✓ |
-| Cook Islands | ✓ | | | ✓ |
-| Fiji | ✓ | | | ✓ |
-| French Polynesia | ✓ | | | ✓ |
-| Guam | ✓ | | ✓ | ✓ |
-| Heard Island & McDonald Islands | ✓ | | | ✓ |
-| Hong Kong SAR | ✓ | | | ✓ |
-| India | ✓ | | | ✓ |
-| Indonesia | ✓ | | | ✓ |
-| Japan | ✓ | ✓ | ✓ | ✓ |
-| Kazakhstan | ✓ | | | ✓ |
-| Kiribati | ✓ | | | ✓ |
-| Korea | ✓ | ✓ | ✓ | ✓ |
-| Kyrgyzstan | ✓ | | | ✓ |
-| Laos | ✓ | | | ✓ |
-| Macao SAR | ✓ | | | ✓ |
-| Malaysia | ✓ | | | ✓ |
-| Maldives | ✓ | | | ✓ |
-| Marshall Islands | ✓ | | ✓ | ✓ |
-| Micronesia | ✓ | | ✓ | ✓ |
-| Mongolia | ✓ | | | ✓ |
-| Myanmar | ✓ | | | ✓ |
-| Nauru | ✓ | | | ✓ |
-| Nepal | ✓ | | | ✓ |
-| New Caledonia | ✓ | | | ✓ |
-| New Zealand | ✓ | | ✓ | ✓ |
-| Niue | ✓ | | | ✓ |
-| Norfolk Island | ✓ | | | ✓ |
-| North Korea | ✓ | | | ✓ |
-| Northern Mariana Islands | ✓ | | ✓ | ✓ |
-| Pakistan | ✓ | | | ✓ |
-| Palau | ✓ | | ✓ | ✓ |
-| Papua New Guinea | ✓ | | | ✓ |
-| Philippines | ✓ | | ✓ | ✓ |
-| Pitcairn Islands | ✓ | | | ✓ |
-| Samoa | ✓ | | | ✓ |
-| Singapore | ✓ | | | ✓ |
-| Solomon Islands | ✓ | | | ✓ |
-| Sri Lanka | ✓ | | | ✓ |
-| Taiwan | ✓ | | | ✓ |
-| Tajikistan | ✓ | | | ✓ |
-| Thailand | ✓ | | | ✓ |
-| Timor-Leste | ✓ | | | ✓ |
-| Tokelau | ✓ | | | ✓ |
-| Tonga | ✓ | | | ✓ |
-| Turkmenistan | ✓ | | | ✓ |
-| Tuvalu | ✓ | | | ✓ |
-| Uzbekistan | ✓ | | | ✓ |
-| Vanuatu | ✓ | | | ✓ |
-| Vietnam | ✓ | | | ✓ |
-| Wallis & Futuna | ✓ | | | ✓ |
+| Country/Region | Infrared satellite & Radar tiles | Minute forecast | Severe weather alerts | Other* |
+|--|::|:--:|::|::|
+| Afghanistan | ✓ | ✓ | | ✓ |
+| American Samoa | ✓ | | ✓ | ✓ |
+| Australia | ✓ | ✓ | ✓ | ✓ |
+| Bangladesh | ✓ | ✓ | | ✓ |
+| Bhutan | ✓ | ✓ | | ✓ |
+| British Indian Ocean Territory | ✓ | | | ✓ |
+| Brunei | ✓ | ✓ | | ✓ |
+| Cambodia | ✓ | ✓ | | ✓ |
+| China | ✓ | ✓ | ✓ | ✓ |
+| Christmas Island | ✓ | | | ✓ |
+| Cocos (Keeling) Islands | ✓ | | | ✓ |
+| Cook Islands | ✓ | | | ✓ |
+| Fiji | ✓ | | | ✓ |
+| French Polynesia | ✓ | | | ✓ |
+| Guam | ✓ | ✓ | ✓ | ✓ |
+| Heard Island & McDonald Islands | ✓ | | | ✓ |
+| Hong Kong SAR | ✓ | ✓ | | ✓ |
+| India | ✓ | ✓ | | ✓ |
+| Indonesia | ✓ | ✓ | | ✓ |
+| Japan | ✓ | ✓ | ✓ | ✓ |
+| Kazakhstan | ✓ | ✓ | | ✓ |
+| Kiribati | ✓ | | | ✓ |
+| Korea | ✓ | ✓ | ✓ | ✓ |
+| Kyrgyzstan | ✓ | ✓ | | ✓ |
+| Laos | ✓ | ✓ | | ✓ |
+| Macao SAR | ✓ | ✓ | | ✓ |
+| Malaysia | ✓ | ✓ | | ✓ |
+| Maldives | ✓ | | | ✓ |
+| Marshall Islands | ✓ | | ✓ | ✓ |
+| Micronesia | ✓ | | ✓ | ✓ |
+| Mongolia | ✓ | | | ✓ |
+| Myanmar | ✓ | | | ✓ |
+| Nauru | ✓ | | | ✓ |
+| Nepal | ✓ | ✓ | | ✓ |
+| New Caledonia | ✓ | | | ✓ |
+| New Zealand | ✓ | ✓ | ✓ | ✓ |
+| Niue | ✓ | | | ✓ |
+| Norfolk Island | ✓ | | | ✓ |
+| North Korea | ✓ | ✓ | | ✓ |
+| Northern Mariana Islands | ✓ | ✓ | ✓ | ✓ |
+| Pakistan | ✓ | ✓ | | ✓ |
+| Palau | ✓ | ✓ | ✓ | ✓ |
+| Papua New Guinea | ✓ | ✓ | | ✓ |
+| Philippines | ✓ | ✓ | ✓ | ✓ |
+| Pitcairn Islands | ✓ | | | ✓ |
+| Samoa | ✓ | | | ✓ |
+| Singapore | ✓ | ✓ | | ✓ |
+| Solomon Islands | ✓ | | | ✓ |
+| Sri Lanka | ✓ | ✓ | | ✓ |
+| Taiwan | ✓ | ✓ | | ✓ |
+| Tajikistan | ✓ | ✓ | | ✓ |
+| Thailand | ✓ | ✓ | | ✓ |
+| Timor-Leste | ✓ | ✓ | | ✓ |
+| Tokelau | ✓ | | | ✓ |
+| Tonga | ✓ | | | ✓ |
+| Turkmenistan | ✓ | ✓ | | ✓ |
+| Tuvalu | ✓ | | | ✓ |
+| Uzbekistan | ✓ | ✓ | | ✓ |
+| Vanuatu | ✓ | | | ✓ |
+| Vietnam | ✓ | ✓ | | ✓ |
+| Wallis & Futuna | ✓ | | | ✓ |
## Europe
-| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
-|-|::|:-:|::|::|
-| Albania | ✓ | | | ✓ |
-| Andorra | ✓ | | ✓ | ✓ |
-| Armenia | ✓ | | | ✓ |
-| Austria | ✓ | ✓ | ✓ | ✓ |
-| Azerbaijan | ✓ | | | ✓ |
-| Belarus | ✓ | | | ✓ |
-| Belgium | ✓ | ✓ | ✓ | ✓ |
-| Bosnia & Herzegovina | ✓ | ✓ | ✓ | ✓ |
-| Bulgaria | ✓ | | ✓ | ✓ |
-| Croatia | ✓ | ✓ | ✓ | ✓ |
-| Cyprus | ✓ | | ✓ | ✓ |
-| Czechia | ✓ | ✓ | ✓ | ✓ |
-| Denmark | ✓ | ✓ | ✓ | ✓ |
-| Estonia | ✓ | ✓ | ✓ | ✓ |
-| Faroe Islands | ✓ | | | ✓ |
-| Finland | ✓ | ✓ | ✓ | ✓ |
-| France | ✓ | ✓ | ✓ | ✓ |
-| Georgia | ✓ | | | ✓ |
-| Germany | ✓ | ✓ | ✓ | ✓ |
-| Gibraltar | ✓ | ✓ | | ✓ |
-| Greece | ✓ | | ✓ | ✓ |
-| Guernsey | ✓ | | | ✓ |
-| Hungary | ✓ | ✓ | ✓ | ✓ |
-| Iceland | ✓ | | ✓ | ✓ |
-| Ireland | ✓ | ✓ | ✓ | ✓ |
-| Isle of Man | ✓ | | | ✓ |
-| Italy | ✓ | | ✓ | ✓ |
-| Jan Mayen | ✓ | | | ✓ |
-| Jersey | ✓ | | | ✓ |
-| Kosovo | ✓ | | ✓ | ✓ |
-| Latvia | ✓ | | ✓ | ✓ |
-| Liechtenstein | ✓ | ✓ | ✓ | ✓ |
-| Lithuania | ✓ | | ✓ | ✓ |
-| Luxembourg | ✓ | ✓ | ✓ | ✓ |
-| North Macedonia | ✓ | | ✓ | ✓ |
-| Malta | ✓ | | ✓ | ✓ |
-| Moldova | ✓ | ✓ | ✓ | ✓ |
-| Monaco | ✓ | ✓ | ✓ | ✓ |
-| Montenegro | ✓ | ✓ | ✓ | ✓ |
-| Netherlands | ✓ | ✓ | ✓ | ✓ |
-| Norway | ✓ | ✓ | ✓ | ✓ |
-| Poland | ✓ | ✓ | ✓ | ✓ |
-| Portugal | ✓ | ✓ | ✓ | ✓ |
-| Romania | ✓ | ✓ | ✓ | ✓ |
-| Russia | ✓ | | ✓ | ✓ |
-| San Marino | ✓ | | ✓ | ✓ |
-| Serbia | ✓ | ✓ | ✓ | ✓ |
-| Slovakia | ✓ | ✓ | ✓ | ✓ |
-| Slovenia | ✓ | ✓ | ✓ | ✓ |
-| Spain | ✓ | ✓ | ✓ | ✓ |
-| Svalbard | ✓ | | | ✓ |
-| Sweden | ✓ | ✓ | ✓ | ✓ |
-| Switzerland | ✓ | ✓ | ✓ | ✓ |
-| Turkey | ✓ | | | ✓ |
-| Ukraine | ✓ | | | ✓ |
-| United Kingdom | ✓ | ✓ | ✓ | ✓ |
-| Vatican City | ✓ | | ✓ | ✓ |
+| Country/Region | Infrared satellite & Radar tiles | Minute forecast | Severe weather alerts | Other* |
+|-|::|:--:|::|::|
+| Albania | ✓ | ✓ | | ✓ |
+| Andorra | ✓ | ✓ | ✓ | ✓ |
+| Armenia | ✓ | ✓ | | ✓ |
+| Austria | ✓ | ✓ | ✓ | ✓ |
+| Azerbaijan | ✓ | ✓ | | ✓ |
+| Belarus | ✓ | ✓ | | ✓ |
+| Belgium | ✓ | ✓ | ✓ | ✓ |
+| Bosnia & Herzegovina | ✓ | ✓ | ✓ | ✓ |
+| Bulgaria | ✓ | ✓ | ✓ | ✓ |
+| Croatia | ✓ | ✓ | ✓ | ✓ |
+| Cyprus | ✓ | ✓ | ✓ | ✓ |
+| Czechia | ✓ | ✓ | ✓ | ✓ |
+| Denmark | ✓ | ✓ | ✓ | ✓ |
+| Estonia | ✓ | ✓ | ✓ | ✓ |
+| Faroe Islands | ✓ | | | ✓ |
+| Finland | ✓ | ✓ | ✓ | ✓ |
+| France | ✓ | ✓ | ✓ | ✓ |
+| Georgia | ✓ | ✓ | | ✓ |
+| Germany | ✓ | ✓ | ✓ | ✓ |
+| Gibraltar | ✓ | ✓ | | ✓ |
+| Greece | ✓ | ✓ | ✓ | ✓ |
+| Guernsey | ✓ | | | ✓ |
+| Hungary | ✓ | ✓ | ✓ | ✓ |
+| Iceland | ✓ | | ✓ | ✓ |
+| Ireland | ✓ | ✓ | ✓ | ✓ |
+| Isle of Man | ✓ | | | ✓ |
+| Italy | ✓ | ✓ | ✓ | ✓ |
+| Jan Mayen | ✓ | | | ✓ |
+| Jersey | ✓ | | | ✓ |
+| Kosovo | ✓ | ✓ | ✓ | ✓ |
+| Latvia | ✓ | | ✓ | ✓ |
+| Liechtenstein | ✓ | ✓ | ✓ | ✓ |
+| Lithuania | ✓ | | ✓ | ✓ |
+| Luxembourg | ✓ | ✓ | ✓ | ✓ |
+| North Macedonia | ✓ | | ✓ | ✓ |
+| Malta | ✓ | | ✓ | ✓ |
+| Moldova | ✓ | ✓ | ✓ | ✓ |
+| Monaco | ✓ | ✓ | ✓ | ✓ |
+| Montenegro | ✓ | ✓ | ✓ | ✓ |
+| Netherlands | ✓ | ✓ | ✓ | ✓ |
+| Norway | ✓ | ✓ | ✓ | ✓ |
+| Poland | ✓ | ✓ | ✓ | ✓ |
+| Portugal | ✓ | ✓ | ✓ | ✓ |
+| Romania | ✓ | ✓ | ✓ | ✓ |
+| Russia | ✓ | 1 | ✓ | ✓ |
+| San Marino | ✓ | | ✓ | ✓ |
+| Serbia | ✓ | ✓ | ✓ | ✓ |
+| Slovakia | ✓ | ✓ | ✓ | ✓ |
+| Slovenia | ✓ | ✓ | ✓ | ✓ |
+| Spain | ✓ | ✓ | ✓ | ✓ |
+| Svalbard | ✓ | | | ✓ |
+| Sweden | ✓ | ✓ | ✓ | ✓ |
+| Switzerland | ✓ | ✓ | ✓ | ✓ |
+| Turkey | ✓ | ✓ | | ✓ |
+| Ukraine | ✓ | ✓ | | ✓ |
+| United Kingdom | ✓ | ✓ | ✓ | ✓ |
+| Vatican City | ✓ | | ✓ | ✓ |
+
+1 Partial coverage includes Moscow and Saint Petersburg
## Middle East & Africa
-| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
-|-|::|:-:|::|::|
-| Algeria | ✓ | | | ✓ |
-| Angola | ✓ | | | ✓ |
-| Bahrain | ✓ | | | ✓ |
-| Benin | ✓ | | | ✓ |
-| Botswana | ✓ | | | ✓ |
-| Bouvet Island | ✓ | | | ✓ |
-| Burkina Faso | ✓ | | | ✓ |
-| Burundi | ✓ | | | ✓ |
-| Cameroon | ✓ | | | ✓ |
-| Cape Verde | ✓ | | | ✓ |
-| Central African Republic | ✓ | | | ✓ |
-| Chad | ✓ | | | ✓ |
-| Comoros | ✓ | | | ✓ |
-| Congo (DRC) | ✓ | | | ✓ |
-| Côte d'Ivoire | ✓ | | | ✓ |
-| Djibouti | ✓ | | | ✓ |
-| Egypt | ✓ | | | ✓ |
-| Equatorial Guinea | ✓ | | | ✓ |
-| Eritrea | ✓ | | | ✓ |
-| eSwatini | ✓ | | | ✓ |
-| Ethiopia | ✓ | | | ✓ |
-| French Southern Territories | ✓ | | | ✓ |
-| Gabon | ✓ | | | ✓ |
-| Gambia | ✓ | | | ✓ |
-| Ghana | ✓ | | | ✓ |
-| Guinea | ✓ | | | ✓ |
-| Guinea-Bissau | ✓ | | | ✓ |
-| Iran | ✓ | | | ✓ |
-| Iraq | ✓ | | | ✓ |
-| Israel | ✓ | | ✓ | ✓ |
-| Jordan | ✓ | | | ✓ |
-| Kenya | ✓ | | | ✓ |
-| Kuwait | ✓ | | | ✓ |
-| Lebanon | ✓ | | | ✓ |
-| Lesotho | ✓ | | | ✓ |
-| Liberia | ✓ | | | ✓ |
-| Libya | ✓ | | | ✓ |
-| Madagascar | ✓ | | | ✓ |
-| Malawi | ✓ | | | ✓ |
-| Mali | ✓ | | | ✓ |
-| Mauritania | ✓ | | | ✓ |
-| Mauritius | ✓ | | | ✓ |
-| Mayotte | ✓ | | | ✓ |
-| Morocco | ✓ | | | ✓ |
-| Mozambique | ✓ | | | ✓ |
-| Namibia | ✓ | | | ✓ |
-| Niger | ✓ | | | ✓ |
-| Nigeria | ✓ | | | ✓ |
-| Oman | ✓ | | | ✓ |
-| Palestinian Authority | ✓ | | | ✓ |
-| Qatar | ✓ | | | ✓ |
-| Réunion | ✓ | | | ✓ |
-| Rwanda | ✓ | | | ✓ |
-| Saint Helena, Ascension, Tristan da Cunha | ✓ | | | ✓ |
-| São Tomé & Príncipe | ✓ | | | ✓ |
-| Saudi Arabia | ✓ | | | ✓ |
-| Senegal | ✓ | | | ✓ |
-| Seychelles | ✓ | | | ✓ |
-| Sierra Leone | ✓ | | | ✓ |
-| Somalia | ✓ | | | ✓ |
-| South Africa | ✓ | | | ✓ |
-| South Sudan | ✓ | | | ✓ |
-| Sudan | ✓ | | | ✓ |
-| Syria | ✓ | | | ✓ |
-| Tanzania | ✓ | | | ✓ |
-| Togo | ✓ | | | ✓ |
-| Tunisia | ✓ | | | ✓ |
-| Uganda | ✓ | | | ✓ |
-| United Arab Emirates | ✓ | | | ✓ |
-| Yemen | ✓ | | | ✓ |
-| Zambia | ✓ | | | ✓ |
-| Zimbabwe | ✓ | | | ✓ |
+| Country/Region | Infrared satellite & Radar tiles | Minute forecast | Severe weather alerts | Other* |
+|-|::|:--:|::|::|
+| Algeria | ✓ | ✓ | | ✓ |
+| Angola | ✓ | ✓ | | ✓ |
+| Bahrain | ✓ | ✓ | | ✓ |
+| Benin | ✓ | ✓ | | ✓ |
+| Botswana | ✓ | ✓ | | ✓ |
+| Bouvet Island | ✓ | | | ✓ |
+| Burkina Faso | ✓ | ✓ | | ✓ |
+| Burundi | ✓ | ✓ | | ✓ |
+| Cameroon | ✓ | ✓ | | ✓ |
+| Cape Verde | ✓ | ✓ | | ✓ |
+| Central African Republic | ✓ | ✓ | | ✓ |
+| Chad | ✓ | ✓ | | ✓ |
+| Comoros | ✓ | ✓ | | ✓ |
+| Congo (DRC) | ✓ | ✓ | | ✓ |
+| Côte d'Ivoire | ✓ | ✓ | | ✓ |
+| Djibouti | ✓ | ✓ | | ✓ |
+| Egypt | ✓ | ✓ | | ✓ |
+| Equatorial Guinea | ✓ | ✓ | | ✓ |
+| Eritrea | ✓ | ✓ | | ✓ |
+| Eswatini | ✓ | ✓ | | ✓ |
+| Ethiopia | ✓ | ✓ | | ✓ |
+| French Southern Territories | ✓ | | | ✓ |
+| Gabon | ✓ | ✓ | | ✓ |
+| Gambia | ✓ | ✓ | | ✓ |
+| Ghana | ✓ | ✓ | | ✓ |
+| Guinea | ✓ | ✓ | | ✓ |
+| Guinea-Bissau | ✓ | ✓ | | ✓ |
+| Iran | ✓ | ✓ | | ✓ |
+| Iraq | ✓ | ✓ | | ✓ |
+| Israel | ✓ | ✓ | ✓ | ✓ |
+| Jordan | ✓ | ✓ | | ✓ |
+| Kenya | ✓ | ✓ | | ✓ |
+| Kuwait | ✓ | ✓ | | ✓ |
+| Lebanon | ✓ | ✓ | | ✓ |
+| Lesotho | ✓ | ✓ | | ✓ |
+| Liberia | ✓ | ✓ | | ✓ |
+| Libya | ✓ | ✓ | | ✓ |
+| Madagascar | ✓ | ✓ | | ✓ |
+| Malawi | ✓ | ✓ | | ✓ |
+| Mali | ✓ | ✓ | | ✓ |
+| Mauritania | ✓ | ✓ | | ✓ |
+| Mauritius | ✓ | ✓ | | ✓ |
+| Mayotte | ✓ | ✓ | | ✓ |
+| Morocco | ✓ | | | ✓ |
+| Mozambique | ✓ | ✓ | | ✓ |
+| Namibia | ✓ | ✓ | | ✓ |
+| Niger | ✓ | ✓ | | ✓ |
+| Nigeria | ✓ | ✓ | | ✓ |
+| Oman | ✓ | ✓ | | ✓ |
+| Palestinian Authority | ✓ | ✓ | | ✓ |
+| Qatar | ✓ | ✓ | | ✓ |
+| Réunion | ✓ | ✓ | | ✓ |
+| Rwanda | ✓ | ✓ | | ✓ |
+| Saint Helena, Ascension, Tristan da Cunha | ✓ | ✓ | | ✓ |
+| São Tomé & Príncipe | ✓ | ✓ | | ✓ |
+| Saudi Arabia | ✓ | ✓ | | ✓ |
+| Senegal | ✓ | ✓ | | ✓ |
+| Seychelles | ✓ | ✓ | | ✓ |
+| Sierra Leone | ✓ | ✓ | | ✓ |
+| Somalia | ✓ | ✓ | | ✓ |
+| South Africa | ✓ | ✓ | | ✓ |
+| South Sudan | ✓ | ✓ | | ✓ |
+| Sudan | ✓ | ✓ | | ✓ |
+| Syria | ✓ | ✓ | | ✓ |
+| Tanzania | ✓ | ✓ | | ✓ |
+| Togo | ✓ | ✓ | | ✓ |
+| Tunisia | ✓ | ✓ | | ✓ |
+| Uganda | ✓ | ✓ | | ✓ |
+| United Arab Emirates | ✓ | ✓ | | ✓ |
+| Yemen | ✓ | ✓ | | ✓ |
+| Zambia | ✓ | ✓ | | ✓ |
+| Zimbabwe | ✓ | ✓ | | ✓ |
## Next steps
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 11/3/2022 Last updated : 11/9/2022
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Agent diagnostics logs | | | X |
| **Data sent to** | | | | |
| | Azure Monitor Logs | X | X | |
-| | Azure Monitor Metrics<sup>1</sup> | X | | X |
+| | Azure Monitor Metrics<sup>1</sup> | X (Public preview) | | X (Public preview) |
| | Azure Storage | | | X |
| | Event Hub | | | X |
| **Services and features supported** | | | | |
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | File based logs | X (Public preview) | | | |
| **Data sent to** | | | | | |
| | Azure Monitor Logs | X | X | | |
-| | Azure Monitor Metrics<sup>1</sup> | X | | | X |
+| | Azure Monitor Metrics<sup>1</sup> | X (Public preview) | | | X (Public preview) |
| | Azure Storage | | | X | |
| | Event Hub | | | X | |
| **Services and features supported** | | | | | |
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
description: Options for managing Azure Monitor Agent on Azure virtual machines
Previously updated : 09/22/2022 Last updated : 11/9/2022
The following prerequisites must be met prior to installing Azure Monitor Agent.
| Built-in role | Scopes | Reason |
|:|:|:|
| <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, scale sets,</li><li>Azure Arc-enabled servers</li></ul> | To deploy the agent |
- | Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy Azure Resource Manager templates |
+ | Any role that includes the action *Microsoft.Resources/deployments/** (for example, [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#log-analytics-contributor)) | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy agent extension via Azure Resource Manager templates (also used by Azure Policy) |
- **Non-Azure**: To install the agent on physical servers and virtual machines hosted *outside* of Azure (that is, on-premises) or in other clouds, you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first, at no added cost. - **Authentication**: [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) must be enabled on Azure virtual machines. Both user-assigned and system-assigned managed identities are supported. - **User-assigned**: This managed identity is recommended for large-scale deployments, configurable via [built-in Azure policies](#use-azure-policy). You can create a user-assigned managed identity once and share it across multiple VMs, which means it's more scalable than a system-assigned managed identity. If you use a user-assigned managed identity, you must pass the managed identity details to Azure Monitor Agent via extension settings:
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Title: Monitor Azure App Service performance in .NET Core | Microsoft Docs description: Application performance monitoring for Azure App Service using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/05/2021 Last updated : 11/09/2022 ms.devlang: csharp
What follows is our step-by-step troubleshooting guide for extension/agent-based
# [Linux](#tab/linux)
-1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of `~2`.
+1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of `~3` (a command-line check is sketched after this list).
1. Browse to `https://your site name.scm.azurewebsites.net/ApplicationInsights`. 1. Within this site, confirm: * The status source exists and looks like `Status source /var/log/applicationinsights/status_abcde1234567_89_0.json`.
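One hedged way to confirm that app setting from the command line is a query like the following sketch; the app name and resource group below are placeholder values, not taken from the article:

```azurecli
# Hypothetical check: print the value of ApplicationInsightsAgent_EXTENSION_VERSION
# for an App Service app. Replace the placeholder names with your own.
az webapp config appsettings list \
  --name my-app \
  --resource-group my-resource-group \
  --query "[?name=='ApplicationInsightsAgent_EXTENSION_VERSION'].value" \
  --output tsv
```

If the output isn't `~3`, update the app setting before continuing with the remaining checks.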
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
When filtering down to a particular resource in the Change Analysis standalone p
1. In the Azure portal, select **All resources**. 1. Select the actual resource you want to view. 1. In that resource's left side menu, select **Diagnose and solve problems**.
-1. Select **Change details**.
+1. In the Change Analysis card, select **View change details**.
+
+ :::image type="content" source="./media/change-analysis/change-details-card.png" alt-text="Screenshot of viewing change details from the Change Analysis card in the Diagnose and solve problems tool.":::
From here, you'll be able to view all of the changes for that one resource.
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
In addition to the methods below, you may be given the option to create a new Az
Use the following command to create an Azure Monitor workspace using Azure CLI. ```azurecli
-az resource create --resource-group divyaj-test --namespace microsoft.monitor --resource-type accounts --name testmac0929 --location westus2 --properties {}
+az resource create --resource-group <resource-group-name> --namespace microsoft.monitor --resource-type accounts --name <azure-monitor-workspace-name> --location <location> --properties {}
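# A hypothetical filled-in run; the resource group, workspace name, and
# location below are placeholder values, not from the article:
#   az resource create --resource-group rg-monitoring --namespace microsoft.monitor --resource-type accounts --name amw-demo --location eastus --properties {}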
``` ### [Resource Manager](#tab/resource-manager)
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
Previously updated : 07/10/2022 Last updated : 07/27/2022 #Customer intent: As a dev-ops administrator I want to migrate my retention setting from diagnostic setting retention storage to Azure Storage lifecycle management so that it continues to work after the feature has been deprecated. # Migrate from diagnostic settings storage retention to Azure Storage lifecycle management
-This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal) for retention.
+The Diagnostic Settings Storage Retention feature is being deprecated. To configure retention for logs and metrics, use Azure Storage Lifecycle Management.
+
+This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](/azure/storage/blobs/lifecycle-management-policy-configure?tabs=azure-portal) for retention.
+
+> [!IMPORTANT]
+> **Deprecation Timeline.**
+> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. If you have configured retention settings, you'll still be able to see and change them.
+> - September 30, 2023 – You will no longer be able to use the API or Azure portal to configure retention settings unless you're changing them to *0*. Existing retention rules will still be respected.
+> - September 30, 2025 – All retention functionality for the Diagnostic Settings Storage Retention feature will be disabled across all environments.
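For orientation, here's a minimal sketch of a lifecycle management rule that deletes diagnostic log blobs after a retention window. The account name, resource group, rule name, prefix, and 90-day window are illustrative assumptions, not values from the article:

```azurecli
# Hypothetical example: delete blobs under the insights-logs prefix once they
# are more than 90 days old. All names below are placeholders.
az storage account management-policy create \
  --account-name mystorageaccount \
  --resource-group my-resource-group \
  --policy '{
    "rules": [
      {
        "enabled": true,
        "name": "delete-old-diagnostic-logs",
        "type": "Lifecycle",
        "definition": {
          "actions": {
            "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 90 } }
          },
          "filters": {
            "blobTypes": [ "appendBlob", "blockBlob" ],
            "prefixMatch": [ "insights-logs" ]
          }
        }
      }
    ]
  }'
```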
++ ## Prerequisites
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 11/08/2022 Last updated : 11/09/2022 + # Guidelines for Azure NetApp Files network planning
Azure NetApp Files volumes are designed to be contained in a special purpose sub
## Configurable network features
- Register for the [**configurable network features**](configure-network-features.md) to create volumes with standard network features. You can create new volumes choosing *Standard* or *Basic* network features in supported regions. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features.
+ You can create new volumes choosing *Standard* or *Basic* network features in supported regions. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features. For more information, see [Configure network features](configure-network-features.md).
* ***Standard*** Selecting this setting enables higher IP limits and standard VNet features such as [network security groups](../virtual-network/network-security-groups-overview.md) and [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) on delegated subnets, and additional connectivity patterns as indicated in this article.
Azure NetApp Files Standard network features are supported for the following reg
* Australia Central 2 * Australia East * Australia Southeast
+* Brazil South
* Canada Central * Central US * East Asia
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 10/18/2022 Last updated : 11/10/2022 # Solution architectures using Azure NetApp Files
This section provides references for High Performance Computing (HPC) solutions.
### Analytics * [SAS on Azure architecture guide - Azure Architecture Center | Azure NetApp Files](/azure/architecture/guide/sas/sas-overview#azure-netapp-files-nfs)
+* [Deploy SAS Grid 9.4 on Azure NetApp Files](/azure/architecture/guide/hpc/netapp-files-sas)
+* [Best Practices for Using Microsoft Azure with SAS®](https://communities.sas.com/t5/Administration-and-Deployment/Best-Practices-for-Using-Microsoft-Azure-with-SAS/m-p/676833#M19680)
* [Azure NetApp Files: A shared file system to use with SAS Grid on Microsoft Azure](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/705192) * [Azure NetApp Files: A shared file system to use with SAS Grid on MS Azure – RHEL8.3/nconnect UPDATE](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/722261#M21648) * [Best Practices for Using Microsoft Azure with SAS®](https://communities.sas.com/t5/Administration-and-Deployment/Best-Practices-for-Using-Microsoft-Azure-with-SAS/m-p/676833#M19680)
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
na Previously updated : 09/29/2022 Last updated : 11/09/2022
Two settings are available for network features:
* Conversion between Basic and Standard networking features in either direction is not currently supported.
-## Register the feature
-
-Follow the registration steps if you're using the feature for the first time.
-
-1. Register the feature by running the following commands:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSDNAppliance
-
- Register-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowPoliciesOnBareMetal
- ```
-
-2. Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSDNAppliance
-
- Get-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowPoliciesOnBareMetal
- ```
-
-You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
- ## Set the Network Features option This section shows you how to set the Network Features option.
azure-netapp-files Cross Region Replication Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-manage-disaster-recovery.md
na Previously updated : 04/21/2021 Last updated : 11/09/2022 # Manage disaster recovery using cross-region replication
After disaster recovery, you can reactivate the source volume by performing a re
> [!IMPORTANT] > The reverse resync operation synchronizes the source and destination volumes by incrementally updating the source volume with the latest updates from the destination volume, based on the last available common snapshots. This operation avoids the need to synchronize the entire volume in most cases because only changes to the destination volume *after* the most recent common snapshot will have to be replicated to the source volume. >
-> The reverse resync operation overwrites any newer data (than the most common snapshot) in the source volume with the updated destination volume data. The UI warns you about the potential for data loss. You will be prompted to confirm the resync action before the operation starts.
+> ***The reverse resync operation overwrites any newer data (than the most common snapshot) in the source volume with the updated destination volume data. The UI warns you about the potential for data loss. You will be prompted to confirm the resync action before the operation starts.***
> > In case the source volume did not survive the disaster and therefore no common snapshot exists, all data in the destination will be resynchronized to a newly created source volume.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 11/07/2022 Last updated : 11/09/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Standard network features](configure-network-features.md) are now generally available [in supported regions](azure-netapp-files-network-topologies.md#supported-regions).
- Standard network features now includes Global VNet peering. You must still [register the feature](configure-network-features.md#register-the-feature) before using it.
+ Standard network features now includes Global VNet peering.
Regular billing for Standard network features on Azure NetApp Files began November 1, 2022.
azure-percept Retirement Of Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/retirement-of-azure-percept-dk.md
Previously updated : 10/05/2022 Last updated : 11/10/2022 # Retirement of Azure Percept DK
+**Update November 9, 2022**: A firmware update that enables the Vision SoM and Audio SoM to retain their functionality with the DK beyond the retirement date will be made available before the retirement date.
+ The [Azure Percept](https://azure.microsoft.com/products/azure-percept/) public preview will be evolving to support new edge device platforms and developer experiences. As part of this evolution the Azure Percept DK and Audio Accessory and associated supporting Azure services for the Percept DK will be retired March 30, 2023. ## How does this change affect me?
If you have questions regarding Azure Percept DK, please refer to the below **FA
| When is this change occurring? | On March 30, 2023. Until this date your DK and Studio will function as-is and updates and customer support will be offered. After this date, all updates and customer support will stop. | | Will my projects be deleted? | Your projects remain in the underlying Azure Services they were created in (example: Custom Vision, Speech Studio, etc.). They won't be deleted due to this retirement. You can no longer modify or use your project with Percept Studio. | | Do I need to do anything before March 30, 2023? | Yes, you will need to close the resources and projects associated with the Azure Percept Studio and DK to avoid future billing, as these backend resources and projects will continue to bill after retirement. |
-| Will my device still power on? | The various backend services that allow the DK and Audio Accessory to fully function will be shut down upon retirement, rending the DK and Audio Accessory effectively unusable. The SoMs, such as the camera and Audio Accessory, will no longer be identified by the DK after retirement and thus effectively unusable. |
+
azure-resource-manager Bicep Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-deployment.md
description: Describes the functions to use in a Bicep file to retrieve deployme
Previously updated : 06/27/2022 Last updated : 11/09/2022 # Deployment functions for Bicep
The preceding example returns the following object when deployed to global Azure
"resourceManager": "https://management.azure.com/", "authentication": { "loginEndpoint": "https://login.microsoftonline.com/",
- "audiences": [
- "https://management.core.windows.net/",
- "https://management.azure.com/"
- ],
+ "audiences": [ "https://management.core.windows.net/", "https://management.azure.com/" ],
"tenant": "common", "identityProvider": "AAD" },
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Title: Iterative loops in Bicep description: Use loops to iterate over collections in Bicep Previously updated : 12/02/2021 Last updated : 11/09/2022 # Iterative loops in Bicep This article shows you how to use the `for` syntax to iterate over items in a collection. This functionality is supported starting in v0.3.1 onward. You can use loops to define multiple copies of a resource, module, variable, property, or output. Use loops to avoid repeating syntax in your Bicep file and to dynamically set the number of copies to create during deployment. To go through a quickstart, see [Quickstart: Create multiple instances](./quickstart-loops.md).
+To use loops to create multiple resources or modules, each instance must have a unique value for the name property. You can use the index value or unique values in arrays or collections to create the names.
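As a minimal sketch of index-based naming (all names here are illustrative, not from the article), the loop index combined with a unique string keeps each resource name distinct:

```bicep
// Hypothetical example: the loop index i makes each storage account name unique.
param storageCount int = 2

resource stg 'Microsoft.Storage/storageAccounts@2021-06-01' = [for i in range(0, storageCount): {
  name: 'stg${i}${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}]
```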
+ ### Training resources If you would rather learn about loops through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/training/modules/build-flexible-bicep-templates-conditions-loops/).
module stgModule './storageAccount.bicep' = [for i in range(0, storageCount): {
## Array elements
-The following example creates one storage account for each name provided in the `storageNames` parameter.
+The following example creates one storage account for each name provided in the `storageNames` parameter. Note that the name property for each resource instance must be unique.
```bicep param location string = resourceGroup().location
resource storageAcct 'Microsoft.Storage/storageAccounts@2021-06-01' = [for name
}] ```
-The next example iterates over an array to define a property. It creates two subnets within a virtual network.
+The next example iterates over an array to define a property. It creates two subnets within a virtual network. Note that the subnet names must be unique.
::: code language="bicep" source="~/azure-docs-bicep-samples/samples/loops/loopproperty.bicep" highlight="23-28" :::
output deployedNSGs array = [for (name, i) in orgNames: {
## Dictionary object
-To iterate over elements in a dictionary object, use the [items function](bicep-functions-object.md#items), which converts the object to an array. Use the `value` property to get properties on the objects.
+To iterate over elements in a dictionary object, use the [items function](bicep-functions-object.md#items), which converts the object to an array. Use the `value` property to get properties on the objects. Note that the NSG resource names must be unique.
```bicep param nsgValues object = {
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
Title: Create & deploy template specs in Bicep
description: Describes how to create template specs in Bicep and share them with other users in your organization. Previously updated : 08/23/2022 Last updated : 11/10/2022 # Azure Resource Manager template specs in Bicep
To learn more about template specs, and for hands-on guidance, see [Publish libr
To create a template spec, you need **write** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`.
-To deploy a template spec, you need **read** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`. You also need **write** access to any resources deployed by the template spec, and access to `Microsoft.Resources/deployments/*`.
+To deploy a template spec, you need **read** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`. You also need the permissions for deploying a Bicep file. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
## Why use template specs?
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
Title: Create & deploy template specs description: Describes how to create template specs and share them with other users in your organization. Previously updated : 01/12/2022 Last updated : 11/10/2022
az ts show \
## Deploy template spec
-After you've created the template spec, users with **read** access to the template spec can deploy it. For information about granting access, see [Tutorial: Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md).
+After you've created the template spec, users with **read** access to the template spec can deploy it. For information about granting access, see [Tutorial: Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md). You also need the permissions for deploying an ARM template. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
Template specs can be deployed through the portal, PowerShell, Azure CLI, or as a linked template in a larger template deployment. Users in an organization can deploy a template spec to any scope in Azure (resource group, subscription, management group, or tenant).
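As one sketch of that flow with Azure CLI (the subscription, names, and version below are placeholders, not values from the article), you can look up the template spec's resource ID and pass it to a deployment:

```azurecli
# Hypothetical example: deploy version 1.0 of a template spec to a resource group.
specId=$(az ts show \
  --name myTemplateSpec \
  --resource-group templateSpecsRg \
  --version "1.0" \
  --query "id" --output tsv)

az deployment group create \
  --resource-group demoRg \
  --template-spec $specId
```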
azure-resource-manager Template Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-test-cases.md
Title: Template test cases for test toolkit description: Describes the template tests that are run by the Azure Resource Manager template test toolkit. Previously updated : 07/30/2021 Last updated : 11/09/2022
This test finds parameters that aren't used in the template or parameters that a
To reduce confusion in your template, delete any parameters that are defined but not used. Eliminating unused parameters simplifies template deployments because you don't have to provide unnecessary values.
+In Bicep, use [Linter rule - no unused parameters](../bicep/linter-rule-no-unused-parameters.md).
+ The following example **fails** because the expression that references a parameter is missing the leading square bracket (`[`). ```json
You use the types `secureString` or `secureObject` on parameters that contain se
When you provide a default value, that value is discoverable by anyone who can access the template or the deployment history.
+In Bicep, use [Linter rule - secure parameter default](../bicep/linter-rule-secure-parameter-default.md).
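By contrast, a passing sketch declares the secure parameter with no default value (the parameter name is assumed for illustration):

```json
"parameters": {
  "adminPassword": {
    "type": "secureString"
  }
}
```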
+ The following example **fails**. ```json
Test name: **DeploymentTemplate Must Not Contain Hardcoded Uri**
Don't hard-code environment URLs in your template. Instead, use the [environment](template-functions-deployment.md#environment) function to dynamically get these URLs during deployment. For a list of the URL hosts that are blocked, see the [test case](https://github.com/Azure/arm-ttk/blob/master/arm-ttk/testcases/deploymentTemplate/DeploymentTemplate-Must-Not-Contain-Hardcoded-Uri.test.ps1).
+In Bicep, use [Linter rule - no hardcoded environment URL](../bicep/linter-rule-no-hardcoded-environment-urls.md).
+ The following example **fails** because the URL is hard-coded. ```json
Template users may have limited access to regions where they can create resource
By providing a `location` parameter that defaults to the resource group location, users can use the default value when convenient but also specify a different location.
+In Bicep, use [Linter rule - no location expressions outside of parameter default values](../bicep/linter-rule-no-loc-expr-outside-params.md).
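A passing sketch of that pattern puts the expression in the parameter's default value (the parameter name is assumed for illustration):

```json
"parameters": {
  "location": {
    "type": "string",
    "defaultValue": "[resourceGroup().location]"
  }
}
```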
+ The following example **fails** because the resource's `location` is set to `resourceGroup().location`. ```json
Test name: **Resources Should Have Location**
The location for a resource should be set to a [template expression](template-expressions.md) or `global`. The template expression would typically use the `location` parameter described in [Location uses parameter](#location-uses-parameter).
+In Bicep, use [Linter rule - no hardcoded locations](../bicep/linter-rule-no-hardcoded-location.md).
+ The following example **fails** because the `location` isn't an expression or `global`. ```json
When you include parameters for `_artifactsLocation` and `_artifactsLocationSasT
- `_artifactsLocationSasToken` can only have an empty string for its default value. - `_artifactsLocationSasToken` can't have a default value in a nested template.
+In Bicep, use [Linter rule - artifacts parameters](../bicep/linter-rule-artifacts-parameters.md).
+ ## Declared variables must be used Test name: **Variables Must Be Referenced**
This test finds variables that aren't used in the template or aren't used in a v
Variables that use the `copy` element to iterate values must be referenced. For more information, see [Variable iteration in ARM templates](copy-variables.md).
+In Bicep, use [Linter rule - no unused variables](../bicep/linter-rule-no-unused-variables.md).
+ The following example **fails** because the variable that uses the `copy` element isn't referenced. ```json
A warning that an API version wasn't found only indicates the version isn't incl
Learn more about the [toolkit cache](https://github.com/Azure/arm-ttk/tree/master/arm-ttk/cache).
+In Bicep, use [Linter rule - use recent API versions](../bicep/linter-rule-use-recent-api-versions.md).
+ The following example **fails** because the API version is more than two years old. ```json
When specifying a resource ID, use one of the resource ID functions. The allowed
- [tenantResourceId](template-functions-resource.md#tenantresourceid) - [extensionResourceId](template-functions-resource.md#extensionresourceid)
-Don't use the concat function to create a resource ID. The following example **fails**.
+Don't use the concat function to create a resource ID.
+
+In Bicep, use [Linter rule - use resource ID functions](../bicep/linter-rule-use-resource-id-functions.md).
+
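By contrast, a passing sketch uses the `resourceId` function (the parameter name is assumed for illustration):

```json
"networkSecurityGroup": {
  "id": "[resourceId('Microsoft.Network/networkSecurityGroups', parameters('networkSecurityGroupName'))]"
}
```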
+The following example **fails**.
```json "networkSecurityGroup": {
When setting the deployment dependencies, don't use the [if](template-functions-
The `dependsOn` element can't begin with a [concat](template-functions-array.md#concat) function.
+In Bicep, use [Linter rule - no unnecessary dependsOn entries](../bicep/linter-rule-no-unnecessary-dependson.md).
+ The following example **fails** because it contains an `if` function. ```json
Test name: **adminUsername Should Not Be A Literal**
When setting an `adminUserName`, don't use a literal value. Create a parameter for the user name and use an expression to reference the parameter's value.
+In Bicep, use [Linter rule - admin user name should not be literal](../bicep/linter-rule-admin-username-should-not-be-literal.md).
+ The following example **fails** with a literal value. ```json
This test is disabled, but the output shows that it passed. The best practice is
If your template includes a virtual machine with an image, make sure it's using the latest version of the image.
+In Bicep, use [Linter rule - use stable VM image](../bicep/linter-rule-use-stable-vm-image.md).
+ ## Use stable VM images Test name: **Virtual Machines Should Not Be Preview**
Virtual machines shouldn't use preview images. The test checks the `storageProfi
For more information about the `imageReference` property, see [Microsoft.Compute virtualMachines](/azure/templates/microsoft.compute/virtualmachines#imagereference-object) and [Microsoft.Compute virtualMachineScaleSets](/azure/templates/microsoft.compute/virtualmachinescalesets#imagereference-object).
+In Bicep, use [Linter rule - use stable VM image](../bicep/linter-rule-use-stable-vm-image.md).
+ The following example **fails** because `imageReference` is a string that contains _preview_. ```json
Don't include any values in the `outputs` section that potentially exposes secre
The output from a template is stored in the deployment history, so a malicious user could find that information.
+In Bicep, use [Linter rule - outputs should not contain secrets](../bicep/linter-rule-outputs-should-not-contain-secrets.md).
+ The following example **fails** because it includes a secure parameter in an output value. ```json
For resources with type `CustomScript`, use the encrypted `protectedSettings` wh
Don't use secret data in the `settings` object because it uses clear text. For more information, see [Microsoft.Compute virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions), [Windows]( /azure/virtual-machines/extensions/custom-script-windows), or [Linux](../../virtual-machines/extensions/custom-script-linux.md).
+In Bicep, use [Linter rule - use protectedSettings for commandToExecute secrets](../bicep/linter-rule-protect-commandtoexecute-secrets.md).
+ The following example **fails** because `settings` uses `commandToExecute` with a secure parameter. ```json
Use the nested template's `expressionEvaluationOptions` object with `inner` scop
For more information about nested templates, see [Microsoft.Resources deployments](/azure/templates/microsoft.resources/deployments) and [Expression evaluation scope in nested templates](linked-templates.md#expression-evaluation-scope-in-nested-templates).
+In Bicep, use [Linter rule - secure params in nested deploy](../bicep/linter-rule-secure-params-in-nested-deploy.md).
+ The following example **fails** because `expressionEvaluationOptions` uses `outer` scope to evaluate secure parameters or `list*` functions. ```json
azure-signalr Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-custom-domain.md
Last updated 08/15/2022
-# How to Configure a custom domain for Azure SignalR Service
+# How to configure a custom domain for Azure SignalR Service
In addition to the default domain provided with Azure SignalR Service, you can also add a custom DNS domain to your service. In this article, you'll learn how to add a custom domain to your SignalR Service.
azure-signalr Howto Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints.md
Title: Secure Azure SignalR outbound traffic through Shared Private Endpoints
+ Title: Secure Azure SignalR outbound traffic through shared private endpoints
-description: How to secure outbound traffic through Shared Private Endpoints to avoid traffic go to public network
+description: How to secure outbound traffic through shared private endpoints to avoid traffic going to the public network
Last updated 07/08/2021
-# Secure Azure SignalR outbound traffic through Shared Private Endpoints
+# Secure Azure SignalR outbound traffic through shared private endpoints
When you're using [serverless mode](concept-service-mode.md#serverless-mode) in Azure SignalR Service, you can create outbound [private endpoint connections](../private-link/private-endpoint-overview.md) to an upstream service.
azure-video-indexer Compare Video Indexer With Media Services Presets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compare-video-indexer-with-media-services-presets.md
Title: Comparison of Azure Video Indexer and Azure Media Services v3 presets description: This article compares Azure Video Indexer capabilities and Azure Media Services v3 presets. Previously updated : 02/24/2020 Last updated : 11/10/2022
This article compares the capabilities of **Azure Video Indexer (AVI) APIs** and **Media Services v3 APIs**.
-Currently, there is an overlap between features offered by the [Azure Video Indexer APIs](https://api-portal.videoindexer.ai/) and the [Media Services v3 APIs](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01/Encoding.json). The following table offers the current guideline for understanding the differences and similarities.
+Currently, there is an overlap between features offered by the [Azure Video Indexer APIs](https://api-portal.videoindexer.ai/) and the [Media Services v3 APIs](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01/Encoding.json). Azure Media Services has [announced the deprecation](https://learn.microsoft.com/azure/media-services/latest/release-notes#retirement-of-the-azure-media-redactor-video-analyzer-and-face-detector-on-september-14-2023) of its Video Analysis preset starting September 2023. We advise using Azure Video Indexer Video Analysis going forward, which is generally available and offers more functionality.
+
+The following table offers the current guideline for understanding the differences and similarities.
## Compare
azure-video-indexer Compliance Privacy Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compliance-privacy-security.md
- Title: Azure Video Indexer compliance, privacy and security
-description: This article discusses Azure Video Indexer compliance, privacy and security.
- Previously updated : 08/18/2022---
-# Compliance, Privacy and Security
-
-As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
-
-Before uploading any video/image to Azure Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
-
-To learn about compliance, privacy and security in Azure Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
-
-## Next steps
-
-[Azure Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Azure Video Indexer analyzes the video and audio content by running 30+ AI model
To start extracting insights with Azure Video Indexer, see the [how can I get started](#how-can-i-get-started-with-azure-video-indexer) section below.
-## Compliance, Privacy and Security
-
-> [!Important]
-> Before you continue with Azure Video Indexer, read [Compliance, privacy and security](compliance-privacy-security.md).
- ## What can I do with Azure Video Indexer? Azure Video Indexer's insights can be applied to many scenarios, among them are:
Learn how to [get started with Azure Video Indexer](video-indexer-get-started.md
Once you set up, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
+## Compliance, Privacy and Security
+
+As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
+
+Before uploading any video/image to Azure Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
+
+To learn about compliance, privacy and security in Azure Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
+ ## Next steps You're ready to get started with Azure Video Indexer. For more information, see the following articles:
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Title: Platform updates for Azure VMware Solution
+ Title: What's new in Azure VMware Solution
description: Learn about the platform updates to Azure VMware Solution. -+ Previously updated : 09/15/2022 Last updated : 11/09/2022
-# Platform updates for Azure VMware Solution
+# What's new in Azure VMware Solution
Microsoft will regularly apply important updates to the Azure VMware Solution for new features and software lifecycle management. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
-## July 8, 2022
+## November 2022
+AV36P and AV52 node sizes are now available in Azure VMware Solution.
+The new node sizes increase memory and storage options to optimize your workloads. The gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of the new nodes allows for large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
+ For pricing and region availability, see the [Azure VMware Solution pricing page](https://azure.microsoft.com/pricing/details/azure-vmware/) and see the [Products available by region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
-HCX cloud manager in Azure VMware Solution can now be accessible over a public IP address. You can pair HCX sites and create a service mesh from on-premises to Azure VMware Solution private cloud using Public IP.
+## July 2022
-HCX with public IP is especially useful in cases where On-premises sites are not connected to Azure via Express Route or VPN. HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections.
+ - HCX cloud manager in Azure VMware Solution can now be accessed over a public IP address. You can pair HCX sites and create a service mesh from on-premises to Azure VMware Solution private cloud using a public IP.
+ HCX with public IP is especially useful in cases where on-premises sites are not connected to Azure via ExpressRoute or VPN. HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections. For more information, see [Enable HCX over the internet](./enable-hcx-access-over-internet.md).
-For more information, please see [Enable HCX over the internet](./enable-hcx-access-over-internet.md)
+ - All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+ Any existing private clouds will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+ You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-
-## July 7, 2022
-
-All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
-
-Any existing private clouds will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
-
-You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-
-## June 7, 2022
+## June 2022
All new Azure VMware Solution private clouds in regions (East US2, Canada Central, North Europe, and Japan East), are now deployed in with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c. Any existing private clouds in the above mentioned regions will also be upgraded to these versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
-## May 23, 2022
-
-All new Azure VMware Solution private clouds in regions (Germany West Central, Australia East, Central US and UK West), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+## May 2022
-Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+ - All new Azure VMware Solution private clouds in regions (Germany West Central, Australia East, Central US and UK West), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+ Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html). You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
+ - All new Azure VMware Solution private clouds in regions (France Central, Brazil South, Japan West, Australia Southeast, Canada East, East Asia, and Southeast Asia), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+ Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+ You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-## May 9, 2022
-
-All new Azure VMware Solution private clouds in regions (France Central, Brazil South, Japan West, Australia Southeast, Canada East, East Asia, and Southeast Asia), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
-
-Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
-
-You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-
-## February 18, 2022
+## February 2022
Per VMware security advisory [VMSA-2022-0004](https://www.vmware.com/security/advisories/VMSA-2022-0004.html), multiple vulnerabilities in VMware ESXi have been reported to VMware.
For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release E
No further action is required.
-## December 22, 2021
-
-Azure VMware Solution (AVS) has completed maintenance activities to address critical vulnerabilities in Apache Log4j.
-The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these AVS managed VMware products: vCenter Server, NSX-T Data Center, SRM and HCX.
-We strongly encourage customers to apply the fixes to on-premises HCX connector appliances.
-
-We also recommend customers to review the security advisory and apply the fixes for other affected VMware products or workloads.
-
-If you need any assistance or have questions, please [contact us](https://portal.azure.com/#home).
---
-## December 12, 2021
-
-VMware has announced a security advisory [VMSA-2021-0028](https://www.vmware.com/security/advisories/VMSA-2021-0028.html), addressing a critical vulnerability in Apache Log4j identified by CVE-2021-44228.
+## December 2021
-Azure VMware Solution is actively monitoring this issue. We are addressing this issue by applying VMware recommended workarounds or patches for AVS managed VMware components as they become available.
+ - Azure VMware Solution (AVS) has completed maintenance activities to address critical vulnerabilities in Apache Log4j. The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these AVS managed VMware products: vCenter Server, NSX-T Data Center, SRM and HCX. We strongly encourage customers to apply the fixes to on-premises HCX connector appliances.
+ We also recommend customers to review the security advisory and apply the fixes for other affected VMware products or workloads.
+ If you need any assistance or have questions, please [contact us](https://portal.azure.com/#home).
-Please note that you may experience intermittent connectivity to these components when we apply a fix.
+ - VMware has announced a security advisory [VMSA-2021-0028](https://www.vmware.com/security/advisories/VMSA-2021-0028.html), addressing a critical vulnerability in Apache Log4j identified by CVE-2021-44228. Azure VMware Solution is actively monitoring this issue. We are addressing this issue by applying VMware recommended workarounds or patches for AVS managed VMware components as they become available. Please note that you may experience intermittent connectivity to these components when we apply a fix. We strongly recommend that you read the advisory and patch or apply the recommended workarounds for any additional VMware products that you may have deployed in Azure VMware Solution. If you need any assistance or have questions, please [contact us](https://portal.azure.com).
-We strongly recommend that you read the advisory and patch or apply the recommended workarounds for any additional VMware products that you may have deployed in Azure VMware Solution.
-
-If you need any assistance or have questions, please [contact us](https://portal.azure.com).
-
-## November 23, 2021
+## November 2021
Per VMware security advisory [VMSA-2021-0027](https://www.vmware.com/security/advisories/VMSA-2021-0027.html), multiple vulnerabilities in VMware vCenter Server have been reported to VMware.
For more information, see [VMware vCenter Server 6.7 Update 3p Release Notes](ht
No further action is required.
-## September 21, 2021
-Per VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), multiple vulnerabilities in the VMware vCenter Server have been reported to VMware.
-
-To address the vulnerabilities (CVE-2021-21991, CVE-2021-21992, CVE-2021-21993, CVE-2021-22005, CVE-2021-22006, CVE-2021-22007, CVE-2021-22008, CVE-2021-22009, CVE-2021-22010, CVE-2021-22011, CVE-2021-22012,CVE-2021-22013, CVE-2021-22014, CVE-2021-22015, CVE-2021-22016, CVE-2021-22017, CVE-2021-22018, CVE-2021-22019, CVE-2021-22020) reported in VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), vCenter Server has been updated to 6.7 Update 3o in all Azure VMware Solution private clouds. All new Azure VMware Solution private clouds are deployed with vCenter Server version 6.7 Update 3o.
-
-For more information, see [VMware vCenter Server 6.7 Update 3o Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3o-release-notes.html)
-
-No further action is required.
-
-## September 10, 2021
-
-All new Azure VMware Solution private clouds are now deployed with ESXi version ESXi670-202103001 (Build number: 17700523).
-ESXi hosts in existing private clouds have been patched to this version.
-
-For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release ESXi670-202103001](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202103001.html).
--
+## September 2021
+ - Per VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), multiple vulnerabilities in the VMware vCenter Server have been reported to VMware. To address the vulnerabilities (CVE-2021-21991, CVE-2021-21992, CVE-2021-21993, CVE-2021-22005, CVE-2021-22006, CVE-2021-22007, CVE-2021-22008, CVE-2021-22009, CVE-2021-22010, CVE-2021-22011, CVE-2021-22012, CVE-2021-22013, CVE-2021-22014, CVE-2021-22015, CVE-2021-22016, CVE-2021-22017, CVE-2021-22018, CVE-2021-22019, CVE-2021-22020) reported in VMware security advisory [VMSA-2021-0020](https://www.vmware.com/security/advisories/VMSA-2021-0020.html), vCenter Server has been updated to 6.7 Update 3o in all Azure VMware Solution private clouds. All new Azure VMware Solution private clouds are deployed with vCenter Server version 6.7 Update 3o. For more information, see [VMware vCenter Server 6.7 Update 3o Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3o-release-notes.html). No further action is required.
+ - All new Azure VMware Solution private clouds are now deployed with ESXi version ESXi670-202103001 (Build number: 17700523). ESXi hosts in existing private clouds have been patched to this version. For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release ESXi670-202103001](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202103001.html).
-## July 23, 2021
+## July 2021
All new Azure VMware Solution private clouds are now deployed with NSX-T Data Center version [!INCLUDE [nsxt-version](includes/nsxt-version.md)]. NSX-T Data Center version in existing private clouds will be upgraded through September, 2021 to NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] release.
You'll receive an email with the planned maintenance date and time. You can resc
For more information on this NSX-T Data Center version, see [VMware NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] Release Notes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html).
+## May 2021
+ - Per VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), multiple vulnerabilities in VMware ESXi and vSphere Client (HTML5) have been reported to VMware. To address the vulnerabilities ([CVE-2021-21985](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21985) and [CVE-2021-21986](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21986)) reported in VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), vCenter Server has been updated in all Azure VMware Solution private clouds. No further action is required.
+ - Azure VMware Solution service will do maintenance work through May 23, 2021, to apply important updates to the vCenter Server in your private cloud. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance for your private cloud. During this time, VMware vCenter Server will be unavailable and you won't be able to manage VMs (stop, start, create, or delete). It's recommended that, during this time, you don't plan any other activities like scaling up private cloud, creating new networks, and so on, in your private cloud. There is no impact to workloads running in your private cloud.
-
-## May 25, 2021
-Per VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), multiple vulnerabilities in VMware ESXi and vSphere Client (HTML5) have been reported to VMware.
-
-To address the vulnerabilities ([CVE-2021-21985](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21985) and [CVE-2021-21986](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21986)) reported in VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), vCenter Server has been updated in all Azure VMware Solution private clouds.
-
-No further action is required.
-
-## May 21, 2021
-
-Azure VMware Solution service will do maintenance work through May 23, 2021, to apply important updates to the vCenter Server in your private cloud. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance for your private cloud.
-
-During this time, VMware vCenter Server will be unavailable and you won't be able to manage VMs (stop, start, create, or delete). It's recommended that, during this time, you don't plan any other activities like scaling up private cloud, creating new networks, and so on, in your private cloud.
-
-There is no impact to workloads running in your private cloud.
--
-## April 26, 2021
+## April 2021
All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 6.7U3l and NSX-T Data Center version 2.5.2. We're not using NSX-T Data Center 3.1.1 for new private clouds because of an identified issue in NSX-T Data Center 3.1.1 that impacts customer VM connectivity. The VMware recommended mitigation was applied to all existing private clouds currently running NSX-T Data Center 3.1.1 on Azure VMware Solution. The workaround has been confirmed to have no impact on customer VM connectivity.
-## March 24, 2021
-All new Azure VMware Solution private clouds are deployed with VMware vCenter Server version 6.7U3l and NSX-T Data Center version 3.1.1. Any existing private clouds will be updated and upgraded **through June 2021** to the releases mentioned above.
+## March 2021
+ - All new Azure VMware Solution private clouds are deployed with VMware vCenter Server version 6.7U3l and NSX-T Data Center version 3.1.1. Any existing private clouds will be updated and upgraded **through June 2021** to the releases mentioned above. You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. An hour before the upgrade, you'll receive a notification and then again when it finishes.
-You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. An hour before the upgrade, you'll receive a notification and then again when it finishes.
+ - Azure VMware Solution service will do maintenance work **through March 19, 2021,** to update the vCenter Server in your private cloud to vCenter Server 6.7 Update 3l version.
+ VMware vCenter Server will be unavailable during this time, so you can't manage your VMs (stop, start, create, delete) or private cloud scaling (adding/removing servers and clusters). However, VMware High Availability (HA) will continue to operate to protect existing VMs.
+ For more information on this vCenter version, see [VMware vCenter Server 6.7 Update 3l Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3l-release-notes.html).
-## March 15, 2021
+ - Azure VMware Solution will apply the [VMware ESXi 6.7, Patch Release ESXi670-202011002](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202011002.html) to existing private clouds **through March 15, 2021**.
-- Azure VMware Solution service will do maintenance work **through March 19, 2021,** to update the vCenter Server in your private cloud to vCenter Server 6.7 Update 3l version.
-
-- VMware vCenter Server will be unavailable during this time, so you can't manage your VMs (stop, start, create, delete) or private cloud scaling (adding/removing servers and clusters). However, VMware High Availability (HA) will continue to operate to protect existing VMs.
+ - Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://www.vmware.com/security/advisories/VMSA-2021-0002.html), will also be applied **through March 15, 2021**.
-For more information on this vCenter version, see [VMware vCenter Server 6.7 Update 3l Release Notes](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3l-release-notes.html).
-
-## March 4, 2021
-- Azure VMware Solution will apply the [VMware ESXi 6.7, Patch Release ESXi670-202011002](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202011002.html) to existing privates **through March 15, 2021**.
-
-- Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://www.vmware.com/security/advisories/VMSA-2021-0002.html), will also be applied **through March 15, 2021**.
-
->[!NOTE]
->This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter Server and clear automatically as the maintenance progresses.
+ >[!NOTE]
+ >This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter Server and clear automatically as the maintenance progresses.
## Post update

Once complete, newer versions of VMware solution components will appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
Title: Create a simplified node communication pool without public IP addresses (preview)
description: Learn how to create an Azure Batch simplified node communication pool without public IP addresses.
Previously updated : 05/26/2022
Last updated : 11/08/2022
To restrict access to these nodes and reduce the discoverability of these nodes
- If you plan to use a [private endpoint with Batch accounts](private-connectivity.md), you must disable private endpoint network policies. Run the following Azure CLI command:
- `az network vnet subnet update --vnet-name <vnetname> -n <subnetname> --resource-group <resourcegroup> --disable-private-endpoint-network-policies`
+```azurecli-interactive
+az network vnet subnet update \
+ --vnet-name <vnetname> \
+ -n <subnetname> \
+ --resource-group <resourcegroup> \
+ --disable-private-endpoint-network-policies
+```
- Enable outbound access for Batch node management. A pool with no public IP addresses doesn't have internet outbound access enabled by default. To allow compute nodes to access the Batch node management service (see [Use simplified compute node communication](simplified-compute-node-communication.md)) either:
- - Use [**nodeManagement**](private-connectivity.md) private endpoint with Batch accounts, which provides private access to Batch node management service from the virtual network. This is the preferred method.
+ - Use [**nodeManagement**](private-connectivity.md) private endpoint with Batch accounts, which provides private access to Batch node management service from the virtual network. This solution is the preferred method (see the CLI sketch after this list).
- Alternatively, provide your own internet outbound access support (see [Outbound access to the internet](#outbound-access-to-the-internet)).
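For example, the **nodeManagement** private endpoint can be created with the Azure CLI. A minimal sketch, with placeholder resource names (`nodeManagement` is the Batch sub-resource ID for node management):

```azurecli-interactive
# Look up the Batch account resource ID
batchAccountId=$(az batch account show --name <accountname> --resource-group <resourcegroup> --query id --output tsv)

# Create the private endpoint for the node management sub-resource
az network private-endpoint create \
    --name <endpointname> \
    --resource-group <resourcegroup> \
    --vnet-name <vnetname> \
    --subnet <subnetname> \
    --private-connection-resource-id $batchAccountId \
    --group-id nodeManagement \
    --connection-name <connectionname>
```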
To restrict access to these nodes and reduce the discoverability of these nodes
1. In the **Pools** window, select **Add**.
1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown.
1. Select the correct **Publisher/Offer/Sku** of your image.
-1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, as well as any desired optional settings.
+1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, and any desired optional settings.
1. Select a virtual network and subnet you wish to use. This virtual network must be in the same location as the pool you're creating.
1. In **IP address provisioning type**, select **NoPublicIPAddresses**.
If you're familiar with using ARM templates, select the **Deploy to Azure** butt
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.batch%2Fbatch-pool-no-public-ip%2Fazuredeploy.json)

> [!NOTE]
-> If the private endpoint deployment failed due to invalid groupId "nodeManagement", please check if the region is in the supported list, and you've already opted in with [Simplified compute node communication](simplified-compute-node-communication.md). Choose the right region and opt in your Batch account, then retry the deployment.
+> If the private endpoint deployment fails due to an invalid groupId "nodeManagement", check that the region is in the supported list and that your pool is using [Simplified compute node communication](simplified-compute-node-communication.md). Choose the right region, specify `simplified` node communication mode for the pool, and then retry the deployment.
## Outbound access to the internet
-In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). Note that NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
+In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
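As a sketch of the NAT approach, the following Azure CLI commands create a NAT gateway and attach it to the pool's subnet; all resource names here are placeholders:

```azurecli-interactive
# Public IP address used by the NAT gateway for outbound traffic
az network public-ip create --resource-group <resourcegroup> --name <pipname> --sku Standard

# Create the NAT gateway and associate the public IP address
az network nat gateway create --resource-group <resourcegroup> --name <natname> --public-ip-addresses <pipname>

# Attach the NAT gateway to the subnet used by the pool
az network vnet subnet update --resource-group <resourcegroup> --vnet-name <vnetname> --name <subnetname> --nat-gateway <natname>
```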
-Another way to provide outbound connectivity is to use a user-defined route (UDR). This lets you route traffic to a proxy machine that has public internet access, for example [Azure Firewall](../firewall/overview.md).
+Another way to provide outbound connectivity is to use a user-defined route (UDR). This method lets you route traffic to a proxy machine that has public internet access, for example [Azure Firewall](../firewall/overview.md).
> [!IMPORTANT]
> There is no extra network resource (load balancer, network security group) created for simplified node communication pools without public IP addresses. Since the compute nodes in the pool are not bound to any load balancer, Azure may provide [Default Outbound Access](../virtual-network/ip-services/default-outbound-access.md). However, Default Outbound Access is not suitable for production workloads, so it is strongly recommended to bring your own Internet outbound access.
You can follow the guide [Connect to compute nodes](error-handling.md#connect-to
## Migration from previous preview version of No Public IP pools
-For existing pools that use the [previous preview version of Azure Batch No Public IP pool](batch-pool-no-public-ip-address.md), it's only possible to migrate pools created in a [virtual network](batch-virtual-network.md). To migrate the pool, follow the [opt-in process for simplified node communication](simplified-compute-node-communication.md):
+For existing pools that use the [previous preview version of Azure Batch No Public IP pool](batch-pool-no-public-ip-address.md), it's only possible to migrate pools created in a [virtual network](batch-virtual-network.md). To migrate such a pool, follow these steps:
-1. Opt in to use simplified node communication.
1. Create a [private endpoint for Batch node management](private-connectivity.md) in the virtual network.
+1. Update the pool's node communication mode to [simplified](simplified-compute-node-communication.md).
1. Scale down the pool to zero nodes.
1. Scale out the pool again. The pool is then automatically migrated to the new version of the preview. (A CLI sketch of the resize steps follows.)
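A minimal Azure CLI sketch of the resize steps, with placeholder names:

```azurecli-interactive
# Authenticate subsequent az batch calls against the Batch account
az batch account login --name <accountname> --resource-group <resourcegroup>

# Scale the pool down to zero nodes...
az batch pool resize --pool-id <poolid> --target-dedicated-nodes 0 --target-low-priority-nodes 0

# ...then scale it back out to complete the migration
az batch pool resize --pool-id <poolid> --target-dedicated-nodes <dedicatedcount>
```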
cloud-services-extended-support Feature Support Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/feature-support-analysis.md
+
+ Title: Feature Analysis Cloud Services vs Virtual Machine Scale Sets
+description: Learn about the feature set available in Cloud Services and Virtual Machine Scale Sets
+ Last updated : 11/8/2022
+# Feature Analysis: Cloud Services (extended support) and Virtual Machine Scale Sets
+This article provides a feature analysis of Cloud Services (extended support) and Virtual Machine Scale Sets. For more information on Virtual Machine Scale Sets, see the [Virtual Machine Scale Sets overview](https://learn.microsoft.com/azure/virtual-machine-scale-sets/overview).
++
+## Basic setup
+
+| Feature | Cloud Services (extended support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) |
+|||||
+|Virtual machine type|Basic Azure PaaS VM (Microsoft.Compute/cloudServices)|Standard Azure IaaS VM (Microsoft.Compute/virtualMachines)|Scale Set specific VMs (Microsoft.Compute/virtualMachineScaleSets/virtualMachines)|
+|Maximum Instance Count (with FD guarantees)|1100|1000|3000 (1000 per Availability Zone)|
+|SKUs supported|D, Dv2, Dv3, Dav4 series, Ev3, Eav4 series, G series, H series|D series, E series, F series, A series, B series, Intel, AMD; Specialty SKUs (G, H, L, M, N) are not supported|All SKUs|
+|Full control over VM, NICs, Disks|Limited control over NICs and VM via CS-ES APIs. No support for Disks|Yes|Limited control with virtual machine scale sets VM API|
+|RBAC Permissions Required|Compute Virtual Machine Scale Sets Write, Compute VM Write, Network|Compute Virtual Machine Scale Sets Write, Compute VM Write, Network|Compute Virtual Machine Scale Sets Write|
+|Accelerated networking|Yes|Yes|Yes|
+|Spot instances and pricing|No|Yes, you can have both Spot and Regular priority instances|Yes, instances must either be all Spot or all Regular|
+|Mix operating systems|Extremely limited Windows support|Yes, Linux and Windows can reside in the same Flexible scale set|No, instances are the same operating system|
+|Disk Types|No Disk Support|Managed disks only, all storage types|Managed and unmanaged disks, all storage types|
+|Disk Server Side Encryption with Customer Managed Keys|No|Yes| |
+|Write Accelerator|No|No|Yes|
+|Proximity Placement Groups|No|Yes, read Proximity Placement Groups documentation|Yes|
+|Azure Dedicated Hosts|No|No|Yes|
+|Managed Identity|No|User Assigned Identity Only|System Assigned or User Assigned|
+|Azure Instance Metadata Service|No|Yes|Yes|
+|Add/remove existing VM to the group|No|No|No|
+|Service Fabric|No|No|Yes|
+|Azure Kubernetes Service (AKS) / AKE|No|No|Yes|
+|UserData|No|Yes|Yes|
++
+## Autoscaling and instance orchestration
+
+| Feature | Cloud Services (extended Support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) |
+|||||
+|List VMs in Set|No|Yes|Yes|
+|Automatic Scaling (manual, metrics based, schedule based)|Yes|Yes|Yes|
+|Auto-Remove NICs and Disks when deleting VM instances|Yes|Yes|Yes|
+|Upgrade Policy (VM scale sets)|AutoUD and ManualUD policies. No support for Rolling (see the Cloud Services Create Or Update REST API reference)|No, upgrade policy must be null or [] during create|Automatic, Rolling, Manual|
+|Automatic OS Updates|Yes|No|Yes|
+|Customer Defined OS Images|No|Yes|Yes|
+|In Guest Security Patching|No|Yes|No|
+|Terminate Notifications (VM scale sets)|No|Yes, read Terminate Notifications documentation|Yes|
+|Monitor Application Health|No|Application health extension|Application health extension or Azure Load balancer probe|
+|Instance Repair (VM scale sets)|No|Yes, read Instance Repair documentation|Yes|
+|Instance Protection|No|No, use Azure resource lock|Yes|
+|Scale In Policy|No|No|Yes|
+|Get Instance View|Yes|No|Yes|
+|VM Batch Operations (Start all, Stop all, delete subset, etc.)|Yes|Partial; batch delete is supported. Other operations can be triggered on each instance using the VM API|Yes|
+
+## High availability
+
+| Feature | Cloud Services (extended Support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) |
+|||||
+|Availability SLA|[SLA](https://azure.microsoft.com/support/legal/sla/cloud-services/v1_5/)|[SLA](https://azure.microsoft.com/support/legal/sla/virtual-machine-scale-sets/v1_1/)|[SLA](https://azure.microsoft.com/support/legal/sla/virtual-machine-scale-sets/v1_1/)|
+|Availability Zones|No|Specify that instances land across 1, 2, or 3 availability zones|Specify that instances land across 1, 2, or 3 availability zones|
+|Assign VM to a Specific Availability Zone|No|Yes|No|
+|Fault Domain – Max Spreading (Azure will maximally spread instances)|Yes|Yes|Yes|
+|Fault Domain – Fixed Spreading|5 update domains|2-3 FDs (depending on regional maximum FD count); 1 for zonal deployments|2, 3, or 5 FDs; 1 or 5 for zonal deployments|
+|Assign VM to a Specific Fault Domain|No|Yes|No|
+|Update Domains|Yes|Deprecated (platform maintenance performed FD by FD)|5 update domains|
+|Perform Maintenance|No|Trigger maintenance on each instance using VM API|Yes|
+|VM Deallocation|No|Yes|Yes|
+
+## Networking
+
+| Feature | Cloud Services (extended Support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) |
+|||||
+|Default outbound connectivity|Yes|No, must have explicit outbound connectivity|Yes|
+|Azure Load Balancer Standard SKU|No|Yes|Yes|
+|Application Gateway|No|Yes|Yes|
+|Infiniband Networking|No|No|Yes, single placement group only|
+|Azure Load Balancer Basic SKU|Yes|No|Yes|
+|Network Port Forwarding|Yes (NAT Pool for role instance input endpoints)|Yes (NAT Rules for individual instances)|Yes (NAT Pool)|
+|Edge Sites|No|Yes|Yes|
+|IPv6 Support|No|Yes|Yes|
+|Internal Load Balancer|No |Yes|Yes|
+
+## Backup and recovery
+
+| Feature | Cloud Services (extended Support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) |
+|||||
+|Azure Backup|No |Yes|No|
+|Azure Site Recovery|No|Yes (via PowerShell)|No|
+|Azure Alerts|Yes|Yes|Yes|
+|VM Insights|No|Can be installed into individual VMs|Yes|
++
+## Next steps
+- View the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
+- View [frequently asked questions](faq.yml) for Cloud Services (extended support).
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Previously updated : 10/27/2022 Last updated : 11/10/2022
To create a custom neural voice in Speech Studio, follow these steps for one of
1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select doesn't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you don't see your training set in the list.
1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
1. Select **Next**.
-1. Optionally, you can check the box next to **Add my own test script** and select test scripts to upload. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script with up to 100 utterances. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
+1. Optionally, you can check the box next to **Add my own test script** and select test scripts to upload. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script with up to 100 utterances for the default style. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
1. Select **Next**.
To create a custom neural voice in Speech Studio, follow these steps for one of
1. Select one or more preset speaking styles to train.
1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select doesn't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you don't see your training set in the list.
1. Select **Next**.
-1. Optionally, you can add up to 10 custom speaking styles. Select **Add a custom style** and enter a custom style name of your choice. Select style samples as training data.
+1. Optionally, you can add up to 10 custom speaking styles:
+ 1. Select **Add a custom style** and enter a custom style name of your choice. Choose the name carefully: it will be used by your application within the `style` element of [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md#adjust-speaking-styles). You can also use the custom style name in SSML via the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation).
+ 1. Select style samples as training data.
1. Select **Next**.
1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
1. Select **Next**.
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Styles, style degree, and roles are supported for a subset of neural voices. If
| Attribute | Description | Required or optional |
| - | - | -- |
-| `style` | Specifies the speaking style. Speaking styles are voice specific. | Required if adjusting the speaking style for a neural voice. If you're using `mstts:express-as`, the style must be provided. If an invalid value is provided, this element is ignored. |
+| `style` | Specifies the [prebuilt](language-support.md?tabs=stt-tts#voice-styles-and-roles) or [custom](how-to-custom-voice-create-voice.md?tabs=multistyle#train-your-custom-neural-voice-model) speaking style. Speaking styles are voice specific. | Required if adjusting the speaking style for a neural voice. If you're using `mstts:express-as`, the style must be provided. If an invalid value is provided, this element is ignored.|
| `styledegree` | Specifies the intensity of the speaking style. **Accepted values**: 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. | Optional. If you don't set the `style` attribute, the `styledegree` attribute is ignored. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.|
| `role`| Specifies the speaking role-play. The voice acts as a different age and gender, but the voice name isn't changed. | Optional. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices: `zh-CN-XiaomoNeural`, `zh-CN-XiaoxuanNeural`, `zh-CN-YunxiNeural`, and `zh-CN-YunyeNeural`. |
Styles, style degree, and roles are supported for a subset of neural voices. If
You use the `mstts:express-as` element to express emotions like cheerfulness, empathy, and calm. You can also optimize the voice for different scenarios like customer service, newscast, and voice assistant.
-For a list of supported styles per neural voice, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
+For a list of supported styles for prebuilt neural voices, see [supported voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).
+
+To use your [custom style](how-to-custom-voice-create-voice.md?tabs=multistyle#train-your-custom-neural-voice-model), specify the style name that you entered in Speech Studio.
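+
+For example, a minimal SSML sketch that applies a custom style. The voice name `my-custom-voice` and the style name `cheerful-support` are hypothetical placeholders for the names you chose in Speech Studio:
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
+       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
+    <!-- Hypothetical custom voice model and custom style names -->
+    <voice name="my-custom-voice">
+        <mstts:express-as style="cheerful-support">
+            Thanks for calling! How can I help you today?
+        </mstts:express-as>
+    </voice>
+</speak>
+```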
**Syntax**
All elements from the [MathML 2.0](https://www.w3.org/TR/MathML2/) and [MathML 3
> [!NOTE]
> If an element is not recognized, it will be ignored, and the child elements within it will still be processed.
-The MathML entities are not supported by XML syntax, so you must use the their corresponding [unicode characters](https://www.w3.org/2003/entities/2007/htmlmathml.json) to represent the entities, for example, the entity `&copy;` should be represented by its unicode characters `&#x00A9;`, otherwise an error will occur.
+The MathML entities are not supported by XML syntax, so you must use the corresponding [Unicode characters](https://www.w3.org/2003/entities/2007/htmlmathml.json) to represent the entities. For example, the entity `&copy;` should be represented by its Unicode characters `&#x00A9;`; otherwise, an error will occur.
## Viseme element
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
Containerization is an approach to software distribution in which an application
## Features and benefits

-- **Immutable infrastructure**: Enable DevOps teams' to leverage a consistent and reliable set of known system parameters, while being able to adapt to change. Containers provide the flexibility to pivot within a predictable ecosystem and avoid configuration drift.
+- **Immutable infrastructure**: Enable DevOps teams to leverage a consistent and reliable set of known system parameters, while being able to adapt to change. Containers provide the flexibility to pivot within a predictable ecosystem and avoid configuration drift.
- **Control over data**: Choose where your data gets processed by Cognitive Services. This can be essential if you can't send data to the cloud but need access to Cognitive Services APIs. Support consistency in hybrid environments ΓÇô across data, management, identity, and security. - **Control over model updates**: Flexibility in versioning and updating of models deployed in their solutions. - **Portable architecture**: Enables the creation of a portable application architecture that can be deployed on Azure, on-premises and the edge. Containers can be deployed directly to [Azure Kubernetes Service](../aks/index.yml), [Azure Container Instances](../container-instances/index.yml), or to a [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md
To use Azure RBAC, you must enable Azure Active Directory authentication. You ca
## Add role assignment to Language resource

Azure RBAC can be assigned to a Language resource. To grant access to an Azure resource, you add a role assignment.
-1. In the [Azure portal](https://ms.portal.azure.com/), select **All services**.
+1. In the [Azure portal](https://portal.azure.com/), select **All services**.
1. Select **Cognitive Services**, and navigate to your specific Language resource.

> [!NOTE]
> You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group.
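The same assignment can also be scripted. A minimal Azure CLI sketch, assuming the built-in **Cognitive Services Language Reader** role and placeholder IDs:

```azurecli-interactive
az role assignment create \
    --assignee <user-or-group-object-id> \
    --role "Cognitive Services Language Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<language-resource>"
```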
cognitive-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/tag-data.md
Previously updated : 05/05/2022 Last updated : 11/10/2022
Before creating a custom text classification model, you need to have labeled dat
Before you can label data, you need:

* [A successfully created project](create-project.md) with a configured Azure blob storage account,
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* Documents containing text data that have [been uploaded](design-schema.md#data-preparation) to your storage account.
See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
cognitive-services Migrate Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-knowledge-base.md
Last updated 11/02/2021
-# Move projects and question answer sources
+# Move projects and question answer pairs
> [!NOTE]
-> This article deals with the process to move projects and knowledge bases from one Language resource to another.
+> This article deals with the process to export and move projects and sources from one Language resource to another.
-You may want to create a copy of your project for several reasons:
+You may want to create copies of your projects or sources for several reasons:
* To implement a backup and restore process
* Integrate with your CI/CD pipeline
You may want to create a copy of your project for several reasons:
## Prerequisites

* If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-* A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled in the Azure portal. Remember your Azure Active Directory ID, Subscription, and language resource name you selected when you created the resource.
+* A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled in the Azure portal. Remember your Azure Active Directory ID, Subscription, and the Language resource name you selected when you created the resource.
## Export a project
-Exporting a project allows you to move or back up all the sources question answer sources that are contained within a single project.
+Exporting a project allows you to back up all the question answer sources that are contained within a single project.
1. Sign in to the [Language Studio](https://language.azure.com/).
-1. Select the language resource you want to move a project from.
-1. On the **Projects** page, you have the options to export in two formats, Excel or TSV. This will determine the contents of the file. The file itself will be exported as a .zip containing all of your knowledge bases.
+1. Select the Language resource you want to move a project from.
+1. Go to the Custom Question Answering service. On the **Projects** page, you have the option to export in two formats, Excel or TSV. This determines the contents of the file. The file itself will be exported as a .zip containing the contents of your project.
+1. You can export only one project at a time.
## Import a project
-1. Select the language resource, which will be the destination for your previously exported project.
-1. On the **Projects** page, select **Import** and choose the format used when you selected export. Then browse to the local .zip file containing your exported project. Enter a name for your newly imported project and select **Done**.
+1. Select the Language resource, which will be the destination for your previously exported project.
+1. Go to the Custom Question Answering service. On the **Projects** page, select **Import** and choose the format used when you selected export. Then browse to the local .zip file containing your exported project. Enter a name for your newly imported project and select **Done**.
-## Export question and answers
+## Export sources
-1. Select the language resource you want to move an individual question answer source from.
-1. Select the project that contains the question and answer source you wish to export.
+1. Select the Language resource you want to move an individual source from.
+1. Go to Custom Question Answering. Select the project that contains the source you wish to export.
1. On the Edit knowledge base page, select the ellipsis (`...`) icon to the right of **Enable rich text** in the toolbar. You have the option to export in either Excel or TSV.

## Import question and answers
-1. Select the language resource, which will be the destination for your previously exported question and answer source.
-1. Select the project where you want to import a question and answer source.
+1. Select the Language resource, which will be the destination for your previously exported source.
+1. Go to Custom Question Answering. Select the project where you want to import a question and answer source.
1. On the Edit knowledge base page, select the ellipsis (`...`) icon to the right of **Enable rich text** in the toolbar. You have the option to import either an Excel or TSV file.
1. Browse to the local location of the file with the **Choose File** option and select **Done**.

<!-- TODO: Replace Link-->
### Test
-**Test** the question answer source by selecting the **Test** option from the toolbar in the **Edit knowledge base** page which will launch the test panel. Learn how to [test your knowledge base](../../../qnamaker/How-To/test-knowledge-base.md).
+**Test** the source by selecting the **Test** option from the toolbar in the **Edit knowledge base** page, which will launch the test panel. Learn how to [test your knowledge base](../../../qnamaker/How-To/test-knowledge-base.md).
### Deploy
<!-- TODO: Replace Link-->
-**Deploy** the knowledge base and create a chat bot. Learn how to [deploy your knowledge base](../../../qnamaker/Quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base).
+**Deploy** the project and create a chat bot. Learn how to [deploy your knowledge base](../../../qnamaker/Quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base).
## Chat logs
-There is no way to move chat logs with projects or knowledge bases. If diagnostic logs are enabled, chat logs are stored in the associated Azure Monitor resource.
+There is no way to move chat logs with the projects. If diagnostic logs are enabled, chat logs are stored in the associated Azure Monitor resource.
## Next steps
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value |
|--|--|
| OpenAI resources per region | 2 |
-| Requests per second per deployment | 10 |
+| Requests per second per deployment | 15 |
| Max fine-tuned model deployments | 2 |
| Ability to deploy same model to multiple deployments | Not allowed |
| Total number of training jobs per resource | 100 |
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
Last updated 10/25/2022
-# Context and Actions
+# Context and actions
Personalizer works by learning what your application should show to users in a given context. These are the two most important pieces of information that you pass into Personalizer. The **context** represents the information you have about the current user or the state of your system, and the **actions** are the options to be chosen from.
-## Table of Contents
-
-* [Context](#context) Information about the current user or state of the system
-* [Actions](#actions) A list of options to choose from
-* [Features](#features) Attributes describing the Context and Actions
-* [Feature Engineering](#feature-engineering) Tips for constructing impactful features
-* [Namespaces](#namespaces) Grouping Features
-* [Examples](#json-examples) Examples of Context and Action features in JSON format
-
## Context

Information for the _context_ depends on each application and use case, but it typically may include information such as:
Information for the _context_ depends on each application and use case, but it t
* Information about the current time, such as day of the week, weekend or not, morning or afternoon, holiday season or not, etc.
* Information extracted from mobile applications, such as location, movement, or battery level.
* Historical aggregates of the behavior of users - such as what are the movie genres this user has viewed the most.
-* Information about the state of the system.
+* Information about the state of the system.
Your application is responsible for loading the information about the context from the relevant databases, sensors, and systems you may have. If your context information doesn't change, you can add logic in your application to cache this information, before sending it to the Rank API.

## Actions

Actions represent a list of options. Don't send in more than 50 actions when Ranking actions. These may be the same 50 actions every time, or they may change. For example, if you have a product catalog of 10,000 items for an e-commerce application, you may use a recommendation or filtering engine to determine the top 40 a customer may like, and use Personalizer to find the one that will generate the most reward (for example, the user will add to the basket) for the current context.

### Examples of actions

The actions you send to the Rank API will depend on what you are trying to personalize.
Here are some examples:
|Choose a chat bot's response to clarify user intent or suggest an action.|Each action is an option of how to interpret the response.|
|Choose what to show at the top of a list of search results|Each action is one of the top few search results.|

### Load actions from the client application

Features from actions may typically come from content management systems, catalogs, and recommender systems. Your application is responsible for loading the information about the actions from the relevant databases and systems you have. If your actions don't change or getting them loaded every time has an unnecessary impact on performance, you can add logic in your application to cache this information.

### Prevent actions from being ranked

In some cases, there are actions that you don't want to display to users. The best way to prevent an action from being ranked is by adding it to the [Excluded Actions](https://learn.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.personalizer.models.rankrequest.excludedactions) list, or not passing it to the Rank Request.
-In some cases, you might not want events to be trained on by default, i.e., you only want to train events when a specific condition is met. For example, The personalized part of your webpage is below the fold (users have to scroll before interacting with the personalized content). In this case you will render the entire page, but only want an event to be trained on when the user scrolls and has a chance to interact with the personalized content. For these cases, you should [Defer Event Activation](concept-active-inactive-events.md) to avoid assigning default reward (and training) events which the end user did not have a chance to interact with.
-
+In some cases, you might not want events to be trained on by default. In other words, you only want to train events when a specific condition is met. For example, The personalized part of your webpage is below the fold (users have to scroll before interacting with the personalized content). In this case you will render the entire page, but only want an event to be trained on when the user scrolls and has a chance to interact with the personalized content. For these cases, you should [Defer Event Activation](concept-active-inactive-events.md) to avoid assigning default reward (and training) events which the end user did not have a chance to interact with.
## Features
Personalizer does not prescribe, limit, or fix what features you can send for ac
It's ok and natural for features to change over time. However, keep in mind that Personalizer's machine learning model adapts based on the features it sees. If you send a request containing all new features, Personalizer's model will not be able to leverage past events to select the best action for the current event. Having a 'stable' feature set (with recurring features) will help the performance of Personalizer's machine learning algorithms.
-### Context Features
+### Context features
* Some context features may only be available part of the time. For example, if a user is logged into the online grocery store website, the context will contain features describing purchase history. These features will not be available for a guest user.
* There must be at least one context feature. Personalizer does not support an empty context.
* If the context features are identical for every request, Personalizer will choose the globally best action.
-### Action Features
+### Action features
* Not all actions need to contain the same features. For example, in the online grocery store scenario, microwavable popcorn will have a "cooking time" feature, while a cucumber will not.
-* Features for a certain action ID may be available one day, but later on become unavailable.
+* Features for a certain action ID may be available one day, but later on become unavailable.
Examples:
The following are good examples for action features. These will depend a lot on
Personalizer supports features of string, numeric, and boolean types. It's very likely that your application will mostly use string features, with a few exceptions.
-### How feature types affects the Machine Learning in Personalizer
+### How feature types affect machine learning in Personalizer
-* **Strings**: For string types, every key-value (feature name, feature value) combination is treated as a One-Hot feature (e.g. category:"Produce" and category:"Meat" would internally be represented as different features in the machine learning model.
+* **Strings**: For string types, every key-value (feature name, feature value) combination is treated as a One-Hot feature (for example, category:"Produce" and category:"Meat" would internally be represented as different features in the machine learning model).
* **Numeric**: Only use numeric values when the number is a magnitude that should proportionally affect the personalization result. This is very scenario dependent. Features that are based on numeric units but where the meaning isn't linear - such as Age, Temperature, or Person Height - are best encoded as categorical strings. For example, Age could be encoded as "Age":"0-5", "Age":"6-10", etc. Height could be bucketed as "Height": "<5'0", "Height": "5'0-5'4", "Height": "5'5-5'11", "Height":"6'0-6'4", "Height":">6'4". (See the JSON sketch after this list.)
* **Boolean**
-* **Arrays** ONLY numeric arrays are supported.
-
+* **Arrays** Only numeric arrays are supported.
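As an illustration of these types, a minimal JSON sketch of a single action; all feature names and bucket values here are hypothetical:

```json
{
    "id": "pasta",
    "features": [
        {
            "taste": "salty",
            "ageBucket": "25-34",
            "price": 12.5,
            "glutenFree": false
        }
    ]
}
```

Here `taste` and `ageBucket` are categorical strings (age is bucketed rather than sent as a raw number), `price` is a true magnitude, and `glutenFree` is a boolean.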
-## Feature Engineering
+## Feature engineering
-* Use categorical and string types for features that are not a magnitude.
+* Use categorical and string types for features that are not a magnitude.
* Make sure there are enough features to drive personalization. The more precisely targeted the content needs to be, the more features are needed.
* There are features of diverse *densities*. A feature is *dense* if many items are grouped in a few buckets. For example, thousands of videos can be classified as "Long" (over 5 min long) and "Short" (under 5 min long). This is a *very dense* feature. On the other hand, the same thousands of items can have an attribute called "Title", which will almost never have the same value from one item to another. This is a very non-dense or *sparse* feature.
Having features of high density helps Personalizer extrapolate learning from one
* **Sending user IDs** With large numbers of users, it's unlikely that this information is relevant to Personalizer learning to maximize the average reward score. Sending user IDs (even if non-PII) will likely add more noise to the model and is not recommended.
* **Sending unique values that will rarely occur more than a few times**. It's recommended to bucket your features to a higher level-of-detail. For example, having features such as `"Context.TimeStamp.Day":"Monday"` or `"Context.TimeStamp.Hour":13` can be useful as there are only 7 and 24 unique values, respectively. However, `"Context.TimeStamp":"1985-04-12T23:20:50.52Z"` is very precise and has an extremely large number of unique values, which makes it very difficult for Personalizer to learn from it.
-### Improve feature sets
+### Improve feature sets
Analyze the user behavior by running a [Feature Evaluation Job](how-to-feature-evaluation.md). This allows you to look at past data to see what features are heavily contributing to positive rewards versus those that are contributing less. You can see what features are helping, and it will be up to you and your application to find better features to send to Personalizer to improve results even further.

### Expand feature sets with artificial intelligence and cognitive services
-Artificial Intelligence and ready-to-run Cognitive Services can be a very powerful addition to Personalizer.
+Artificial Intelligence and ready-to-run Cognitive Services can be a very powerful addition to Personalizer.
By preprocessing your items using artificial intelligence services, you can automatically extract information that is likely to be relevant for personalization. For example:
-* You can run a movie file via [Video Indexer](https://azure.microsoft.com/services/media-services/video-indexer/) to extract scene elements, text, sentiment, and many other attributes. These attributes can then be made more dense to reflect characteristics that the original item metadata didn't have.
+* You can run a movie file via [Video Indexer](https://azure.microsoft.com/services/media-services/video-indexer/) to extract scene elements, text, sentiment, and many other attributes. These attributes can then be made more dense to reflect characteristics that the original item metadata didn't have.
* Images can be run through object detection, faces through sentiment, etc.
-* Information in text can be augmented by extracting entities, sentiment, expanding entities with Bing knowledge graph, etc.
+* Information in text can be augmented by extracting entities, sentiment, and expanding entities with Bing knowledge graph.
You can use several other [Azure Cognitive Services](https://www.microsoft.com/cognitive-services), like
You can use several other [Azure Cognitive Services](https://www.microsoft.com/c
* [Emotion](../face/overview.md)
* [Computer Vision](../computer-vision/overview.md)
-### Use Embeddings as Features
+### Use embeddings as features
Embeddings from various Machine Learning models have proven to be effective features for Personalizer:

* Embeddings from Large Language Models
* Embeddings from Computer Vision Models

## Namespaces
-Optionally, features can be organized using namespaces (relevant for both context and action features). Namespaces can be used to group features by topic, by source, or any other grouping that makes sense in your application. You determine if namespaces are used and what they should be. Namespaces organize features into distinct sets, and disambiguate features with similar names. You can think of namespaces as a 'prefix' that is added to feature names. Namespaces should not be nested.
+Optionally, features can be organized using namespaces (relevant for both context and action features). Namespaces can be used to group features by topic, by source, or any other grouping that makes sense in your application. You determine if namespaces are used and what they should be. Namespaces organize features into distinct sets, and disambiguate features with similar names. You can think of namespaces as a 'prefix' that is added to feature names. Namespaces should not be nested.
The following are examples of feature namespaces used by applications:
The following are examples of feature namespaces used by applications:
* The following characters cannot be used: codes < 32 (not printable), 32 (space), 58 (colon), 124 (pipe), and 126–140.
* All namespaces starting with an underscore `_` will be ignored.
-## JSON Examples
+## JSON examples
### Actions

When calling Rank, you will send multiple actions to choose from:
-JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
+JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
```json
{
JSON objects can include nested JSON objects and simple property/values. An arra
Context is expressed as a JSON object that is sent to the Rank API:
-JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
+JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
```JSON
{
JSON objects can include nested JSON objects and simple property/values. An arra
### Namespaces
-In the following JSON, `user`, `environment`, `device`, and `activity` are namespaces.
+In the following JSON, `user`, `environment`, `device`, and `activity` are namespaces.
> [!Note]
> We strongly recommend using names for feature namespaces that are UTF-8 based and start with different letters. For example, `user`, `environment`, `device`, and `activity` start with `u`, `e`, `d`, and `a`. Currently having namespaces with same first characters could result in collisions.

```JSON
{
    "contextFeatures": [
communication-services Recording Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/recording-logs.md
+
+ Title: Azure Communication Services - Recording Analytics Public Preview
+
+description: About using Log Analytics for recording logs
+ Last updated : 10/27/2021
+# Call Recording Summary Log
+Call recording summary logs provide details about the call duration, the media content (such as audio-video, unmixed, or transcription), the format types used for the recording (such as WAV or MP4), and the reason why the recording ended.
+
+A recording file is generated at the end of a call or meeting. The recording can be initiated and stopped by either a user or an app (bot), or it can end due to a system failure.
+
+> [!IMPORTANT]
+
+> Please note the call recording logs will be published once the call recording is ready to be downloaded. The log will be published within the standard latency time for Azure Monitor resource logs; see [Log data ingestion time in Azure Monitor](../../../azure-monitor/logs/data-ingestion-time.md#azure-metrics-resource-logs-activity-log).
++
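+Resource logs like these reach a Log Analytics workspace only when a diagnostic setting routes the recording summary category to it. A minimal Azure CLI sketch, with placeholder resource IDs (the category name is taken from the sample later in this article):
+
+```azurecli-interactive
+az monitor diagnostic-settings create \
+    --name recording-summary-logs \
+    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/communicationServices/<acs-resource>" \
+    --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
+    --logs '[{"category":"RecordingSummaryPUBLICPREVIEW","enabled":true}]'
+```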
+## Properties Description
+
+| Field Name | DataType | Description |
+|- |--|--|
+|timeGenerated|DateTime|The timestamp (UTC) of when the log was generated|
+|operationName| String | The operation associated with log record|
+|correlationId |String |`CallID` is used to correlate events between multiple tables|
+|recordingID| String | The ID given to the recording this log refers to|
+|category| String | The log category of the event. Logs with the same log category and resource type will have the same properties fields|
+|resultType| String| The status of the operation |
+|level |String |The severity level of the operation |
+|chunkCount |Integer|The total number of chunks created for the recording|
+|channelType| String |The recording's channel type, i.e., mixed, unmixed|
+|recordingStartTime| DateTime|The time that the recording started |
+|contentType| String | The recording's content, for example, audio only, audio-video, or transcription|
+|formatType| String | The recording's file format |
+|recordingLength| Double | Duration of the recording in seconds |
+|audioChannelsCount| Integer | Total number of audio channels in the recording|
+|recordingEndReason| String | The reason why the recording ended |
++
+## Call recording and sample data
+```json
+"operationName": "Call Recording Summary",
+"operationVersion": "1.0",
+"category": "RecordingSummaryPUBLICPREVIEW",
+
+```
+A call can have one recording or many recordings depending on how many times a recording event is triggered.
+
+For example, if an agent initiates an outbound call on a recorded line and the call drops due to poor network signal, the `callid` will have one `recordingid`. If the agent calls back the customer, the system will generate a new `callid` and a new `recordingid`.
++
+#### Example 1: Call recording for "One call to one recording"
+```json
+"properties"
+{
+ "TimeGenerated":"2022-08-17T23:18:26.4332392Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "zzzzzz-cada-4164-be10-0000000000",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuZHBvaW5xxxxxxxxFmNjkwxxxxxxxxxxxxSZXNvdXJjZVNwZWNpZmljSWQiOiJiZGU5YzE3Ni05M2Q3LTRkMWYtYmYwNS0yMTMwZTRiNWNlOTgifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-16T09:07:54.0000000Z",
+ "RecordingLength": "73872.94",
+ "ChunkCount": 6,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+```
+
+If the agent initiated a recording and stopped and restarted the recording multiple times while the call was still active, the `callid` will have many `recordingid` values, depending on how many times the recording events were triggered.
+
+#### Example 2: Call recording for "One call to many recordings"
+```json
+
+{
+ "TimeGenerated": "2022-08-17T23:55:46.6304762Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "xxxxxxx-cf78-4156-zzzz-0000000fa29cc",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuZHBxxxxxxxxxxxxjkwMC05MmEwLTRlZDYtOTcxYS1kYzZlZTkzNjU0NzciLCJSxxxxxNwZWNpZmljSWQiOiI5ZmY2ZTY2Ny04YmQyLTQ0NzAtYmRkYy00ZTVhMmUwYmNmOTYifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-17T23:55:43.3304762Z",
+ "RecordingLength": 3.34,
+ "ChunkCount": 1,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+{
+ "TimeGenerated": "2022-08-17T23:55:56.7664976Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "xxxxxxx-cf78-4156-zzzz-0000000fa29cc",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuxxxxxxiOiI4NDFmNjkwMC1mMjBiLTQzNmQtYTg0Mi1hODY2YzE4M2Y0YTEiLCJSZXNvdXJjZVNwZWNpZmljSWQiOiI2YzRlZDI4NC0wOGQ1LTQxNjEtOTExMy1jYWIxNTc3YjM1ODYifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-17T23:55:54.0664976Z",
+ "RecordingLength": 2.7,
+ "ChunkCount": 1,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+```
+For more information about call recording, see the [Azure Communication Services Call Recording overview](../../../communication-services/concepts/voice-video-calling/call-recording.md).
+
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
+
+ Title: Call Automation overview
+
+description: Learn about Azure Communication Services Call Automation.
+ Last updated : 09/06/2022
+# Call Automation Overview
++
+Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows and call recording for voice and PSTN channels. The SDKs, available for .NET and Java, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, start recording, etc.) to steer and control calls based on your business logic.
+
+> [!NOTE]
+> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user, or adding a Teams user to a call using Call Automation, aren't supported.
+
+## Common use cases
+
+Some of the common use cases that can be built using Call Automation include:
+
+- Program VoIP or PSTN calls for transactional workflows such as click-to-call and appointment reminders to improve customer service.
+- Build interactive workflows to self-serve customers for use cases like order bookings and updates, using Play (Audio URL) and Recognize (DTMF) actions.
+- Integrate your communication applications with Contact Centers and your private telephony networks using Direct Routing.
+- Protect your customer's identity by building number masking services to connect buyers to sellers or users to partner vendors on your platform.
+- Increase engagement by building automated customer outreach programs for marketing and customer service.
+- Analyze in a post-call process your unmixed audio recordings for quality assurance purposes.
+
+ACS Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture below. You can answer inbound calls or make outbound calls, and execute actions like playing a welcome message or connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
+
+![Diagram of calling flow for a customer service scenario.](./media/call-automation-architecture.png)
+
+## Capabilities
+
+The following table presents the set of features that are currently available in the Azure Communication Services Call Automation SDKs.
+
+| Feature Area | Capability | .NET | Java |
+| ------------ | ---------- | ---- | ---- |
+| Pre-call scenarios | Answer a one-to-one call | ✔️ | ✔️ |
+| | Answer a group call | ✔️ | ✔️ |
+| | Place new outbound call to one or more endpoints | ✔️ | ✔️ |
+| | Redirect (forward) a call to one or more endpoints | ✔️ | ✔️ |
+| | Reject an incoming call | ✔️ | ✔️ |
+| Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ |
+| | Play Audio from an audio file | ✔️ | ✔️ |
+| | Recognize user input through DTMF | ✔️ | ✔️ |
+| | Remove one or more endpoints from an existing call| ✔️ | ✔️ |
+| | Blind Transfer* a call to another endpoint | ✔️ | ✔️ |
+| | Hang up a call (remove the call leg) | ✔️ | ✔️ |
+| | Terminate a call (remove all participants and end call)| ✔️ | ✔️ |
+| Query scenarios | Get the call state | ✔️ | ✔️ |
+| | Get a participant in a call | ✔️ | ✔️ |
+| | List all participants in a call | ✔️ | ✔️ |
+| Call Recording | Start/pause/resume/stop recording | ✔️ | ✔️ |
+
+*Transfer of a VoIP call to a phone number is currently not supported.
+
+## Architecture
+
+Call Automation uses a REST API interface to receive requests and provide responses to all actions performed within the service. Due to the asynchronous nature of calling, most actions will have corresponding events that are triggered when the action completes successfully or fails.
+
+Azure Communication Services uses Event Grid to deliver the [IncomingCall event](./incoming-call-notification.md) and HTTPS Webhooks for all mid-call action callbacks.
+
+![Screenshot of flow for incoming call and actions.](./media/action-architecture.png)
+
+## Call actions
+
+### Pre-call actions
+
+These actions are performed before the destination endpoint listed in the IncomingCall event notification is connected. Webhook callback events are only communicated for the "answer" pre-call action, not for the reject or redirect actions.
+
+**Answer** - Using the IncomingCall event from Event Grid and the Call Automation SDK, a call can be answered by your application. This action allows for IVR scenarios where an inbound PSTN call can be answered programmatically by your application. Other scenarios include answering a call on behalf of a user.
+
+**Reject** - To reject a call means your application can receive the IncomingCall event and prevent the call from being connected to the destination endpoint.
+
+**Redirect** - Using the IncomingCall event from Event Grid, a call can be redirected to one or more endpoints, creating a single or simultaneous ringing (sim-ring) scenario. This means the call isn't answered by your application; it's simply 'redirected' to another destination endpoint to be answered.
+
+**Make Call** - The Make Call action can be used to place outbound calls to phone numbers and to other communication users. Use cases include your application placing outbound calls to proactively inform users about an outage or notify them about an order update.
+
+### Mid-call actions
+
+These actions can be performed on the calls that are answered or placed using Call Automation SDKs. Each mid-call action has a corresponding success or failure web hook callback event.
+
+**Add/Remove participant(s)** - One or more participants can be added in a single request, with each participant being one of the supported destination endpoint types. A web hook callback is sent for every participant successfully added to the call.
+
+**Play** - When your application answers a call or places an outbound call, you can play an audio prompt for the caller. This audio can be looped if needed in scenarios like playing hold music. To learn more, view our [concepts](./play-action.md) and how-to guide for [Customizing voice prompts to users with Play action](../../how-tos/call-automation/play-action.md).
+
+**Recognize input** - After your application has played an audio prompt, you can request user input to drive business logic and navigation in your application. To learn more, view our [concepts](./recognize-action.md) and how-to guide for [Gathering user input](../../how-tos/call-automation/recognize-action.md).
+
+**Transfer** - When your application answers a call or places an outbound call to an endpoint, that call can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application's ability to control the call using the Call Automation SDKs.
+
+**Record** - You decide when to start/pause/resume/stop recording based on your application business logic, or you can grant control to the end user to trigger those actions. To learn more, view our [concepts](./../voice-video-calling/call-recording.md) and [quickstart](../../quickstarts/voice-video-calling/get-started-call-recording.md).
+
+**Hang-up** - When your application has answered a one-to-one call, the hang-up action will remove the call leg and terminate the call with the other endpoint. If there are more than two participants in the call (group call), performing a 'hang-up' action will remove your application's endpoint from the group call.
+
+**Terminate** - Whether your application has answered a one-to-one or group call, or placed an outbound call with one or more participants, this action will remove all participants and end the call. This operation is triggered by setting the `forEveryone` property to true in the Hang-Up call action.
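+
+For illustration, here's a minimal sketch of triggering this operation with the .NET SDK, based on the hang-up sample in the [how-to guide](../../how-tos/call-automation/actions-for-call-control.md); `callConnection` is assumed to be an established call:
+
+```csharp
+// Passing 'true' sets forEveryone, removing all participants and ending the call.
+// Passing 'false' would only remove your application's own call leg.
+await callConnection.HangUpAsync(true);
+```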
++
+## Events
+
+The following tables outline the current events emitted by Azure Communication Services: events emitted by Event Grid, and webhook events emitted by Call Automation.
+
+### Event Grid events
+
+Most of the events sent by Event Grid are platform agnostic, meaning they're emitted regardless of the SDK (Calling or Call Automation). While you can create a subscription for any event, we recommend you use the IncomingCall event for all Call Automation use cases where you want to control the call programmatically. Use the other events for reporting/telemetry purposes.
+
+| Event | Description |
+| -- | |
+| IncomingCall | Notification of a call to a communication user or phone number |
+| CallStarted | A call is established (inbound or outbound) |
+| CallEnded | A call is terminated and all participants are removed |
+| ParticipantAdded | A participant has been added to a call |
+| ParticipantRemoved| A participant has been removed from a call |
+| RecordingFileStatusUpdated| A recording file is available |
+
+Read more about these events and their payload schema [here](../../../event-grid/communication-services-voice-video-events.md).
+
+### Call Automation webhook events
+
+The Call Automation events are sent to the web hook callback URI specified when you answer or place a new outbound call.
+
+| Event | Description |
+| -- | |
+| CallConnected | Your application's call leg is connected (inbound or outbound) |
+| CallDisconnected | Your application's call leg is disconnected |
+| CallTransferAccepted | Your application's call leg has been transferred to another endpoint |
+| CallTransferFailed | The transfer of your application's call leg failed |
+| AddParticipantSucceeded | Your application added a participant |
+| AddParticipantFailed | Your application was unable to add a participant |
+| ParticipantUpdated | The status of a participant changed while your application's call leg was connected to a call |
+| PlayCompleted| Your application successfully played the audio file provided |
+| PlayFailed| Your application failed to play audio |
+| RecognizeCompleted | Recognition of user input was successfully completed |
+| RecognizeFailed | Recognition of user input was unsuccessful <br/>*To learn more about recognize action events, view our how-to guide for [gathering user input](../../how-tos/call-automation/recognize-action.md)*|
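+
+As a minimal sketch of receiving these callbacks, the hypothetical ASP.NET Core endpoint below parses the posted JSON with System.Text.Json and logs each event's type; the envelope's `type` field name is an assumption based on the CloudEvents schema used for these callbacks.
+
+```csharp
+using System.Text.Json;
+
+var app = WebApplication.CreateBuilder(args).Build();
+
+// Call Automation posts callback events as a JSON array to the callback URI
+// you supplied when answering or placing the call.
+app.MapPost("/api/callbacks", async (HttpRequest request) =>
+{
+    using JsonDocument events = await JsonDocument.ParseAsync(request.Body);
+    foreach (JsonElement callbackEvent in events.RootElement.EnumerateArray())
+    {
+        // For example, "Microsoft.Communication.CallConnected"
+        string eventType = callbackEvent.GetProperty("type").GetString();
+        Console.WriteLine($"Received callback event: {eventType}");
+    }
+    return Results.Ok();
+});
+
+app.Run();
+```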
++
+To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation/actions-for-call-control.md) that provides code samples as well as sequence diagrams for various call control flows.
+
+## Known issues
+
+1. Using the incorrect IdentifierType for endpoints in `Transfer` requests (like using CommunicationUserIdentifier to specify a phone number) returns a 500 error instead of a 400 error code. Solution: Use the correct type, CommunicationUserIdentifier for Communication Services users and PhoneNumberIdentifier for phone numbers.
+2. Taking a pre-call action like Answer/Reject on the original call after redirecting it returns a 200 success instead of failing with 'call not found'.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with Call Automation](./../../quickstarts/call-automation/Callflows-for-customer-interactions.md)
+
+Here are some articles of interest to you:
+- Understand how your resource will be [charged for various calling use cases](../pricing.md) with examples.
+- Learn how to [manage an inbound phone call](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md).
communication-services Incoming Call Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/incoming-call-notification.md
+
+ Title: Incoming call concepts
+
+description: Learn about Azure Communication Services IncomingCall notification
++++ Last updated : 09/26/2022++++
+# Incoming call concepts
++
+Azure Communication Services Call Automation provides developers the ability to build applications that can make and receive calls. Azure Communication Services relies on Event Grid subscriptions to deliver each `IncomingCall` event, so setting up your environment to receive these notifications is critical to your application being able to redirect or answer a call.
+
+## Calling scenarios
+
+First, we need to define which scenarios can trigger an `IncomingCall` event. The primary concept to remember is that a call to an Azure Communication Services identity or Public Switched Telephone Network (PSTN) number will trigger an `IncomingCall` event. The following are examples of these resources:
+
+1. An Azure Communication Services identity
+2. A PSTN phone number owned by your Azure Communication Services resource
+
+Given the above examples, the following scenarios will trigger an `IncomingCall` event sent to Event Grid:
+
+| Source | Destination | Scenario(s) |
+| ------ | ----------- | ----------- |
+| Azure Communication Services identity | Azure Communication Services identity | Call, Redirect, Add Participant, Transfer |
+| Azure Communication Services identity | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant, Transfer |
+| Public PSTN | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant, Transfer |
+
+> [!NOTE]
+> An important concept to remember is that an Azure Communication Services identity can be a user or application. Although there is no ability to explicitly assign an identity to a user or application in the platform, this can be done by your own application or supporting infrastructure. Review the [identity concepts guide](../identity-model.md) for more information on this topic.
+
+## Receiving an incoming call notification from Event Grid
+
+Since Azure Communication Services relies on Event Grid to deliver the `IncomingCall` notification through a subscription, how you choose to handle the notification is up to you. Additionally, since the Call Automation API relies specifically on Webhook callbacks for events, a common Event Grid subscription used would be a 'Webhook'. However, you could choose any one of the available subscription types offered by the service.
+
+This architecture has the following benefits:
+
+- Using Event Grid subscription filters, you can route the `IncomingCall` notification to specific applications.
+- PSTN number assignment and routing logic can exist in your application versus being statically configured online.
+- As identified in the above [calling scenarios](#calling-scenarios) section, your application can be notified even when users make calls between each other. You can then combine this scenario with the [Call Recording APIs](../voice-video-calling/call-recording.md) to meet compliance needs.
+
+To check out a sample payload for the event and to learn about other calling events published to Event Grid, check out this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall).
+
+## Call routing in Call Automation or Event Grid
+
+You can use [advanced filters](../../../event-grid/event-filtering.md) in your Event Grid subscription to subscribe to an `IncomingCall` notification for a specific source/destination phone number or Azure Communication Services identity, and send it to an endpoint such as a Webhook subscription. That endpoint application can then make a decision to **redirect** the call using the Call Automation SDK to another Azure Communication Services identity or to the PSTN.
+
+## Number assignment
+
+Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number to any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you'll maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, you'll invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime.
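+
+A minimal sketch of this pattern is shown below; the in-memory mapping is hypothetical, and the redirect call follows the Call Automation SDK usage shown in the [how-to guide](../../how-tos/call-automation/actions-for-call-control.md).
+
+```csharp
+// Hypothetical app-side assignment of PSTN numbers to user identities.
+var numberAssignments = new Dictionary<string, string>
+{
+    ["+14255551212"] = "375f0e2f-e8db-4449-9bf7-2054b02e42b4"
+};
+
+string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+string toNumber = "+14255551212"; // parsed from the IncomingCall event's 'to' field
+
+if (numberAssignments.TryGetValue(toNumber, out var userId))
+{
+    // Redirect the call to the mapped user; 'client' is a CallAutomationClient.
+    var target = new CommunicationUserIdentifier(userId);
+    var redirectOptions = new RedirectCallOptions(incomingCallContext, target);
+    await client.RedirectCallAsync(redirectOptions);
+}
+```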
+
+## Next steps
+- [Build a Call Automation application](../../quickstarts/call-automation/callflows-for-customer-interactions.md) to simulate a customer interaction.
+- [Redirect an inbound PSTN call](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md) to your resource.
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
+
+ Title: Playing audio in call
+
+description: Conceptual information about playing audio in call using Call Automation.
+++ Last updated : 09/06/2022++++
+# Playing audio in call
++
+The play action provided through the Call Automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. The play action allows you to provide ACS access to your pre-recorded audio files, with support for authentication.
+
+> [!NOTE]
+> ACS currently only supports WAV files formatted as mono-channel audio recorded at 16 kHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../cognitive-services/Speech-Service/how-to-audio-content-creation.md).
+
+
+## Common use cases
+
+The play action can be used in many ways. Below are some examples of how developers may wish to use the play action in their applications.
+
+### Announcements
+Your application might want to play an announcement when a participant joins or leaves the call, to notify other users.
+
+### Self-serve customers
+
+In scenarios with IVRs and virtual assistants, you can use your application or bots to play audio prompts to callers. This prompt can be in the form of a menu to guide the caller through their interaction.
+
+### Hold music
+The play action can also be used to play hold music for callers. This action can be set up in a loop so that the music keeps playing until an agent is available to assist the caller.
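+
+A minimal sketch of looped playback with the .NET SDK is shown below; the storage URL is a placeholder, and exact media API names may vary by SDK version.
+
+```csharp
+// The WAV file must be mono-channel, 16 kHz, per the note above.
+var holdMusic = new FileSource(new Uri("https://<mystorage>/hold-music.wav"));
+
+// Loop keeps the prompt playing until it's explicitly stopped.
+var playOptions = new PlayOptions { Loop = true };
+await callConnection.GetCallMedia().PlayToAllAsync(holdMusic, playOptions);
+```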
+
+### Playing compliance messages
+As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call will be recorded for quality purposes".
+
+## Sample architecture for playing audio in a call
+
+![Screenshot of flow for play action.](./media/play-action.png)
+
+## Known limitations
+- The play action isn't enabled to work with Teams interoperability.
++
+## What's coming up next for Play action
+As we invest more into this functionality, we recommend developers sign up for our TAP program, which gives you early access to the newest feature releases. Over the coming months, the play action will add new capabilities that use our integration with Azure Cognitive Services to provide AI capabilities such as Text-to-Speech and fine-tuning Text-to-Speech with SSML. With these capabilities, you can improve customer interactions to create more personalized messages.
+
+## Next Steps
+Check out our how-to guide to learn [how-to play custom voice prompts](../../how-tos/call-automation/play-action.md) to users.
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md
+
+ Title: Gathering user input
+description: Conceptual information about using Recognize action to gather user input with Call Automation.
+++ Last updated : 09/16/2022++++
+# Gathering user input
++
+With the Recognize action, developers can enhance their IVR or contact center applications to gather user input. One of the most common scenarios of recognition is to play a message and request user input. This input is received in the form of DTMF (input via the digits on the caller's device), which then allows the application to navigate the user to the next action.
+
+**DTMF**
+Dual-tone multifrequency (DTMF) recognition is the process of understanding tones/sounds generated by a telephone when a number is pressed. Equipment at the receiving end listens for the specific tones and converts them into commands. These commands generally signal user intent when navigating a menu in an IVR scenario or, in some cases, can be used to capture important information that the user needs to provide via their phone's keypad.
+
+**DTMF events and their associated tones**
+
+|Event|Tone|
+|-----|----|
+|0|Zero|
+|1|One|
+|2|Two|
+|3|Three|
+|4|Four|
+|5|Five|
+|6|Six|
+|7|Seven|
+|8|Eight|
+|9|Nine|
+|A|A|
+|B|B|
+|C|C|
+|D|D|
+|*|Asterisk|
+|#|Pound|
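+
+A minimal sketch of collecting DTMF tones with the .NET SDK is shown below; exact option names may vary by SDK version, and the prompt URL is a placeholder.
+
+```csharp
+// Collect up to three tones from a specific participant.
+var caller = new PhoneNumberIdentifier("+16041234567");
+var recognizeOptions = new CallMediaRecognizeDtmfOptions(caller, maxTonesToCollect: 3)
+{
+    InterruptPrompt = true, // allow the caller to barge in over the prompt
+    Prompt = new FileSource(new Uri("https://<mystorage>/menu-prompt.wav")),
+    InterToneTimeout = TimeSpan.FromSeconds(5) // inter-digit silence timeout
+};
+await callConnection.GetCallMedia().StartRecognizingAsync(recognizeOptions);
+// The outcome arrives later as a RecognizeCompleted or RecognizeFailed event.
+```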
+
+## Common use cases
+
+The recognize action can be used for many reasons. Below are a few examples of how developers can use the recognize action in their application.
+
+### Improve user journey with self-service prompts
+
+- **Users can control the call** - By enabling input recognition, you allow the caller to navigate your IVR menu and provide information that can be used to resolve their query.
+- **Gather user information** - By enabling input recognition, your application can gather input from the callers. This can be information such as account numbers, credit card information, etc.
+
+### Interrupt audio prompts
+
+**User can exit from an IVR menu and speak to a human agent** - With DTMF interruption, your application can allow users to interrupt the flow of the IVR menu and ask to speak to a human agent.
++
+## Sample architecture for gathering user input in a call
+
+![Screenshot of flow for recognize action.](./media/recognize-flow.png)
+
+## What's coming up next for Recognize action
+
+As we invest more into this functionality, we recommend developers sign up for our TAP program, which gives you early access to the newest feature releases. Over the coming months, the recognize action will add new capabilities that use our integration with Azure Cognitive Services to provide AI capabilities such as Speech-To-Text. With these, you can improve customer interactions and recognize voice inputs from participants on the call.
+
+## Next steps
+
+- Check out our how-to guide to learn how you can [gather user input](../../how-tos/call-automation/recognize-action.md).
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
# Pricing Scenarios
-Prices for Azure Communication Services are generally based on a pay-as-you-go model. The prices in the following examples are for illustrative purposes and may not reflect the latest Azure pricing.
+Prices for Azure Communication Services are based on a pay-as-you-go model. The prices in the following examples are for illustrative purposes and may not reflect the latest Azure pricing.
## Voice/Video calling and screen sharing
Alice is a Dynamics 365 contact center agent, who makes an outbound call from Om
- One participant on the VoIP leg (Alice) from Omnichannel for Customer Service client application x 10 minutes x $0.004 per participant leg per minute = $0.04 - One participant on the Communication Services direct routing outbound leg (Bob) from Communication Services servers to an SBC x 10 minutes x $0.004 per participant leg per minute = $0.04.-- Omnichannel for Customer Servicebot does not introduce additional ACS charges.
+- Omnichannel for Customer Service bot does not introduce additional ACS charges.
**Total cost for the call**: $0.04 + $0.04 = $0.08
Alice and Bob are on a VOIP Call. Bob escalated the call to Charlie on Charlie's
Note: USA mixed rates to `+1-425` is $0.013. Refer to the following link for details: https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
-**Total cost for the VoIP + escalation call**: $0.16 + $0.13 = $.29
+**Total cost for the VoIP + escalation call**: $0.16 + $0.13 = $0.29
+
+### Pricing example: Group call managed by Call Automation SDK
+
+Asha calls your US toll-free number (acquired from Communication Services) from her mobile. Your service application answers the call using the Call Automation SDK and plays an IVR menu using the Play and Recognize actions. Your application then adds a human agent, David, to the call, who answers through a custom application built using the Calling SDK.
+
+- Asha was on the call as a PSTN endpoint for a total of 10 minutes.
+- Your application was on the call for the entire 10 minutes of the call.
+- David was on the call for the last 5 minutes of the call using Calling JS SDK.
+
+**Cost calculations**
+
+- Inbound PSTN leg by Asha to toll-free number acquired from Communication Services x 10 minutes x $0.0220 per minute for receiving the call = $0.22
+- One participant on the VOIP leg (David) x 5 minutes x $0.004 per participant leg per minute = $0.02
+
+Note that the service application that uses the Call Automation SDK isn't charged for being part of the call. The additional monthly cost of leasing a US toll-free number isn't included in this calculation.
+
+**Total cost for the call**: $0.22 + $0.02 = $0.24
## Call Recording
Bob starts a call with his financial advisor, Charlie.
## Chat
-With Communication Services you can enhance your application with the ability to send and receive chat messages between two or more users. Chat SDKs are available for JavaScript, .NET, Python, and Java. Refer to [this page to learn about SDKs](./sdk-options.md)
+With Communication Services, you can enhance your application with the ability to send and receive chat messages between two or more users. Chat SDKs are available for JavaScript, .NET, Python, and Java. Refer to [this page to learn about SDKs](./sdk-options.md)
### Price
Azure Communication Services allows for adding SMS messaging capabilities to you
### Pricing
-The SMS usage price is a per-message segment charge based on the destination of the message. The carrier surcharge is calculated based on the destination of the message for sent messages and based on the sender of the message for received messages. Please refer to the [SMS Pricing Page](./sms-pricing.md) for pricing details.
+The SMS usage price is a per-message segment charge based on the destination of the message. The carrier surcharge is calculated based on the destination of the message for sent messages and based on the sender of the message for received messages. Refer to the [SMS Pricing Page](./sms-pricing.md) for pricing details.
### Pricing example: 1:1 SMS sending
Contoso is a healthcare company with clinics in US and Canada. Contoso has a Pat
Contoso is a healthcare company with clinics in US and Canada. Contoso has a Patient Appointment Reminder application that sends out SMS appointment reminders to patients regarding upcoming appointments. Patients can respond to the messages with "Reschedule" and include their date/time preference to reschedule their appointments. - The application sends appointment reminders to 20 US patients and 30 Canada patients using a CA toll-free number.-- 6 US patients and 4 CA patients respond back to reschedule their appointments. Contoso receives 10 SMS responses in total.-- Message length of the reschedule messages is less than 1 message segment*. Hence, total messages received are 6 message segments for US and 4 message segments for CA.
+- Six US patients and four CA patients respond back to reschedule their appointments. Contoso receives 10 SMS responses in total.
+- Message length of the reschedule messages is less than one message segment*. Hence, total messages received are six message segments for US and four message segments for CA.
**Cost calculations** -- US - 6 message segments x $0.0075 per received message segment + 6 message segments x $0.0010 carrier surcharge per received message segment = $0.051-- CA - 4 message segments x $0.0075 per received message segment = $0.03
+- US - six message segments x $0.0075 per received message segment + six message segments x $0.0010 carrier surcharge per received message segment = $0.051
+- CA - four message segments x $0.0075 per received message segment = $0.03
**Total cost for receiving patient responses from 6 US patients and 4 CA patients**: $0.051 + $0.03 = $0.081 ## Telephony
-Please refer to the following links for details on Telephony pricing
+Refer to the following links for details on Telephony pricing
- [PSTN Pricing Details](./pstn-pricing.md)
communication-services Inbound Calling Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/inbound-calling-capabilities.md
Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel. You
**Inbound calling with Dynamics 365 Omnichannel (OC)**
-Supported in General Availability, to set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN) follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling)
-
-**Inbound calling with Power Virtual Agents**
-
-*Coming soon*
+Supported in General Availability. To set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN), follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
**Inbound calling with ACS Call Automation SDK**
-[Available in private preview](../voice-video-calling/call-automation.md)
+Call Automation enables you to build custom calling workflows within your applications to optimize business processes and boost customer satisfaction. The library supports managing incoming calls to the phone numbers acquired from Communication Services and incoming calls via Direct Routing. You can also use Call Automation to place outbound calls from the phone numbers owned by your resource, among other capabilities.
+
+Learn more about [Call Automation](../voice-video-calling/call-automation.md), currently available in public preview.
**Inbound calling with Azure Bot Framework**
-Customers participating in Azure Bot Framework Telephony Channel preview can find the [instructions here](/azure/bot-service/bot-service-channel-connect-telephony)
+Customers participating in Azure Bot Framework Telephony Channel preview can find the [instructions here](/azure/bot-service/bot-service-channel-connect-telephony)
+
+**Inbound calling with Power Virtual Agents**
+
+*Coming soon*
communication-services Plan Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/plan-solution.md
Communication Services offers two types of phone numbers: **local** and **toll-f
Local (Geographic) numbers are 10-digit telephone numbers consisting of the local area codes in the United States. For example, `+1 (206) XXX-XXXX` is a local number with an area code of `206`. This area code is assigned to the city of Seattle. These phone numbers are generally used by individuals and local businesses. Azure Communication Services offers local numbers in the United States. These numbers can be used to place phone calls, but not to send SMS messages. ### Toll-free Numbers
-Toll-free numbers are 10-digit telephone numbers with distinct area codes that can be called from any phone number free of charge. For example, `+1 (800) XXX-XXXX` is a toll-free number in the North America region. These phone numbers are generally used for customer service purposes. Azure Communication Services offers toll-free numbers in the United states. These numbers can be used to place phone calls and to send SMS messages. Toll-free numbers cannot be used by people and can only be assigned to applications.
+Toll-free numbers are 10-digit telephone numbers with distinct area codes that can be called from any phone number free of charge. For example, `+1 (800) XXX-XXXX` is a toll-free number in the North America region. These phone numbers are generally used for customer service purposes. Azure Communication Services offers toll-free numbers in the United States. These numbers can be used to place phone calls and to send SMS messages. Toll-free numbers can't be used by people and can only be assigned to applications.
#### Choosing a phone number type
The following table shows you where you can acquire different types of phone num
| Toll-Free | US | US | US |US | US | *Currently, you can receive calls only to a Microsoft number that is assigned to a Telephony Channel bot. Read more about Telephony Channel [here](/azure/bot-service/bot-service-channel-connect-telephony)
-**For more details about call destinations and pricing, refer to the [pricing page](../pricing.md).
+**For more information about call destinations and pricing, see the [pricing page](../pricing.md).
## Next steps
The following table shows you where you can acquire different types of phone num
### Quickstarts - [Get a phone Number](../../quickstarts/telephony/get-phone-number.md)
+- [Manage inbound and outbound calls](../../quickstarts/voice-video-calling/callflows-for-customer-interactions.md) with Call Automation.
- [Place a call](../../quickstarts/voice-video-calling/getting-started-with-calling.md) - [Send an SMS](../../quickstarts/sms/send.md)
The following table shows you where you can acquire different types of phone num
- [Voice and video concepts](../voice-video-calling/about-call-types.md) - [Telephony concepts](./telephony-concept.md)
+- [Call Automation concepts](../voice-video-calling/call-automation.md)
- [Call Flows](../call-flows.md) - [Pricing](../pricing.md)
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
To help you troubleshoot certain types of issues, you may be asked for any of th
* **Call ID**: This ID is used to identify Communication Services calls. * **SMS message ID**: This ID is used to identify SMS messages. * **Short Code Program Brief ID**: This ID is used to identify a short code program brief application.
+* **Correlation ID**: This ID is used to identify requests made using Call Automation.
* **Call logs**: These logs contain detailed information that can be used to troubleshoot calling and network issues. Also take a look at our [service limits](service-limits.md) documentation for more information on throttling and limitations.
chat_client = ChatClient(
```
-## Access your server call ID
-When troubleshooting issues with the Call Automation SDK, like call recording and call management problems, you'll need to collect the Server Call ID. This ID can be collected using the ```getServerCallId``` method.
+## Access IDs required for Call Automation
+When troubleshooting issues with the Call Automation SDK, like call management or recording problems, you need to collect the IDs that help identify the failing call or operation. You can provide either of the two IDs described below.
+- From the header of the API response, locate the field `X-Ms-Skype-Chain-Id`.
+
+ ![Screenshot of response header showing X-Ms-Skype-Chain-Id.](media/troubleshooting/response-header.png)
+- From the callback events your application receives after executing an action, for example `CallConnected` or `PlayFailed`, locate the correlation ID.
-#### JavaScript
-```
-callAgent.on('callsUpdated', (e: { added: Call[]; removed: Call[] }): void => {
- e.added.forEach((addedCall) => {
- addedCall.on('stateChanged', (): void => {
- if (addedCall.state === 'Connected') {
- addedCall.info.getServerCallId().then(result => {
- dispatch(setServerCallId(result));
- }).catch(err => {
- console.log(err);
- });
- }
- });
- });
-});
-```
+ ![Screenshot of call disconnected event showing correlation ID.](media/troubleshooting/correlation-id-in-callback-event.png)
+In addition to one of these IDs, provide details of the failing use case and the timestamp of when the failure was observed.
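+
+For example, a minimal .NET sketch of capturing the header for a support request; the raw-response accessor comes from Azure.Core, and the options object is assumed to be already built:
+
+```csharp
+Response<CreateCallResult> response = await client.CreateCallAsync(createCallOptions);
+if (response.GetRawResponse().Headers.TryGetValue("X-Ms-Skype-Chain-Id", out string chainId))
+{
+    Console.WriteLine($"X-Ms-Skype-Chain-Id: {chainId}"); // quote this in your support request
+}
+```
+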
## Access your client call ID
The Azure Communication Services Calling SDK uses the following error codes to h
| 500, 503, 504 | Communication Services infrastructure error. | File a support request through the Azure portal. | | 603 | Call globally declined by remote Communication Services participant | Expected behavior. |
+## Call Automation SDK error codes
+The following error codes are exposed by the Call Automation SDK.
+
+| Error Code | Description | Actions to take |
+|--|--|--|
+| 400 | Bad request | The input request is invalid. Look at the error message to determine which input is incorrect. |
+| 401 | Unauthorized | HMAC authentication failed. Verify whether the connection string used to create the CallAutomationClient is correct. |
+| 403 | Forbidden | Request is forbidden. Make sure that you have access to the resource you're trying to access. |
+| 404 | Resource not found | The call you're trying to act on doesn't exist. For example, transferring a call that has already disconnected. |
+| 429 | Too many requests | Retry after the delay suggested in the Retry-After header, then back off exponentially. |
+| 500 | Internal server error | Retry after a delay. If it persists, raise a support ticket. |
+| 502 | Bad gateway | Retry after a delay with a fresh HTTP client. |
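+
+For 429 responses, a hypothetical retry helper might look like the sketch below; tune the attempt count and delays for your workload (RequestFailedException comes from Azure.Core).
+
+```csharp
+async Task<T> WithRetryAsync<T>(Func<Task<T>> action, int maxAttempts = 4)
+{
+    for (int attempt = 1; ; attempt++)
+    {
+        try
+        {
+            return await action();
+        }
+        catch (RequestFailedException ex) when (ex.Status == 429 && attempt < maxAttempts)
+        {
+            // Prefer the delay from the Retry-After header when available;
+            // this sketch falls back to simple exponential backoff.
+            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
+        }
+    }
+}
+```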
+
+Consider the following tips when troubleshooting certain issues.
+- Your application isn't getting the IncomingCall Event Grid event: Make sure the application endpoint has been [validated with Event Grid](../../event-grid/webhook-event-delivery.md) at the time of creating the event subscription. The provisioning status for your event subscription will be marked as succeeded if the validation was successful.
+- Getting the error 'The field CallbackUri is invalid': Call Automation doesn't support HTTP endpoints. Make sure the callback URL you provide supports HTTPS.
+- PlayAudio action doesn't play anything: Currently, only the WAV file (.wav) format is supported for audio files. The audio content in the WAV file must be mono (single-channel), 16-bit samples with a 16,000 Hz (16 kHz) sampling rate.
+- Actions on PSTN endpoints aren't working: CreateCall, Transfer, AddParticipant, and Redirect to phone numbers require you to set the SourceCallerId in the action request. Unless you're using Direct Routing, the source caller ID should be a phone number owned by your Communication Services resource for the action to succeed.
+
+Refer to [this article](./known-issues.md) to learn about any known issues being tracked by the product team.
+ ## Chat SDK error codes The Azure Communication Services Chat SDK uses the following error codes to help you troubleshoot chat issues. The error codes are exposed through the `error.code` property in the error response.
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md
+
+ Title: Azure Communication Services Call Automation how-to for managing calls with Call Automation
+
+description: Provides a how-to guide on using call actions to steer and manage a call with Call Automation.
+++++ Last updated : 11/03/2022++++
+zone_pivot_groups: acs-csharp-java
++
+# How to control and steer calls with Call Automation
++
+Call Automation uses a REST API interface to receive requests for actions and provide responses to notify whether the request was successfully submitted or not. Due to the asynchronous nature of calling, most actions have corresponding events that are triggered when the action completes successfully or fails. This guide covers the actions available for steering calls, like CreateCall, Transfer, Redirect, and managing participants. Each action is accompanied by sample code showing how to invoke it and a sequence diagram describing the events expected after it's invoked. These diagrams will help you visualize how to program your service application with Call Automation.
+
+Call Automation supports various other actions to manage call media and recording that aren't included in this guide.
+
+> [!NOTE]
+> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user, or adding them to a call using Call Automation, aren't supported.
+
+As a prerequisite, we recommend you read the following articles to make the most of this guide:
+1. Call Automation [concepts guide](../../concepts/call-automation/call-automation.md#call-actions) that describes the action-event programming model and event callbacks.
+2. Learn about [user identifiers](../../concepts/identifiers.md#the-communicationidentifier-type) like CommunicationUserIdentifier and PhoneNumberIdentifier used in this guide.
+
+For all the code samples, `client` is a CallAutomationClient object that can be created as shown, and `callConnection` is the CallConnection object obtained from the Answer or CreateCall response. You can also obtain it from callback events received by your application.
+## [csharp](#tab/csharp)
+```csharp
+var client = new CallAutomationClient("<resource_connection_string>");
+```
+## [Java](#tab/java)
+```java
+CallAutomationClient client = new CallAutomationClientBuilder().connectionString("<resource_connection_string>").buildClient();
+```
+--
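+
+If you need to act on a call from a callback event instead, here's a minimal sketch, assuming the client's GetCallConnection accessor and that your webhook has already extracted the event's callConnectionId:
+
+```csharp
+string callConnectionId = "<callConnectionId_from_callback_event>";
+CallConnection callConnection = client.GetCallConnection(callConnectionId);
+```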
+
+## Make an outbound call
+You can place a 1:1 or group call to a communication user or phone number (public or Communication Services owned number). The below sample makes an outbound call from your service application to a phone number.
+callerIdentifier is used by Call Automation as your application's identity when making an outbound call. When calling a PSTN endpoint, you also need to provide a phone number that will be used as the source caller ID and shown in the call notification to the target PSTN endpoint.
+To place a call to a Communication Services user, provide a CommunicationUserIdentifier object instead of PhoneNumberIdentifier.
+### [csharp](#tab/csharp)
+```csharp
+Uri callBackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
+var callerIdentifier = new CommunicationUserIdentifier("<user_id>");
+CallSource callSource = new CallSource(callerIdentifier);
+callSource.CallerId = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+var callThisPerson = new PhoneNumberIdentifier("+16041234567");
+var listOfPersonToBeCalled = new List<CommunicationIdentifier>();
+listOfPersonToBeCalled.Add(callThisPerson);
+var createCallOptions = new CreateCallOptions(callSource, listOfPersonToBeCalled, callBackUri);
+CreateCallResult response = await client.CreateCallAsync(createCallOptions);
+```
+### [Java](#tab/java)
+```java
+String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events
+List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(new PhoneNumberIdentifier("+16471234567")));
+CommunicationUserIdentifier callerIdentifier = new CommunicationUserIdentifier("<user_id>");
+CreateCallOptions createCallOptions = new CreateCallOptions(callerIdentifier, targets, callbackUri)
+ .setSourceCallerId("+18001234567"); // This is the ACS provisioned phone number for the caller
+Response<CreateCallResult> response = client.createCallWithResponse(createCallOptions).block();
+```
+--
+The response provides you with a CallConnection object that you can use to take further actions on this call once it's connected. Once the call is answered, two events will be published to the callback endpoint you provided earlier:
+1. `CallConnected` event notifying that the call has been established with the callee.
+2. `ParticipantsUpdated` event that contains the latest list of participants in the call.
+![Sequence diagram for placing an outbound call.](media/make-call-flow.png)
++
+## Answer an incoming call
+Once you've subscribed to receive [incoming call notifications](../../concepts/call-automation/incoming-call-notification.md) to your resource, below is sample code on how to answer that call. When answering a call, it's necessary to provide a callback url. Communication Services will post all subsequent events about this call to that url.
+### [csharp](#tab/csharp)
+
+```csharp
+string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+Uri callBackUri = new Uri("https://<myendpoint_where_I_want_to_receive_callback_events>");
+
+var answerCallOptions = new AnswerCallOptions(incomingCallContext, callBackUri);
+AnswerCallResult answerResponse = await client.AnswerCallAsync(answerCallOptions);
+CallConnection callConnection = answerResponse.CallConnection;
+```
+### [Java](#tab/java)
+
+```java
+String incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+String callbackUri = "https://<myendpoint>/Events";
+
+AnswerCallOptions answerCallOptions = new AnswerCallOptions(incomingCallContext, callbackUri);
+Response<AnswerCallResult> response = client.answerCallWithResponse(answerCallOptions).block();
+```
+--
+The response provides you with a CallConnection object that you can use to take further actions on this call once it's connected. Once the call is answered, two events will be published to the callback endpoint you provided earlier:
+1. `CallConnected` event notifying that the call has been established with the caller.
+2. `ParticipantsUpdated` event that contains the latest list of participants in the call.
+
+![Sequence diagram for answering an incoming call.](media/answer-flow.png)
+
+## Reject a call
+You can choose to reject an incoming call as shown below. You can provide a reject reason: none, busy or forbidden. If nothing is provided, none is chosen by default.
+# [csharp](#tab/csharp)
+```csharp
+string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+var rejectOption = new RejectCallOptions(incomingCallContext);
+rejectOption.CallRejectReason = CallRejectReason.Forbidden;
+_ = await client.RejectCallAsync(rejectOption);
+```
+# [Java](#tab/java)
+
+```java
+String incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+RejectCallOptions rejectCallOptions = new RejectCallOptions(incomingCallContext)
+ .setCallRejectReason(CallRejectReason.BUSY);
+Response<Void> response = client.rejectCallWithResponse(rejectCallOptions).block();
+```
+--
+No events are published for the reject action.
+
+## Redirect a call
+You can choose to redirect an incoming call to one or more endpoints without answering it. Redirecting a call will remove your application's ability to control the call using Call Automation.
+# [csharp](#tab/csharp)
+```csharp
+string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+var target = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-...
+var redirectOption = new RedirectCallOptions(incomingCallContext, target);
+_ = await client.RedirectCallAsync(redirectOption);
+```
+# [Java](#tab/java)
+```java
+String incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+CommunicationIdentifier target = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-...
+RedirectCallOptions redirectCallOptions = new RedirectCallOptions(incomingCallContext, target);
+Response<Void> response = client.redirectCallWithResponse(redirectCallOptions).block();
+```
+--
+To redirect the call to a phone number, set the target to be PhoneNumberIdentifier.
+# [csharp](#tab/csharp)
+```csharp
+var target = new PhoneNumberIdentifier("+16041234567");
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier target = new PhoneNumberIdentifier("+18001234567");
+```
+--
+No events are published for redirect. If the target is a Communication Services user or a phone number owned by your resource, it will generate a new IncomingCall event with the 'to' field set to the target you specified.
+
+## Transfer a 1:1 call
+When your application answers a call or places an outbound call to an endpoint, that call can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application from the call and hence remove its ability to control the call using Call Automation.
+# [csharp](#tab/csharp)
+```csharp
+var transferDestination = new CommunicationUserIdentifier("<user_id>");
+var transferOption = new TransferToParticipantOptions(transferDestination);
+TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier transferDestination = new CommunicationUserIdentifier("<user_id>");
+TransferToParticipantCallOptions options = new TransferToParticipantCallOptions(transferDestination);
+Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
+```
+--
+When transferring to a phone number, it's mandatory to provide a source caller ID. This ID serves as the identity of your application (the source) for the destination endpoint.
+# [csharp](#tab/csharp)
+```csharp
+var transferDestination = new PhoneNumberIdentifier("+16041234567");
+var transferOption = new TransferToParticipantOptions(transferDestination);
+transferOption.SourceCallerId = new PhoneNumberIdentifier("+16044561234");
+TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier transferDestination = new PhoneNumberIdentifier("+16471234567");
+TransferToParticipantCallOptions options = new TransferToParticipantCallOptions(transferDestination)
+ .setSourceCallerId(new PhoneNumberIdentifier("+18001234567"));
+Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
+```
+--
+The below sequence diagram shows the expected flow when your application places an outbound 1:1 call and then transfers it to another endpoint.
+![Sequence diagram for placing a 1:1 call and then transferring it.](media/transfer-flow.png)
+
+## Add a participant to a call
+You can add one or more participants (Communication Services users or phone numbers) to an existing call. When adding a phone number, it's mandatory to provide a source caller ID. This caller ID will be shown on the call notification to the participant being added.
+# [csharp](#tab/csharp)
+```csharp
+var addThisPerson = new PhoneNumberIdentifier("+16041234567");
+var listOfPersonToBeAdded = new List<CommunicationIdentifier>();
+listOfPersonToBeAdded.Add(addThisPerson);
+var addParticipantsOption = new AddParticipantsOptions(listOfPersonToBeAdded);
+addParticipantsOption.SourceCallerId = new PhoneNumberIdentifier("+16044561234");
+AddParticipantsResult result = await callConnection.AddParticipantsAsync(addParticipantsOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier target = new PhoneNumberIdentifier("+16041234567");
+List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(target));
+AddParticipantsOptions addParticipantsOptions = new AddParticipantsOptions(targets)
+ .setSourceCallerId(new PhoneNumberIdentifier("+18001234567"));
+Response<AddParticipantsResult> addParticipantsResultResponse = callConnectionAsync.addParticipantsWithResponse(addParticipantsOptions).block();
+```
+--
+To add a Communication Services user, provide a CommunicationUserIdentifier instead of PhoneNumberIdentifier. Source caller ID isn't mandatory in this case.
+
+AddParticipant will publish an `AddParticipantSucceeded` or `AddParticipantFailed` event, along with a `ParticipantUpdated` event providing the latest list of participants in the call.
+
+![Sequence diagram for adding a participant to the call.](media/add-participant-flow.png)
+
+## Remove a participant from a call
+# [csharp](#tab/csharp)
+```csharp
+var removeThisUser = new CommunicationUserIdentifier("<user_id>");
+var listOfParticipantsToBeRemoved = new List<CommunicationIdentifier>();
+listOfParticipantsToBeRemoved.Add(removeThisUser);
+var removeOption = new RemoveParticipantsOptions(listOfParticipantsToBeRemoved);
+RemoveParticipantsResult result = await callConnection.RemoveParticipantsAsync(removeOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier removeThisUser = new CommunicationUserIdentifier("<user_id>");
+RemoveParticipantsOptions removeParticipantsOptions = new RemoveParticipantsOptions(new ArrayList<>(Arrays.asList(removeThisUser)));
+Response<RemoveParticipantsResult> removeParticipantsResultResponse = callConnectionAsync.removeParticipantsWithResponse(removeParticipantsOptions).block();
+```
+--
+RemoveParticipant only generates a `ParticipantUpdated` event describing the latest list of participants in the call. The removed participant is excluded if the remove operation was successful.
+![Sequence diagram for removing a participant from the call.](media/remove-participant-flow.png)
+
+## Hang up on a call
+The Hang Up action can be used to remove your application from the call or to terminate a group call by setting the forEveryone parameter to true. For a 1:1 call, hang up terminates the call with the other participant by default.
+
+# [csharp](#tab/csharp)
+```csharp
+_ = await callConnection.HangUpAsync(true);
+```
+# [Java](#tab/java)
+```java
+Response<Void> response1 = callConnectionAsync.hangUpWithResponse(new HangUpOptions(true)).block();
+```
+--
+A CallDisconnected event is published once the hang-up action has completed successfully.
+
+## Get information about a call participant
+# [csharp](#tab/csharp)
+```csharp
+CallParticipant participantInfo = await callConnection.GetParticipantAsync("<user_id>");
+```
+# [Java](#tab/java)
+```java
+CallParticipant participantInfo = callConnection.getParticipant("<user_id>").block();
+```
+--
+
+## Get information about all call participants
+# [csharp](#tab/csharp)
+```csharp
+List<CallParticipant> participantList = (await callConnection.GetParticipantsAsync()).Value.ToList();
+```
+# [Java](#tab/java)
+```java
+List<CallParticipant> participantsInfo = Objects.requireNonNull(callConnection.listParticipants().block()).getValues();
+```
+--
+
+## Get latest info about a call
+# [csharp](#tab/csharp)
+```csharp
+CallConnectionProperties thisCallsProperties = callConnection.GetCallConnectionProperties();
+```
+# [Java](#tab/java)
+```java
+CallConnectionProperties thisCallsProperties = callConnection.getCallProperties().block();
+```
+--
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/play-action.md
+
+ Title: Customize voice prompts to users with Play action
+
+description: Provides a quick start for playing audio to participants as part of a call.
+++ Last updated : 09/06/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Customize voice prompts to users with Play action
++
+This guide will help you get started with playing audio files to participants by using the play action provided through the Azure Communication Services Call Automation SDK.
+++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md)
+- Learn more about [Gathering user input in a call](../../concepts/call-automation/recognize-action.md)
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/recognize-action.md
+
+ Title: Gather user input
+
+description: Provides a how-to guide for gathering user input from participants on a call.
+++ Last updated : 09/16/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Gather user input with Recognize action
++
+This guide will help you get started with recognizing DTMF input provided by participants through the Azure Communication Services Call Automation SDK.
+++
+## Event codes
+
+|Status|Code|Subcode|Message|
+|-|--|--|--|
+|RecognizeCompleted|200|8531|Action completed, max digits received.|
+|RecognizeCompleted|200|8514|Action completed as stop tone was detected.|
+|RecognizeCompleted|400|8508|Action failed, the operation was canceled.|
+|RecognizeFailed|400|8510|Action failed, initial silence timeout reached.|
+|RecognizeFailed|400|8532|Action failed, inter-digit silence timeout reached.|
+|RecognizeFailed|500|8511|Action failed, encountered failure while trying to play the prompt.|
+|RecognizeFailed|500|8512|Unknown internal server error.|
++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next Steps
+
+- Learn more about [Gathering user input](../../concepts/call-automation/recognize-action.md)
+- Learn more about [Playing audio in call](../../concepts/call-automation/play-action.md)
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md)
communication-services Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/data-model.md
The UI Library makes it simple for developers to inject that user data model int
- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md) ::: zone pivot="platform-web" ::: zone-end ::: zone pivot="platform-android" ::: zone-end ::: zone pivot="platform-ios" ::: zone-end ## Next steps -- [Learn more about UI Library](../../quickstarts/ui-library/get-started-composites.md)
+- [Learn more about UI Library](../../concepts/ui-library/ui-library-overview.md)
communication-services Localization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/localization.md
Learn how to set up the localization correctly using the UI Library in your appl
## Next steps -- [Learn more about UI Library](../../quickstarts/ui-library/get-started-composites.md)
+- [Learn more about UI Library](../../concepts/ui-library/ui-library-overview.md)
communication-services Theming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/theming.md
ACS UI Library uses components and icons from both [Fluent UI](https://developer
- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md) ::: zone pivot="platform-web" ::: zone-end ::: zone pivot="platform-android" ::: zone-end ::: zone pivot="platform-ios" ::: zone-end ## Next steps -- [Learn more about UI Library](../../quickstarts/ui-library/get-started-composites.md)
+- [Learn more about UI Library](../../concepts/ui-library/ui-library-overview.md)
- [Learn more about UI Library Design Kit](../../quickstarts/ui-library/get-started-ui-kit.md)
communication-services Callflows For Customer Interactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/callflows-for-customer-interactions.md
+
+ Title: Build a customer interaction workflow using Call Automation
+
+description: Quickstart on how to use Call Automation to answer a call, recognize DTMF input, and add a participant to a call.
++++ Last updated : 09/06/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Build a customer interaction workflow using Call Automation
++
+In this quickstart, you'll learn how to build an application that uses the Azure Communication Services Call Automation SDK to handle the following scenario:
+- handling the `IncomingCall` event from Event Grid
+- answering a call
+- playing an audio file and recognizing input (DTMF) from the caller
+- adding a communication user to the call, such as a customer service agent who uses a web application built with the Calling SDKs to connect to Azure Communication Services
+++
+## Subscribe to IncomingCall event
+
+IncomingCall is an Azure Event Grid event that notifies you of incoming calls to your Communication Services resource. To learn more about it, see [this guide](../../concepts/call-automation/incoming-call-notification.md).
+1. Navigate to your resource in the Azure portal and select `Events` from the left side menu.
+1. Select `+ Event Subscription` to create a new subscription.
+1. Filter for the Incoming Call event.
+1. Choose **Web Hook** as the endpoint type and provide the public URL generated for your application by ngrok. Make sure to provide the exact API route that you programmed to receive the event previously. In this case, it would be `<ngrok_url>/api/incomingCall`.
+![Screenshot of portal page to create a new event subscription.](./media/event-susbcription.png)
+
+1. Select **Create** to start creating the subscription and validating your endpoint, as mentioned previously. The subscription is ready when the provisioning status is marked as succeeded.
+
+This subscription currently has no filters, so all incoming calls will be sent to your application. To filter for a specific phone number or a communication user, use the Filters tab.
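+
+Before the portal can validate your endpoint, the endpoint must complete Event Grid's validation handshake. Below is a minimal Flask sketch of the `/api/incomingCall` route (an illustration under stated assumptions, not the quickstart's full application); the event shapes follow the Event Grid schema, and answering the call with the Call Automation client is left as a placeholder.
+
+```python
+from flask import Flask, jsonify, request
+
+app = Flask(__name__)
+
+@app.route("/api/incomingCall", methods=["POST"])
+def incoming_call():
+    for event in request.get_json():
+        if event["eventType"] == "Microsoft.EventGrid.SubscriptionValidationEvent":
+            # Echo the validation code so Event Grid marks this endpoint as valid.
+            return jsonify({"validationResponse": event["data"]["validationCode"]})
+        if event["eventType"] == "Microsoft.Communication.IncomingCall":
+            incoming_call_context = event["data"]["incomingCallContext"]
+            # Answer the call here using the Call Automation client (not shown).
+    return "", 200
+```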
+
+## Testing the application
+
+1. Place a call to the number you acquired in the Azure portal.
+2. Your Event Grid subscription to the `IncomingCall` event should execute and call your application, which will request to answer the call.
+3. When the call is connected, a `CallConnected` event will be sent to your application's callback URL. At this point, the application will request audio to be played and to receive input from the caller.
+4. From your phone, press any three number keys, or press one number key and then the # key.
+5. When the input has been received and recognized, the application will make a request to add a participant to the call.
+6. Once the added user answers, you can talk to them.
++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
+- Learn how to [redirect inbound telephony calls](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md) with Call Automation.
+- Learn more about [Play action](../../concepts/call-automation/play-action.md).
+- Learn more about [Recognize action](../../concepts/call-automation/recognize-action.md).
communication-services Redirect Inbound Telephony Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/redirect-inbound-telephony-calls.md
+
+ Title: Azure Communication Services Call Automation how-to for redirecting inbound PSTN calls
+
+description: Provides a how-to for redirecting inbound telephony calls with Call Automation.
++++ Last updated : 09/06/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Redirect inbound telephony calls with Call Automation
++
+Get started with Azure Communication Services by using the Call Automation SDKs to build automated calling workflows that listen for and manage inbound calls placed to a phone number or received via Direct Routing.
+++
+## Subscribe to IncomingCall event
+
+IncomingCall is an Azure Event Grid event that notifies you of incoming calls to your Communication Services resource. To learn more about it, see [this guide](../../concepts/call-automation/incoming-call-notification.md).
+1. Navigate to your resource in the Azure portal and select `Events` from the left side menu.
+1. Select `+ Event Subscription` to create a new subscription.
+1. Filter for the Incoming Call event.
+1. Choose **Web Hook** as the endpoint type and provide the public URL generated for your application by ngrok. Make sure to provide the exact API route that you programmed to receive the event previously. In this case, it would be `<ngrok_url>/api/incomingCall`.
+1. Select **Create** to start creating the subscription and validating your endpoint, as mentioned previously. The subscription is ready when the provisioning status is marked as succeeded.
+
+This subscription currently has no filters, so all incoming calls will be sent to your application. To filter for a specific phone number or a communication user, use the Filters tab.
+
+## Testing the application
+
+1. Place a call to the number you acquired in the Azure portal (see prerequisites above).
+2. Your Event Grid subscription to the `IncomingCall` event should execute and call your application.
+3. The call will be redirected to the endpoint(s) you specified in your application.
+
+Because this call flow redirects the call instead of answering it, pre-call webhook callbacks that notify your application when the other endpoint accepts the call aren't published.
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
+- Learn about [Play action](../../concepts/call-automation/play-action.md) to play audio in a call.
+- Learn how to build a [call workflow](../../quickstarts/call-automation/callflows-for-customer-interactions.md) for a customer support scenario.
communication-services Get Started Raw Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-raw-media-access.md
Last updated 06/30/2022
-zone_pivot_groups: acs-plat-android-web
+zone_pivot_groups: acs-plat-android-web-ios
[!INCLUDE [Raw media with Android](./includes/raw-medi)] ::: zone-end [!INCLUDE [Raw media with iOS](./includes/raw-medi)] ::: zone-end
communication-services Media Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/media-streaming.md
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps - Learn more about [Media Streaming](../../concepts/voice-video-calling/media-streaming.md).-- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md) and its features. -- Learn more about [Play action](../../concepts/voice-video-calling/play-action.md).-- Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md).
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
+- Learn more about [Play action](../../concepts/call-automation/play-action.md).
+- Learn more about [Recognize action](../../concepts/call-automation/recognize-action.md).
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/play-action.md
- Title: Play Audio-
-description: Provides a quick start for playing audio to participants as part of a call.
--- Previously updated : 09/06/2022---
-zone_pivot_groups: acs-csharp-java
--
-# Quickstart: Play action
-
-> [!IMPORTANT]
-> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
-> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
-
-This quickstart will help you get started with playing audio files to participants by using the play action provided through Azure Communication Services Call Automation SDK.
---
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
-
-## Next steps
--- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md)-- Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md)
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/recognize-action.md
- Title: Recognize Action-
-description: Provides a quick start for recognizing user input from participants on a call.
--- Previously updated : 09/16/2022---
-zone_pivot_groups: acs-csharp-java
--
-# Quickstart: Recognize action
-
-> [!IMPORTANT]
-> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
-> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
-
-This quickstart will help you get started with recognizing DTMF input provided by participants through Azure Communication Services Call Automation SDK.
---
-## Event codes
-
-|Status|Code|Subcode|Message|
-|-|--|--|--|
-|RecognizeCompleted|200|8531|Action completed, max digits received.|
-|RecognizeCompleted|200|8514|Action completed as stop tone was detected.|
-|RecognizeCompleted|400|8508|Action failed, the operation was canceled.|
-|RecognizeFailed|400|8510|Action failed, initial silence timeout reached|
-|RecognizeFailed|400|8532|Action failed, inter-digit silence timeout reached.|
-|RecognizeFailed|500|8511|Action failed, encountered failure while trying to play the prompt.|
-|RecognizeFailed|500|8512|Unknown internal server error.|
--
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
-
-## Next Steps
--- Learn more about [Recognize action](../../concepts/voice-video-calling/recognize-action.md)-- Learn more about [Play action](../../concepts/voice-video-calling/play-action.md)-- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md)
container-apps Authentication Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-azure-active-directory.md
You've now configured a native client application that can request access your c
### Daemon client application (service-to-service calls)
-Your application can acquire a token to call a Web API hosted in your container app on behalf of itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md) grant.
+Your application can acquire a token to call a Web API hosted in your container app on behalf of itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) grant.
1. In the [Azure portal], select **Active Directory** > **App registrations** > **New registration**. 1. In the **Register an application** page, enter a **Name** for your daemon app registration.
Your application can acquire a token to call a Web API hosted in your container
1. After the app registration is created, copy the value of **Application (client) ID**. 1. Select **Certificates & secrets** > **New client secret** > **Add**. Copy the client secret value shown in the page. It won't be shown again.
-You can now [request an access token using the client ID and client secret](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#use-the-access-token-to-access-the-secured-resource), and Container Apps Authentication / Authorization will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated.
+You can now [request an access token using the client ID and client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#use-a-token), and Container Apps Authentication / Authorization will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated.
This process allows _any_ client application in your Azure AD tenant to request an access token and authenticate to the target app. If you also want to enforce _authorization_ to allow only certain client applications, you must adjust the configuration.
This process allows _any_ client application in your Azure AD tenant to request
1. Select the app registration you created earlier. If you don't see the app registration, make sure that you've [added an App Role](../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md). 1. Under **Application permissions**, select the App Role you created earlier, and then select **Add permissions**. 1. Make sure to select **Grant admin consent** to authorize the client application to request the permission.
-1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
+1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
1. Within the target Container Apps code, you can now validate that the expected roles are present in the token. The validation steps aren't performed by the Container Apps auth layer. For more information, see [Access user claims](authentication.md#access-user-claims-in-application-code). You've now configured a daemon client application that can access your container app using its own identity.
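A hedged sketch of the role check described in the last step above, assuming the Container Apps auth layer has already validated the token's signature so the app only inspects its claims; the route and role name are illustrative, and the sketch uses the PyJWT package:

```python
import jwt  # PyJWT
from flask import Flask, abort, request

app = Flask(__name__)

@app.route("/api/task")
def task():
    token = request.headers.get("Authorization", "").split(" ")[-1]
    # Signature was already validated by Container Apps auth; just read claims.
    claims = jwt.decode(token, options={"verify_signature": False})
    if "MyAppRole" not in claims.get("roles", []):  # illustrative role name
        abort(403)
    return "authorized"
```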
container-apps Github Actions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions-cli.md
The first time you attach GitHub Actions to your container app, you need to prov
az ad sp create-for-rbac \ --name <SERVICE_PRINCIPAL_NAME> \ --role "contributor" \
- --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME> \
- --sdk-auth
+ --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>
``` # [PowerShell](#tab/powershell)
az ad sp create-for-rbac \
az ad sp create-for-rbac ` --name <SERVICE_PRINCIPAL_NAME> ` --role "contributor" `
- --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME> `
- --sdk-auth
+ --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>
``` As you interact with this example, replace the placeholders surrounded by `<>` with your values.
-The return value from this command is a JSON payload, which includes the service principal's `tenantId`, `clientId`, and `clientSecret`.
+The return value from this command includes the service principal's `appId`, `password`, and `tenant`. You need to pass these values to the `az containerapp github-action add` command.
The following example shows you how to add an integration while using a personal access token.
az containerapp github-action add \
--registry-url <URL_TO_CONTAINER_REGISTRY> \ --registry-username <REGISTRY_USER_NAME> \ --registry-password <REGISTRY_PASSWORD> \
- --service-principal-client-id <CLIENT_ID> \
- --service-principal-client-secret <CLIENT_SECRET> \
- --service-principal-tenant-id <TENANT_ID> \
+ --service-principal-client-id <appId> \
+ --service-principal-client-secret <password> \
+ --service-principal-tenant-id <tenant> \
--token <YOUR_GITHUB_PERSONAL_ACCESS_TOKEN> ```
az containerapp github-action add `
--registry-url <URL_TO_CONTAINER_REGISTRY> ` --registry-username <REGISTRY_USER_NAME> ` --registry-password <REGISTRY_PASSWORD> `
- --service-principal-client-id <CLIENT_ID> `
- --service-principal-client-secret <CLIENT_SECRET> `
- --service-principal-tenant-id <TENANT_ID> `
+ --service-principal-client-id <appId> `
+ --service-principal-client-secret <password> `
+ --service-principal-tenant-id <tenant> `
--token <YOUR_GITHUB_PERSONAL_ACCESS_TOKEN> ```
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
Content-Type: application/json
```
-This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#service-to-service-access-token-response). To access Key Vault, you'll then add the value of `access_token` to a client connection with the vault.
+This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#successful-response). To access Key Vault, you'll then add the value of `access_token` to a client connection with the vault.
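+
+A minimal sketch of that flow in Python: the `IDENTITY_ENDPOINT`/`IDENTITY_HEADER` environment variables and `X-IDENTITY-HEADER` header follow the managed identity REST conventions for Container Apps, while the vault name, secret name, and API version are placeholders to adapt.
+
+```python
+import os
+import requests
+
+# Request a Key Vault token from the managed identity endpoint.
+token_response = requests.get(
+    os.environ["IDENTITY_ENDPOINT"],
+    params={"resource": "https://vault.azure.net", "api-version": "2019-08-01"},
+    headers={"X-IDENTITY-HEADER": os.environ["IDENTITY_HEADER"]},
+)
+access_token = token_response.json()["access_token"]
+
+# Present the token as a bearer token on a Key Vault data-plane call.
+secret_response = requests.get(
+    "https://<your-vault-name>.vault.azure.net/secrets/<secret-name>?api-version=7.4",
+    headers={"Authorization": f"Bearer {access_token}"},
+)
+print(secret_response.json().get("value"))
+```
+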
### REST endpoint reference
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
Last updated 09/26/2022+ # Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity
Last updated 09/26/2022
This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you'll have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](../postgresql/index.yml) database with a managed identity running on [Container Apps](overview.md).
+> [!NOTE]
+> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
+ What you will learn: > [!div class="checklist"]
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following example shows you how to create a Container Apps environment in an
[!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)]
-Next, declare a variable to hold the VNET name.
+Register the `Microsoft.ContainerService` provider.
+
+# [Bash](#tab/bash)
+
+```bash
+az provider register --namespace Microsoft.ContainerService
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerService
+```
+++
+Declare a variable to hold the VNET name.
# [Bash](#tab/bash)
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
WITH (num varchar(100)) AS [IntToFloat]
The full fidelity schema representation is designed to handle the full breadth of polymorphic schemas in the schema-agnostic operational data. In this schema representation, no items are dropped from the analytical store even if the well-defined schema constraints (that is no mixed data type fields nor mixed data type arrays) are violated.
-This is achieved by translating the leaf properties of the operational data into the analytical store with distinct columns based on the data type of values in the property. The leaf property names are extended with data types as a suffix in the analytical store schema such that they can be queries without ambiguity.
+This is achieved by translating the leaf properties of the operational data into the analytical store as JSON `key-value` pairs, where the datatype is the `key` and the property content is the `value`. This JSON object representation allows queries without ambiguity, and you can individually analyze each datatype.
-In the full fidelity schema representation, each datatype of each property will generate a column for that datatype. Each of them count as one of the 1000 maximum properties.
+In other words, in the full fidelity schema representation, each datatype of each property of each document will generate a `key-value` pair in a JSON object for that property. Each of these pairs counts toward the 1,000 maximum properties limit.
For example, let's take the following sample document in the transactional store:
salary: 1000000
} ```
-The leaf property `streetNo` within the nested object `address` will be represented in the analytical store schema as a column `address.object.streetNo.int32`. The datatype is added as a suffix to the column. This way, if another document is added to the transactional store where the value of leaf property `streetNo` is "123" (note it's a string), the schema of the analytical store automatically evolves without altering the type of a previously written column. A new column added to the analytical store as `address.object.streetNo.string` where this value of "123" is stored.
+The nested object `address` is a property in the root level of the document and will be represented as a column. Each leaf property in the `address` object will be represented as a JSON object: `{"object":{"streetNo":{"int32":15850},"streetName":{"string":"NE 40th St."},"zip":{"int32":98052}}}`.
-##### Data type to suffix map for full fidelity schema
+Unlike the well-defined schema representation, the full fidelity method allows variation in datatypes. If the next document in this collection has `streetNo` as a string, it will be represented in the analytical store as `"streetNo":{"string":"15850"}`. In the well-defined schema method, it wouldn't be represented.
-Here's a map of all the property data types and their suffix representations in the analytical store in full fidelity schema representation:
+
+##### Datatypes map for full fidelity schema
+
+Here's a map of all the property data types and their representations in the analytical store for the full fidelity schema:
|Original data type |Suffix |Example | ||||
Here's a map of all the property data types and their suffix representations in
* Spark pools in Azure Synapse will represent these columns as `undefined`. * SQL serverless pools in Azure Synapse will represent these columns as `NULL`.
+##### Using full fidelity schema on Spark
+
+Spark exposes each datatype as a nested column when loading the data into a `DataFrame`. Let's assume a collection with the documents below.
+
+```json
+{
+ "_id" : "1" ,
+ "item" : "Pizza",
+ "price" : 3.49,
+ "rating" : 3,
+ "timestamp" : 1604021952.6790195
+},
+{
+ "_id" : "2" ,
+ "item" : "Ice Cream",
+ "price" : 1.59,
+ "rating" : "4" ,
+ "timestamp" : "2022-11-11 10:00 AM"
+}
+```
+
+While the first document has `rating` as a number and `timestamp` as a numeric (Unix epoch) value, the second document has `rating` and `timestamp` as strings. Assuming that this collection was loaded into a `DataFrame` without any data transformation, the output of `df.printSchema()` is:
+
+```text
+root
+ |-- _rid: string (nullable = true)
+ |-- _ts: long (nullable = true)
+ |-- id: string (nullable = true)
+ |-- _etag: string (nullable = true)
+ |-- _id: struct (nullable = true)
+ | |-- objectId: string (nullable = true)
+ |-- item: struct (nullable = true)
+ | |-- string: string (nullable = true)
+ |-- price: struct (nullable = true)
+ | |-- float64: double (nullable = true)
+ |-- rating: struct (nullable = true)
+ | |-- int32: integer (nullable = true)
+ | |-- string: string (nullable = true)
+ |-- timestamp: struct (nullable = true)
+ | |-- float64: double (nullable = true)
+ | |-- string: string (nullable = true)
+ |-- _partitionKey: struct (nullable = true)
+ | |-- string: string (nullable = true)
+ ```
+
+In the well-defined schema representation, neither `rating` nor `timestamp` of the second document would be represented. In the full fidelity schema, you can use the following examples to individually access each value of each datatype.
+
+In the example below, we can use `PySpark` to run an aggregation:
+
+```PySpark
+df.groupBy(df.item.string).sum().show()
+```
+
+In the example below, we can use Spark SQL to run another aggregation:
+
+```PySpark
+df.createOrReplaceTempView("Pizza")
+sql_results = spark.sql("SELECT sum(price.float64),count(*) FROM Pizza where timestamp.string is not null and item.string = 'Pizza'")
+sql_results.show()
+```
+
+##### Using full fidelity schema on SQL
+
+Considering the same documents of the Spark example above, customers can use the following syntax example:
+
+```SQL
+SELECT rating, timestamp, timestamp_utc
+FROM OPENROWSET(PROVIDER = 'CosmosDB',
+                CONNECTION = 'Account=<your-database-account-name>;Database=<your-database-name>',
+ OBJECT = '<your-collection-name>',
+ SERVER_CREDENTIAL = '<your-synapse-sql-server-credential-name>')
+WITH (
+rating integer '$.rating.int32',
+timestamp varchar(50) '$.timestamp.string',
+timestamp_utc float '$.timestamp.float64'
+) as HTAP
+WHERE timestamp is not null or timestamp_utc is not null
+```
+
+Starting from the query above, customers can implement transformations using `cast`, `convert`, or any other T-SQL function to manipulate their data. They can also hide complex datatype structures by using views.
+
+```SQL
+create view MyView as
+SELECT MyRating=rating,MyTimestamp = convert(varchar(50),timestamp_utc)
+FROM OPENROWSET(PROVIDER = 'CosmosDB',
+                CONNECTION = 'Account=<your-database-account-name>;Database=<your-database-name>',
+ OBJECT = '<your-collection-name>',
+ SERVER_CREDENTIAL = '<your-synapse-sql-server-credential-name>')
+WITH (
+rating integer '$.rating.int32',
+timestamp_utc float '$.timestamp.float64'
+) as HTAP
+WHERE timestamp_utc is not null
+union all
+SELECT MyRating=convert(integer,rating_string),MyTimestamp = timestamp_string
+FROM OPENROWSET(PROVIDER = 'CosmosDB',
+                CONNECTION = 'Account=<your-database-account-name>;Database=<your-database-name>',
+ OBJECT = '<your-collection-name>',
+ SERVER_CREDENTIAL = '<your-synapse-sql-server-credential-name>')
+WITH (
+rating_string varchar(50) '$.rating.string',
+timestamp_string varchar(50) '$.timestamp.string'
+) as HTAP
+WHERE timestamp_string is not null
+```
++ ##### Working with the MongoDB `_id` field
-the MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, `Full Fidelity Schema` will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. For correct visualization, you must convert the `_id` datatype as below:
+The MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, full fidelity schema will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. For correct visualization, you must convert the `_id` datatype as below:
-###### Spark
+###### Working with the MongoDB `_id` field in Spark
```Python
df = spark.read.format("cosmos.olap")\
df.select("id", "_id.objectId").show() ```
-###### SQL
+###### Working with the MongoDB `_id` field in SQL
```SQL SELECT TOP 100 id=CAST(_id as VARBINARY(1000))
The schema representation type decision must be made at the same time that Synap
> In the command above, replace `create` with `update` for existing accounts. With the PowerShell:
- ```
+ ```PowerShell
New-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount -EnableAnalyticalStorage true -AnalyticalStorageSchemaType "FullFidelity" ```
cosmos-db Burst Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md
After the 10 seconds is over, the burst capacity has been used up. If the worklo
To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page. + Before submitting your request: - Ensure that you have at least one Azure Cosmos DB account in the subscription. This account may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/change-feed.md
Change feed functionality is surfaced as change stream in API for MongoDB and Qu
Native Apache Cassandra provides change data capture (CDC), a mechanism to flag specific tables for archival as well as rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. The change feed feature in Azure Cosmos DB for Apache Cassandra enhances the ability to query the changes with predicate via CQL. To learn more about the implementation details, see [Change feed in the Azure Cosmos DB for Apache Cassandra](cassandr).
-## Measuing change feed request unit consumption
+## Measuring change feed request unit consumption
Use Azure Monitor to measure the request unit (RU) consumption of the change feed. For more information, see [monitor throughput or request unit usage in Azure Cosmos DB](monitor-request-unit-usage.md).
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
To check whether an Azure Cosmos DB account is eligible for the preview, you can
:::image type="content" source="media/merge/throughput-and-scaling-category.png" alt-text="Screenshot of Throughput and Scaling content in Diagnose and solve issues page."::: ### How to identify containers to merge
Containers that meet both of these conditions are likely to benefit from merging
Condition 1 often occurs when you've previously scaled up the RU/s (often for a data ingestion) and now want to scale down in steady state. Condition 2 often occurs when you delete/TTL a large volume of data, leaving unused partitions.
-#### Criteria 1
+#### Condition 1
To determine the current RU/s per physical partition, from your Cosmos account, navigate to **Metrics**. Select the metric **Physical Partition Throughput** and filter to your database and container. Apply splitting by **PhysicalPartitionId**.
In the below example, we have an autoscale container provisioned with 5000 RU/s
:::image type="content" source="media/merge/RU-per-physical-partition-metric.png" alt-text="Screenshot of Azure Monitor metric Physical Partition Throughput in Azure portal.":::
-#### Criteria 2
+#### Condition 2
To determine the current average storage per physical partition, first find the overall storage (data + index) of the container.
Navigate to **Insights** > **Storage** > **Data & Index Usage**. The total stora
:::image type="content" source="media/merge/storage-per-container.png" alt-text="Screenshot of Azure Monitor storage (data + index) metric for container in Azure portal.":::
-Next, find the total number of physical partitions. This metric is the distinct number of **PhysicalPartitionIds** in the **PhysicalPartitionThroughput** chart we saw in Criteria 1. In our example, we have five physical partitions.
+Next, find the total number of physical partitions. This metric is the distinct number of **PhysicalPartitionIds** in the **PhysicalPartitionThroughput** chart we saw in Condition 1. In our example, we have five physical partitions.
Finally, calculate: Total storage in GB / number of physical partitions. In our example, we have an average of (74 GB / five physical partitions) = 14.8 GB per physical partition.
-Based on criteria 1 and 2, our container can potentially benefit from merging partitions.
+Based on conditions 1 and 2, our container can potentially benefit from merging partitions.
### Merging physical partitions
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-python.md
Title: Get started using Azure Cosmos DB for MongoDB and Python
-description: Presents a Python code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.
--
+ Title: Quickstart - Azure Cosmos DB for MongoDB for Python with MongoDB driver
+description: Learn how to build a Python app to manage Azure Cosmos DB for MongoDB account resources in this quickstart.
+++ - Previously updated : 04/26/2022 ms.devlang: python-+ Last updated : 11/08/2022+
-# Quickstart: Get started using Azure Cosmos DB for MongoDB and Python
+# Quickstart: Azure Cosmos DB for MongoDB for Python with MongoDB driver
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-> [!div class="op_single_selector"]
-> * [.NET](create-mongodb-dotnet.md)
-> * [Python](quickstart-python.md)
-> * [Java](quickstart-java.md)
-> * [Node.js](create-mongodb-nodejs.md)
-> * [Golang](quickstart-go.md)
->
+Get started with the PyMongo package to create databases, collections, and documents within your Azure Cosmos DB resource. Follow these steps to install the package and try out example code for basic tasks.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) are available on GitHub as a Python project.
+
+In this quickstart, you'll communicate with Azure Cosmos DB's API for MongoDB by using one of the open-source MongoDB client drivers for Python, [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/). Also, you'll use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands), which are designed to help you create and obtain database resources that are specific to the [Azure Cosmos DB capacity model](/azure/cosmos-db/account-databases-containers-items).
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [Python 3.8+](https://www.python.org/downloads/)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+### Prerequisite check
+
+* In a terminal or command window, run `python --version` to check that you have a recent version of Python.
+* Run ``az --version`` (Azure CLI) or `Get-Module -ListAvailable Az*` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+
+## Setting up
-This [quickstart](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) demonstrates how to:
-1. Create an [Azure Cosmos DB for MongoDB account](introduction.md)
-2. Connect to your account using PyMongo
-3. Create a sample database and collection
-4. Perform CRUD operations in the sample collection
+This section walks you through creating an Azure Cosmos DB account and setting up a project that uses the PyMongo package.
+
+### Create an Azure Cosmos DB account
+
+This quickstart will create a single Azure Cosmos DB account using the API for MongoDB.
+
+#### [Azure CLI](#tab/azure-cli)
++
+#### [PowerShell](#tab/azure-powershell)
++
+#### [Portal](#tab/azure-portal)
+++
-## Prerequisites to run the sample app
+### Get MongoDB connection string
-* [Python](https://www.python.org/downloads/) 3.9+ (It's best to run the [sample code](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) described in this article with this recommended version. Although it may work on older versions of Python 3.)
-* [PyMongo](https://pypi.org/project/pymongo/) installed on your machine
+#### [Azure CLI](#tab/azure-cli)
-<a id="create-account"></a>
-## Create a database account
+#### [PowerShell](#tab/azure-powershell)
-## Learn the object model
-Before you continue building the application, let's look into the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
+#### [Portal](#tab/azure-portal)
-* Azure Cosmos DB for MongoDB account
-* Databases
-* Collections
-* Documents
+++
+### Create a new Python app
+
+1. Create a new empty folder using your preferred terminal and change directory to the folder.
+
+ > [!NOTE]
+ > If you just want the finished code, download or fork and clone the [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) repo that has the full example. You can also `git clone` the repo in Azure Cloud Shell to walk through the steps shown in this quickstart.
+
+2. Create a *requirements.txt* file that lists the [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) and [python-dotenv](https://pypi.org/project/python-dotenv/) packages.
+
+ ```text
+ # requirements.txt
+ pymongo
+ python-dotenv
+ ```
+
+3. Create a virtual environment and install the packages.
+
+ #### [Windows](#tab/venv-windows)
+
+ ```bash
+ # py -3 uses the global python interpreter. You can also use python3 -m venv .venv.
+ py -3 -m venv .venv
+ source .venv/Scripts/activate
+ pip install -r requirements.txt
+ ```
+
+ #### [Linux / macOS](#tab/venv-linux+macos)
+
+ ```bash
+ python3 -m venv .venv
+ source .venv/bin/activate
+ pip install -r requirements.txt
+ ```
+
+
+
+### Configure environment variables
++
+## Object model
+
+Let's look at the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
+
+* [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html) - The first step when working with PyMongo is to create a MongoClient to connect to Azure Cosmos DB's API for MongoDB. The client object is used to configure and execute requests against the service.
+
+* [Database](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html) - Azure Cosmos DB's API for MongoDB can support one or more independent databases.
+
+* [Collection](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html) - A database can contain one or more collections. A collection is a group of documents stored in MongoDB, and can be thought of as roughly the equivalent of a table in a relational database.
+
+* [Document](https://pymongo.readthedocs.io/en/stable/tutorial.html#documents) - A document is a set of key-value pairs. Documents have dynamic schema. Dynamic schema means that documents in the same collection don't need to have the same set of fields or structure. And common fields in a collection's documents may hold different types of data.
To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource model](../account-databases-containers-items.md) article.
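+
+As a minimal sketch of this hierarchy (assuming a `CONNECTION_STRING` environment variable holds the connection string from the earlier step):
+
+```python
+import os
+import pymongo
+
+# account -> database -> collection -> document
+client = pymongo.MongoClient(os.environ["CONNECTION_STRING"])
+db = client["adventureworks"]
+collection = db["products"]
+document = {"name": "Surfboard", "category": "gear-surf-surfboards"}
+```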
-## Get the code
+## Code examples
-Download the sample Python code [from the repository](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) or use git clone:
+* [Authenticate the client](#authenticate-the-client)
+* [Get database](#get-database)
+* [Get collection](#get-collection)
+* [Create an index](#create-an-index)
+* [Create a document](#create-a-document)
+* [Get a document](#get-a-document)
+* [Query documents](#query-documents)
-```shell
-git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started
-```
+The sample code described in this article creates a database named `adventureworks` with a collection named `products`. The `products` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier. The complete sample code is at https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started/tree/main/001-quickstart/.
-## Retrieve your connection string
+In the steps below, the database doesn't use sharding, and the sample shows a synchronous application using the [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) driver. For asynchronous applications, use the [Motor](https://www.mongodb.com/docs/drivers/motor/) driver.
-When running the sample code, you have to enter your API for MongoDB account's connection string. Use the following steps to find it:
+### Authenticate the client
-1. In the [Azure portal](https://portal.azure.com/), select your Azure Cosmos DB account.
+1. In the project directory, create a *run.py* file. In your editor, add import statements to reference the packages you'll use, including the PyMongo and python-dotenv packages.
-2. In the left navigation select **Connection String**, and then select **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the primary connection string.
+ :::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/001-quickstart/run.py" id="package_dependencies":::
-> [!WARNING]
-> Never check passwords or other sensitive data into source code.
+2. Get the connection information from the environment variable defined in an *.env* file.
+ :::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/001-quickstart/run.py" id="client_credentials":::
-## Run the code
+3. Define constants you'll use in the code.
-```shell
-python run.py
-```
+ :::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/001-quickstart/run.py" id="constant_values":::
-## Understand how it works
+### Connect to Azure Cosmos DB's API for MongoDB
-### Connecting
+Use the [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient) object to connect to your Azure Cosmos DB for MongoDB resource. Creating the client connects to your account and gives you references to its databases.
-The following code prompts the user for the connection string. It's never a good idea to have your connection string in code since it enables anyone with it to read or write to your database.
-```python
-CONNECTION_STRING = getpass.getpass(prompt='Enter your primary connection string: ') # Prompts user for connection string
-```
+### Get database
-The following code creates a client connection to your API for MongoDB and tests to make sure it's valid.
+Check if the database exists with the [list_database_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.list_database_names) method. If the database doesn't exist, use the [create database extension command](/azure/cosmos-db/mongodb/custom-commands#create-database) to create it with a specified provisioned throughput.
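+
+A minimal sketch of this step, continuing the object model sketch above; the throughput value is illustrative:
+
+```python
+DB_NAME = "adventureworks"
+if DB_NAME not in client.list_database_names():
+    # Create the database with 400 RU/s of throughput shared across its collections.
+    client[DB_NAME].command({"customAction": "CreateDatabase", "offerThroughput": 400})
+db = client[DB_NAME]
+```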
-```python
-client = pymongo.MongoClient(CONNECTION_STRING)
-try:
- client.server_info() # validate connection string
-except pymongo.errors.ServerSelectionTimeoutError:
- raise TimeoutError("Invalid API for MongoDB connection string or timed out when attempting to connect")
-```
-### Resource creation
-The following code creates the sample database and collection that will be used to perform CRUD operations. When creating resources programmatically, it's recommended to use the API for MongoDB extension commands (as shown here) because these commands have the ability to set the resource throughput (RU/s) and configure sharding.
+### Get collection
-Implicitly creating resources will work but will default to recommended values for throughput and will not be sharded.
+Check if the collection exists with the [list_collection_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database.list_collection_names) method. If the collection doesn't exist, use the [create collection extension command](/azure/cosmos-db/mongodb/custom-commands#create-collection) to create it.
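+
+A minimal sketch, continuing from the database step above:
+
+```python
+COLLECTION_NAME = "products"
+if COLLECTION_NAME not in db.list_collection_names():
+    # Creates an unsharded collection that uses the database's shared throughput.
+    db.command({"customAction": "CreateCollection", "collection": COLLECTION_NAME})
+collection = db[COLLECTION_NAME]
+```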
-```python
-# Database with 400 RU throughput that can be shared across the DB's collections
-db.command({'customAction': "CreateDatabase", 'offerThroughput': 400})
-```
-```python
- # Creates a unsharded collection that uses the DB s shared throughput
-db.command({'customAction': "CreateCollection", 'collection': UNSHARDED_COLLECTION_NAME})
-```
+### Create an index
+
+Create an index using the [update collection extension command](/azure/cosmos-db/mongodb/custom-commands#update-collection). You can also set the index in the create collection extension command. In this example, set the index on the `name` property so that you can later sort on product name with the cursor class [sort](https://pymongo.readthedocs.io/en/stable/api/pymongo/cursor.html#pymongo.cursor.Cursor.sort) method.
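+
+A hedged sketch of the index step; the `indexes` payload shape is an assumption based on the update collection extension command, so verify it against the extension commands article:
+
+```python
+db.command({
+    "customAction": "UpdateCollection",
+    "collection": COLLECTION_NAME,
+    "indexes": [{"key": {"name": 1}, "name": "name_1"}],
+})
+```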
++
+### Create a document
+
+Create a document with the *product* properties for the `adventureworks` database:
+
+* A *category* property. This property can be used as the logical partition key.
+* A *name* property.
+* An inventory *quantity* property.
+* A *sale* property, indicating whether the product is on sale.
++
+Create a document in the collection by calling the collection-level operation [update_one](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html#pymongo.collection.Collection.update_one). In this example, you'll *upsert* instead of *create* a new document. Upserting isn't strictly necessary here because the product *name* is random. However, it's a good practice to upsert in case you run the code more than once and the product name is the same.
+
+The result of the `update_one` operation contains the `_id` field value that you can use in subsequent operations. The *_id* property was created automatically.
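+
+A sketch of the upsert, continuing from the collection step above; the product fields are illustrative:
+
+```python
+from random import randint
+
+product = {
+    "category": "gear-surf-surfboards",
+    "name": f"Yamba Surfboard-{randint(50, 5000)}",
+    "quantity": 1,
+    "sale": False,
+}
+result = collection.update_one(
+    {"name": product["name"]}, {"$set": product}, upsert=True
+)
+doc_id = result.upserted_id  # _id of the upserted document
+```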
+
+### Get a document
+
+Use the [find_one](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html#pymongo.collection.Collection.find_one) method to get a document.
++
+In Azure Cosmos DB, you can perform a less-expensive [point read](https://devblogs.microsoft.com/cosmosdb/point-reads-versus-queries/) operation by using both the unique identifier (`_id`) and a partition key.
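+
+A sketch of such a point read, reusing `doc_id` from the upsert sketch above:
+
+```python
+found = collection.find_one({"_id": doc_id, "category": "gear-surf-surfboards"})
+print(found)
+```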
+
+### Query documents
+
+After you insert a doc, you can run a query to get all docs that match a specific filter. This example finds all docs that match a specific category: `gear-surf-surfboards`. Once the query is defined, call [`Collection.find`](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html#pymongo.collection.Collection.find) to get a [`Cursor`](https://pymongo.readthedocs.io/en/stable/api/pymongo/cursor.html#pymongo.cursor.Cursor) result, and then use [sort](https://pymongo.readthedocs.io/en/stable/api/pymongo/cursor.html#pymongo.cursor.Cursor.sort).
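+
+A sketch of the query, sorted on the indexed `name` field:
+
+```python
+import pymongo
+
+cursor = collection.find({"category": "gear-surf-surfboards"}).sort(
+    "name", pymongo.ASCENDING
+)
+for doc in cursor:
+    print(doc["name"])
+```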
-### Writing a document
-The following inserts a sample document we will continue to use throughout the sample. We get its unique _id field value so that we can query it in subsequent operations.
-```python
-"""Insert a sample document and return the contents of its _id field"""
-document_id = collection.insert_one({SAMPLE_FIELD_NAME: randint(50, 500)}).inserted_id
+Troubleshooting:
+
+* If you get an error such as `The index path corresponding to the specified order-by item is excluded.`, make sure you [created the index](#create-an-index).
+
+## Run the code
+
+This app creates an API for MongoDB database and collection, inserts a document, and then reads the exact same document back. Finally, the example issues a query that returns documents that match a specified product *category*. With each step, the example outputs information to the console about what it has done.
+
+To run the app, use a terminal to navigate to the application directory and run the application.
+
+```console
+python run.py
```
-### Reading/Updating a document
-The following queries, updates, and again queries for the document that we previously inserted.
+The output of the app should be similar to this example:
++
+## Clean up resources
+
+When you no longer need the Azure Cosmos DB for MongoDB account, you can delete the corresponding resource group.
-```python
-print("Found a document with _id {}: {}".format(document_id, collection.find_one({"_id": document_id})))
+### [Azure CLI](#tab/azure-cli)
-collection.update_one({"_id": document_id}, {"$set":{SAMPLE_FIELD_NAME: "Updated!"}})
-print("Updated document with _id {}: {}".format(document_id, collection.find_one({"_id": document_id})))
+Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
+
+```azurecli-interactive
+az group delete --name $resourceGroupName
```
-### Deleting a document
-Lastly, we delete the document we created from the collection.
-```python
-"""Delete the document containing document_id from the collection"""
-collection.delete_one({"_id": document_id})
+### [PowerShell](#tab/azure-powershell)
+
+Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
+
+```azurepowershell-interactive
+$parameters = @{
+ Name = $RESOURCE_GROUP_NAME
+}
+Remove-AzResourceGroup @parameters
```
+### [Portal](#tab/azure-portal)
+
+1. Navigate to the resource group you previously created in the [Azure portal](https://portal.azure.com).
+
+1. Select **Delete resource group**.
+
+1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
+++ ## Next steps
-In this quickstart, you've learned how to create an API for MongoDB account, create a database and a collection with code, and perform CRUD operations.
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+In this quickstart, you learned how to create an Azure Cosmos DB for MongoDB account, create a database, and create a collection using the PyMongo driver. You can now dive deeper into Azure Cosmos DB for MongoDB to import more data, perform complex queries, and manage your Azure Cosmos DB MongoDB resources.
> [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
+> [Options to migrate your on-premises or cloud data to Azure Cosmos DB](/azure/cosmos-db/migration-choices)
cosmos-db Distribute Throughput Across Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/distribute-throughput-across-partitions.md
If you aren't seeing 429 responses and your end to end latency is acceptable, th
To get started using distributed throughput across partitions, enroll in the preview by submitting a request for the **Azure Cosmos DB Throughput Redistribution Across Partitions** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page. + Before submitting your request: - Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to. - Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
The Azure Cosmos DB team will review your request and contact you via email to c
To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Throughput redistribution across partition**. Run the **Check eligibility for throughput redistribution across partitions preview** diagnostic. ## Example scenario
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-dotnet-v3.md
Previously updated : 06/01/2022 Last updated : 11/09/2022 ms.devlang: csharp
The following classes have been replaced on the 3.0 SDK:
* `Microsoft.Azure.Documents.UriFactory`
-* `Microsoft.Azure.Documents.Document`
-
-* `Microsoft.Azure.Documents.Resource`
-
-The Microsoft.Azure.Documents.UriFactory class has been replaced by the fluent design.
+The Microsoft.Azure.Documents.UriFactory class has been replaced by the fluent design.
# [.NET SDK v3](#tab/dotnet-v3)
await client.CreateDocumentAsync(
+* `Microsoft.Azure.Documents.Document`
+ Because the .NET v3 SDK allows users to configure a custom serialization engine, there's no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
+* `Microsoft.Azure.Documents.Resource`
+
+There's no direct replacement for `Resource`. In cases where it was used for documents, follow the guidance for `Document`.
+
+* `Microsoft.Azure.Documents.AccessCondition`
+
+`IfNoneMatch` or `IfMatch` are now available on the `Microsoft.Azure.Cosmos.ItemRequestOptions` directly.
+ ### Changes to item ID generation Item ID is no longer auto populated in the .NET v3 SDK. Therefore, the Item ID must specifically include a generated ID. View the following example:
public Guid Id { get; set; }
### Changed default behavior for connection mode
-The SDK v3 now defaults to Direct + TCP connection modes compared to the previous v2 SDK, which defaulted to Gateway + HTTPS connections modes. This change provides enhanced performance and scalability.
+The SDK v3 now defaults to [Direct + TCP connection modes](sdk-connection-modes.md) compared to the previous v2 SDK, which defaulted to Gateway + HTTPS connections modes. This change provides enhanced performance and scalability.
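
If your environment requires the old behavior (for example, HTTPS-only traffic through a restrictive firewall), you can opt back into Gateway mode explicitly. A minimal sketch with placeholder endpoint and key:

```csharp
using Microsoft.Azure.Cosmos;

// Direct + TCP is the v3 default; set Gateway explicitly only if required.
CosmosClient client = new CosmosClient(
    "<account-endpoint>",
    "<account-key>",
    new CosmosClientOptions { ConnectionMode = ConnectionMode.Gateway });
```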
### Changes to FeedOptions (QueryRequestOptions in v3.0 SDK) The `FeedOptions` class in SDK v2 has now been renamed to `QueryRequestOptions` in the SDK v3 and within the class, several properties have had changes in name and/or default value or been removed completely.
-`FeedOptions.MaxDegreeOfParallelism` has been renamed to `QueryRequestOptions.MaxConcurrency` and default value and associated behavior remains the same, operations run client side during parallel query execution will be executed serially with no-parallelism.
-
-`FeedOptions.EnableCrossPartitionQuery` has been removed and the default behavior in SDK 3.0 is that cross-partition queries will be executed without the need to enable the property specifically.
-
-`FeedOptions.PopulateQueryMetrics` is enabled by default with the results being present in the `FeedResponse.Diagnostics` property of the response.
-
-`FeedOptions.RequestContinuation` has now been promoted to the query methods themselves.
-
-The following properties have been removed:
-
-* `FeedOptions.DisableRUPerMinuteUsage`
-
-* `FeedOptions.EnableCrossPartitionQuery`
-
-* `FeedOptions.JsonSerializerSettings`
-
-* `FeedOptions.PartitionKeyRangeId`
-
-* `FeedOptions.PopulateQueryMetrics`
+| .NET v2 SDK | .NET v3 SDK |
+|-|-|
+|`FeedOptions.MaxDegreeOfParallelism`|`QueryRequestOptions.MaxConcurrency` - The default value and associated behavior remain the same: operations run client side during parallel query execution are executed serially with no parallelism.|
+|`FeedOptions.PartitionKey`|`QueryRequestOptions.PartitionKey` - Behavior maintained. |
+|`FeedOptions.EnableCrossPartitionQuery`|Removed. Default behavior in SDK 3.0 is that cross-partition queries will be executed without the need to enable the property specifically. |
+|`FeedOptions.PopulateQueryMetrics`|Removed. It is now enabled by default and part of the [diagnostics](troubleshoot-dotnet-sdk.md#capture-diagnostics).|
+|`FeedOptions.RequestContinuation`|Removed. It is now promoted to the query methods themselves. |
+|`FeedOptions.JsonSerializerSettings`|Removed. Serialization can be customized through a [custom serializer](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.serializer) or [serializer options](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.serializeroptions).|
+|`FeedOptions.PartitionKeyRangeId`|Removed. Same outcome can be obtained from using [FeedRange](change-feed-pull-model.md#using-feedrange-for-parallelization) as input to the query method.|
+|`FeedOptions.DisableRUPerMinuteUsage`|Removed.|
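
To illustrate several of the renamed options together, here's a hedged sketch of a v3 query (it assumes an existing `Container` named `container`; names and query text are placeholders):

```csharp
using System;
using Microsoft.Azure.Cosmos;

QueryRequestOptions queryOptions = new QueryRequestOptions
{
    MaxConcurrency = 4,                         // formerly FeedOptions.MaxDegreeOfParallelism
    PartitionKey = new PartitionKey("pk-value") // formerly FeedOptions.PartitionKey
};

FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
    "SELECT * FROM c",
    continuationToken: null,  // continuation is now passed to the query method itself
    requestOptions: queryOptions);

while (iterator.HasMoreResults)
{
    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
    Console.WriteLine(page.Diagnostics); // query metrics are now part of the diagnostics
}
```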
### Constructing a client
catch (CosmosException cosmosException) {
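
For context around that fragment, a fuller hedged sketch of v3 client construction and error handling (endpoint, key, and resource names are placeholders; `StatusCode` and `RequestCharge` are standard `CosmosException` properties):

```csharp
using System;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");

try
{
    Container container = client.GetContainer("mydb", "mycontainer");
    await container.ReadItemAsync<dynamic>("item-id", new PartitionKey("pk-value"));
}
catch (CosmosException cosmosException) // v3 surfaces service errors as CosmosException
{
    Console.WriteLine($"Status: {cosmosException.StatusCode}, request charge: {cosmosException.RequestCharge}");
}
```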
### ConnectionPolicy
-Some settings in `ConnectionPolicy` have been renamed or replaced:
+Some settings in `ConnectionPolicy` have been renamed or replaced by `CosmosClientOptions`:
| .NET v2 SDK | .NET v3 SDK | |-|-|
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-request-rate-too-large.md
If there's high percent of rate limited requests and no hot partition:
If there's high percent of rate limited requests and there's an underlying hot partition: - Long-term, for best cost and performance, consider **changing the partition key**. The partition key can't be updated in place, so this requires migrating the data to a new container with a different partition key. Azure Cosmos DB supports a [live data migration tool](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/) for this purpose.-- Short-term, you can temporarily increase the RU/s to allow more throughput to the hot partition. This isn't recommended as a long-term strategy, as it leads to overprovisioning RU/s and higher cost.
+- Short-term, you can temporarily increase the overall RU/s of the resource to allow more throughput to the hot partition. This isn't recommended as a long-term strategy, as it leads to overprovisioning RU/s and higher cost.
+- Short-term, you can use the [**throughput redistribution across partitions feature** (preview)](distribute-throughput-across-partitions.md) to assign more RU/s to the physical partition that is hot. This is recommended only when the hot physical partition is predictable and consistent.
> [!TIP] > When you increase the throughput, the scale-up operation will either complete instantaneously or require up to 5-6 hours to complete, depending on the number of RU/s you want to scale up to. If you want to know the highest number of RU/s you can set without triggering the asynchronous scale-up operation (which requires Azure Cosmos DB to provision more physical partitions), multiply the number of distinct PartitionKeyRangeIds by 10,000 RU/s. For example, if you have 30,000 RU/s provisioned and 5 physical partitions (6000 RU/s allocated per physical partition), you can increase to 50,000 RU/s (10,000 RU/s per physical partition) in an instantaneous scale-up operation. Increasing to >50,000 RU/s would require an asynchronous scale-up operation. Learn more about [best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md).
cosmos-db Scaling Provisioned Throughput Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scaling-provisioned-throughput-best-practices.md
As a result, we see in the following diagram that each physical partition gets 3
In general, if you have a starting number of physical partitions `P`, and want to set a desired RU/s `S`:
-Increase your RU/s to: `10,000 * P * 2 ^ (ROUNDUP(LOG_2 (S/(10,000 * P)))`. This gives the closest RU/s to the desired value that will ensure all partitions are split evenly.
+Increase your RU/s to: `10,000 * P * 2 ^ ROUNDUP(LOG_2(S / (10,000 * P)))`. This gives the closest RU/s to the desired value that will ensure all partitions are split evenly.
> [!NOTE] > When you increase the RU/s of a database or container, this can impact the minimum RU/s you can lower to in the future. Typically, the minimum RU/s is equal to MAX(400 RU/s, Current storage in GB * 10 RU/s, Highest RU/s ever provisioned / 100). For example, if the highest RU/s you've ever scaled to is 100,000 RU/s, the lowest RU/s you can set in the future is 1000 RU/s. Learn more about [minimum RU/s](concepts-limits.md#minimum-throughput-limits). #### Step 2: Lower your RU/s to the desired RU/s
-For example, suppose we have five physical partitions, 50,000 RU/s and want to scale to 150,000 RU/s. We should first set: `10,000 * 5 * 2 ^ (ROUND(LOG_2(150,000/(10,000 * 5)))` = 200,000 RU/s, and then lower to 150,000 RU/s.
+For example, suppose we have five physical partitions and 50,000 RU/s, and want to scale to 150,000 RU/s. We should first set: `10,000 * 5 * 2 ^ ROUNDUP(LOG_2(150,000 / (10,000 * 5)))` = 200,000 RU/s, and then lower to 150,000 RU/s.
When we scaled up to 200,000 RU/s, the lowest manual RU/s we can now set in the future is 2000 RU/s. The [lowest autoscale max RU/s](autoscale-faq.yml#lowering-the-max-ru-s) we can set is 20,000 RU/s (scales between 2000 - 20,000 RU/s). Since our target RU/s is 150,000 RU/s, we are not affected by the minimum RU/s.
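
As a quick sanity check of the formula, here's a small sketch (not an official utility), where `ROUNDUP(LOG_2(x))` is computed as the ceiling of the base-2 logarithm:

```csharp
using System;

// Closest RU/s at or above the desired value that keeps all partition splits even.
static double RUsForEvenSplit(int physicalPartitions, double desiredRUs)
{
    double currentMax = 10_000d * physicalPartitions;                    // 10,000 * P
    double exponent = Math.Ceiling(Math.Log2(desiredRUs / currentMax)); // ROUNDUP(LOG_2(S / (10,000 * P)))
    return currentMax * Math.Pow(2, exponent);                          // 10,000 * P * 2 ^ exponent
}

Console.WriteLine(RUsForEvenSplit(5, 150_000)); // 200000 -> then lower to 150,000 RU/s
```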
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
The process of key rotation and regeneration is simple. First, make sure that **
1. Select **Keys** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
# [If your application is currently using the secondary key](#tab/using-secondary-key)
The process of key rotation and regeneration is simple. First, make sure that **
1. Select **Keys** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
- :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key." border="true":::
1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Azure Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that **
1. Go back to the Azure portal and trigger the regeneration of the secondary key.
- :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key." border="true":::
Azure Cosmos DB RBAC is the ideal access control method in situations where:
See [Configure role-based access control for your Azure Cosmos DB account](how-to-setup-rbac.md) to learn more about Azure Cosmos DB RBAC.
-For information and sample code to configure RBAC for the Azure Cosmso DB for MongoDB, see [Configure role-based access control for your Azure Cosmso DB for MongoDB](mongodb/how-to-setup-rbac.md).
+For information and sample code to configure RBAC for the Azure Cosmos DB for MongoDB, see [Configure role-based access control for your Azure Cosmos DB for MongoDB](mongodb/how-to-setup-rbac.md).
## <a id="resource-tokens"></a> Resource tokens
Resource tokens provide access to the application resources within a database. R
- Are created when a [user](#users) is granted [permissions](#permissions) to a specific resource. - Are recreated when a permission resource is acted upon on by POST, GET, or PUT call. - Use a hash resource token specifically constructed for the user, resource, and permission.-- Are time bound with a customizable validity period. The default valid time span is one hour. Token lifetime, however, may be explicitly specified, up to a maximum of five hours.
+- Are time bound with a customizable validity period. The default valid time span is one hour. Token lifetime, however, may be explicitly specified, up to a maximum of 24 hours.
- Provide a safe alternative to giving out the primary key. - Enable clients to read, write, and delete resources in the Azure Cosmos DB account according to the permissions they've been granted.
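
As an illustrative sketch of the flow with the .NET v3 SDK (database, container, user, and permission names are placeholders; it assumes an existing `CosmosClient` named `client` authenticated with the primary key):

```csharp
using Microsoft.Azure.Cosmos;

Database database = client.GetDatabase("mydb");
Container container = database.GetContainer("mycontainer");

// Create a user and grant it read access to one container.
User user = await database.CreateUserAsync("app-user");
PermissionProperties readPermission = new PermissionProperties(
    "read-container", PermissionMode.Read, container);

// Request a token valid for two hours (default is one hour, maximum is 24 hours).
PermissionResponse response = await user.CreatePermissionAsync(readPermission, tokenExpiryInSeconds: 7200);
string resourceToken = response.Resource.Token; // hand this to the client instead of the primary key
```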
As a database service, Azure Cosmos DB enables you to search, select, modify and
- To learn more about Azure Cosmos DB database security, see [Azure Cosmos DB Database security](database-security.md). - To learn how to construct Azure Cosmos DB authorization tokens, see [Access Control on Azure Cosmos DB Resources](/rest/api/cosmos-db/access-control-on-cosmosdb-resources). - For user management samples with users and permissions, see [.NET SDK v3 user management samples](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/UserManagement/UserManagementProgram.cs)-- For information and sample code to configure RBAC for the Azure Cosmso DB for MongoDB, see [Configure role-based access control for your Azure Cosmso DB for MongoDB](mongodb/how-to-setup-rbac.md)
+- For information and sample code to configure RBAC for the Azure Cosmos DB for MongoDB, see [Configure role-based access control for your Azure Cosmos DB for MongoDB](mongodb/how-to-setup-rbac.md)
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/serverless.md
Any container that is created in a serverless account is a serverless container.
- You can't create a shared throughput database in a serverless account and doing so returns an error. - Serverless containers can store a maximum of 50 GB of data and indexes.
-> [!NOTE]
-> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
+### Serverless 1 TB container preview
+Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md). After the request is approved, all existing and future serverless accounts in the subscription will be able to use containers with size up to 1 TB.
## Monitoring your consumption
If you have used Azure Cosmos DB in provisioned throughput mode before, you'll f
When browsing the **Metrics** pane of your account, you'll find a chart named **Request Units consumed** under the **Overview** tab. This chart shows how many Request Units your account has consumed: You can find the same chart when using Azure Monitor, as described [here](monitor-request-unit-usage.md). Azure Monitor enables the ability to configure [alerts](../azure-monitor/alerts/alerts-metric-overview.md), which can be used to notify you when your Request Unit consumption has passed a certain threshold.
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
Title: Try Azure Cosmos DB free
-description: Try Azure Cosmos DB free of charge. No sign-up or credit card required. It's easy to test your apps, deploy, and run small workloads free for 30 days. Upgrade your account at any time during your trial.
+ Title: |
+ Try Azure Cosmos DB free
+description: |
+ Try Azure Cosmos DB free. No credit card required. Test your apps, deploy, and run small workloads free for 30 days. Upgrade your account at any time.
-+ Previously updated : 11/02/2022 Last updated : 11/07/2022 # Try Azure Cosmos DB free [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table, PostgreSQL](includes/appliesto-nosql-mongodb-cassandra-gremlin-table-postgresql.md)]
-[Try Azure Cosmos DB](https://aka.ms/trycosmosdb) makes it easy to try out Azure Cosmos DB for free before you commit. There's no credit card required to get started. Your account is free for 30 days. After expiration, a new sandbox account can be created. You can extend beyond 30 days for 24 hours. You can upgrade your active Try Azure Cosmos DB account at any time during the 30 day trial period. If you're using the API for NoSQL, migrate your Try Azure Cosmos DB data to your upgraded account.
+[Try Azure Cosmos DB](https://aka.ms/trycosmosdb) makes it easy to try out Azure Cosmos DB for free before you commit. There's no credit card required to get started. Your account is free for 30 days. After expiration, a new sandbox account can be created. You can extend beyond 30 days for 24 hours. You can upgrade your active Try Azure Cosmos DB account at any time during the 30 day trial period.
+
+If you're using the API for NoSQL or PostgreSQL, you can also migrate your Try Azure Cosmos DB data to your upgraded account before the trial ends.
This article walks you through how to create your account, limits, and upgrading your account. This article also walks through how to migrate your data from your Try Azure Cosmos DB sandbox to your own account using the API for NoSQL.
-## Try Azure Cosmos DB limits
+## Limits to free account
+
+### [NoSQL / Cassandra / Gremlin / Table](#tab/nosql+cassandra+gremlin+table) The following table lists the limits for the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) for Free trial. | Resource | Limit | | | |
The following table lists the limits for the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) for Free trial. | Resource | Limit | | | |
-| Duration of the trial | 30 days (a new trial can be requested after expiration) After expiration, the information stored is deleted. Prior to expiration you can upgrade your account and migrate the information stored. |
-| Maximum containers per subscription (API for NoSQL, Gremlin, Table) | 1 |
-| Maximum containers per subscription (API for MongoDB) | 3 |
+| Duration of the trial | 30 days¹² |
+| Maximum containers per subscription | 1 |
| Maximum throughput per container | 5,000 | | Maximum throughput per shared-throughput database | 20,000 | | Maximum total storage per account | 10 GB |
-Try Azure Cosmos DB supports global distribution in only the Central US, North Europe, and Southeast Asia regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+¹ A new trial can be requested after expiration.
+² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored.
+
+> [!NOTE]
+> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+
+### [MongoDB](#tab/mongodb)
+
+The following table lists the limits for the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) for Free trial.
+
+| Resource | Limit |
+| | |
+| Duration of the trial | 30 days¹² |
+| Maximum containers per subscription | 3 |
+| Maximum throughput per container | 5,000 |
+| Maximum throughput per shared-throughput database | 20,000 |
+| Maximum total storage per account | 10 GB |
+
+¹ A new trial can be requested after expiration.
+² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored.
+
+> [!NOTE]
+> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+
+### [PostgreSQL](#tab/postgresql)
+
+The following table lists the limits for the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) for Free trial.
+
+| Resource | Limit |
+| | |
+| Duration of the trial | 30 days¹² |
+
+¹ A new trial can be requested after expiration.
+² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored.
+
+> [!NOTE]
+> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
++ ## Create your Try Azure Cosmos DB account
From the [Try Azure Cosmos DB home page](https://aka.ms/trycosmosdb), select an
Launch the Quickstart in Data Explorer in Azure portal to start using Azure Cosmos DB or get started with our documentation.
-* [API for NoSQL Quickstart](nosql/quickstart-portal.md#create-container-database)
-* [API for PostgreSQL Quickstart](postgresql/quickstart-create-portal.md)
-* [API for MongoDB Quickstart](mongodb/quickstart-python.md#learn-the-object-model)
+* [API for NoSQL](nosql/quickstart-portal.md#create-container-database)
+* [API for PostgreSQL](postgresql/quickstart-create-portal.md)
+* [API for MongoDB](mongodb/quickstart-python.md#object-model)
* [API for Apache Cassandra](cassandr) * [API for Apache Gremlin](gremlin/quickstart-console.md#add-a-graph) * [API for Table](table/quickstart-dotnet.md)
-You can also get started with one of the learning resources in Data Explorer.
+You can also get started with one of the learning resources in the Data Explorer.
:::image type="content" source="media/try-free/data-explorer.png" lightbox="media/try-free/data-explorer.png" alt-text="Screenshot of the Azure Cosmos DB Data Explorer landing page.":::
You can also get started with one of the learning resources in Data Explorer.
Your account is free for 30 days. After expiration, a new sandbox account can be created. You can upgrade your active Try Azure Cosmos DB account at any time during the 30 day trial period. Here are the steps to start an upgrade.
-1. Select the option to upgrade your current account in the Dashboard page or from the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) page.
+### Start upgrade
- :::image type="content" source="media/try-free/upgrade-account.png" lightbox="media/try-free/upgrade-account.png" alt-text="Confirmation page for the account upgrade experience.":::
+1. From either the Azure portal or the Try Azure Cosmos DB free page, select the option to **Upgrade** your account.
-1. Select **Sign up for Azure Account** & create an Azure Cosmos DB account.
+ :::image type="content" source="media/try-free/upgrade-account.png" lightbox="media/try-free/upgrade-account.png" alt-text="Screenshot of the confirmation page for the account upgrade experience.":::
-You can migrate your database from Try Azure Cosmos DB to your new Azure account if you're utilizing the API for NoSQL after you've signed up for an Azure account. Here are the steps to migrate.
+1. Choose to either **Sign up for an Azure account** or **Sign in** and create a new Azure Cosmos DB account following the instructions in the next section.
-### Create an Azure Cosmos DB account
+### Create a new account
-
-Navigate back to the **Upgrade** page and select **Next** to move on to the third step and move your data.
+#### [NoSQL / MongoDB / Cassandra / Gremlin / Table](#tab/nosql+mongodb+cassandra+gremlin+table)
> [!NOTE]
-> You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you do not see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
+> While this example uses API for NoSQL, the steps are similar for the APIs for MongoDB, Cassandra, Gremlin, or Table.
-## Migrate your Try Azure Cosmos DB data
+#### [PostgreSQL](#tab/postgresql)
-If you're using the API for NoSQL, you can migrate your Try Azure Cosmos DB data to your upgraded account. Here's how to migrate your Try Azure Cosmos DB database to your new Azure Cosmos DB API for NoSQL account.
-### Prerequisites
+
-* Must be using the Azure Cosmos DB API for NoSQL.
-* Must have an active Try Azure Cosmos DB account and Azure account.
-* Must have an Azure Cosmos DB account using the API for NoSQL in your Azure subscription.
+### Move data to your new account
-### Migrate your data
+1. Navigate back to the **Upgrade** page from the [Start upgrade](#start-upgrade) section of this guide. Select **Next** to move on to the third step and move your data.
-1. Locate your **Primary Connection string** for the Azure Cosmos DB account you created for your data.
+ :::image type="content" source="media/try-free/account-creation-options.png" lightbox="media/try-free/account-creation-options.png" alt-text="Screenshot of the sign-in/sign-up experience to upgrade your current account.":::
- 1. Go to your Azure Cosmos DB Account in the Azure portal.
+## Migrate your data
- 1. Find the connection string of your new Azure Cosmos DB account within the **Keys** page of your new account.
+### [NoSQL / MongoDB / Cassandra / Gremlin / Table](#tab/nosql+mongodb+cassandra+gremlin+table)
- :::image type="content" source="media/try-free/migrate-data.png" lightbox="media/try-free/migrate-data.png" alt-text="Screenshot of the Keys page for an Azure Cosmos DB account.":::
+> [!NOTE]
+> While this example uses API for NoSQL, the steps are similar for the APIs for MongoDB, Cassandra, Gremlin, or Table.
+
+1. Locate your **Primary Connection string** for the Azure Cosmos DB account you created for your data. This information can be found within the **Keys** page of your new account.
+
+ :::image type="content" source="media/try-free/account-keys.png" lightbox="media/try-free/account-keys.png" alt-text="Screenshot of the Keys page for an Azure Cosmos DB account.":::
+
+1. Back in the **Upgrade** page from the [Start upgrade](#start-upgrade) section of this guide, insert the connection string of the new Azure Cosmos DB account in the **Connection string** field.
+
+ :::image type="content" source="media/try-free/migrate-data.png" lightbox="media/try-free/migrate-data.png" alt-text="Screenshot of the migrate data options in the portal.":::
-1. Insert the connection string of the new Azure Cosmos DB account in the **Upgrade your account** page.
+1. Select **Next** to move the data to your account. Provide your email address to be notified by email once the migration has been completed.
+
+### [PostgreSQL](#tab/postgresql)
+
+1. Locate the **PostgreSQL connection URL** of the Azure Cosmos DB account you created for your data. This information can be found within the **Connection String** page of your new account.
+
+1. Back in the **Upgrade** page from the [Start upgrade](#start-upgrade) section of this guide, insert the connection string of the new Azure Cosmos DB account in the **Connection string** field.
1. Select **Next** to move the data to your account.
-1. Provide your email address to be notified by email once the migration has been completed.
+ ## Delete your account There can only be one free Try Azure Cosmos DB account per Microsoft account. If you want to try different APIs, you'll have to delete your existing account and create a new one. Here's how to delete your account.
-1. Go to the [Try AzureAzure Cosmos DB](https://aka.ms/trycosmosdb) page.
+1. Go to the [Try Azure Cosmos DB](https://aka.ms/trycosmosdb) page.
-1. Select Delete my account.
+1. Select **Delete my account**.
- :::image type="content" source="media/try-free/upgrade-account.png" lightbox="media/try-free/upgrade-account.png" alt-text="Confirmation page for the account upgrade experience.":::
+ :::image type="content" source="media/try-free/delete-account.png" lightbox="media/try-free/delete-account.png" alt-text="Screenshot of the confirmation page for the account deletion experience.":::
## Next steps
After you create a Try Azure Cosmos DB sandbox account, you can start building a
* Get started with Azure Cosmos DB with one of our quickstarts: * [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-portal.md#create-container-database) * [Get started with Azure Cosmos DB for PostgreSQL](postgresql/quickstart-create-portal.md)
- * [Get started with Azure Cosmos DB for MongoDB](mongodb/quickstart-python.md#learn-the-object-model)
+ * [Get started with Azure Cosmos DB for MongoDB](mongodb/quickstart-python.md#object-model)
* [Get started with Azure Cosmos DB for Cassandra](cassandr) * [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-console.md#add-a-graph) * [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
tags: billing
Previously updated : 11/04/2022 Last updated : 11/08/2022
Users with a Microsoft Customer Agreement must always submit a request to Azure
> * An outstanding invoice is paid by your default payment method. In order to have it paid by check or wire transfer, you must change your default payment method to check or wire transfer after you've been approved. > * Currently, payment by check or wire transfer isn't supported for Global Azure in China. > * For Microsoft Online Services Program accounts, if you switch to pay by check or wire transfer, you can't switch back to paying by credit or debit card.
+> * Currently, only customers in the United States can get automatically approved to change their payment method to check/wire transfer. Support for other regions is being evaluated.
## Request to pay by check or wire transfer
+> [!NOTE]
+> Currently only customers in the United States can get automatically approved to change their payment method to check/wire transfer. Support for other regions is being evaluated. If you are not in the United States, you must [Submit a request to set up pay by check or wire transfer](#submit-a-request-to-set-up-pay-by-check-or-wire-transfer) to change your payment method.
+ 1. Sign in to the Azure portal. 1. Navigate to **Subscriptions** and then select the one that you want to set up check or wire transfer for. 1. In the left menu, select **Payment methods**.
Users with a Microsoft Customer Agreement must always submit a request to Azure
## Submit a request to set up pay by check or wire transfer
+Users in all regions can submit a request to pay by check or wire transfer through support. Currently, only customers in the United States can get automatically approved to change their payment method to check/wire transfer.
+ If you're not automatically approved, you can submit a request to Azure support to approve payment by check or wire transfer. If your request is approved, you can switch to pay by check or wire transfer in the Azure portal. 1. Sign in to the Azure portal to submit a support request. Search for and select **Help + support**.
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-linked-services.md
Linked services can be created in the Azure Data Factory UX via the [management
You can create linked services by using one of these tools or SDKs: [.NET API](quickstart-create-data-factory-dot-net.md), [PowerShell](quickstart-create-data-factory-powershell.md), [REST API](quickstart-create-data-factory-rest-api.md), [Azure Resource Manager Template](quickstart-create-data-factory-resource-manager-template.md), and [Azure portal](quickstart-create-data-factory-portal.md).
+When creating a linked service, the user needs appropriate authorization to the designated service. If sufficient access is not granted, the user will not be able to see the available resources and will need to use the manual entry option.
## Data store linked services
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sharepoint-online-list.md
The SharePoint List Online connector uses service principal authentication to co
1. Open the SharePoint Online site link, for example `https://[your_site_url]/_layouts/15/appinv.aspx` (replace the site URL). 2. Search for the application ID you registered, fill in the empty fields, and click "Create".
- - App Domain: `localhost.com`
- - Redirect URL: `https://www.localhost.com`
+ - App Domain: `contoso.com`
+ - Redirect URL: `https://www.contoso.com`
- Permission Request XML: ```xml
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
Here are details of the application's actions and arguments:
> [!NOTE] > Release Notes are available on the same [Microsoft integration runtime download page](https://www.microsoft.com/download/details.aspx?id=39717).
-## Service account for Self-hosted integration runtime
+## Service account for self-hosted integration runtime
-The default log on service account of Self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**.
+The default log on service account of the self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**.
Make sure the account has the **Log on as a service** permission. Otherwise, the self-hosted integration runtime can't start successfully. You can check the permission in **Local Security Policy -> Security Settings -> Local Policies -> User Rights Assignment -> Log on as a service**.
When the processor and available RAM aren't well utilized, but the execution of
> > Data movement in transit from a self-hosted IR to other data stores always happens within an encrypted channel, regardless of whether or not this certificate is set.
-### Credential Sync
+### Credential sync
If you don't store credentials or secret values in an Azure Key Vault, the credentials or secret values will be stored in the machines where your self-hosted integration runtime locates. Each node will have a copy of credential with certain version. In order to make all nodes work together, the version number should be the same for all nodes. ## Proxy server considerations
If you see error messages like the following ones, the likely reason is improper
```output Unable to connect to the remote server
- A component of Integration Runtime has become unresponsive and restarts automatically. Component name: Integration Runtime (Self-hosted).
+ A component of Integration Runtime has become unresponsive and restarts automatically. Component name: Integration Runtime (self-hosted).
``` ### Enable remote access from an intranet
There are two ways to store the credentials when using self-hosted integration r
This is the recommended way to store your credentials in Azure. The self-hosted integration runtime can directly get the credentials from Azure Key Vault, which avoids potential security issues and credential sync problems between self-hosted integration runtime nodes. 2. Store credentials locally. The credentials will be pushed to the machine of your self-hosted integration runtime and encrypted there.
-When your self-hosted integration runtime is recovered from crash, you can either recover credential from the one you backup before or edit linked service and let the credential be pushed to self-hosted integration runtime again. Otherwise, the pipeline doesn't work due to the lack of credential when running via self-hosted integration runtime.
+When your self-hosted integration runtime recovers from a crash, you can either restore the credentials from a backup you made earlier or edit the linked service and let the credentials be pushed to the self-hosted integration runtime again. Otherwise, the pipeline doesn't work due to the missing credentials when running via the self-hosted integration runtime.
> [!NOTE] > If you prefer to store the credential locally, you need to put the domain for interactive authoring in the allowlist of your firewall > and open the port. This channel is also for the self-hosted integration runtime to get the credentials.
You can install the self-hosted integration runtime by downloading a Managed Ide
- Regularly back up the credentials associated with the self-hosted integration runtime. - To automate self-hosted IR setup operations, refer to [Set up an existing self hosted IR via PowerShell](#setting-up-a-self-hosted-integration-runtime).
+## Important considerations
+
+When installing a self-hosted integration runtime, consider the following:
+
+- Keep it close to your data source but not necessarily on the same machine
+- Don't install it on the same machine as Power BI gateway
+- Windows Server only (FIPS-compliant encryption servers might cause jobs to fail)
+- Share across multiple data sources
+- Share across multiple data factories
+ ## Next steps For step-by-step instructions, see [Tutorial: Copy on-premises data to cloud](tutorial-hybrid-copy-powershell.md).
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 10/14/2022 Last updated : 11/08/2022 # Manage Azure Data Factory studio preview experience
There are two ways to enable preview experiences.
* [Data preview](#data-preview) [**Pipeline experimental view**](#pipeline-experimental-view)
- * [Adding activities](#adding-activities)
- * [Iteration & conditionals container view](#iteration-and-conditionals-container-view)
+ * [Dynamic content flyout](#dynamic-content-flyout)
[**Monitoring experimental view**](#monitoring-experimental-view)
- * [Simplified default monitoring view](#simplified-default-monitoring-view)
* [Error message relocation to Status column](#error-message-relocation-to-status-column)
+ * [Hierarchy view](#hierarchy-view)
+ * [Simplified default monitoring view](#simplified-default-monitoring-view)
### Dataflow data-first experimental view
Columns can be rearranged by dragging a column by its header. You can also sort
UI (user interface) changes have been made to activities in the pipeline editor canvas. These changes were made to simplify and streamline the pipeline creation process.
-#### Adding activities to the canvas
-
-> [!NOTE]
-> This experience is now available in the default ADF settings.
-
-You now have the option to add an activity using the Add button in the bottom right corner of an activity in the pipeline editor canvas. Clicking the button will open a drop-down list of all activities that you can add.
-
-Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas and automatically linked with the previous activity on success.
--
-#### Iteration and conditionals container view
-
-> [!NOTE]
-> This experience is now available in the default ADF settings.
-
-You can now view the activities contained iteration and conditional activities.
-
+#### Dynamic content flyout
-##### Adding Activities
+A new flyout has been added to make it easier to set dynamic content in your pipeline activities without having to use the expression builder. The dynamic content flyout is currently supported in these activities and settings:
-You have two options to add activities to your iteration and conditional activities.
+| **Activity** | **Setting name** |
+| | |
+| Azure Function | Function Name |
+| Databricks-Notebook | Notebook path |
+| Databricks-Jar | Main class name |
+| Databricks-Python | Python file |
+| Fail | Fail message |
+| Fail | Error code |
+| Web | Url |
+| Webhook | Url |
+| Wait | Wait time in seconds |
+| Filter | Items |
+| Filter | Conditions |
+| ForEach | Items |
+| If/Switch/Until | Expression |
+
+In supported activities, you will see an icon next to the setting. Clicking this icon opens the flyout where you can choose your dynamic content.
++
-1. Use the + button in your container to add an activity.
+### Monitoring experimental view
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-12.png" alt-text="Screenshot of new activity container with the add button highlighted on the left side of the center of the screen.":::
-
- Clicking this button will bring up a drop-down list of all activities that you can add.
+UI (user interface) changes have been made to the monitoring page. These changes were made to simplify and streamline your monitoring experience.
+The monitoring experience remains the same as detailed [here](monitor-visually.md), except for items detailed below.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-13.png" alt-text="Screenshot of a drop-down list in the activity container with all the activities listed.":::
-
- Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the container.
+#### Error message relocation to Status column
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-14.png" alt-text="Screenshot of the container with three activities in the center of the container.":::
+To make it easier for you to view errors when you see a **Failed** pipeline run, error messages have been relocated to the **Status** column.
-> [!NOTE]
-> If your container includes more than 5 activities, only the first 4 will be shown in the container preview.
+Find the error icon in the pipeline monitoring page and in the pipeline **Output** tab after debugging your pipeline.
-2. Use the edit button in your container to see everything within the container. You can use the canvas to edit or add to your pipeline.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-15.png" alt-text="Screenshot of the container with the edit button highlighted on the right side of a box in the center of the screen.":::
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-16.png" alt-text="Screenshot of the inside of the container with three activities linked together.":::
-
- Add additional activities by dragging new activities to the canvas or click the add button on the right-most activity to bring up a drop-down list of all activities.
+#### Hierarchy view
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-17.png" alt-text="Screenshot of the Add activity button in the bottom left corner of the right-most activity.":::
-
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-18.png" alt-text="Screenshot of the drop-down list of activities in the right-most activity.":::
-
- Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the container.
+When monitoring your pipeline run, you have the option to enable the hierarchy view, which will provide a consolidated view of the activities that ran.
+This view is available in the output of your pipeline debug run and in the detailed monitoring view found in the monitoring tab.
-##### Adjusting activity size
+##### How to enable the hierarchy view in pipeline debug output
-Your containerized activities can be viewed in two sizes. In the expanded size, you will be able to see all the activities in the container.
+In the **Output** tab in your pipeline, there is a new dropdown to select your monitoring view.
-To save space on your canvas, you can also collapse the containerized view using the **Minimize** arrows found in the top right corner of the activity.
+Select **Hierarchy** to see the new hierarchy view. If you have iteration or conditional activities, the nested activities will be grouped under the parent activity.
-This will shrink the activity size and hide the nested activities.
+Click the button next to the iteration or conditional activity to collapse the nested activities for a more consolidated view.
-If you have multiple container activities, you can save time by collapsing or expanding all activities at once by right clicking on the canvas. This will bring up the option to hide all nested activities.
+##### How to enable the hierarchy view in pipeline monitoring
+In the detailed view of your pipeline run, there is a new dropdown to select your monitoring view next to the Status filter.
-Click **Hide nested activities** to collapse all containerized activities. To expand all the activities, click **Show nested activities**, found in the same list of canvas options.
+Select **Hierarchy** to see the new hierarchy view. If you have iteration or conditional activities, the nested activities will be grouped under the parent activity.
-### Monitoring experimental view
+Click the button next to the iteration or conditional activity to collapse the nested activities for a more consolidated view.
-UI (user interfaces) changes have been made to the monitoring page. These changes were made to simplify and streamline your monitoring experience.
-The monitoring experience remains the same as detailed [here](monitor-visually.md), except for items detailed below.
#### Simplified default monitoring view
The default monitoring view has been simplified with fewer default columns. You
| Error | If the pipeline failed, the run error | | Run ID | ID of the pipeline run | - You can edit your default view by clicking **Edit Columns**. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-21.png" alt-text="Screenshot of the Edit Columns button in the center of the top row.":::
Add columns by clicking **Add column** or remove columns by clicking the trashca
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-22.png" alt-text="Screenshot of the Add column button and trashcan icon to edit column view.":::
-#### Error message relocation to Status column
-
-Error messages have now been relocated to the **Status** column. This will allow you to easily view errors when you see a **Failed** pipeline run.
-
-Find the error icon in the pipeline monitoring page and in the pipeline **Output** tab after debugging your pipeline.
--- ## Provide feedback We want to hear from you! If you see this pop-up, please let us know your thoughts by providing feedback on the updates you've tested.
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md
Last updated 09/22/2022
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This article explains and demonstrates the Azure Data Factory pricing model with detailed examples. You can also refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+This article explains and demonstrates the Azure Data Factory pricing model with detailed examples. You can also refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service. To understand how to estimate pricing for any scenario, not just the examples here, refer to the article [Plan and manage costs for Azure Data Factory](plan-manage-costs.md).
For more details about pricing in Azure Data Factory, refer to the [Data Pipeline Pricing and FAQ](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/).
databox-online Azure Stack Edge Gpu 2207 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2207-release-notes.md
Previously updated : 08/04/2022 Last updated : 11/09/2022
The following release notes identify the critical open issues and the resolved i
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2207** release, which maps to software version number **2.2.2037.5375**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
+This article applies to the **Azure Stack Edge 2207** release, which maps to software version number **2.2.2038.5916**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
## What's new
-The 2207 release has the following features and enhancements:
+The 2207 release has the following features and enhancements:
- **Kubernetes version update** - This release contains a Kubernetes version update from 1.20.9 to v1.22.6.
databox-online Azure Stack Edge Gpu 2209 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2209-release-notes.md
Previously updated : 09/21/2022 Last updated : 11/10/2022
The following release notes identify the critical open issues and the resolved i
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2209** release, which maps to software version **2.2.2088.5593**. This software can be applied to your device if you're running at least **Azure Stack Edge 2207** (2.2.2307.5375).
+This article applies to the **Azure Stack Edge 2209** release, which maps to software version **2.2.2088.5593**. This software can be applied to your device if you're running at least **Azure Stack Edge 2207** (2.2.2038.5916).
> [!IMPORTANT] > Azure Stack Edge 2209 update contains critical security fixes. As with any new release, we strongly encourage customers to apply this update at the earliest opportunity.
databox-online Azure Stack Edge Pro 2 Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-prep.md
Previously updated : 05/03/2022 Last updated : 11/04/2022 # Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro 2 so I can use it to transfer data to Azure.
Ordering through Azure Edge Hardware Center will create an Azure resource that w
[!INCLUDE [Create management resource](../../includes/azure-edge-hardware-center-create-management-resource.md)] - ## Get the activation key After the Azure Stack Edge resource is up and running, you'll need to get the activation key. This key is used to activate and connect your Azure Stack Edge Pro 2 device with the resource. You can get this key now while you are in the Azure portal.
databox-online Azure Stack Edge Pro 2 Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-technical-specifications-compliance.md
Previously updated : 11/03/2022 Last updated : 11/09/2022
The following table lists the dimensions of the shipping package in millimeters
### Enclosure weight
-# [Model 642GT](#tab/sku-a)
+# [Model 64G2T](#tab/sku-a)
| Line # | Hardware | Weight lbs | |--|||
-| 1 | Model 642GT | 21.0 |
+| 1 | Model 64G2T | 21.0 |
| | | | | 2 | Shipping weight, with 4-post mount | 35.3 |
-| 3 | Model 642GT install handling, 4-post (without bezel and with inner rails attached) | 20.4 |
+| 3 | Model 64G2T install handling, 4-post (without bezel and with inner rails attached) | 20.4 |
| | | | | 4 | Shipping weight, with 2-post mount | 32.1 |
-| 5 | Model 642GT install handling, 2-post (without bezel and with inner rails attached) | 20.4 |
+| 5 | Model 64G2T install handling, 2-post (without bezel and with inner rails attached) | 20.4 |
| | | | | 6 | Shipping weight with wall mount | 31.1 |
-| 7 | Model 642GT install handling without bezel | 19.8 |
+| 7 | Model 64G2T install handling without bezel | 19.8 |
| | | |
-| 4 | 4-post in box | 6.28 |
-| 7 | 2-post in box | 3.08 |
+| 8 | 4-post in box | 6.28 |
+| 9 | 2-post in box | 3.08 |
| 10 | Wallmount as packaged | 2.16 | # [Model 128G4T1GPU](#tab/sku-b)
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
To remediate the issues:
1. For further details, and the list of affected machines, select an alert.
- The alerts page shows the more details of the alerts and provides a **Take action** link with recommendations of how to mitigate the threat.
+ The security alerts page shows more details of the alerts and provides a **Take action** link with recommendations of how to mitigate the threat.
:::image type="content" source="media/adaptive-application/adaptive-application-alerts-start-time.png" alt-text="The start time of adaptive application controls alerts is the time that adaptive application controls created the alert."::: > [!NOTE]
- > Adaptive application controls calculates events once every twelve hours. The "activity start time" shown in the alerts page is the time that adaptive application controls created the alert, **not** the time that the suspicious process was active.
+ > Adaptive application controls calculates events once every twelve hours. The "activity start time" shown in the security alerts page is the time that adaptive application controls created the alert, **not** the time that the suspicious process was active.
## Move a machine from one group to another
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
Use sample alerts to:
To create sample alerts:
-1. As a user with the role **Subscription Contributor**, from the toolbar on the alerts page, select **Create sample alerts**.
+1. As a user with the role **Subscription Contributor**, from the toolbar on the security alerts page, select **Sample alerts**.
1. Select the subscription. 1. Select the relevant Microsoft Defender plan/s for which you want to see alerts. 1. Select **Create sample alerts**.
You can simulate alerts for both of the control plane, and workload alerts with
1. Wait 30 minutes.
-1. In the Azure portal, navigate to the Defender for Cloud's alerts page.
+1. In the Azure portal, navigate to the Defender for Cloud's security alerts page.
1. On the relevant Kubernetes cluster, locate the following alert `Microsoft Defender for Cloud test alert for K8S (not a threat)`
You can simulate alerts for both of the control plane, and workload alerts with
1. Wait 10 minutes.
-1. In the Azure portal, navigate to the Defender for Cloud's alerts page.
+1. In the Azure portal, navigate to the Defender for Cloud's security alerts page.
1. On the relevant AKS cluster, locate the following alert `Microsoft Defender for Cloud test alert (not a threat)`.
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Overview of Cloud Security Posture Management (CSPM)
description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 10/30/2022 Last updated : 11/09/2022 # Cloud Security Posture Management (CSPM)
Defender for Cloud continually assesses your resources, subscriptions, and organ
|Aspect|Details| |-|:-| |Release state:| Foundational CSPM capabilities: GA <br> Defender Cloud Security Posture Management (CSPM): Preview |
+| Prerequisites | - **Foundational CSPM capabilities** - None <br> <br> - **Defender Cloud Security Posture Management (CSPM)** - Agentless scanning requires the **Subscription Owner** to enable the plan. Anyone with a lower level of authorization can enable the Defender CSPM plan but the agentless scanner won't be enabled by default due to lack of permissions. Attack path analysis and security explorer won't be populated with vulnerabilities because the agentless scanner is disabled. |
|Clouds:| **Foundational CSPM capabilities** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br> <br> For Connected AWS accounts and GCP projects availability, see the [feature availability](#defender-cspm-plan-options) table. <br> <br> **Defender Cloud Security Posture Management (CSPM)** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br> <br> For Connected AWS accounts and GCP projects availability, see the [feature availability](#defender-cspm-plan-options) table. | ## Defender CSPM plan options
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
zone_pivot_groups: k8s-host Previously updated : 07/25/2022 Last updated : 10/30/2022 # Enable Microsoft Defender for Containers
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
The triggers for an image scan are:
- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image. -- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
+- **On import** - Azure Container Registry has import tools to bring images to your registry from an existing registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
- **Continuous scan** - This trigger has two modes:
defender-for-cloud Defender For Databases Enable Cosmos Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-enable-cosmos-protections.md
You can use sample Microsoft Defender for Azure Cosmos DB alerts to evaluate the
1. Sign in to the [Azure portal](https://portal.azure.com/) as a Subscription Contributor user.
-1. Navigate to the Alerts page.
+1. Navigate to the security alerts page.
-1. Select **Create sample alerts**.
+1. Select **Sample alerts**.
1. Select the subscription.
defender-for-cloud Defender For Databases Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-usage.md
When Microsoft Defender for Cloud is enabled on your database, it detects anomal
- In the inbox of whoever in your organization has been [designated to receive email alerts](configure-email-notifications.md). > [!TIP]
-> A live tile on [Microsoft Defender for Cloud's overview dashboard](overview-page.md) tracks the status of active threats to all your resources including databases. Select the tile to launch the Defender for Cloud alerts page and get an overview of active threats detected on your databases.
+> A live tile on [Microsoft Defender for Cloud's overview dashboard](overview-page.md) tracks the status of active threats to all your resources including databases. Select the security alerts tile to go to the Defender for Cloud security alerts page and get an overview of active threats detected on your databases.
> > For detailed steps and the recommended method to respond to security alerts, see [Respond to a security alert](tutorial-security-incident.md#respond-to-a-security-alert).
When Microsoft Defender for Cloud is enabled on your database, it detects anomal
Defender for Cloud sends email notifications when it detects anomalous database activities. The email includes details of the suspicious security event such as the nature of the anomalous activities, database name, server name, application name, and event time. The email also provides information on possible causes and recommended actions to investigate and mitigate any potential threats to the database.
-1. From the email, select the **View the full alert** link to launch the Azure portal and show the alerts page, which provides an overview of active threats detected on the database.
+1. From the email, select the **View the full alert** link to launch the Azure portal and show the security alerts page, which provides an overview of active threats detected on the database.
:::image type="content" source="media/defender-for-databases-usage/suspected-brute-force-attack-notification-email.png" alt-text="Defender for Cloud's email notification about a suspected brute force attack.":::
defender-for-cloud Defender For Key Vault Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-key-vault-introduction.md
When anomalous activities occur, Defender for Key Vault shows alerts and optiona
## Microsoft Defender for Key Vault alerts When you get an alert from Microsoft Defender for Key Vault, we recommend you investigate and respond to the alert as described in [Respond to Microsoft Defender for Key Vault](defender-for-key-vault-usage.md). Microsoft Defender for Key Vault protects applications and credentials, so even if you're familiar with the application or user that triggered the alert, it's important to check the situation surrounding every alert.
-The alerts appear in Key Vault's **Security** page, the Workload protections, and Defender for Cloud's alerts page.
+The alerts appear in Key Vault's **Security** page, the workload protections dashboard, and Defender for Cloud's security alerts page.
:::image type="content" source="./media/defender-for-key-vault-intro/key-vault-security-page.png" alt-text="Azure Key Vault's security page":::
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Alerts are generated by unusual and potentially harmful attempts to access or ex
Microsoft Defender for SQL alerts are available in: -- The Defender for Cloud's alerts page
+- The Defender for Cloud's security alerts page
- The machine's security page - The [workload protections dashboard](workload-protections-dashboard.md) - Through the direct link in the alert emails
defender-for-cloud Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/incidents.md
In Defender for Cloud, a security incident is an aggregation of all alerts for a
## Managing security incidents
-1. On Defender for Cloud's alerts page, use the **Add filter** button to filter by alert name to the alert name **Security incident detected on multiple resources**.
+1. On Defender for Cloud's security alerts page, use the **Add filter** button to filter the list to the alert name **Security incident detected on multiple resources**.
- :::image type="content" source="media/incidents/locating-incidents.png" alt-text="Locating the incidents on the alerts page in Microsoft Defender for Cloud.":::
+ :::image type="content" source="media/incidents/locating-incidents.png" alt-text="Locating the incidents on the security alerts page in Microsoft Defender for Cloud.":::
The list is now filtered to show only incidents. Notice that security incidents have a different icon than security alerts.
- :::image type="content" source="media/incidents/incidents-list.png" alt-text="List of incidents on the alerts page in Microsoft Defender for Cloud.":::
+ :::image type="content" source="media/incidents/incidents-list.png" alt-text="List of incidents on the security alerts page in Microsoft Defender for Cloud.":::
1. To view details of an incident, select one from the list. A side pane appears with more details about the incident.
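The same incidents can also be pulled programmatically rather than through the portal filter. A minimal sketch, assuming the `Az.ResourceGraph` PowerShell module and that alerts surface in the `securityresources` Resource Graph table; the property names follow the security alerts schema and should be verified against your own data:

```powershell
# List security incidents across all accessible subscriptions.
# Requires: Install-Module Az.ResourceGraph; Connect-AzAccount
$query = @"
securityresources
| where type == 'microsoft.security/locations/alerts'
| where properties.alertDisplayName == 'Security incident detected on multiple resources'
| project name, subscriptionId, properties.timeGeneratedUtc
"@
Search-AzGraph -Query $query
```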
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
The following table displays roles and allowed actions in Defender for Cloud.
| **Action** | [Security Reader](../role-based-access-control/built-in-roles.md#security-reader) / <br> [Reader](../role-based-access-control/built-in-roles.md#reader) | [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) / [Owner](../role-based-access-control/built-in-roles.md#owner) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) | [Owner](../role-based-access-control/built-in-roles.md#owner) | |:-|:-:|:-:|:-:|:-:|:-:| | | | | **(Resource group level)** | **(Subscription level)** | **(Subscription level)** |
-| Add/assign initiatives (including regulatory compliance standards) | - | - | - | ✔ | ✔ |
+| Add/assign initiatives (including regulatory compliance standards) | - | - | - | - | ✔ |
| Edit security policy | - | ✔ | - | ✔ | ✔ | | Enable / disable Microsoft Defender plans | - | ✔ | - | ✔ | ✔ | | Dismiss alerts | - | ✔ | - | ✔ | ✔ |
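Granting one of the roles in this table is a standard Azure role assignment at the relevant scope. A minimal sketch, assuming the Az PowerShell module; the sign-in name and subscription ID are placeholders:

```powershell
# Assign the built-in Security Admin role at subscription scope.
# Replace the sign-in name and subscription ID with real values.
Connect-AzAccount
New-AzRoleAssignment `
    -SignInName 'analyst@contoso.com' `
    -RoleDefinitionName 'Security Admin' `
    -Scope '/subscriptions/00000000-0000-0000-0000-000000000000'
```

For a resource-group-level assignment, append `/resourceGroups/<name>` to the scope and pick the role from the table's resource-group column.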
defender-for-cloud Recommendations Reference Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-gcp.md
+
+ Title: Reference table for all Microsoft Defender for Cloud recommendations for GCP resources
+description: This article lists Microsoft Defender for Cloud's security recommendations that help you harden and protect your GCP resources.
+ Last updated : 11/09/2022++
+# Security recommendations for GCP resources - a reference guide
+
+This article lists the recommendations you might see in Microsoft Defender for Cloud if you've connected a
+GCP project from the **Environment settings** page. The recommendations shown in your environment depend
+on the resources you're protecting and your customized configuration.
+
+To learn about how to respond to these recommendations, see
+[Remediate recommendations in Defender for Cloud](implement-security-recommendations.md).
+
+Your secure score is based on the number of security recommendations you've completed. To
+decide which recommendations to resolve first, look at the severity of each one and its potential
+impact on your secure score.
+
+## <a name='recs-gcp-compute'></a> GCP Compute recommendations
++
+## <a name='recs-gcp-container'></a> GCP Container recommendations
++
+## <a name='recs-gcp-data'></a> GCP Data recommendations
++
+## <a name='recs-gcp-identityandaccess'></a> GCP IdentityAndAccess recommendations
++
+## <a name='recs-gcp-networking'></a> GCP Networking recommendations
++
+## Next steps
+
+For related information, see the following:
+
+- [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
+- [What are security policies, initiatives, and recommendations?](security-policy-concept.md)
+- [Review your security recommendations](review-security-recommendations.md)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in November include: - [Protect containers in your entire GKE organization with Defender for Containers](#protect-containers-in-your-entire-gke-organization-with-defender-for-containers)
+- [Validate Defender for Containers protections with sample alerts](#validate-defender-for-containers-protections-with-sample-alerts)
### Protect containers in your entire GKE organization with Defender for Containers
Now you can enable Defender for Containers for your GCP organization to protect
Learn more about [connecting GCP projects and organizations](quickstart-onboard-gcp.md#connect-your-gcp-project) to Defender for Cloud.
+### Validate Defender for Containers protections with sample alerts
+
+You can now create sample alerts for the Defender for Containers plan as well. The new sample alerts are presented as being from AKS, Arc-connected clusters, EKS, and GKE resources with different severities and MITRE tactics. You can use the sample alerts to validate security alert configurations, such as SIEM integrations, workflow automation, and email notifications.
+
+Learn more about [alert validation](alert-validation.md).
+ ## October 2022 Updates in October include:
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Aspect | Details | |--|--| | Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
-| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
+| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go | ### Kubernetes distributions and configurations
defender-for-iot How To Define Global User Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-define-global-user-access-control.md
Global access control is established through the creation of user access groups.
For example, allow security analysts from an Active Directory group to access all West European automotive and glass production lines, along with a plastics line in one region. Before you create access groups, we recommend that you:
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
This procedure describes how to download a diagnostics log to send to support in
This feature is supported for the following sensor versions: - **22.1.1** - Download a diagnostic log from the sensor console-- **22.1.3** - For locally-managed sensors, [upload a diagnostics log](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview) from the **Sites and sensors** page in the Azure portal. This file is automatically sent to support when you open a ticket on a cloud-connected sensor.
+- **22.1.3** - For locally managed sensors, [upload a diagnostics log](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview) from the **Sites and sensors** page in the Azure portal. This file is automatically sent to support when you open a ticket on a cloud-connected sensor.
[!INCLUDE [root-of-trust](includes/root-of-trust.md)]
This feature is supported for the following sensor versions:
:::image type="content" source="media/release-notes/support-ticket-diagnostics.png" alt-text="Screenshot of the Backup & Restore pane showing the Support Ticket Diagnostics option." lightbox="media/release-notes/support-ticket-diagnostics.png":::
-1. For a locally-managed sensor, version 22.1.3 or higher, continue with [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview).
+1. For a locally managed sensor, version 22.1.3 or higher, continue with [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview).
+
+## Clearing sensor data
+
+If the sensor needs to be relocated or erased, it can be reset.
+
+Clearing data deletes all detected or learned data on the sensor. After you clear data on a cloud-connected sensor, the cloud inventory is updated accordingly, and some actions on the corresponding cloud alerts, such as downloading PCAPs or learning alerts, are no longer supported.
+
+> [!NOTE]
+> Network settings such as IP/DNS/GATEWAY will not be changed by clearing system data.
+
+**To clear system data**:
+
+1. Sign in to the sensor as the **cyberx** user.
+
+1. Select **Support** > **Clear data**.
+
+1. In the confirmation dialog box, select **Yes** to confirm that you do want to clear all data from the sensor and reset it. For example:
+
+ :::image type="content" source="media/how-to-manage-individual-sensors/clear-system-data.png" alt-text="Screenshot of clearing system data on the support page in the sensor console.":::
+
+A confirmation message appears that the action was successful. All learned data, allowlists, policies, and configuration settings are cleared from the sensor.
## Next steps
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
Sometimes ICS devices are configured with external IP addresses. These ICS devic
1. Generate a new data-mining report for internet connections. 1. In the data-mining report, enter the administrator mode and delete the IP addresses of your ICS devices.
-### Clearing sensor data to factory default
-
-In cases where the sensor needs to be relocated or erased, the sensor can be reset to factory default data.
-
-> [!NOTE]
-> Network settings such as IP/DNS/GATEWAY will not be changed by clearing system data.
-
-**To clear system data**:
-1. Sign in to the sensor as the **cyberx** user.
-1. Select **Support** > **Clear system data**, and confirm that you do want to reset the sensor to factory default data.
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/warning-screenshot.png" alt-text="Screenshot of warning message.":::
-
-All allowlists, policies, and configuration settings are cleared, and the sensor is restarted.
+### Clearing sensor data
+If the sensor needs to be relocated or erased, all learned data can be cleared from it.
+For more information on how to clear system data, see [Clearing sensor data](how-to-manage-individual-sensors.md#clearing-sensor-data).
## Troubleshoot an on-premises management console
devops-project Azure Devops Project Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-aks.md
Title: 'Deploy ASP.NET Core apps to Azure Kubernetes Service with Azure DevOps S
description: Azure DevOps Starter makes it easy to get started on Azure. With DevOps Starter, you can deploy your ASP.NET Core app with the Azure Kubernetes Service (AKS) in a few quick steps. ms.+ Last updated 03/24/2020
devops-project Azure Devops Project Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-aspnet-core.md
Title: 'Quickstart: Create a CI/CD pipeline for .NET with Azure DevOps Starter' description: Azure DevOps Starter makes it easy to get started on Azure. It helps you launch a .NET app on an Azure service of your choice in few quick steps.+ documentationcenter: vs-devops-build
devops-project Azure Devops Project Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-cosmos-db.md
Title: 'Tutorial: Deploy Node.js apps powered by Azure Cosmos DB with Azure DevO
description: Azure DevOps Starter makes it easy to get started on Azure. With DevOps Starter, you can deploy your Node.js app that's powered by Azure Cosmos DB to Windows Web App in a few quick steps. ms.+ Last updated 03/24/2020
devops-project Azure Devops Project Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-functions.md
Title: 'Tutorial: Deploy ASP.NET apps to Azure Functions with Azure DevOps Start
description: Azure DevOps Starter makes it easy to get started on Azure. With DevOps Starter, you can deploy your ASP.NET app to Azure Functions in a few quick steps. ms.+ Last updated 03/24/2020
devops-project Azure Devops Project Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-github.md
documentationcenter: vs-devops-build
ms. + na Last updated 03/24/2020
devops-project Azure Devops Project Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-go.md
Title: 'Quickstart: Create a CI/CD pipeline for the Go programming language by using Azure DevOps Starter' description: DevOps Starter makes it easy to get started on Azure. It helps you launch a Go programming language web app on an Azure service in a few quick steps.+ documentationcenter: vs-devops-build
devops-project Azure Devops Project Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-java.md
Last updated 03/24/2020+ na
devops-project Azure Devops Project Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-nodejs.md
Title: Create a CI/CD pipeline for a PWA with GatsbyJS and Azure DevOps Starter description: Learn to create a NodeJS progressive web app (PWA) using GatsbyJS and the simplified Azure DevOps Starter creation experience.+ documentationcenter: vs-devops-build
devops-project Azure Devops Project Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-php.md
Title: 'Quickstart: Create a CI/CD pipeline for PHP with Azure DevOps Starter' description: DevOps Starter makes it easy to get started on Azure. It helps you launch an app on an Azure service of your choice in few quick steps.+ documentationcenter: vs-devops-build
devops-project Azure Devops Project Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-python.md
Last updated 03/24/2020+ na
devops-project Azure Devops Project Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-ruby.md
Title: 'Quickstart: Create a CI/CD pipeline for Ruby on Rails by using Azure DevOps Starter' description: Azure DevOps Starter makes it easy to get started on Azure. You can launch a Ruby web app on an Azure service in a few quick steps.+ documentationcenter: vs-devops-build
devops-project Azure Devops Project Service Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-service-fabric.md
Title: 'Tutorial: Deploy your ASP.NET Core app to Azure Service Fabric by using
description: Azure DevOps Starter makes it easy to get started on Azure. With DevOps Projects, you can deploy your ASP.NET Core app to Azure Service Fabric in a few quick steps. ms.+ Last updated 03/24/2020
devops-project Azure Devops Project Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-sql-database.md
Title: 'Tutorial: Deploy your ASP.NET app and Azure SQL Database code by using A
description: DevOps Starter makes it easy to get started on Azure. With DevOps Starter, you can deploy your ASP.NET app and Azure SQL Database code in a few quick steps. ms.+ Last updated 03/24/2020
devops-project Azure Devops Project Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-vms.md
Title: 'Tutorial: Deploy your ASP.NET app to Azure virtual machines by using Azu
description: DevOps Starter makes it easy to get started on Azure and to deploy your ASP.NET app to Azure virtual machines in a few quick steps. + Last updated 03/24/2020
devops-project Devops Starter Gh Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/devops-starter-gh-web-app.md
Title: 'Tutorial: Deploy your Node.js app to Azure Web App by using DevOps Starter for GitHub Actions' description: DevOps Starter makes it easy to get started on Azure and to deploy your Node.js app to Azure Web App in a few quick steps. + Last updated 08/25/2020
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: See [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security) for more information on Azure Storage firewall setup.
+- **Message**: `Migration for Database <Database Name> failed with error 'There are backups from multiple databases in the container folder. Please make sure the container folder has backups from a single database.'`
+
+- **Cause**: Backups of multiple databases are in the same container folder.
+
+- **Recommendation**: If migrating multiple databases to **Azure SQL Managed Instance** using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container. See [Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)](/azure/azure-sql/managed-instance/log-replay-service-migrate#limitations) for more information.
++ > [!NOTE] > For more information on general troubleshooting steps for Azure SQL Managed Instance errors, see [Known issues with Azure SQL Managed Instance](/azure/azure-sql/managed-instance/doc-changes-updates-known-issues) ### Error code: 2012 - TestConnectionFailed
+- **Message**: `Failed to test connections using provided Integration Runtime. Error details: 'Remote name could not be resolved.'`
+
+- **Cause**: The Self-Hosted Integration Runtime can't connect to the service back end. This issue is usually caused by network settings in the firewall.
+
+- **Recommendation**: There's a Domain Name System (DNS) issue. Contact your network team to fix the issue. See [Troubleshoot Self-Hosted Integration Runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for more information.
+
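For the name-resolution failure above, basic checks from the machine that hosts the self-hosted integration runtime usually narrow the problem down before involving the network team. A sketch using standard Windows cmdlets; the host name is a placeholder for the endpoint named in the error details:

```powershell
# Verify DNS resolution and outbound HTTPS connectivity from the SHIR host.
# Replace the host name with the endpoint from the error message.
Resolve-DnsName -Name 'example.servicebus.windows.net'
Test-NetConnection -ComputerName 'example.servicebus.windows.net' -Port 443
```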
+- **Message**: `Failed to test connections using provided Integration Runtime. 'Cannot connect to <File share>. Detail Message: The system could not find the environment option that was entered`
+
+- **Cause**: The Self-Hosted Integration Runtime can't connect to the network file share where the database backups are placed.
+
+- **Recommendation**: Make sure your network file share name is entered correctly.
+
+- **Message**: `Failed to test connections using provided Integration Runtime. The file name does not conform to the naming rules by the data store. Illegal characters in path.`
+
+- **Cause**: The Self-Hosted Integration Runtime can't connect to the network file share where the database backups are placed.
+
+- **Recommendation**: Make sure your network file share name is entered correctly.
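For the two file-share errors above, confirming that the UNC path is reachable and well formed from the self-hosted integration runtime host is a quick sanity check before retrying. A minimal sketch; the share path is a placeholder:

```powershell
# Confirm the backup share is reachable and list the files the migration
# is expected to find. The UNC path is a placeholder.
Test-Path -Path '\\fileserver01\sqlbackups'
Get-ChildItem -Path '\\fileserver01\sqlbackups' -Filter '*.bak'
```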
+ - **Message**: `Failed to test connections using provided Integration Runtime.` - **Cause**: Connection to the Self-Hosted Integration Runtime has failed.
Known issues and limitations associated with the Azure SQL Migration extension f
### Error code: 2039 - MigrationRetryNotAllowed - **Message**: `Migration isn't in a retriable state. Migration must be in state WaitForRetry. Current state: <State>, Target server: <Target Server>, Target database: <Target database>.`
-**Cause**: A retry request was received when the migration wasn't in a state allowing retrying.
+- **Cause**: A retry request was received when the migration wasn't in a state allowing retrying.
- **Recommendation**: No action required; the migration is ongoing or completed.
The Azure SQL Database offline migration (Preview) utilizes Azure Data Factory (
- Azure SQL Database table names with double byte characters currently aren't supported for migration. Mitigation is to rename tables before migration; they can be changed back to their original names after successful migration. - Tables with large blob columns may fail to migrate due to timeout. - Database names with SQL Server reserved words aren't valid.
+- Database names that use a double-byte character set (DBCS) aren't currently supported.
+- Table names that include semicolons aren't currently supported.
+- Computed columns aren't migrated.
## Azure SQL Managed Instance and SQL Server on Azure Virtual Machine known issues and limitations - If migrating multiple databases to **Azure SQL Managed Instance** using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container.
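Because backups for different databases must sit in separate folders, uploads should use a per-database blob prefix, which Blob Storage presents as a folder. A minimal sketch with the `Az.Storage` module; the account, container, and file names are placeholders:

```powershell
# Upload each database's backups under its own virtual folder (blob prefix).
$ctx = New-AzStorageContext -StorageAccountName 'stmigration' -UseConnectedAccount
Set-AzStorageBlobContent -Container 'backups' -File 'C:\backups\salesdb_full.bak' `
    -Blob 'salesdb/salesdb_full.bak' -Context $ctx
Set-AzStorageBlobContent -Container 'backups' -File 'C:\backups\hrdb_full.bak' `
    -Blob 'hrdb/hrdb_full.bak' -Context $ctx
```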
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
Last updated 09/28/2022
# Tutorial: Migrate SQL Server to an Azure SQL Database offline using Azure Data Studio with DMS (Preview)
-> [!NOTE]
-> Azure SQL Database targets are only available using the [Azure Data Studio Insiders](/sql/azure-data-studio/download-azure-data-studio#download-the-insiders-build-of-azure-data-studio) version of the Azure SQL Migration extension.
- You can use the Azure SQL migration extension in Azure Data Studio to migrate the database(s) from a SQL Server instance to Azure SQL Database (Preview). In this tutorial, you'll learn how to migrate the **AdventureWorks2019** database from an on-premises instance of SQL Server to Azure SQL Database (Preview) by using the Azure SQL Migration extension for Azure Data Studio. This tutorial focuses on the offline migration mode, which assumes acceptable downtime during the migration process.
event-grid Auth0 Log Stream App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/auth0-log-stream-app-insights.md
Title: Send Auth0 events Azure Monitor's Application Insights
-description: This article shows how to send Auth0 events received by Azure Event Grid to Azure Monitor's Application Insights.
+ Title: Send Auth0 events to Azure Monitor Application Insights
+description: This article shows how to send Auth0 events received by Azure Event Grid to Azure Monitor Application Insights.
Last updated 10/12/2022
-# Send Auth0 events to Azure Monitor's Application Insights
-This article shows how to send Auth0 events received by Azure Event Grid to Azure Monitor's Application Insights.
+# Send Auth0 events to Azure Monitor Application Insights
+This article shows how to send Auth0 events received by Azure Event Grid to Azure Monitor Application Insights.
## Prerequisites
This article shows how to send Auth0 events received by Azure Event Grid to Azur
1. Once your Auth0 logs are generated, your data should now be visible in Application Insights > [!NOTE]
- > You can use steps in the article to handle events from other event sources too. For a generic example of sending Event Grid events to Azure Blob Storage or Azure Monitor's Application Insights, see [this example on GitHub](https://github.com/awkwardindustries/azure-monitor-handler).
+ > You can use steps in the article to handle events from other event sources too. For a generic example of sending Event Grid events to Azure Blob Storage or Azure Monitor Application Insights, see [this example on GitHub](https://github.com/awkwardindustries/azure-monitor-handler).
## Next steps - [Auth0 Partner Topic](auth0-overview.md) - [Subscribe to Auth0 events](auth0-how-to.md)-- [Send Auth0 events to Azure Monitor's Application Insights](auth0-log-stream-app-insights.md)
+- [Send Auth0 events to Azure Monitor Application Insights](auth0-log-stream-app-insights.md)
event-grid Onboard Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/onboard-partner.md
For step #5, you should decide what kind of user experience you want to provide.
This article shows you how to **onboard as an Azure Event Grid partner** using the **Azure portal**. ## Communicate your interest in becoming a partner
-Contact the Event Grid team at [askgrid@microsoft.com](mailto:askgrid@microsoft.com?subject=Interested&nbsp;to&nbsp;onboard&nbsp;as&nbsp;an&nbsp;Event&nbsp;Grid&nbsp;partner) communicating your interest in becoming a partner. We'll have a conversation with you providing detailed information on Partner Events' use cases, personas, onboarding process, functionality, pricing, and more.
+Contact the Event Grid team at [askgrid@microsoft.com](mailto:askgrid@microsoft.com?subject=Interested&nbsp;in&nbsp;onboarding&nbsp;as&nbsp;an&nbsp;Event&nbsp;Grid&nbsp;partner) communicating your interest in becoming a partner. We'll have a conversation with you providing detailed information on Partner Events' use cases, personas, onboarding process, functionality, pricing, and more.
## Prerequisites To complete the remaining steps, make sure you have:
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
You can also create Event Grid resources to receive events from Azure Event Grid
For either publishing events or receiving events, you create the same kind of Event Grid [resources](#resources-managed-by-partners) following these general steps.
-1. Contact the Event Grid team at [askgrid@microsoft.com](mailto:askgrid@microsoft.com?subject=Interested&nbsp;to&nbsp;onboard&nbsp;as&nbsp;an&nbsp;Event&nbsp;Grid&nbsp;partner) communicating your interest in becoming a partner. Once you contact us, we'll guide you through the onboarding process and help your service get an entry card on our [Azure Event Grid gallery](https://portal.azure.com/#create/Microsoft.EventGridPartnerTopic) so that your service can be found on the Azure portal.
+1. Contact the Event Grid team at [askgrid@microsoft.com](mailto:askgrid@microsoft.com?subject=Interested&nbsp;in&nbsp;onboarding&nbsp;as&nbsp;an&nbsp;Event&nbsp;Grid&nbsp;partner) communicating your interest in becoming a partner. Once you contact us, we'll guide you through the onboarding process and help your service get an entry card on our [Azure Event Grid gallery](https://portal.azure.com/#create/Microsoft.EventGridPartnerTopic) so that your service can be found on the Azure portal.
2. Create a [partner registration](#partner-registration). This is a global resource that you usually create only once. 3. Create a [partner namespace](#partner-namespace). This resource exposes the endpoint you use to publish events to Azure. When creating the partner namespace, provide the partner registration you created. 4. The customer authorizes you to create a [partner topic](concepts.md#partner-topics) in the customer's Azure subscription.
event-hubs Apache Kafka Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/apache-kafka-developer-guide.md
Title: Apache Kafka developer guide for Event Hubs description: This article provides links to articles that describe how to integrate your Kafka applications with Azure Event Hubs. Previously updated : 09/20/2021 Last updated : 11/09/2022
event-hubs Event Hubs Quickstart Kafka Enabled Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md
Title: 'Quickstart: Data streaming with Azure Event Hubs using the Kafka protoco
description: 'Quickstart: This article provides information on how to stream into Azure Event Hubs using the Kafka protocol and APIs.' Last updated 11/02/2022-+ # Quickstart: Data streaming with Event Hubs using the Kafka protocol
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmarkv1.md
initiative definition.
|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) | |[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) |
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Document and implement wireless access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04b3e7f6-4841-888d-4799-cda19a0084f6) |CMA_0190 - Document and implement wireless access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0190.json) | |[Document wireless access security controls](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8f835d6a-4d13-9a9c-37dc-176cebd37fda) |CMA_C1695 - Document wireless access security controls |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1695.json) | |[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) |
governance Shared Query Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-azure-powershell.md
Title: 'Quickstart: Create a shared query with Azure PowerShell' description: In this quickstart, you follow the steps to create a Resource Graph shared query using Azure PowerShell. Previously updated : 08/17/2021 Last updated : 11/09/2022
before you begin.
> using the `Install-Module` cmdlet. ```azurepowershell-interactive
- Install-Module -Name Az.ResourceGraph
+ Install-Module -Name Az.ResourceGraph -Scope CurrentUser -Repository PSGallery -Force
``` - If you have multiple Azure subscriptions, choose the appropriate subscription in which the
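After installing the module and selecting a subscription, a trivial query confirms that the cmdlets load and the signed-in context can reach Resource Graph. A quick smoke-test sketch; the subscription ID is a placeholder and the query itself is arbitrary:

```azurepowershell-interactive
# Select the subscription to work in, then run a minimal query.
Set-AzContext -Subscription '00000000-0000-0000-0000-000000000000'
Search-AzGraph -Query 'Resources | project name, type | limit 5'
```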
hdinsight Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/rest-proxy.md
Access to the Kafka REST proxy is managed with Azure Active Directory security g
For REST proxy endpoint requests, client applications should get an OAuth token. The token is used to verify security group membership. Find a [Client application sample](#client-application-sample) below that shows how to get an OAuth token. The client application passes the OAuth token in the HTTPS request to the REST proxy. > [!NOTE]
-> See [Manage app and resource access using Azure Active Directory groups](../../active-directory/fundamentals/active-directory-manage-groups.md), to learn more about AAD security groups. For more information on how OAuth tokens work, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md).
+> See [Manage app and resource access using Azure Active Directory groups](../../active-directory/fundamentals/active-directory-manage-groups.md), to learn more about AAD security groups. For more information on how OAuth tokens work, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
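As a concrete illustration of the request shape, the sketch below passes an already-acquired OAuth token to a REST proxy endpoint. This is an assumption-laden example, not the article's client application sample: the cluster name is a placeholder, `$accessToken` is assumed to hold a valid token for a user in the authorized security group, and the topic-listing path should be checked against the REST proxy API reference:

```azurepowershell
# Pass the OAuth token in the Authorization header of the HTTPS request.
$headers = @{ Authorization = "Bearer $accessToken" }

# List Kafka topics through the REST proxy; the cluster name is a placeholder.
Invoke-RestMethod -Method Get `
  -Uri 'https://<clustername>-kafkarest.azurehdinsight.net/v1/metadata/topics' `
  -Headers $headers
```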
## Kafka REST proxy with Network Security Groups If you bring your own VNet and control network traffic with network security groups, allow **inbound** traffic on port **9400** in addition to port 443. This will ensure that Kafka REST proxy server is reachable.
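If you manage the network security group with Azure PowerShell, a rule along the lines of the following sketch opens port 9400; the rule name, priority, and address prefixes are assumptions to adapt to your own security requirements:

```azurepowershell
# Allow inbound traffic on port 9400 so the Kafka REST proxy server is reachable.
Get-AzNetworkSecurityGroup -Name '<nsg-name>' -ResourceGroupName '<resource-group>' |
  Add-AzNetworkSecurityRuleConfig -Name 'AllowKafkaRestProxy' `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 300 `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 9400 |
  Set-AzNetworkSecurityGroup
```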
hdinsight Apache Spark Improve Performance Iocache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-improve-performance-iocache.md
Title: Apache Spark performance - Azure HDInsight IO Cache (Preview)
description: Learn about Azure HDInsight IO Cache and how to use it to improve Apache Spark performance. Previously updated : 10/31/2022 Last updated : 11/09/2022 # Improve performance of Apache Spark workloads using Azure HDInsight IO Cache
-> [!NOTE]
-> Spark 3.1.2 (HDI 5.0) doesn't support IO Cache.
+> [!NOTE]
+> * IO Cache is only available for Spark 2.4 (HDInsight 4.0).
+> * Spark 3.1.2 (HDInsight 5.0) doesn't support IO Cache.
IO Cache is a data caching service for Azure HDInsight that improves the performance of Apache Spark jobs. IO Cache also works with [Apache TEZ](https://tez.apache.org/) and [Apache Hive](https://hive.apache.org/) workloads, which can be run on [Apache Spark](https://spark.apache.org/) clusters. IO Cache uses an open-source caching component called RubiX. RubiX is a local disk cache for use with big data analytics engines that access data from cloud storage systems. RubiX is unique among caching systems because it uses Solid-State Drives (SSDs) rather than reserving operating memory for caching purposes. The IO Cache service launches and manages RubiX Metadata Servers on each worker node of the cluster. It also configures all services of the cluster for transparent use of the RubiX cache.
healthcare-apis Azure Active Directory Identity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-active-directory-identity-configuration.md
In order for a client application to access Azure API for FHIR, it must present an access token.
There are many ways to obtain a token, but the Azure API for FHIR doesn't care how the token is obtained as long as it's an appropriately signed token with the correct claims.
-For example like when you use [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md), accessing a FHIR server goes through the following four steps:
+For example, when you use [authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md), accessing a FHIR server goes through the following four steps:
![FHIR Authorization](media/azure-ad-hcapi/fhir-authorization.png)
-1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration.
-1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When you request a token, the client application may have to provide a client secret (the applications password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token).
+1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/develop/v2-oauth2-auth-code-flow.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration.
+1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When you request a token, the client application may have to provide a client secret (the application's password). See details on [obtaining an access token](../../active-directory/develop/v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token).
1. The client makes a request to Azure API for FHIR, for example `GET /Patient`, to search all patients. When the client makes the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token. 1. Azure API for FHIR validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
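As an illustration of step 3, once a token is in hand the FHIR request is an ordinary HTTPS call. A minimal sketch, assuming `$accessToken` holds the token from step 2 and a placeholder service URL:

```azurepowershell
# Search all patients, passing the access token in the Authorization header.
Invoke-RestMethod -Method Get `
  -Uri 'https://<your-fhir-server>.azurehealthcareapis.com/Patient' `
  -Headers @{ Authorization = "Bearer $accessToken" }
```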
The token can be decoded and inspected with tools such as [https://jwt.ms](https://jwt.ms).
As mentioned, there are several ways to obtain a token from Azure AD. They're described in detail in the [Azure AD developer documentation](../../active-directory/develop/index.yml).
-Azure AD has two different versions of the OAuth 2.0 endpoints, which are referred to as `v1.0` and `v2.0`. Both of these versions are OAuth 2.0 endpoints and the `v1.0` and `v2.0` designations refer to differences in how Azure AD implements that standard.
+Use either of the following authentication protocols:
-When using a FHIR server, you can use either the `v1.0` or the `v2.0` endpoints. The choice may depend on the authentication libraries you're using in your client application.
-
-The pertinent sections of the Azure AD documentation are:
-
-* `v1.0` endpoint:
- * [Authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md).
- * [Client credentials flow](../../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md).
-* `v2.0` endpoint:
- * [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
- * [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
+* [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
+* [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
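For reference, the client credentials flow reduces to a single POST against the `v2.0` token endpoint. The sketch below assumes a confidential client registration; the tenant ID, client ID, client secret, and FHIR server URL are placeholders:

```azurepowershell
# Request a token using the OAuth 2.0 client credentials grant (v2.0 endpoint).
$body = @{
    grant_type    = 'client_credentials'
    client_id     = '<client-id>'
    client_secret = '<client-secret>'
    scope         = 'https://<your-fhir-server>.azurehealthcareapis.com/.default'
}
$response = Invoke-RestMethod -Method Post `
  -Uri 'https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token' `
  -Body $body
$accessToken = $response.access_token
```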
There are other variations (for example, the on-behalf-of flow) for obtaining a token. Refer to the [Azure AD documentation](../../active-directory/index.yml) for details. When you use Azure API for FHIR, there are some shortcuts for obtaining an access token (such as for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
healthcare-apis Fhir App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-app-registration.md
In order for an application to interact with Azure AD, it needs to be registered
*Client applications* are registrations of the clients that will be requesting tokens. Often in OAuth 2.0, we distinguish between at least three different types of applications:
-1. **Confidential clients**, also known as web apps in Azure AD. Confidential clients are applications that use [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md) to obtain a token on behalf of a signed in user presenting valid credentials. They're called confidential clients because they're able to hold a secret and will present this secret to Azure AD when exchanging the authentication code for a token. Since confidential clients are able to authenticate themselves using the client secret, they're trusted more than public clients and can have longer lived tokens and be granted a refresh token. Read the details on how to [register a confidential client](register-confidential-azure-ad-client-app.md). Note it's important to register the reply URL at which the client will be receiving the authorization code.
+1. **Confidential clients**, also known as web apps in Azure AD. Confidential clients are applications that use [authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md) to obtain a token on behalf of a signed in user presenting valid credentials. They're called confidential clients because they're able to hold a secret and will present this secret to Azure AD when exchanging the authentication code for a token. Since confidential clients are able to authenticate themselves using the client secret, they're trusted more than public clients and can have longer lived tokens and be granted a refresh token. Read the details on how to [register a confidential client](register-confidential-azure-ad-client-app.md). Note it's important to register the reply URL at which the client will be receiving the authorization code.
1. **Public clients**. These are clients that can't keep a secret. Typically this would be a mobile device application or a single page JavaScript application, where a secret in the client could be discovered by a user. Public clients also use authorization code flow, but they aren't allowed to present a secret when obtaining a token and they may have shorter lived tokens and no refresh token. Read the details on how to [register a public client](register-public-azure-ad-client-app.md).
-1. Service clients. These clients obtain tokens on behalf of themselves (not on behalf of a user) using the [client credentials flow](../../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md). They typically represent applications that access the FHIR server in a non-interactive way. An example would be an ingestion process. When using a service client, it isn't necessary to start the process of getting a token with a call to the `/authorize` endpoint. A service client can go straight to the `/token` endpoint and present client ID and client secret to obtain a token. Read the details on how to [register a service client](register-service-azure-ad-client-app.md)
+1. Service clients. These clients obtain tokens on behalf of themselves (not on behalf of a user) using the [client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). They typically represent applications that access the FHIR server in a non-interactive way. An example would be an ingestion process. When using a service client, it isn't necessary to start the process of getting a token with a call to the `/authorize` endpoint. A service client can go straight to the `/token` endpoint and present client ID and client secret to obtain a token. Read the details on how to [register a service client](register-service-azure-ad-client-app.md).
## Next steps
healthcare-apis Azure Active Directory Identity Configuration Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/azure-active-directory-identity-configuration-old.md
In order for a client application to access the FHIR service, it must present an
There are many ways to obtain a token, but the FHIR service doesn't care how the token is obtained as long as it's an appropriately signed token with the correct claims.
-Using [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md) as an example, accessing a FHIR server goes through the four steps below:
+Using [authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md) as an example, accessing a FHIR server goes through the four steps below:
![FHIR Authorization](media/azure-active-directory-fhir-service/fhir-authorization.png)
-1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration (see below).
-1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When requesting a token, the client application may have to provide a client secret (the applications password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token).
+1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/develop/v2-oauth2-auth-code-flow.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration (see below).
+1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When requesting a token, the client application may have to provide a client secret (the application's password). See details on [obtaining an access token](../../active-directory/develop/v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token).
1. The client makes a request to the FHIR service, for example `GET /Patient` to search all patients. When making the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token. 1. The FHIR service validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
The token can be decoded and inspected with tools such as [https://jwt.ms](https://jwt.ms).
As mentioned above, there are several ways to obtain a token from Azure AD. They're described in detail in the [Azure AD developer documentation](../../active-directory/develop/index.yml).
-Azure AD has two different versions of the OAuth 2.0 endpoints, which are referred to as `v1.0` and `v2.0`. Both of these versions are OAuth 2.0 endpoints and the `v1.0` and `v2.0` designations refer to differences in how Azure AD implements that standard.
+Use either of the following authentication protocols:
-When using a FHIR server, you can use either the `v1.0` or the `v2.0` endpoints. The choice may depend on the authentication libraries you're using in your client application.
-
-The pertinent sections of the Azure AD documentation are:
-
-* `v1.0` endpoint:
- * [Authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md).
- * [Client credentials flow](../../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md).
-* `v2.0` endpoint:
- * [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
- * [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
+* [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
+* [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
There are other variations (for example, the on-behalf-of flow) for obtaining a token. Check the Azure AD documentation for details. When using the FHIR service, there are also some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
Title: Receive device data through Azure IoT Hub - Azure Health Data Services
-description: In this tutorial, you'll learn how to enable device data routing from IoT Hub into the FHIR service through MedTech service.
+description: In this tutorial, you'll learn how to deploy an Azure IoT Hub with message routing to send device messages to the MedTech service using VSCode and the Azure IoT Hub extension.
Previously updated : 10/03/2022 Last updated : 11/10/2022 # Tutorial: Receive device data through Azure IoT Hub
-MedTech service may be used with devices created and managed through Azure IoT Hub for enhanced workflows and ease of use.
+The MedTech service may be used with devices created and managed through an [Azure IoT Hub](/azure/iot-hub/iot-concepts-and-iot-hub) for enhanced workflows and ease of use. This tutorial uses an Azure Resource Manager (ARM) template and a **Deploy to Azure** button to deploy and configure a MedTech service using an Azure IoT Hub for device creation, management, and routing of device messages to the device message event hub. The ARM template used in this article is available from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Healthcareapis) site using the **azuredeploy.json** file located on [GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub/azuredeploy.json).
-This tutorial provides the steps to connect and route device data from IoT Hub to your MedTech service.
+> [!TIP]
+> For more information about ARM templates, see [What are ARM templates?](/azure/azure-resource-manager/templates/overview)
+
+Below is a diagram of the IoT device message flow when using an IoT Hub with the MedTech service. Devices send their messages to the IoT Hub, which routes the device messages to the device message event hub to be picked up by the MedTech service. The MedTech service then transforms the device messages and persists them into the Fast Healthcare Interoperability Resources (FHIR&#174;) service as FHIR Observations. To learn more about the MedTech service data flow, see [MedTech service data flow](iot-data-flow.md).
+ ## Prerequisites -- An active Azure subscription - [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)-- FHIR service resource with at least one MedTech service - [Deploy MedTech service using Azure portal](deploy-iot-connector-in-azure.md)-- Azure IoT Hub resource connected with real or simulated device(s) - [Create an IoT Hub using the Azure portal](../../iot-hub/iot-hub-create-through-portal.md)
+To begin the deployment and complete this tutorial, you'll need the following prerequisites in place:
-Below is a diagram of the IoT device message flow from IoT Hub into MedTech service:
+- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+- **Owner** or **Contributor + User Access Administrator** access to the Azure subscription. For more information about Azure role-based access control, see [What is Azure role-based access control?](/azure/role-based-access-control/overview).
-## Create a managed identity for IoT Hub
+- These resource providers registered with your Azure subscription: **Microsoft.HealthcareApis**, **Microsoft.EventHub**, and **Microsoft.Devices**. To learn more about registering resource providers, see [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types).
-For this tutorial, we'll be using an IoT Hub with a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to provide access from the IoT Hub to the MedTech service device message event hub.
+- [Visual Studio Code (VSCode)](https://code.visualstudio.com/Download) installed locally and configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools). The **Azure IoT Tools** are a collection of extensions that make it easy to connect to IoT Hubs, create devices, and send messages. In this tutorial, we'll use the **Azure IoT Hub extension** to connect to your deployed IoT Hub, create a device, and send a test message from the device to your IoT Hub.
-For more information about how to create a system-assigned managed identity with your IoT Hub, see [IoT Hub support for managed identities](../../iot-hub/iot-hub-managed-identity.md#system-assigned-managed-identity).
+When you've fulfilled these prerequisites, you're ready to use the **Deploy to Azure** button.
-For more information on Azure role-based access control, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+## Deploy to Azure button
-## Connect IoT Hub with the MedTech service
+1. Select the **Deploy to Azure** button below to begin the deployment within the Azure portal.
-Azure IoT Hub supports a feature called [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md). Message routing provides the capability to send device data to various Azure services (for example: event hub, Storage Accounts, and Service Buses). MedTech service uses this feature to allow an IoT Hub to connect and send device messages to the MedTech service device message event hub endpoint.
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors-with-iothub%2Fazuredeploy.json)
-Follow these directions to grant access to the IoT Hub system-assigned managed identity to your MedTech service device message event hub and set up message routing: [Configure message routing with managed identities](../../iot-hub/iot-hub-managed-identity.md#egress-connectivity-from-iot-hub-to-other-azure-resources).
+ This button will call an ARM template from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/iotconnectors-with-iothub/) site to get information from your Azure subscription environment and begin deploying the MedTech service and IoT Hub using the Azure portal.
-## Send device message to IoT Hub
+## Provide configuration details
-> [!TIP]
-> [Visual Studio Code with the Azure IoT Hub extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) is a recommended method for sending IoT device messages to your IoT Hub for testing and troubleshooting.
+2. When the Azure portal screen appears, your next task is to fill out the option fields that provide specific details of your deployment configuration.
+
+ :::image type="content" source="media\iot-hub-to-iot-connector\iot-deploy-template-options.png" alt-text="Screenshot of Azure portal page displaying deployment options for the Azure Health Data Service MedTech service." lightbox="media\iot-hub-to-iot-connector\iot-deploy-template-options.png":::
+
+ - **Subscription** - Choose the Azure subscription you want to use for the deployment.
+
+ - **Resource group** - Choose an existing resource group or create a new resource group.
+
+ - **Region** - The Azure region of the resource group used for the deployment. This field will auto-fill based on the resource group region.
+
+   - **Basename** - This value is appended to the names of the Azure resources and services to be deployed. For this tutorial, we're using the basename **azuredocsdemo**; pick a basename of your own choosing.
+
+ - **Location** - Use the drop-down list to select a supported Azure region for the Azure Health Data Services (the value could be the same or different region than your resource group). For a list of Azure regions where the Azure Health Data Services is available, see [Products available by regions](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=health-data-services).
+
+   - **Fhir Contributor Principal Id** - **Optional** - An Azure AD user object ID that you would like to grant read/write permissions to the FHIR service. This account can be used to access the FHIR service and view the device messages generated as part of this tutorial. It's recommended to use your own Azure AD user object ID so that you'll have access to the FHIR service. If you choose not to use the **Fhir Contributor Principal Id** option, clear the field of any entries. To learn how to acquire an Azure AD user object ID, see [Find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id). The user object ID shown in this tutorial isn't real; use your own user object ID or that of another person you wish to grant access to the FHIR service.
+
+ - Don't change the **Device Mapping** and **Destination Mapping** default values at this time. These mappings will work with the provided test message later in this tutorial when you send a device message to your IoT Hub using **VSCode** with the **Azure IoT Hub extension**.
+
+ > [!IMPORTANT]
+ > For this tutorial, the ARM template will configure the MedTech service to operate in **Create** mode so that a Patient Resource and Device Resource are created for each device that sends data to your FHIR service.
+ >
+   > To learn more about the MedTech service resolution types, **Create** and **Lookup**, see [Destination properties](/azure/healthcare-apis/iot/deploy-05-new-config#destination-properties).
+
+3. Select the **Review + create** button after all the option fields are correctly filled out. This selection will review your option inputs and check to see if all your supplied values are valid.
+
+   :::image type="content" source="media\iot-hub-to-iot-connector\iot-review-and-create-button.png" alt-text="Screenshot of Azure portal page displaying the **Review + create** button." lightbox="media\iot-hub-to-iot-connector\iot-review-and-create-button.png":::
+
+4. If the validation is successful, you'll see a **Validation Passed** message. If not, fix the option causing the validation error and attempt the validation again.
+
+ :::image type="content" source="media\iot-hub-to-iot-connector\iot-validation-completed.png" alt-text="Screenshot of Azure portal page displaying the **Validation Passed** message." lightbox="media\iot-hub-to-iot-connector\iot-validation-completed.png":::
+
+5. After a successful validation, select the **Create** button to begin the deployment.
+
+   :::image type="content" source="media\iot-hub-to-iot-connector\iot-create-button.png" alt-text="Screenshot of Azure portal page displaying the **Create** button." lightbox="media\iot-hub-to-iot-connector\iot-create-button.png":::
+
+6. After a few minutes, a message will appear telling you that your deployment is complete.
+
+ :::image type="content" source="media\iot-hub-to-iot-connector\iot-deployment-complete-banner.png" alt-text="Screenshot of Azure portal page displaying Your deployment is complete." lightbox="media\iot-hub-to-iot-connector\iot-deployment-complete-banner.png":::
-Use your device (real or simulated) to send the sample heart rate message shown below to the IoT Hub.
+## Review of deployed resources and access permissions
-This message will get routed to MedTech service, where the message will be transformed into a FHIR Observation resource and stored into FHIR service.
+Once the deployment has completed, the following resources and access roles are created as part of the template deployment:
-> [!IMPORTANT]
-> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties).
->
-> To learn about IoT Hub device message enrichment and IotJsonPathContentTemplate mappings usage with the MedTech service device mapping, see [How to use IotJsonPathContentTemplate mappings](how-to-use-iot-jsonpath-content-mappings.md).
+- An Azure Event Hubs Namespace and device message Azure event hub. In this example, the event hub is named **devicedata**.
-**Sample IoT device message to send to IoT Hub**
+- An Azure event hub consumer group. In this example, the consumer group is named **$Default**.
-```json
+- An Azure event hub sender role. In this example, the sender role is named **devicedatasender**.
-{
- "HeartRate": 80,
- "RespiratoryRate": 12,
- "HeartRateVariability": 64,
- "BodyTemperature": 99.08839032397609,
- "BloodPressure": {
- "Systolic": 23,
- "Diastolic": 34
- },
- "Activity": "walking"
-}
+- An Azure IoT Hub with [message routing](/azure/iot-hub/iot-hub-devguide-messages-d2c) configured to send device messages to the device message event hub.
-```
+- A [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview) that provides send access from the IoT Hub to the device message event hub (**Azure Event Hubs Data Sender** role within the [Access control section (IAM)](/azure/role-based-access-control/overview) of the device message event hub).
-> [!IMPORTANT]
-> Make sure to send the device message that conforms to the [Device mappings](how-to-use-device-mappings.md) and [FHIR destinations mappings](how-to-use-fhir-mappings.md) configured with your MedTech service.
+- An Azure Health Data Services workspace.
-## View device data in FHIR service
+- An Azure Health Data Services FHIR service.
-You can view the FHIR Observation resource(s) created by the MedTech service on the FHIR service using Postman. For information, see [Access the FHIR service using Postman](./../fhir/use-postman.md), and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with heart rate value submitted in the above sample message.
+- An Azure Health Data Services MedTech service instance, including the necessary [system-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview) roles to the device message event hub (**Azure Event Hubs Data Receiver** role within the [Access control section (IAM)](/azure/role-based-access-control/overview) of the device message event hub) and FHIR service (**FHIR Data Writer** role within the [Access control section (IAM)](/azure/role-based-access-control/overview) of the FHIR service). A manual equivalent of these role assignments is sketched below.
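The ARM template creates these role assignments for you. For reference only, a manual equivalent with Azure PowerShell looks roughly like the following sketch; the object ID and scope paths are placeholders, and the role names assume the built-in Azure data-plane roles named above:

```azurepowershell
# Grant the MedTech service identity read access to the device message event hub.
New-AzRoleAssignment -ObjectId '<medtech-identity-object-id>' `
  -RoleDefinitionName 'Azure Event Hubs Data Receiver' `
  -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/devicedata'

# Grant the same identity write access to the FHIR service.
New-AzRoleAssignment -ObjectId '<medtech-identity-object-id>' `
  -RoleDefinitionName 'FHIR Data Writer' `
  -Scope '<fhir-service-resource-id>'
```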
> [!TIP]
-> Ensure that your user has appropriate access to FHIR service data plane. Use [Azure role-based access control (Azure RBAC)](../azure-api-for-fhir/configure-azure-rbac.md) to assign required data plane roles.
+> For detailed step-by-step instructions on how to manually deploy the MedTech service, see [How to manually deploy the MedTech service using the Azure portal](deploy-03-new-manual.md).
+
+## Create a device and send a test message
+
+Now that your deployment has successfully completed, we'll connect to your IoT Hub, create a device, and send a test message to the IoT Hub using **VSCode** with the **Azure IoT Hub extension**. These steps will allow your MedTech service to:
+
+- Pick up the IoT Hub routed test message from the device message event hub.
+- Transform the test message into five FHIR Observations.
+- Persist the FHIR Observations into your FHIR service.
+
+1. Open **VSCode** with the previously installed **Azure IoT Tools**.
+
+2. The **Azure IoT Hub extension** can be found in the **Explorer** section of **VSCode**. Select **…**, and then select **Select IoT Hub**. You'll be shown a list of Azure subscriptions; select the subscription where your IoT Hub was provisioned. You'll then be shown a list of IoT Hubs; select your IoT Hub (its name is the **basename** you provided when you provisioned the resources, prefixed with **ih-**). For this example, we'll select an IoT Hub named **ih-azuredocsdemo**; you'll select your own IoT Hub.
+
+ :::image type="content" source="media\iot-hub-to-iot-connector\iot-select-iot-hub.png" alt-text="Screenshot of VSCode with the Azure IoT Hub extension selecting the deployed IoT Hub for this tutorial " lightbox="media\iot-hub-to-iot-connector\iot-select-iot-hub.png":::
+
+3. To create a device within your IoT Hub for sending a test message, select **…**, and then select **Create Device**. For this example, we'll create a device named **device-001**; you'll choose a device name of your own.
+
+ :::image type="content" source="media\iot-hub-to-iot-connector\iot-create-device.png" alt-text="Screenshot of VSCode with the Azure IoT Hub extension selecting Create device for this tutorial." lightbox="media\iot-hub-to-iot-connector\iot-create-device.png":::
+
+4. To send a test message from the newly created device to your IoT Hub, right-click the device and select the **Send D2C Message to IoT Hub** option. For this example, we'll be using a device named **device-001**. You'll use the device you created as part of the previous step.
+
+ > [!NOTE]
+   > **D2C** stands for Device-to-Cloud. In this example, the cloud is the IoT Hub receiving the device message. IoT Hub allows two-way communication, which is why there's also a **Send C2D Message to Device** option (C2D stands for Cloud-to-Device).
+
+ :::image type="content" source="media\iot-hub-to-iot-connector\iot-select-device-to-cloud-message.png" alt-text="Screenshot of VSCode with the Azure IoT Hub extension selecting the Send D2C Message to IoT Hub option." lightbox="media\iot-hub-to-iot-connector\iot-select-device-to-cloud-message.png":::
+
+5. In the **Send D2C Messages** box, make the following selections and edits:
+
+ - **Device(s) to send messages from** - Leave at default. The device will be the one previously created by you.
+
+   - **Message(s) per device** - Adjust from 10 to 1.
+
+ - **Interval between two messages** - Leave at default of one second.
+
+   - **Message** - Leave at the default value of **Plain Text**.
+
+   - **Edit** - If present, remove the **Hello from Azure IoT!** example and then **copy + paste** the test message below into the **Edit** box.
+
+ > [!TIP]
+   > You can use the **Copy** option in the right corner of the test message to place the text on your clipboard so that you can then paste it into the **Edit** box.
+
+ ```json
+ {
+ "HeartRate": 78,
+ "RespiratoryRate": 12,
+ "HeartRateVariability": 30,
+ "BodyTemperature": 98.6,
+ "BloodPressure": {
+ "Systolic": 120,
+ "Diastolic": 80
+ }
+ }
+ ```
+
+6. Select **Send** to begin the process of sending a test message to your IoT Hub.
+
+ :::image type="content" source="media\iot-hub-to-iot-connector\iot-select-device-to-cloud-message-options.png" alt-text="Screenshot of VSCode with the Azure IoT Hub extension selecting the device message options." lightbox="media\iot-hub-to-iot-connector\iot-select-device-to-cloud-message-options.png":::
+
+ > [!NOTE]
+ > After the test message is sent, it may take up to five minutes for the FHIR resources to be present in the FHIR service.
+
+ > [!IMPORTANT]
+ > To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](/azure/iot-hub/iot-hub-devguide-messages-construct#anti-spoofing-properties).
+ >
+ > To learn more about IotJsonPathContentTemplate mappings usage with the MedTech service device mappings, see [How to use IotJsonPathContentTemplate mappings](how-to-use-iot-jsonpath-content-mappings.md).
+
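If you prefer the command line over VSCode for this step, the same device-to-cloud test message can be sent with the Azure CLI's `azure-iot` extension. This is an alternative sketch outside the tutorial's tooling; the hub and device names match the examples above:

```azurepowershell
# Send the same test message device-to-cloud; requires: az extension add --name azure-iot
az iot device send-d2c-message --hub-name 'ih-azuredocsdemo' --device-id 'device-001' `
  --data '{"HeartRate": 78, "RespiratoryRate": 12, "HeartRateVariability": 30, "BodyTemperature": 98.6, "BloodPressure": {"Systolic": 120, "Diastolic": 80}}'
```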
+## View test data in the FHIR service (Optional)
+
+If you provided your own Azure AD user object ID as the optional Fhir Contributor Principal ID when deploying this tutorial's template, then you have access to query FHIR resources in the FHIR service.
+
+Use the [Access using Postman](/azure/healthcare-apis/fhir/use-postman) tutorial to get an Azure AD access token and view FHIR resources in the FHIR service.
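If you'd rather query from PowerShell than Postman, a minimal sketch follows; it assumes the Az.Accounts module and a placeholder FHIR service URL, and the LOINC code `8867-4` (heart rate) matches a value in the test message:

```azurepowershell
# Get a token for the FHIR service, then query heart rate Observations (LOINC 8867-4).
$fhirUrl = 'https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com'
$token = (Get-AzAccessToken -ResourceUrl $fhirUrl).Token
Invoke-RestMethod -Method Get `
  -Uri "$fhirUrl/Observation?code=http://loinc.org|8867-4" `
  -Headers @{ Authorization = "Bearer $token" }
```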
## Next steps
-In this tutorial, you set up an Azure IoT Hub to route device data to MedTech service.
+In this tutorial, you deployed an Azure IoT Hub to route device data to the MedTech service.
+
+To learn how to use device mappings, see
+
+> [!div class="nextstepaction"]
+> [How to use device mappings](how-to-use-device-mappings.md)
-To learn about the different stages of data flow within MedTech service, see
+To learn more about FHIR destination mappings, see
->[!div class="nextstepaction"]
->[MedTech service data flow](iot-data-flow.md)
+> [!div class="nextstepaction"]
+> [How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Device Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-device-mappings.md
Previously updated : 10/25/2022 Last updated : 11/08/2022
-# Device mappings overview
+# How to configure device mappings
This article provides an overview and describes how to configure the MedTech service device mappings.
The second type, **Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings**, controls the mapping for FHIR resources.
The two types of mappings are composed into a JSON document based on their type. These JSON documents are then added to your MedTech service through the Azure portal. The device mapping is added through the **Device mapping** page and the FHIR destination mapping through the **Destination** page.
-## How to configure device mappings
+## Device mappings overview
Device mappings provide functionality to extract device message content into a common format for further evaluation. Each device message received is evaluated against all device mapping templates. A single inbound device message can be separated into multiple outbound messages that are later mapped to different observations in the FHIR service. The result is a normalized data object representing the value or values parsed by the device mapping templates.
iot-central Howto Administer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-administer.md
To access and use the **Settings > Application** and **Settings > Customization*
In the **Application > Management** page, you can change the name and URL of your application, then select **Save**.
-![Application management page](media/howto-administer/image-a.png)
If your administrator creates a custom theme for your application, this page includes an option to hide the **Application Name** in the UI. This option is useful if the application logo in the custom theme includes the application name. For more information, see [Customize the Azure IoT Central UI](./howto-customize-ui.md).
-> [!Note]
-> If you change your URL, your old URL can be taken by another Azure IoT Central customer. If that happens, it is no longer available for you to use. When you change your URL, the old URL no longer works, and you need to notify your users about the new URL to use.
+If you change your URL, your old URL can be taken by another Azure IoT Central customer, and it's then no longer available for you to use. The old URL stops working, so notify your users about the new URL to use.
## Delete an application Use the **Delete** button to permanently delete your IoT Central application. This action permanently deletes all data that's associated with the application.
-> [!Note]
-> To delete an application, you must also have permissions to delete resources in the Azure subscription you chose when you created the application. To learn more, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+To delete an application, you must also have permissions to delete resources in the Azure subscription you chose when you created the application. To learn more, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
## Manage programmatically
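For example, the Az.IotCentral PowerShell module exposes the same management operations as the portal pages above. A minimal sketch, assuming the module is installed and using placeholder names:

```azurepowershell
# List IoT Central applications in a resource group, then delete one.
Get-AzIotCentralApp -ResourceGroupName '<resource-group>'
Remove-AzIotCentralApp -ResourceGroupName '<resource-group>' -Name '<app-name>'
```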
iot-central Howto Configure File Uploads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-file-uploads.md
To configure device file uploads:
1. Select **Save**. When the status shows **Configured**, you're ready to upload files from devices. ## Disable device file uploads
If you enabled access to files in the file upload configuration, users with the
To view and delete uploaded files, navigate to the **Files** view for a device. On this page, you can see thumbnails of the uploaded files and toggle between a gallery and list view. Each file has options to download or delete it: > [!TIP] > The file type is determined by the mime type assigned to the file when it was uploaded to blob storage. The default type is `binary/octet-stream`.
You can customize the list view by filtering based on file name and choosing the
To preview the content of the file and get more information about the file, select it. IoT Central supports previews of common file types such as text and images: ## Next steps
iot-central Howto Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-rules.md
Rules in IoT Central serve as a customizable response tool that trigger on activ
Use the target devices section to select on what kind of devices this rule will be applied. Filters allow you to further refine what devices should be included. The filters use properties on the device template to filter down the set of devices. Filters themselves don't trigger an action. In the following screenshot, the devices that are being targeted are of device template type **Refrigerator**. The filter states that the rule should only include **Refrigerators** where the **Manufactured State** property equals **Washington**. ## Use multiple conditions Conditions are what rules trigger on. You can add multiple conditions to a rule and specify if the rule should trigger when all the conditions are true or any of the conditions are true.
-In the following screenshot, the conditions check when the temperature is greater than 70&deg; F and the humidity is less than 10. When any of these statements are true, the rule evaluates to true and triggers an action.
+In the following screenshot, the conditions check when the temperature is greater than 70&deg; F and the humidity is less than 10%. When any of these statements are true, the rule evaluates to true and triggers an action.
:::image type="content" source="media/howto-configure-rules/conditions.png" alt-text="Screenshot shows a refrigerator monitor with conditions specified for temperature and humidity." lightbox="media/howto-configure-rules/conditions.png"::: > [!NOTE]
-> Currently only Telemetry Conditions are supported.
+> Currently only telemetry conditions are supported.
### Use a cloud property in a value field
If you choose an event type telemetry value, the **Value** drop-down includes th
You can specify a time aggregation to trigger your rule based on a time window. Rule conditions evaluate aggregate time windows on telemetry data as tumbling windows. If there are any property filters in the rule, they're applied at the end of the time window. In the screenshot below, the time window is five minutes. Every five minutes, the rule evaluates on the last five minutes of telemetry data. The data is only evaluated once in the window to which it corresponds. ## Create an email action When you create an email action, the email address must be a **user ID** in the application, and the user must have signed in to the application at least once. You can also specify a note to include in the email. IoT Central shows an example of what the email will look like when the rule triggers: ## Create a webhook action
In this example, you connect to *RequestBin* to get notified when a rule fires:
1. Add an action to your rule:
- :::image type="content" source="media/howto-configure-rules/webhook-create.png" alt-text="Screenshot that shows the webhook creation screen.":::
+ :::image type="content" source="media/howto-configure-rules/webhook-create.png" alt-text="Screenshot that shows the webhook creation screen." lightbox="media/howto-configure-rules/webhook-create.png":::
1. Choose the webhook action, enter a display name, and paste the RequestBin URL as the **Callback URL**.
An action group can:
The following screenshot shows an action group that sends email and SMS notifications and calls a webhook: To use an action group in an IoT Central rule, the action group must be in the same Azure subscription as the IoT Central application.
When you add an action to the rule in IoT Central, select **Azure Monitor Action
Choose an action group from your Azure subscription: Select **Save**. The action group now appears in the list of actions to run when the rule is triggered.
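Because the action group must be in the same subscription as the IoT Central application, you may want to script its creation. A rough sketch with the Az.Monitor module; the receiver address, group name, and short name are illustrative, and parameter shapes can differ across module versions:

```azurepowershell
# Create an email receiver and an action group that IoT Central rules can invoke.
$email = New-AzActionGroupReceiver -Name 'ops-email' -EmailReceiver -EmailAddress 'operator@contoso.com'
Set-AzActionGroup -Name 'iot-central-alerts' -ShortName 'iotcalerts' `
  -ResourceGroupName '<resource-group>' -Receiver $email
```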
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md
Not all device templates use components. The following screenshot shows the devi
The following screenshot shows a [temperature controller](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/temperaturecontroller-2.json) device template that uses components. The temperature controller has two thermostat components and a device information component: In IoT Central, a module refers to an IoT Edge module running on a connected IoT Edge device. A module can have a simple model such as the thermostat that doesn't use components. A module can also use components to organize a more complex set of capabilities. The following screenshot shows an example of a device template that uses modules. The environmental sensor device has a module called `SimulatedTemperatureSensor` and an inherited interface called `management`: ## Get a device component
iot-central Howto Create And Manage Applications Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-and-manage-applications-csp.md
You land on the Azure IoT Central Application Manager page. Azure IoT Central ke
![Create Manager for CSPs](media/howto-create-and-manage-applications-csp/image3.png) + To create an Azure IoT Central application, select **Build** in the left menu. Choose one of the industry templates, or choose **Custom app** to create an application from scratch. This will load the Application Creation page. You must complete all the fields on this page and then choose **Create**. ## Application name
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-analytics.md
Use the URL output by the script to navigate to the IoT Central application it c
| Host name | The event hub namespace host name, it's the value you assigned to `eventhubnamespace` in the earlier script | | Event Hub | The event hub name, it's the value you assigned to `eventhub` in the earlier script |
- :::image type="content" source="media/howto-create-custom-analytics/data-export-1.png" alt-text="Screenshot showing data export destination.":::
+ :::image type="content" source="media/howto-create-custom-analytics/data-export-1.png" alt-text="Screenshot showing data export destination." lightbox="media/howto-create-custom-analytics/data-export-1.png":::
1. Select **Save**.
To create the export definition:
1. Select **Save**.
- :::image type="content" source="media/howto-create-custom-analytics/data-export-2.png" alt-text="Screenshot showing data export definition.":::
Wait until the export status is **Healthy** on the **Data export** page before you continue.
iot-central Howto Create Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-rules.md
Title: Extend Azure IoT Central with custom rules and notifications | Microsoft
description: As a solution developer, configure an IoT Central application to send email notifications when a device stops sending telemetry. This solution uses Azure Stream Analytics, Azure Functions, and SendGrid. Previously updated : 06/21/2022 Last updated : 11/08/2022
Create an IoT Central application on the [Azure IoT Central application manager]
| URL | Accept the default or choose your own unique URL prefix | | Directory | Your Azure Active Directory tenant | | Azure subscription | Your Azure subscription |
-| Region | Your nearest region |
+| Location | Your nearest Azure data center |
-The examples and screenshots in this article use the **United States** region. Choose a location close to you and make sure you create all your resources in the same region.
+The examples and screenshots in this article use the **East US** location. Choose a location close to you and make sure you create all your resources in the same location.
This application template includes two simulated thermostat devices that send telemetry.
Use the [Azure portal to create a function app](https://portal.azure.com/#create
| App name | Choose your function app name | | Subscription | Your subscription | | Resource group | DetectStoppedDevices |
-| OS | Windows |
-| Hosting Plan | Consumption Plan |
-| Location | East US |
+| Publish | Code |
| Runtime Stack | .NET |
+| Region | East US |
+| OS | Windows |
+| Hosting Plan | Consumption (Serverless) |
| Storage | Create new | ### SendGrid account and API Keys If you don't have a SendGrid account, create a [free account](https://app.sendgrid.com/) before you begin.
-1. From the Sendgrid Dashboard Settings on the left menu, select **API Keys**.
-1. Click **Create API Key.**
+1. From the SendGrid dashboard, select **Settings > API Keys** on the left menu.
+1. Select **Create API Key.**
1. Name the new API key **AzureFunctionAccess.**
-1. Click **Create & View**.
+1. Select **Create & View**.
+
+ :::image type="content" source="media/howto-create-custom-rules/sendgrid-api-keys.png" alt-text="Screenshot that shows how to create a SendGrid API key." lightbox="media/howto-create-custom-rules/sendgrid-api-keys.png":::
- :::image type="content" source="media/howto-create-custom-rules/sendgrid-api-keys.png" alt-text="Screenshot of the Create SendGrid API key.":::
+Make a note of the generated API key; you'll use it later.
-Afterwards, you will be given an API key. Save this string for later use.
+Create a **Single Sender Verification** in your SendGrid account for the email address you'll use as the **From** address.
## Create an event hub
You can configure an IoT Central application to continuously export telemetry to
1. In the Azure portal, navigate to your Event Hubs namespace and select **+ Event Hub**. 1. Name your event hub **centralexport**, and select **Create**.
-Your Event Hubs namespace looks like the following screenshot:
- ## Define the function This solution uses an Azure Functions app to send an email notification when the Stream Analytics job detects a stopped device. To create your function app:
-1. In the Azure portal, navigate to the **App Service** instance in the **DetectStoppedDevices** resource group.
-1. Select **+** to create a new function.
-1. Select **HTTP Trigger**.
-1. Select **Add**.
+1. In the Azure portal, navigate to the **Function App** instance in the **DetectStoppedDevices** resource group.
+1. Select **Functions**, then **+ Create** to create a new function.
+1. Select **HTTP Trigger** as the function template.
+1. Select **Create**.
- :::image type="content" source="media/howto-create-custom-rules/add-function.png" alt-text="Image of the Default HTTP trigger function":::
## Edit code for HTTP Trigger
-The portal creates a default function called **HttpTrigger1**:
+The portal creates a default function called **HttpTrigger1**. Select **Code + Test**:
1. Replace the C# code with the following code:
The portal creates a default function called **HttpTrigger1**:
} ```
- You may see an error message until you save the new code.
1. Select **Save** to save the function. ## Add SendGrid Key
To add your SendGrid API Key, you need to add it to your **Function Keys** as fo
1. Select **Function Keys**. 1. Choose **+ New Function Key**. 1. Enter the *Name* and *Value* of the API Key you created before.
-1. Click **OK.**
-
- :::image type="content" source="media/howto-create-custom-rules/add-key.png" alt-text="Screenshot of Add Sangrid Key.":::
+1. Select **OK.**
## Configure HttpTrigger function to use SendGrid To send emails with SendGrid, you need to configure the bindings for your function as follows:
-1. Select **Integrate**.
-1. Choose **Add Output** under **HTTP ($return)**.
+1. Select **Integration**.
+1. Select **HTTP ($return)**.
1. Select **Delete.**
-1. Choose **+ New Output**.
-1. For Binding Type, then choose **SendGrid**.
-1. For SendGrid API Key Setting Type, click New.
+1. Select **+ Add output**.
+1. Select **SendGrid** as the binding type.
+1. For the **SendGrid API Key App Setting**, select **New**.
1. Enter the *Name* and *Value* of your SendGrid API key. 1. Add the following information: | Setting | Value | | - | -- |
-| Message parameter name | Choose your name |
-| To address | Choose the name of your To Address |
-| From address | Choose the name of your From Address |
-| Message subject | Enter your subject header |
-| Message text | Enter the message from your integration |
+| Message parameter name | name |
+| To address | Enter your To Address |
+| From address | Enter your SendGrid verified single sender email address |
+| Message subject | Device stopped |
+| Message text | The device connected to IoT Central has stopped sending telemetry. |
-1. Select **OK**.
-
- :::image type="content" source="media/howto-create-custom-rules/add-output.png" alt-text="Screenshot of Add SandGrid Output.":::
+1. Select **Save**.
### Test the function works
-To test the function in the portal, first choose **Logs** at the bottom of the code editor. Then choose **Test** to the right of the code editor. Use the following JSON as the **Request body**:
+To test the function in the portal, first select **Logs** at the bottom of the code editor. Then select **Test/Run**. Use the following JSON as the **Request body**:
```json [{"deviceid":"test-device-1","time":"2019-05-02T14:23:39.527Z"},{"deviceid":"test-device-2","time":"2019-05-02T14:23:50.717Z"},{"deviceid":"test-device-3","time":"2019-05-02T14:24:28.919Z"}]
To test the function in the portal, first choose **Logs** at the bottom of the c
The function log messages appear in the **Logs** panel: After a few minutes, the **To** email address receives an email with the following content:
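The same test can also be run from outside the portal by posting the JSON payload to the function's URL. A hedged sketch, assuming the default **HttpTrigger1** function name and a function key copied from the portal:

```azurepowershell
# Invoke the HTTP-triggered function directly with the sample payload.
$body = '[{"deviceid":"test-device-1","time":"2019-05-02T14:23:39.527Z"}]'
Invoke-RestMethod -Method Post `
  -Uri 'https://<function-app>.azurewebsites.net/api/HttpTrigger1?code=<function-key>' `
  -Body $body -ContentType 'application/json'
```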
This solution uses a Stream Analytics query to detect when a device stops sending telemetry.
1. Select **Save**. 1. To start the Stream Analytics job, choose **Overview**, then **Start**, then **Now**, and then **Start**:
- :::image type="content" source="media/howto-create-custom-rules/stream-analytics.png" alt-text="Screenshot of Stream Analytics.":::
+ :::image type="content" source="media/howto-create-custom-rules/stream-analytics.png" alt-text="Screenshot of Stream Analytics overview page." lightbox="media/howto-create-custom-rules/stream-analytics.png":::
-## Configure export in IoT Central
+## Configure export in IoT Central
On the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, navigate to the IoT Central application you created.
In this section, you configure the application to stream the telemetry from its devices to your event hub.
| Display Name | Export to Event Hubs |
| Enabled | On |
| Type of data to export | Telemetry |
- | Enrichments | Enter desired key / Value of how you want the exported data to be organized |
+ | Enrichments | Enter desired key / Value of how you want the exported data to be organized |
| Destination | Create New and enter information for where the data will be exported |
- :::image type="content" source="media/howto-create-custom-rules/cde-configuration.png" alt-text="Screenshot of the Data Export.":::
+ :::image type="content" source="media/howto-create-custom-rules/cde-configuration.png" alt-text="Screenshot of the data export settings in IoT Central." lightbox="media/howto-create-custom-rules/cde-configuration.png":::
Wait until the export status is **Running** before you continue.
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md
After you define your organization hierarchy, assign your devices to organizations.
When you create a new device in your application, assign it to an organization in your hierarchy:

To assign or reassign an existing device to an organization, select the device in the device list and then select **Organization**:
You assign gateway and downstream devices to organizations. You don't have to assign them to the same organization.
When you create the first organization in your application, IoT Central adds three new roles in your application: **Org Administrator**, **Org Operator**, and **Org Viewer**. These roles are necessary because an organization user can't access certain application-wide capabilities such as pricing plans, branding and colors, API tokens, and application-wide enrollment group information. You can use these roles when you invite users to an organization in your application.
To create a custom role for your organization users, create a new role and choose the **Organization** role type:
-Then select the permissions for the role:
-
+Then select the permissions for the role.
## Invite users
After you've created your organization hierarchy and assigned devices to organizations, you can invite users to your application.
To invite a user, navigate to **Permissions > Users**. Enter their email address, the organization they're assigned to, and the role or roles the user is a member of. The organization you select filters the list of available roles to make sure you assign the user to a valid role:

You can assign the same user to multiple organizations. The user can have a different role in each organization they're assigned to:
iot-central Howto Create Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-private-endpoint.md
To create a private endpoint on an existing IoT Central application:
1. On the **DNS** tab, select **Yes** for **Integrate with private DNS zone.** The private DNS resolves all the required endpoints to private IP addresses in your virtual network.
- :::image type="content" source="media/howto-create-private-endpoint/private-dns-integrationΓÇï.png" alt-text="Screenshot from Azure portal that shows private D N S integration.":::
+ :::image type="content" source="media/howto-create-private-endpoint/private-dns-integration.png" alt-text="Screenshot from Azure portal that shows private DNS integration.":::
> [!NOTE]
> Because of the autoscale capabilities in IoT Central, you should use the **Private DNS integration** option if at all possible. If for some reason you can't use this option, see [Use a custom DNS server](#use-a-custom-dns-server).
When you configure a private endpoint for your IoT Central application, the IoT
Update your device code to use the direct DPS endpoint.

## Best practices
iot-central Howto Customize Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-customize-ui.md
This article describes how you can customize the UI of your application by applying custom themes and modifying the help links.
The following screenshot shows a page using the standard theme:
-![Standard IoT Central theme](./media/howto-customize-ui/standard-ui.png)
The following screenshot shows a page using a custom screenshot with the customized UI elements highlighted:
-![Custom IoT Central theme](./media/howto-customize-ui/themed-ui.png)
+
+> [!TIP]
+> You can also customize the image shown in the browser's address bar and list of favorites.
## Create theme

To create a custom theme, navigate to the **Appearance** section in the **Customization** page.
-![IoT Central themes](./media/howto-customize-ui/themes.png)
On this page, you can customize the following aspects of your application:
To provide custom help information to your operators and other users, you can modify the application's help links.
To modify the help links, navigate to the **Help links** section in the **Customization** page.
-![Customize IoT Central help links](./media/howto-customize-ui/help-links.png)
You can also add new entries to the help menu and remove default entries:
-![Customized IoT Central help](./media/howto-customize-ui/custom-help.png)
> [!NOTE] > You can always revert back to the default help links on the **Customization** page.
The following example shows how to change the word `Device` to `Asset` when you view devices in the application.
1. Upload your edited customization file and select **Save** to see your new text in the application:
- :::image type="content" source="media/howto-customize-ui/upload-custom-text.png" alt-text="Screenshot showing how to upload custom text file.":::
+ :::image type="content" source="media/howto-customize-ui/upload-custom-text.png" alt-text="Screenshot showing how to upload a custom text file." lightbox="media/howto-customize-ui/upload-custom-text.png":::
The UI now uses the new text values:
- :::image type="content" source="media/howto-customize-ui/updated-ui-text.png" alt-text="Screenshot that shows updated text in the U I.":::
+ :::image type="content" source="media/howto-customize-ui/updated-ui-text.png" alt-text="Screenshot that shows updated text in the UI." lightbox="media/howto-customize-ui/updated-ui-text.png":::
You can reupload the customization file with further changes by selecting the relevant language from the list on the **Text** section in the **Customization** page.
iot-central Howto Export To Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-azure-data-explorer.md
Title: Export data to Azure Data Explorer IoT Central | Microsoft Docs description: How to use the new data export to export your IoT data to Azure Data Explorer Last updated 04/28/2022
iot-central Howto Export To Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-blob-storage.md
Title: Export data to Blob Storage IoT Central | Microsoft Docs description: How to use the new data export to export your IoT data to Blob Storage Last updated 04/28/2022
iot-central Howto Export To Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-event-hubs.md
Title: Export data to Event Hubs IoT Central | Microsoft Docs description: How to use the new data export to export your IoT data to Event Hubs Last updated 04/28/2022
iot-central Howto Export To Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-service-bus.md
Title: Export data to Service Bus IoT Central | Microsoft Docs description: How to use the new data export to export your IoT data to Service Bus Last updated 04/28/2022
iot-central Howto Export To Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-webhook.md
Title: Export data to Webhook IoT Central | Microsoft Docs description: How to use the new data export to export your IoT data to Webhook Last updated 04/28/2022
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md
The following example shows you how to create and run a job to set the light thr
1. On the left pane, select **Jobs**.
-1. Select **+ New job**.
+1. Select **+ New**.
1. On the **Configure your job** page, enter a name and description to identify the job you're creating.
The following example shows you how to create and run a job to set the light thr
1. Select the target device group that you want your job to apply to. If your application uses organizations, the selected organization determines the available device groups. You can see how many devices your job configuration applies to below your **Device group** selection.
-1. Choose **Cloud property**, **Property**, **Command**, **Change device template**, or **Change edge deployment manifest** as the **Job type**:
+1. Choose **Cloud property**, **Property**, **Command**, **Change device template**, or **Change edge deployment manifest** as the **Job type**. To configure a:
- To configure a **Property** job, select a property and set its new value. A property job can set multiple properties. To configure a **Command** job, choose the command to run. To configure a **Change device template** job, select the device template to assign to the devices in the device group. To configure a **Change edge deployment manifest** job, select the IoT Edge deployment manifest to assign to the IoT Edge devices in the device group.
+ * **Property** job, select a property and set its new value. A property job can set multiple properties.
+ * **Command** job, choose the command to run.
+ * **Change device template** job, select the device template to assign to the devices in the device group.
+ * **Change edge deployment manifest** job, select the IoT Edge deployment manifest to assign to the IoT Edge devices in the device group.
Select **Save and exit** to add the job to the list of saved jobs on the **Jobs** page. You can later return to a job from the list of saved jobs.
-1. Select **Next** to move to the **Delivery Options** page. The **Delivery Options** page lets you set the delivery options for this job: **Batches** and **Cancellation threshold**.
+1. Select **Next** to move to the **Delivery Options** page. The **Delivery Options** page lets you set the **Batches** and **Cancellation threshold** delivery options for this job:
Batches let you stagger jobs for large numbers of devices. The job is divided into multiple batches and each batch contains a subset of the devices. The batches are queued and run in sequence.
The following example shows you how to create and run a job to set the light thr
1. Select **Next** to move to the **Review** page. The **Review** page shows the job configuration details. Select **Schedule** to schedule the job:
- :::image type="content" source="media/howto-manage-devices-in-bulk/job-wizard-schedule-review.png" alt-text="Screenshot of scheduled job wizard review page":::
+ :::image type="content" source="media/howto-manage-devices-in-bulk/job-wizard-schedule-review.png" alt-text="Screenshot of scheduled job wizard review page." lightbox="media/howto-manage-devices-in-bulk/job-wizard-schedule-review.png":::
1. The job details page shows information about scheduled jobs. When the scheduled job executes, you see a list of the job instances. The scheduled job execution is also part of the **Last 30-day** job list. On this page, you can **Unschedule** the job or **Edit** the scheduled job. You can return to a scheduled job from the list of scheduled jobs.
-1. In the job wizard, you can choose to not schedule a job, and run it immediately. The following screenshot shows a job without a schedule that's ready to run immediately. Select **Run** to run the job:
+1. In the job wizard, you can choose not to schedule a job and instead run it immediately.
1. A job goes through *pending*, *running*, and *completed* phases. The job execution details contain result metrics, duration details, and a device list grid.
- When the job is complete, you can select **Results log** to download a CSV file of your job details, including the devices and their status values. This information can be useful for troubleshooting.
+ When the job is complete, you can select **Results log** to download a CSV file of your job details, including the devices and their status values. This information can be useful for troubleshooting:
- :::image type="content" source="media/howto-manage-devices-in-bulk/download-details.png" alt-text="Screenshot that shows device status":::
+ :::image type="content" source="media/howto-manage-devices-in-bulk/download-details.png" alt-text="Screenshot that shows device status." lightbox="media/howto-manage-devices-in-bulk/download-details.png":::
1. The job now appears in **Last 30 days** list on the **Jobs** page. This page shows currently running jobs and the history of any previously run or saved jobs.
To stop a running job, open it and select **Stop**. The job status changes to reflect that the job is stopped.
When a job is in a stopped state, you can select **Continue** to resume running the job. The job status changes to reflect that the job is now running again. The **Summary** section continues to update with the latest progress.

## Copy a job
-To copy an existing job, select an executed job. Select **Copy** on the job results page or jobs details page:
--
-A copy of the job configuration opens for you to edit, and **Copy** is appended to the job name.
+To copy an existing job, select an executed job. Select **Copy** on the job results page or jobs details page. A copy of the job configuration opens for you to edit, and **Copy** is appended to the job name.
## View job status
To download a CSV file that includes the job details and the list of devices and
You can filter the device list on the **Job details** page by selecting the filter icon. You can filter on the **Device ID** or **Status** field:

## Customize columns in the device list

You can add columns to the device list by selecting the column options icon:

Use the **Column options** dialog box to choose the device list columns. Select the columns that you want to display, select the right arrow, and then select **OK**. To select all the available columns, choose **Select all**. The selected columns appear in the device list.
Selected columns persist across a user session or across user sessions that have
## Rerun jobs
-You can rerun a job that has failed devices. Select **Rerun on failed**:
-
+You can rerun a job that has failed devices. Select **Rerun on failed**.
Enter a job name and description, and then select **Rerun job**. A new job is submitted to retry the action on failed devices.
## Import devices
-To register a large number of devices to your application, you can bulk import devices from a CSV file. You can find an example CSV file in the [Azure Samples repository](https://github.com/Azure-Samples/iot-central-docs-samples/tree/main/bulk-upload-devices). The CSV file should include the following column headers:
+To register a large number of devices to your application, you can bulk import devices from a CSV file. You can find an example CSV file in the [Azure Samples repository](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/bulk-upload-devices/IoT%20Central%20Bulk%20Upload%20Sample%20File.csv). The CSV file should include the following column headers:
| Column | Description |
| - | - |
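For illustration, a minimal import file might look like the following sketch. The header names follow the linked sample file (`IOTC_DEVICEID` and `IOTC_DEVICENAME`); the device values are made up, so verify the headers against the sample file before importing:

```csv
IOTC_DEVICEID,IOTC_DEVICENAME
building1-device-001,Thermostat 001
building1-device-002,Thermostat 002
```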
To bulk-register devices in your application:
1. Select **Import**.
- :::image type="content" source="media/howto-manage-devices-in-bulk/bulk-import-1.png" alt-text="Screenshot showing import action settings.":::
1. Select an organization to assign the devices to. All the devices you're importing are assigned to the same organization. To assign devices to different organizations, create multiple import files, one for each organization. Alternatively, upload them all to the root organization and then in the UI reassign them to the correct organizations.
1. Select the CSV file that has the list of device IDs to be imported.
To bulk-register devices in your application:
1. Once the import completes, a success message is shown in the **Device Operations** panel.
- :::image type="content" source="media/howto-manage-devices-in-bulk/bulk-import-2.png" alt-text="Screenshot showing import success.":::
+ :::image type="content" source="media/howto-manage-devices-in-bulk/bulk-import.png" alt-text="Screenshot showing import success." lightbox="media/howto-manage-devices-in-bulk/bulk-import.png":::
If the device import operation fails, you see an error message on the **Device Operations** panel. A log file capturing all the errors is generated that you can download.
To bulk export devices from your application:
1. Select the **Download File** link to download the file to a local folder on the disk.
- ![Export Success](./media/howto-manage-devices-in-bulk/export-2.png)
+ :::image type="content" source="media/howto-manage-devices-in-bulk/export.png" alt-text="Screenshot that shows a successful device export." lightbox="media/howto-manage-devices-in-bulk/export.png":::
1. The exported CSV file contains the following columns: device ID, device name, device keys, and X509 certificate thumbprints:
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md
To view an individual device:
1. Choose a device template.
-1. In the right-hand pane of the **Devices** page, you see a list of devices accessible to your organization created from that device template. Choose an individual device to see the device details page for that device:
+1. In the right-hand pane of the **Devices** page, you see a list of devices accessible to your organization created from that device template:
- :::image type="content" source="media/howto-manage-devices-individually/device-list.png" alt-text="Screenshot showing device list.":::
+ :::image type="content" source="media/howto-manage-devices-individually/device-list.png" alt-text="Screenshot showing the device list." lightbox="media/howto-manage-devices-individually/device-list.png":::
+
+ Choose an individual device to see the device details page for that device.
> [!TIP]
> You can use the filter tool on this page to view devices in a specific organization.
To move a device to a different organization, you must have access to both the source and destination organizations.
1. Select the new organization for the device:
- :::image type="content" source="media/howto-manage-devices-individually/change-device-organization.png" alt-text="Screenshot that shows how to move a device to a new organization.":::
+ :::image type="content" source="media/howto-manage-devices-individually/change-device-organization.png" alt-text="Screenshot that shows how to move a device to another organization." lightbox="media/howto-manage-devices-individually/change-device-organization.png":::
1. Select **Save**.
If you register devices by starting the import under **All devices**, then the d
1. Choose **Devices** on the left pane.
-1. On the left panel, choose **All devices**:
-
- :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-1.png" alt-text="Screenshot showing unassigned devices.":::
-
-1. Use the filter on the grid to determine if the value in the **Device Template** column is **Unassigned** for any of your devices.
+1. On the left panel, choose **All devices**.
-1. Select the devices you want to assign to a template.
+1. Select the **unassigned** devices you want to assign to a template:
1. Select **Migrate**:
- :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-2.png" alt-text="Screenshot showing how to assign a device.":::
+ :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices.png" alt-text="Screenshot showing how to assign a device to a device template." lightbox="media/howto-manage-devices-individually/unassociated-devices.png":::
1. Choose the template from the list of available templates and select **Migrate**.
iot-central Howto Manage Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-preferences.md
Last updated 06/22/2022
- # This article applies to operators, builders, and administrators.
IoT Central is supported in multiple languages. You can switch your preferred la
We have support for both dark theme and light theme. While the light theme is the default, you can change the theme by selecting the settings icon on the top navigation bar.
-![IoT Central theme picker](media/howto-manage-preferences/settings.png)
> [!NOTE]
> The option to choose between light and dark themes isn't available if your administrator has configured a custom theme for the application.
## Change default organization

If your application uses organizations, you can select a default organization to use whenever you need to select an organization. For example, the default organization pre-populates the organization field when you add a new device to your application.
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
Every user must have a user account before they can sign in and access an application.
1. To add a user to an IoT Central application, go to the **Users** page in the **Permissions** section.
- :::image type="content" source="media/howto-manage-users-roles/manage-users.png" alt-text="Screenshot of manage users page in IoT Central." lightbox="media/howto-manage-users-roles/manage-users.png":::
+ :::image type="content" source="media/howto-manage-users-roles/manage-users.png" alt-text="Screenshot that shows the manage users page in IoT Central." lightbox="media/howto-manage-users-roles/manage-users.png":::
1. To add a user on the **Users** page, choose **+ Assign user**. To add a service principal on the **Users** page, choose **+ Assign service principal**. To add an Azure Active Directory group on the **Users** page, choose **+ Assign group**. Start typing the name of the Active Directory group or service principal to auto-populate the form.
Every user must have a user account before they can sign in and access an applic
1. Choose a role for the user from the **Role** drop-down menu. Learn more about roles in the [Manage roles](#manage-roles) section of this article.
- :::image type="content" source="media/howto-manage-users-roles/add-user.png" alt-text="Screenshot to add a user and select a role." lightbox="media/howto-manage-users-roles/add-user.png":::
+ :::image type="content" source="media/howto-manage-users-roles/add-user.png" alt-text="Screenshot showing how to add a user and select a role." lightbox="media/howto-manage-users-roles/add-user.png":::
The available roles depend on the organization the user is associated with. You can assign **App** roles to users associated with the root organization, and **Org** roles to users associated with any other organization in the hierarchy.
To delete users, select one or more check boxes on the **Users** page. Then select **Delete**.
Roles enable you to control who within your organization is allowed to do various tasks in IoT Central. There are three built-in roles you can assign to users of your application. You can also [create custom roles](#create-a-custom-role) if you require finer-grained control.

### App Administrator
iot-central Howto Map Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-map-data.md
To create a mapping in your IoT Central application, choose one of the following options:
* From any device page, select **Manage device > Map data**:
- :::image type="content" source="media/howto-map-data/manage-device.png" alt-text="Screenshot that shows the **Map data** menu item.":::
+ :::image type="content" source="media/howto-map-data/manage-device.png" alt-text="Screenshot that shows the **Map data** menu item." lightbox="media/howto-map-data/manage-device.png":::
* From the **Raw data** view for your device, expand any telemetry message, hover the mouse pointer over a path, and select **Add alias**. The **Map data** panel opens with the JSONPath expression copied to the **JSON path** field:
- :::image type="content" source="media/howto-map-data/raw-data.png" alt-text="Screenshot that shows the **Add alias** option on the **Raw data** view.":::
+ :::image type="content" source="media/howto-map-data/raw-data.png" alt-text="Screenshot that shows the **Add alias** option on the **Raw data** view." lightbox="media/howto-map-data/raw-data.png":::
The left-hand side of the **Map data** panel shows the latest message from your device. Hover the mouse pointer over any part of the data and select **Add Alias**. The JSONPath expression is copied to **JSON path**. Add an **Alias** name with no more than 64 characters. You can't use the alias to refer to a field in a complex object defined in the device template. Add as many mappings as you need and then select **Save**:
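For example, given a sample telemetry message like the following (the field names are illustrative), the JSONPath expression `$.telemetry.temperature` selects the temperature value, which you could map to an alias such as `temp`:

```json
{
  "telemetry": {
    "temperature": 21.5,
    "humidity": 49
  }
}
```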
For a given device:
To verify that IoT Central is mapping the telemetry, navigate to the **Raw data** view for your device and check the `_mappeddata` section:

If you don't see your mapped data after refreshing the **Raw data** view several times, check that the JSONPath expression you're using matches the structure of the telemetry message.
For devices assigned to a device template, you can't map data for components or modules.
To view, edit, or delete mappings, navigate to the **Mapped aliases** page. Select a mapping to edit or delete it. You can select multiple mappings and delete them at the same time:

By default, data exports from IoT Central include mapped data. To exclude mapped data, use a [data transformation](howto-transform-data-internally.md) in your data export.
iot-central Howto Migrate To Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-migrate-to-iot-hub.md
The migrator tool requires an Azure Active Directory application registration to
1. Enter a name such as "IoTC Migrator app".
-1. Select **Accounts in this organizational directory only (iot-partners only - Single tenant)**.
+1. Select **Accounts in this organizational directory only ({your directory} only - Single tenant)**.
1. Select **Single page application (SPA)**.
The migrator tool requires an Azure Active Directory application registration to
1. Make a note of the **Application (client) ID** and **Directory (tenant) ID** values. You use these values later to configure the migrator app:
- :::image type="content" source="media/howto-migrate-to-iot-hub/azure-active-directry-app.png" alt-text="Screenshot that shows the Azure Active Directory application in the Azure portal.":::
+ :::image type="content" source="media/howto-migrate-to-iot-hub/azure-active-directory-app.png" alt-text="Screenshot that shows the Azure Active Directory application in the Azure portal." lightbox="media/howto-migrate-to-iot-hub/azure-active-directory-app.png":::
### Add the device keys to DPS
Use the tool to migrate your devices in batches. Enter the migration details on
1. Select the DPS instance linked to your target IoT hub.
1. Select **Migrate**.

The tool now registers all the connected devices that matched the target device filter in the destination IoT hub. The tool then creates a job in your IoT Central application to call the **DeviceMove** method on all those devices. The command payload contains the ID scope of the destination DPS instance.
The tool now registers all the connected devices that matched the target device
The **Migration status** page in the tool shows you when the migration is complete:

Select a job on this page to view the [job status](howto-manage-devices-in-bulk.md#view-job-status) in your IoT Central application. Use this page to view the status of the individual devices in the job:

Devices that migrated successfully:

- Show as **Disconnected** on the devices page in your IoT Central application.
- Show as registered and provisioned in your IoT hub:
- :::image type="content" source="media/howto-migrate-to-iot-hub/destination-devices.png" alt-text="Screenshot of IoT Hub in the Azure portal that shows the provisioned devices.":::
+ :::image type="content" source="media/howto-migrate-to-iot-hub/destination-devices.png" alt-text="Screenshot of IoT Hub in the Azure portal that shows the provisioned devices." lightbox="media/howto-migrate-to-iot-hub/destination-devices.png":::
- Are now sending telemetry to your IoT hub
- :::image type="content" source="media/howto-migrate-to-iot-hub/destination-metrics.png" alt-text="Screenshot of IoT Hub in the Azure portal that shows telemetry metrics for the migrated devices.":::
+ :::image type="content" source="media/howto-migrate-to-iot-hub/destination-metrics.png" alt-text="Screenshot of IoT Hub in the Azure portal that shows telemetry metrics for the migrated devices." lightbox="media/howto-migrate-to-iot-hub/destination-metrics.png":::
## Next steps
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
If your device template uses components such as the **Device information** compo
You can find the component name in the device template:

The following limits apply in the `SELECT` clause:
The `FROM` clause must contain a device template ID. The `FROM` clause specifies
To find a device template ID, navigate to the **Devices** page in your IoT Central application and hover over a device that uses the template. The card includes the device template ID:

You can also use the [Devices - Get](/rest/api/iotcentral/2022-07-31dataplane/devices/get) REST API call to get the device template ID for a device.
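Putting the clauses together, a query request body might look like the following sketch; the device template ID and telemetry name are illustrative placeholders:

```json
{
  "query": "SELECT $id, temperature FROM dtmi:example:thermostat;1 WHERE temperature > 20"
}
```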
iot-central Howto Transform Data Internally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data-internally.md
# Transform data inside your IoT Central application for export

IoT devices send data in various formats. To use the device data in your IoT solution, you may need to transform your device data before it's exported to other services. This article shows you how to transform device data as part of a data export definition in an IoT Central application.
The following video introduces you to IoT Central data transformations:
To add a transformation for a destination in your data export, select **+ Transform** as shown in the following screenshot:

The **Data Transformation** panel lets you specify the transformation. In the **1. Add your input message** section, you can enter a sample message that you want to pass through the transformation. You can also generate a sample message by selecting a device template. In the **2. Build transformation query** section, you can enter the query that transforms the input message. The **3. Preview output message(s)** section shows the result of the transformation:

> [!TIP]
> If you don't know the format of your input message, use `.` as the query to export the message as is to a destination such as a Webhook. Then paste the message received by the webhook into **1. Add your input message**. Then build a transform query to process this message into your required output format.
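Transformation queries use JQ syntax. As a minimal sketch, assuming an input message that contains `device.id` and `telemetry.temperature` fields, the following query reshapes the message into a flat output:

```
{
  deviceId: .device.id,
  temperature: .telemetry.temperature
}
```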
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
In the example described in the following sections, the downstream device sends
-You want to use an IoT Edge module to transform the data and convert the temperature value from `Celsius` to `Fahrenheit` before sending it to IoT Central:
+You use an IoT Edge module to transform the data and convert the temperature value from `Celsius` to `Fahrenheit` (°F = °C × 9/5 + 32) before sending it to IoT Central:
For simplicity, the code for the downstream device provisions the device in IoT
To verify the scenario is running, navigate to your **IoT Edge gateway device** in IoT Central:

- Select **Modules**. Verify that the three IoT Edge modules **$edgeAgent**, **$edgeHub**, and **transformmodule** are running.
- Select **Raw data**. The telemetry data in the **Device** column looks like:
iot-central Howto Use Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-audit-logs.md
The log stores data for 30 days, after which it's no longer available.
The following screenshot shows the audit log view with the location of the sorting and filtering controls highlighted:

## Customize the log

Select **Column options** to customize the audit log view. You can add and remove columns, reorder the columns, and change the column widths:

## Sort the log

You can sort the log into ascending or descending timestamp order. To sort, select **Timestamp**:

## Filter the log

To focus on a specific time, filter the log by time range. Select **Edit time range** and specify the range you're interested in:

To focus on specific entries, filter by entity type or action. Select **Filter** and use the multi-select drop-downs to specify your filter conditions:

## Manage access
iot-dps Concepts X509 Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-x509-attestation.md
To learn more, see [Authenticate devices signed with X.509 CA certificates](../i
The provisioning service exposes two enrollment types that you can use to control device access with the X.509 attestation mechanism:

- [Individual enrollment](./concepts-service.md#individual-enrollment) entries are configured with the device certificate associated with a specific device. These entries control enrollments for specific devices.
-- [Enrollment group](./concepts-service.md#enrollment-group) entries are associated with a specific intermediate or root CA certificate. These entries control enrollments for all devices that have that intermediate or root certificate in their certificate chain.
+- [Enrollment group](./concepts-service.md#enrollment-group) entries are associated with a specific intermediate or root CA certificate. These entries control enrollments for all devices that have that intermediate or root certificate in their certificate chain.
+
+A certificate can be specified in only one enrollment entry in your DPS instance.
### Mutual TLS support
If the device sends the full device chain as follows during provisioning, then D
### DPS order of operations with certificates
-When a device connects to the provisioning service, the service prioritizes more specific enrollment entries over less specific enrollment entries. That is, if an individual enrollment for the device exists, the provisioning service applies that entry. If there is no individual enrollment for the device and an enrollment group for the first intermediate certificate in the device's certificate chain exists, the service applies that entry, and so on, down the chain to the root. The service applies the first applicable entry that it finds, such that:
+When a device connects to the provisioning service, the service walks its certificate chain beginning with the device (leaf) certificate and looks for a corresponding enrollment entry. It uses the first entry that it finds in the chain to determine whether to provision the device. That is, if an individual enrollment for the device (leaf) certificate exists, the provisioning service applies that entry. If there isn't an individual enrollment for the device, the service looks for an enrollment group that corresponds to the first intermediate certificate. If it finds one, it applies that entry; otherwise, it looks for an enrollment group for the next intermediate certificate, and so on down the chain to the root.
+
+The service applies the first entry that it finds, such that:
- If the first enrollment entry found is enabled, the service provisions the device.-- If the first enrollment entry found is disabled, the service does not provision the device. -- If no enrollment entry is found for any of the certificates in the device's certificate chain, the service does not provision the device.
+- If the first enrollment entry found is disabled, the service doesn't provision the device.
+- If no enrollment entry is found for any of the certificates in the device's certificate chain, the service doesn't provision the device.
+
+Note that each certificate in a device's certificate chain can be specified in an enrollment entry, but it can be specified in only one entry in the DPS instance.
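The lookup order can be sketched as follows; this is illustrative pseudocode for the behavior described above, not DPS implementation code:

```python
def find_enrollment_entry(cert_chain, individual_enrollments, enrollment_groups):
    """cert_chain is ordered [leaf, intermediate(s)..., root]."""
    leaf = cert_chain[0]
    # An individual enrollment for the device (leaf) certificate takes priority.
    if leaf in individual_enrollments:
        return individual_enrollments[leaf]
    # Otherwise, walk up the chain looking for an enrollment group.
    for ca_cert in cert_chain[1:]:
        if ca_cert in enrollment_groups:
            return enrollment_groups[ca_cert]
    return None  # No entry found: the device isn't provisioned.

# The device is provisioned only if the first entry found is enabled:
# entry = find_enrollment_entry(chain, individual_enrollments, enrollment_groups)
# provisioned = entry is not None and entry.enabled
```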
-This mechanism and the hierarchical structure of certificate chains provides powerful flexibility in how you can control access for individual devices as well as for groups of devices. For example, imagine five devices with the following certificate chains:
+This mechanism and the hierarchical structure of certificate chains provides powerful flexibility in how you can control access for individual devices as well as for groups of devices. For example, imagine five devices with the following certificate chains:
- *Device 1*: root certificate -> certificate A -> device 1 certificate
- *Device 2*: root certificate -> certificate A -> device 2 certificate
This mechanism and the hierarchical structure of certificate chains provides pow
- *Device 4*: root certificate -> certificate B -> device 4 certificate
- *Device 5*: root certificate -> certificate B -> device 5 certificate
-Initially, you can create a single enabled group enrollment entry for the root certificate to enable access for all five devices. If certificate B later becomes compromised, you can create a disabled enrollment group entry for certificate B to prevent *Device 4* and *Device 5* from enrolling. If still later *Device 3* becomes compromised, you can create a disabled individual enrollment entry for its certificate. This revokes access for *Device 3*, but still allows *Device 1* and *Device 2* to enroll.
+Initially, you can create a single enabled group enrollment entry for the root certificate to enable access for all five devices. If certificate B later becomes compromised, you can create a disabled enrollment group entry for certificate B to prevent *Device 4* and *Device 5* from enrolling. If still later *Device 3* becomes compromised, you can create a disabled individual enrollment entry for its certificate. This revokes access for *Device 3*, but still allows *Device 1* and *Device 2* to enroll.
iot-hub Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-error-codes.md
In general, the error message presented should explain how to fix the error. If
* The authorization rule used has the permission for the operation requested.
* For the last error messages beginning with "principal...", this error can be resolved by assigning the correct level of Azure RBAC permission to the user. For example, an Owner on the IoT Hub can assign the "IoT Hub Data Owner" role, which gives all permissions. Try this role to resolve the lack of permission issue.
+> [!NOTE]
+> Some devices may experience a time drift issue when the device time has a greater than five minute difference from the server. This error can occur when a device has been connecting to an IoT hub without issues for weeks or even months but then starts to continually have its connection refused. The error can also be specific to a subset of devices connected to the IoT hub, since the time drift can happen at different rates depending upon when a device is first connected or turned on.
+>
+> Often, performing a time sync using NTP or rebooting the device (which can automatically perform a time sync during the boot sequence) fixes the issue and allows the device to connect again. To avoid this error, configure the device to perform a periodic time sync using NTP. You can schedule the sync for daily, weekly or monthly depending on the amount of drift the device experiences. If you can't configure a periodic NTP sync on your device, then schedule a periodic reboot.
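For illustration, on a Linux device a daily sync could be scheduled with a cron entry like the following; the tool (`ntpdate` here) and the schedule are examples only and vary by distribution:

```
# /etc/cron.d/time-sync: run an NTP sync daily at 03:00 (illustrative)
0 3 * * * root ntpdate pool.ntp.org
```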
## 403002 IoTHubQuotaExceeded

You may see requests to IoT Hub fail with the error **403002 IoTHubQuotaExceeded**. In the Azure portal, the IoT hub device list doesn't load.
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/best-practices.md
Managed HSM is a cloud service that safeguards encryption keys. As these keys ar
- Use the principle of least privilege when you assign roles.
- Create custom role definitions with a precise set of permissions.
-## Choose regions that support availability zones
-- To ensure best high-availability and zone-resiliency, choose Azure regions where [Availability Zones](../../availability-zones/az-overview.md) are supported. These regions appear as "Recommended regions" in the Azure portal.

## Backup

- Make sure you take regular backups of your HSM. Backups can be done at the HSM level and for specific keys.
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/overview.md
For pricing information, please see Managed HSM Pools section on [Azure Key Vaul
### Fully managed, highly available, single-tenant HSM as a service

- **Fully managed**: HSM provisioning, configuration, patching, and maintenance is handled by the service.
-- **Highly available and zone resilient** (where Availability zones are supported): Each HSM cluster consists of multiple HSM partitions that span across at least two availability zones. If the hardware fails, member partitions for your HSM cluster will be automatically migrated to healthy nodes.
+- **Highly available**: Each HSM cluster consists of multiple HSM partitions. If the hardware fails, member partitions for your HSM cluster will be automatically migrated to healthy nodes. For more information, see [Managed HSM Service Level Agreement](https://azure.microsoft.com/support/legal/sla/key-vault-managed-hsm/v1_0/).
- **Single-tenant**: Each Managed HSM instance is dedicated to a single customer and consists of a cluster of multiple HSM partitions. Each HSM cluster uses a separate customer-specific security domain that cryptographically isolates each customer's HSM cluster.
For pricing information, please see Managed HSM Pools section on [Azure Key Vaul
- See [Best Practices using Azure Key Vault Managed HSM](best-practices.md)
- [Managed HSM Status](https://azure.status.microsoft)
- [Managed HSM Service Level Agreement](https://azure.microsoft.com/support/legal/sla/key-vault-managed-hsm/v1_0/)
-- [Managed HSM region availability](https://azure.microsoft.com/global-infrastructure/services/?products=key-vault)
+- [Managed HSM region availability](https://azure.microsoft.com/global-infrastructure/services/?products=key-vault)
lab-services Azure Polices For Lab Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/azure-polices-for-lab-services.md
Title: Azure Policies for Lab Services
-description: This article describes the policies available for Azure Lab Services.
+description: Learn how to use Azure Policy to use built-in policies for Azure Lab Services to make sure your labs are compliant with your requirements.
Previously updated : 08/15/2022 Last updated : 11/08/2022
-# What’s new with Azure Policy for Lab Services?
+# Use policies to audit and manage Azure Lab Services
-Azure Policy helps you manage and prevent IT issues by applying policy definitions that enforce rules and effects for your resource. Azure Lab Services has added four built-in Azure policies. This article summarizes the new policies available in the August 2022 Update for Azure Lab Services.
+When teams create and run labs on Azure Lab Services, they may face varying requirements for the configuration of resources. Administrators may look for options to control cost, provide customization through templates, or restrict user permissions.
-1. Lab Services should enable all options for auto shutdown
-1. Lab Services should not allow template virtual machines for labs
-1. Lab Services should require non-admin user for labs
-1. Lab Services should restrict allowed virtual machine SKU sizes
+As a platform administrator, you can use policies to lay out guardrails for teams to manage their own resources. [Azure Policy](../governance/policy/index.yml) helps audit and govern resource state. In this article, you learn about available auditing controls and governance practices for Azure Lab Services.
-For a full list of built-in policies, including policies for Lab Services, see [Azure Policy built-in policy definitions](../governance/policy/samples/built-in-policies.md#lab-services).
+## Policies for Azure Lab Services
+[Azure Policy](../governance/policy/index.yml) is a governance tool that allows you to ensure that Azure resources are compliant with your policies.
+Azure Lab Services provides a set of policies that you can use for common scenarios with Azure Lab Services. You can assign these policy definitions to your existing subscription or use them as the basis to create your own custom definitions.
+
+Policies can be set at different scopes, such as at the subscription or resource group level. For more information, see the [Azure Policy documentation](../governance/policy/overview.md).
+
+For a full list of built-in policies, including policies for Lab Services, see Azure Policy built-in policy definitions.
-## Lab Services should enable all options for auto shutdown
+### Lab Services should enable all options for auto shutdown
-This policy enforces that all [shutdown options](how-to-configure-auto-shutdown-lab-plans.md) are enabled while creating the lab. During policy assignment, lab administrators can choose the following effects.
+This policy enforces that all [shutdown options](how-to-configure-auto-shutdown-lab-plans.md) are enabled while creating the lab.
+
+During policy assignment, lab administrators can choose the following effects:
|**Effect**|**Behavior**|
-|--|--|
-|**Audit**|Labs will show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when all shutdown options are not enabled for a lab. |
-|**Deny**|Lab creation will fail if all shutdown options are not enabled. |
+|-||
+|**Audit** | Labs will show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when all shutdown options aren't enabled for a lab. |
+|**Deny** | Lab creation will fail if all shutdown options aren't enabled. |
+
+### Lab Services should not allow template virtual machines for labs
-## Lab Services should not allow template virtual machines for labs
+You can use this policy to restrict [customization of lab templates](tutorial-setup-lab.md). When you create a new lab, you can choose to *Create a template virtual machine* or *Use virtual machine image without customization*. If this policy is enabled, only *Use virtual machine image without customization* is allowed.
-This policy can be used to restrict [customization of lab templates](tutorial-setup-lab.md). When you create a new lab, you can select to *Create a template virtual machine* or *Use virtual machine image without customization*. If this policy is enabled, only *Use virtual machine image without customization* is allowed. During policy assignment, lab administrators can choose the following effects.
+During policy assignment, lab administrators can choose the following effects:
|**Effect**|**Behavior**|
-|--|--|
-|**Audit**|Labs will show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when a template virtual machine is used for a lab.|
-|**Deny**|Lab creation to fail if “create a template virtual machine” option is used for a lab.|
+|-||
+|**Audit** |Labs will show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when a template virtual machine is used for a lab.|
+|**Deny** |Lab creation will fail if *Create a template virtual machine* option is used for a lab.|
-## Lab Services requires non-admin user for labs
+### Lab Services requires non-admin user for labs
-This policy is used to enforce using non-admin accounts while creating a lab. With the August 2022 Update, you can choose to add a non-admin account to the VM image. This new feature allows you to keep separate credentials for VM admin and non-admin users. For more information to create a lab with a non-admin user, see [Tutorial: Create and publish a lab](tutorial-setup-lab.md#create-a-lab), which shows how to give a student non-administrator account rather than default administrator account on the “Virtual machine credentials” page of the new lab wizard.
+Use this policy to enforce using non-admin accounts while creating a lab. With the August 2022 Update, you can choose to add a non-admin account to the VM image. This new feature allows you to keep separate credentials for VM admin and non-admin users. For more information to create a lab with a non-admin user, see [Tutorial: Create and publish a lab](tutorial-setup-lab.md#create-a-lab). The tutorial shows how to give a student a non-administrator account rather than default administrator account on the **Virtual machine credentials** page in the new lab wizard.
-During the policy assignment, the lab administrator can choose the following effects.
+During the policy assignment, the lab administrator can choose the following effects:
|**Effect**|**Behavior**|
-|--|--|
-|**Audit**|Labs show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when non-admin accounts are not used while creating the lab.|
-|**Deny**|Lab creation will fail if “Give lab users a non-admin account on their virtual machines” is not checked while creating a lab.|
+|-||
+|**Audit** |Labs show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when non-admin accounts aren't used while creating the lab.|
+|**Deny** |Lab creation will fail if *Give lab users a non-admin account on their virtual machines* isn't checked while creating a lab.|
+
+### Lab Services should restrict allowed virtual machine SKU sizes
-## Lab Services should restrict allowed virtual machine SKU sizes
-This policy is used to enforce which SKUs can be used while creating the lab. For example, a lab administrator might want to prevent educators from creating labs with GPU SKUs since they are not needed for any classes being taught. This policy would allow lab administrators to enforce which SKUs can be used while creating the lab.
-During the policy assignment, the Lab Administrator can choose the following effects.
+This policy enforces which SKUs can be used while creating a lab. For example, a lab administrator might want to prevent educators from creating labs with GPU SKUs, since they aren't needed for any classes being taught.
+
+During the policy assignment, the Lab Administrator can choose the following effects:
|**Effect**|**Behavior**|
-|--|--|
-|**Audit**|Labs show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when a non-allowed SKU is used while creating the lab.|
-|**Deny**|Lab creation will fail if SKU chosen while creating a lab is not allowed as per the policy assignment.|
+|-||
+|**Audit** |Labs show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when a non-allowed SKU is used while creating the lab.|
+|**Deny** |Lab creation will fail if the selected SKU while creating a lab isn't allowed as per the policy assignment.|
+
+## Assigning built-in policies
+
+To view the built-in policy definitions related to Azure Lab Services, use the following steps:
+
+1. Go to **Azure Policy** in the [Azure portal](https://portal.azure.com).
+1. Select **Definitions**.
+1. For **Type**, select *Built-in*, and for **Category**, select **Lab Services**.
+
+From here, you can select policy definitions to view them. While viewing a definition, you can use the **Assign** link to assign the policy to a specific scope, and configure the parameters for the policy. For more information, see [Assign a policy - portal](../governance/policy/assign-policy-portal.md).
+
+You can also assign policies by using [Azure PowerShell](../governance/policy/assign-policy-powershell.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), and [templates](../governance/policy/assign-policy-template.md).
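For example, an Azure CLI assignment might look like the following sketch; the assignment name, policy definition, scope, and parameter values are placeholders:

```azurecli
az policy assignment create \
    --name "audit-lab-shutdown" \
    --policy "<policy definition name or ID>" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
    --params '{ "effect": { "value": "Audit" } }'
```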
## Custom policies
-In addition to the new built-in policies described above, you can create and apply custom policies. This technique is helpful in situations where none of the built-in policies apply or where you need more granularity.
+In addition to the new built-in policies described above, you can create and apply custom policies. This technique is helpful in situations where none of the built-in policies apply or where you need more granularity.
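As a starting point, a custom definition follows the standard Azure Policy structure. The following minimal skeleton is illustrative; a real rule would add conditions beyond the resource type check:

```json
{
  "mode": "Indexed",
  "parameters": {
    "effect": {
      "type": "String",
      "allowedValues": [ "Audit", "Deny" ],
      "defaultValue": "Audit"
    }
  },
  "policyRule": {
    "if": {
      "field": "type",
      "equals": "Microsoft.LabServices/labs"
    },
    "then": {
      "effect": "[parameters('effect')]"
    }
  }
}
```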
Learn how to create custom policies:

- [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md).
Learn how to create custom policies:
## Next steps
-See the following articles:
- [How to use the Lab Services should restrict allowed virtual machine SKU sizes Azure policy](how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md)-- [Built-in Policies](../governance/policy/samples/built-in-policies.md#lab-services)
+- [Built-in policies for Azure Lab Services](./policy-reference.md)
- [What is Azure policy?](../governance/policy/overview.md)
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
+
+ Title: Built-in policy definitions for Lab Services
+description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated : 11/08/2022
+# Azure Policy built-in definitions for Azure Lab Services
+
+This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
+definitions for Azure Lab Services. For additional Azure Policy built-ins for other services, see
+[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Azure Lab Services
++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
Gateway Load Balancer consists of the following components:
* **Tunnel interfaces** - Gateway Load balancer backend pools have another component called the tunnel interfaces. The tunnel interface enables the appliances in the backend to ensure network flows are handled as expected. Each backend pool can have up to two tunnel interfaces. Tunnel interfaces can be either internal or external. For traffic coming to your backend pool, you should use the external type. For traffic going from your appliance to the application, you should use the internal type.
-* **Chain** - A Gateway Load Balancer can be referenced by a Standard Public Load Balancer frontend or a Standard Public IP configuration on a virtual machine. The addition of advanced networking capabilities in a specific sequence is known as service chaining. As a result, this reference is called a chain.
+* **Chain** - A Gateway Load Balancer can be referenced by a Standard Public Load Balancer frontend or a Standard Public IP configuration on a virtual machine. The addition of advanced networking capabilities in a specific sequence is known as service chaining. As a result, this reference is called a chain. In order to chain a Load Balancer frontend or Public IP configuration to a Gateway Load Balancer that is cross-subscription, users will need permission for the resource provider operation "Microsoft.Network/loadBalancers/frontendIPConfigurations/join/action". For cross-tenant chaining, the user will also need Guest access.
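One way to grant just that operation is a custom Azure role. The following role definition is a sketch; the name, description, and assignable scope are placeholders:

```json
{
  "Name": "Gateway Load Balancer chaining",
  "IsCustom": true,
  "Description": "Allows joining frontends or public IP configurations to a Gateway Load Balancer.",
  "Actions": [
    "Microsoft.Network/loadBalancers/frontendIPConfigurations/join/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```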
## Pricing
load-balancer Load Balancer Standard Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-diagnostics.md
The various load balancer configurations provide the following metrics:
The Azure portal exposes the load balancer metrics via the Metrics page. This page is available on both the load balancer's resource page for a particular resource and the Azure Monitor page.
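If you prefer scripting, the same metrics can also be retrieved with the Azure CLI. A sketch using the health probe status metric (`DipAvailability`) as an example; resource names are placeholders:

```azurecli
# Get the resource ID of the load balancer (placeholder names).
lb_id=$(az network lb show \
    --resource-group MyResourceGroup \
    --name MyLoadBalancer \
    --query id --output tsv)

# List the health probe status metric over one-minute intervals.
az monitor metrics list \
    --resource "$lb_id" \
    --metric DipAvailability \
    --interval PT1M \
    --output table
```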
+ >[!NOTE]
+ > Azure Load Balancer doesn't send health probes to deallocated virtual machines. While virtual machines are deallocated, the load balancer stops reporting metrics for those instances. Unavailable metrics appear as a dashed line in the Azure portal, or as an error message indicating that the metrics can't be retrieved.
+ To view the metrics for your standard load balancer resources:

1. Go to the metrics page and do either of the following tasks:
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
Previously updated : 09/09/2022
Last updated : 11/04/2022
When you start the load test, Azure Load Testing service injects the following A
- A network security group (NSG).
- An Azure Load Balancer.
-These resources are ephemeral and exist only for the duration of the load test run. If you restrict access to your virtual network, you need to [configure your virtual network](#configure-your-virtual-network) to enable communication between these Azure Load Testing and the injected VMs.
+These resources are ephemeral and exist only during the load test run. If you restrict access to your virtual network, you need to [configure your virtual network](#configure-your-virtual-network) to enable communication between the Azure Load Testing service and the injected VMs.
> [!NOTE]
> Virtual network support for Azure Load Testing is available in the following Azure regions: Australia East, East US, East US 2, North Europe, South Central US, UK South, and West US 2.
These resources are ephemeral and exist only for the duration of the load test r
- An existing virtual network and a subnet to use with Azure Load Testing.
- The virtual network must be in the same subscription and the same region as the Azure Load Testing resource.
+- The virtual network address range cannot overlap with 172.29.0.0/30, the address range that Azure Load Testing uses.
- You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions.
- The subnet you use for Azure Load Testing must have enough unassigned IP addresses to accommodate the number of load test engines for your test. Learn more about [configuring your test for high-scale load](./how-to-high-scale-load.md).
- The subnet shouldn't be delegated to any other Azure service. For example, it shouldn't be delegated to Azure Container Instances (ACI). Learn more about [subnet delegation](/azure/virtual-network/subnet-delegation-overview).
To configure the load test with your virtual network settings, update the [YAML
## Troubleshooting
-### Creating or updating the load test fails with `Subnet ID passed is invalid`
+### Creating or updating the load test fails with `Subscription not registered with Microsoft.Batch (ALTVNET001)`
-To configure a load test in a virtual network, you must have sufficient permissions for managing virtual networks. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions.
+When you configure a load test in a virtual network, the subscription has to be registered with `Microsoft.Batch`.
-### Starting the load test fails with `Test cannot be started`
+1. Try to create or update the load test again after a few minutes.
-To start a load test, you must have sufficient permissions to deploy Azure Load Testing to the virtual network. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions.
+1. If the error persists, follow these steps to [register your subscription](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider) with the `Microsoft.Batch` resource provider manually.
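   For reference, a minimal Azure CLI sketch of the manual registration:

   ```azurecli
   # Register the Microsoft.Batch resource provider on the current subscription.
   az provider register --namespace Microsoft.Batch

   # Verify the registration; the result should be "Registered".
   # Registration can take a few minutes to complete.
   az provider show --namespace Microsoft.Batch --query registrationState --output tsv
   ```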
-If you're using the [Azure Load Testing REST API](/rest/api/loadtesting/) to start a load test, check that you're using a valid subnet ID. The subnet must be in the same Azure region as your Azure Load Testing resource.
+### Creating or updating the load test fails with `Subnet is not in the Succeeded state (ALTVNET002)`
-### The load test is stuck in `Provisioning` state and then goes to `Failed`
+The subnet you're using for the load test isn't in the `Succeeded` state and isn't ready to deploy your load test into it.
-1. Verify that your subscription is registered with `Microsoft.Batch`.
+1. Verify the state of the subnet.
- Run the following Azure CLI command to verify the status. The result should be `Registered`.
+ Run the following Azure CLI command to verify the state. The result should be `Succeeded`.
```azurecli
- az provider show --namespace Microsoft.Batch --query registrationState
+ az network vnet subnet show -g MyResourceGroup -n MySubnet --vnet-name MyVNet --query provisioningState -o tsv
```
-1. Verify that Microsoft Batch node management and the Azure Load Testing IPs can make inbound connections to the test engine VMs.
+1. Resolve any issues with the subnet. If you've just created the subnet, verify the state again after a few minutes.
- 1. Enable [Network Watcher](/azure/network-watcher/network-watcher-monitoring-overview) for the virtual network region.
+1. Alternatively, select another subnet for the load test.
- ```azurecli
- az network watcher configure \
- --resource-group NetworkWatcherRG \
- --locations eastus \
- --enabled
- ```
+### Creating or updating the load test fails with `Subnet is delegated to other service (ALTVNET003)`
- 1. Create a temporary VM with a Public IP in the subnet you're using for the Azure Load Testing service. You'll only use this VM to diagnose the network connectivity and delete it afterwards. The VM can be of any type.
+The subnet you use for deploying the load test can't be delegated to another Azure service. Either remove the existing delegation, or select another subnet that isn't delegated to any service.
- ```azurecli
- az vm create \
- --resource-group myResourceGroup \
- --name myVm \
- --image UbuntuLTS \
- --generate-ssh-keys \
- --subnet mySubnet
- ```
+Learn more about [adding or removing a subnet delegation](/azure/virtual-network/manage-subnet-delegation#remove-subnet-delegation-from-an-azure-service).
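For example, a delegation can be inspected and removed with the Azure CLI; a sketch with placeholder resource names:

```azurecli
# Show the current delegations on the subnet (placeholder names).
az network vnet subnet show \
    --resource-group MyResourceGroup \
    --vnet-name MyVNet \
    --name MySubnet \
    --query delegations

# Remove the delegations from the subnet.
az network vnet subnet update \
    --resource-group MyResourceGroup \
    --vnet-name MyVNet \
    --name MySubnet \
    --remove delegations
```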
- 1. Test the inbound connectivity to the temporary VM from the `BatchNodeManagement` service tag.
+### Starting the load test fails with `User doesn't have subnet/join/action permission on the virtual network (ALTVNET004)`
- 1. In the [Azure portal](https://portal.azure.com), go to **Network Watcher**.
- 1. On the left pane, select **NSG Diagnostic**.
- 1. Enter the details of the VM you created in the previous step.
- 1. Select **Service Tag** for the **Source type**, and then select **BatchNodeManagement** for the **Service tag**.
- 1. The **Destination IP address** is the IP address of the VM you created in previous step.
- 1. For **Destination port**, you have to validate two ports: *29876* and *29877*. Enter one value at a time and move to the next step.
- 1. Press **Check** to verify that the network security group isn't blocking traffic.
+To start a load test, you must have sufficient permissions to deploy Azure Load Testing to the virtual network. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network.
- :::image type="content" source="media/how-to-test-private-endpoint/test-network-security-group-connectivity.png" alt-text="Screenshot that shows the NSG Diagnostic page to test network connectivity.":::
+1. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions.
- If the traffic status is **Denied**, [configure your virtual network](#configure-your-virtual-network) to allow traffic for the **BatchNodeManagement** service tag.
+1. Follow these steps to [assign the Network Contributor role](/azure/role-based-access-control/role-assignments-steps) to your account.
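   As a sketch, the role assignment scoped to the virtual network might look like this in the Azure CLI; the user and resource names are placeholders:

   ```azurecli
   # Get the resource ID of the virtual network (placeholder names).
   vnet_id=$(az network vnet show \
       --resource-group MyResourceGroup \
       --name MyVNet \
       --query id --output tsv)

   # Grant the Network Contributor role on the virtual network.
   az role assignment create \
       --assignee "user@contoso.com" \
       --role "Network Contributor" \
       --scope "$vnet_id"
   ```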
- 1. Test the inbound connectivity to the temporary VM from the `AzureLoadTestingInstanceManagement` service tag.
+### Creating or updating the load test fails with `IPv6 enabled subnet not supported (ALTVNET005)`
- 1. In the [Azure portal](https://portal.azure.com), go to **Network Watcher**.
- 1. On the left pane, select **NSG Diagnostic**.
- 1. Enter the details of the VM you created in the previous step.
- 1. Select **Service Tag** for the **Source type**, and then select **AzureLoadTestingInstanceManagement** for the **Service tag**.
- 1. The **Destination IP address** is the IP address of the VM you created in previous step.
- 1. For **Destination port**, enter *8080*.
- 1. Press **Check** to verify that the network security group isn't blocking traffic.
+Azure Load Testing doesn't support IPv6 enabled subnets. Select another subnet for which IPv6 isn't enabled.
- If the traffic status is **Denied**, [configure your virtual network](#configure-your-virtual-network) to allow traffic for the **AzureLoadTestingInstanceManagement** service tag.
+### Creating or updating the load test fails with `NSG attached to subnet is not in Succeeded state (ALTVNET006)`
- 1. Delete the temporary VM you created earlier.
+The network security group (NSG) that is attached to the subnet isn't in the `Succeeded` state.
-### The test executes and results in a 100% error rate
+1. Verify the state of the NSG.
-Possible cause: there are connectivity issues between the subnet in which you deployed Azure Load Testing and the subnet in which the application endpoint is hosted.
+ Run the following Azure CLI command to verify the state. The result should be `Succeeded`.
-1. You might deploy a temporary VM in the subnet used by Azure Load Testing and then use the [curl](https://curl.se/) tool to test connectivity to the application endpoint. Verify that there are no firewall or NSG rules that are blocking traffic.
+ ```azurecli
+ az network nsg show -g MyResourceGroup -n MyNsg --query provisioningState -o tsv
+ ```
+
+1. Resolve any issues with the NSG. If you've just created the NSG or subnet, verify the state again after a few minutes.
+
+1. Alternatively, select another NSG.
+
+### Creating or updating the load test fails with `Route Table attached to subnet is not in Succeeded state (ALTVNET007)`
+
+The route table attached to the subnet isn't in the `Succeeded` state.
+
+1. Verify the state of the route table.
+
+ Run the following Azure CLI command to verify the state. The result should be `Succeeded`.
+
+ ```azurecli
+ az network route-table show -g MyResourceGroup -n MyRouteTable --query provisioningState -o tsv
+ ```
+
+1. Resolve any issues with the route table. If you've just created the route table or subnet, verify the state again after a few minutes.
+
+1. Alternatively, select another route table.
+
+### Creating or updating the load test fails with `Inbound not allowed from AzureLoadTestingInstanceManagement service tag (ALTVNET008)`
+
+Inbound access from the `AzureLoadTestingInstanceManagement` service tag to the virtual network isn't allowed.
+
+Follow these steps to [enable traffic access](/azure/load-testing/how-to-test-private-endpoint#configure-traffic-access) for the `AzureLoadTestingInstanceManagement` service tag.
+
+### Creating or updating the load test fails with `Inbound not allowed from BatchNodeManagement service tag (ALTVNET009)`
+
+Inbound access from the `BatchNodeManagement` service tag to the virtual network isn't allowed.
+
+Follow these steps to [enable inbound access](/azure/load-testing/how-to-test-private-endpoint#configure-traffic-access) for the `BatchNodeManagement` service tag.
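As an illustrative sketch, NSG rules that allow this traffic could look like the following in the Azure CLI. The ports follow the guidance in this article (8080 for the Azure Load Testing service tag, 29876-29877 for Batch node management); rule names, priorities, and resource names are placeholders:

```azurecli
# Allow inbound traffic from the Azure Load Testing service tag on port 8080.
az network nsg rule create \
    --resource-group MyResourceGroup \
    --nsg-name MyNsg \
    --name AllowAzureLoadTesting \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes AzureLoadTestingInstanceManagement \
    --destination-port-ranges 8080

# Allow inbound traffic from the Batch node management service tag on ports 29876-29877.
az network nsg rule create \
    --resource-group MyResourceGroup \
    --nsg-name MyNsg \
    --name AllowBatchNodeManagement \
    --priority 110 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes BatchNodeManagement \
    --destination-port-ranges 29876-29877
```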
+
+### Creating or updating the load test fails with `Subnet is in a different subscription than resource (ALTVNET011)`
+
+The virtual network isn't in the same subscription and region as your Azure Load Testing resource. Move or re-create either the virtual network or the Azure Load Testing resource so that both are in the same subscription and region.
+
+### Provisioning fails with `An azure policy is restricting engine deployment to your subscription (ALTVNET012)`
+
+An Azure Policy assignment is restricting the deployment of load test engines to your subscription. Check your policy restrictions and try again.
+
+### Provisioning fails with `Engines could not be deployed due to an error in subnet configuration (ALTVNET013)`
+
+The load test engine instances couldn't be deployed because of an error in the subnet configuration. To diagnose and resolve the issue:
+
+1. Verify the state of the subnet.
+
+ Run the following Azure CLI command to verify the state. The result should be `Succeeded`.
+
+ ```azurecli
+ az network vnet subnet show -g MyResourceGroup -n MySubnet --vnet-name MyVNet --query provisioningState -o tsv
+ ```
+
+1. Resolve any issues with the subnet. If you've just created the subnet, verify the state again after a few minutes.
+
+1. If the problem persists, [open an online customer support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+ Provide the load test run ID within the support request.
+
+### Starting the load test fails with `Subnet has {0} free IPs, {1} more free IP(s) required to run {2} engine instance load test (ALTVNET014)`
-1. Verify the [Azure Load Testing results file](./how-to-export-test-results.md) for error response messages:
+The subnet you use for Azure Load Testing must have enough unassigned IP addresses to accommodate the number of load test engines for your test.
- |Response message | Action |
- |||
| **Non http response code java.net.unknownhostexception** | Possible cause is a DNS resolution issue. If you're using Azure Private DNS, verify that the DNS is set up correctly for the subnet in which Azure Load Testing instances are injected, and for the application subnet. |
| **Non http response code SocketTimeout** | Possible cause is when there's a firewall blocking connections from the subnet in which Azure Load Testing instances are injected to your application subnet. |
+Follow these steps to [update the subnet settings](/azure/virtual-network/virtual-network-manage-subnet#change-subnet-settings) and increase the IP address range.
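For example, widening the subnet's address range with the Azure CLI might look like this sketch. The prefix is a placeholder and must not overlap other subnets or the 172.29.0.0/30 range that Azure Load Testing uses:

```azurecli
# Widen the subnet from, say, /26 (59 usable IPs) to /24 (251 usable IPs).
# Azure reserves 5 IP addresses in every subnet.
az network vnet subnet update \
    --resource-group MyResourceGroup \
    --vnet-name MyVNet \
    --name MySubnet \
    --address-prefixes 10.0.0.0/24
```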
## Next steps
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
The following limits apply on a per-region, per-subscription basis.
| Concurrent engine instances | 5-100 <sup>1</sup> | 1000 |
| Engine instances per test run | 1-45 <sup>1</sup> | 45 |
-<sup>1</sup> To request an increase beyond this limit, contact Azure Support. Default limits vary by offer category type.
+<sup>1</sup> If you aren't already at the maximum limit, you can request an increase for your default limit by contacting Azure Support. Increases are possible only up to the maximum limit stated above; requests beyond the maximum can't currently be approved. Default limits vary by offer category type.
### Test runs
The following limits apply on a per-region, per-subscription basis.
| Concurrent test runs | 5-25 <sup>2</sup> | 1000 |
| Test duration | 3 hours |
-<sup>2</sup> To request an increase beyond this limit, contact Azure Support. Default limits vary by offer category type.
+<sup>2</sup> If you aren't already at the maximum limit, you can request an increase for your default limit by contacting Azure Support. Increases are possible only up to the maximum limit stated above; requests beyond the maximum can't currently be approved. Default limits vary by offer category type.
### Data retention
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
In the [Azure portal](https://portal.azure.com), add one or more authorization p
| Property | Required | Type | Description |
|-|-|-|-|
| **Policy name** | Yes | String | The name that you want to use for the authorization policy |
- | **Claims** | Yes | String | The claim types and values that your workflow accepts from inbound calls. Here are the available claim types: <br><br>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <br><br>Requirements: <br><br>- At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. <br>- Each claim must be a single string value, not an array of values. For example, you can have a claim with **Role** as the type and **Developer** as the value. You can't have a claim that has **Role** as the type and the values set to **Developer** and **Program Manager**. <br>- The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). <br><br>For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/azuread-dev/v1-authentication-scenarios.md#claims-in-azure-ad-security-tokens). You can also specify your own claim type and value. |
+ | **Claims** | Yes | String | The claim types and values that your workflow accepts from inbound calls. Here are the available claim types: <br><br>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <br><br>Requirements: <br><br>- At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. <br>- Each claim must be a single string value, not an array of values. For example, you can have a claim with **Role** as the type and **Developer** as the value. You can't have a claim that has **Role** as the type and the values set to **Developer** and **Program Manager**. <br>- The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). <br><br>For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). You can also specify your own claim type and value. |
|||

1. To add another claim, select from these options:
In your ARM template, define an authorization policy following these steps and s
1. Provide a name for the authorization policy, set the policy type to `AAD`, and include a `claims` array where you specify one or more claim types.
- At a minimum, the `claims` array must include the Issuer claim type where you set the claim's `name` property to `iss` and set the `value` to start with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/azuread-dev/v1-authentication-scenarios.md#claims-in-azure-ad-security-tokens). You can also specify your own claim type and value.
+ At a minimum, the `claims` array must include the Issuer claim type where you set the claim's `name` property to `iss` and set the `value` to start with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). You can also specify your own claim type and value.
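   For illustration, a minimal `accessControl` snippet inside the workflow resource's `properties` might look like the following sketch; the policy name and tenant ID are placeholders:

   ```json
   "accessControl": {
       "triggers": {
           "openAuthenticationPolicies": {
               "policies": {
                   "myAuthPolicy": {
                       "type": "AAD",
                       "claims": [
                           {
                               "name": "iss",
                               "value": "https://sts.windows.net/<Azure-AD-tenant-ID>/"
                           }
                       ]
                   }
               }
           }
       }
   }
   ```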
1. To include the `Authorization` header from the access token in the request-based trigger outputs, review [Include 'Authorization' header in request trigger outputs](#include-auth-header).
machine-learning Concept Causal Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-causal-inference.md
Last updated 08/17/2022
-# Make data-driven policies and influence decision-making (preview)
+# Make data-driven policies and influence decision-making
Machine learning models are powerful in identifying patterns in data and making predictions. But they offer little support for estimating how the real-world outcome changes in the presence of an intervention.
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
A compute instance is a fully managed cloud-based workstation optimized for your
* The compute instance is also a secure training compute target similar to [compute clusters](how-to-create-attach-compute-cluster.md), but it is single node. * You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can **[create a compute instance on your behalf](how-to-create-manage-compute-instance.md?tabs=python#create-on-behalf-of-preview)**. * You can also **[use a setup script (preview)](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance as per your needs.
-* To save on costs, **[create a schedule (preview)](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop-preview)** to automatically start and stop the compute instance.
+* To save on costs, **[create a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop)** to automatically start and stop the compute instance.
## Tools and environments
machine-learning Concept Counterfactual Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-counterfactual-analysis.md
Last updated 08/17/2022
-# Counterfactuals analysis and what-if (preview)
+# Counterfactuals analysis and what-if
What-if counterfactuals address the question of what the model would predict if you changed the action input. They enable understanding and debugging of a machine learning model in terms of how it reacts to input (feature) changes.
machine-learning Concept Data Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-analysis.md
Last updated 11/09/2022
-# Understand your datasets (preview)
+# Understand your datasets
Machine learning models "learn" from historical decisions and actions captured in training data. As a result, their performance in real-world scenarios is heavily influenced by the data they're trained on. When feature distribution in a dataset is skewed, it can cause a model to incorrectly predict data points that belong to an underrepresented group or to be optimized along an inappropriate metric.
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
Last updated 05/24/2022
#Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
machine-learning Concept Error Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-error-analysis.md
Last updated 08/17/2022
-# Assess errors in machine learning models (preview)
+# Assess errors in machine learning models
One of the biggest challenges with current model-debugging practices is using aggregate metrics to score models on a benchmark dataset. Model accuracy might not be uniform across subgroups of data, and there might be input cohorts for which the model fails more often. The direct consequences of these failures are a lack of reliability and safety, the appearance of fairness issues, and a loss of trust in machine learning altogether.
machine-learning Concept Fairness Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-fairness-ml.md
Title: Machine learning fairness (preview)
+ Title: Machine learning fairness
description: Learn about machine learning fairness and how the Fairlearn Python package can help you assess and mitigate unfairness.
#Customer intent: As a data scientist, I want to learn about machine learning fairness and how to assess and mitigate unfairness in machine learning models.
-# Model performance and fairness (preview)
+# Model performance and fairness
This article describes methods that you can use to understand your model performance and fairness in Azure Machine Learning.
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
An Azure Machine Learning pipeline is an independently executable workflow of a
## Why are Azure Machine Learning pipelines needed?

The core of a machine learning pipeline is to split a complete machine learning task into a multistep workflow. Each step is a manageable component that can be developed, optimized, configured, and automated individually. Steps are connected through well-defined interfaces. The Azure Machine Learning pipeline service automatically orchestrates all the dependencies between pipeline steps. This modular approach brings two key benefits:
-- [Standardize the Machine learning operation (MLOPs) practice and support scalable team collaboration](#standardize-the-mlops-practice-and-support-scalable-team-collaboration)
+- [Standardize the Machine learning operation (MLOps) practice and support scalable team collaboration](#standardize-the-mlops-practice-and-support-scalable-team-collaboration)
- [Training efficiency and cost reduction](#training-efficiency-and-cost-reduction)

### Standardize the MLOps practice and support scalable team collaboration
-Machine learning operation (MLOPs) automates the process of building machine learning models and taking the model to production. This is a complex process. It usually requires collaboration from different teams with different skills. A well-defined machine learning pipeline can abstract this complex process into a multiple steps workflow, mapping each step to a specific task such that each team can work independently.
+Machine learning operation (MLOps) automates the process of building machine learning models and taking the model to production. This is a complex process. It usually requires collaboration from different teams with different skills. A well-defined machine learning pipeline can abstract this complex process into a multi-step workflow, mapping each step to a specific task such that each team can work independently.
For example, a typical machine learning project includes the steps of data collection, data preparation, model training, model evaluation, and model deployment. Usually, the data engineers concentrate on data steps, data scientists spend most of their time on model training and evaluation, and the machine learning engineers focus on model deployment and automation of the entire workflow. By leveraging machine learning pipelines, each team only needs to work on building their own steps. The best way of building steps is using an [Azure Machine Learning component](concept-component.md), a self-contained piece of code that does one step in a machine learning pipeline. All these steps built by different users are finally integrated into one workflow through the pipeline definition. The pipeline is a collaboration tool for everyone in the project. The process of defining a pipeline and all its steps can be standardized by each company's preferred DevOps practice. The pipeline can be further versioned and automated. If the ML projects are described as a pipeline, then the best MLOps practice is already applied.
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-plan-manage-cost.md
When you create resources for an Azure Machine Learning workspace, resources for
* [Key Vault](https://azure.microsoft.com/pricing/details/key-vault?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) * [Application Insights](https://azure.microsoft.com/pricing/details/monitor?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
-When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. [Set up a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop-preview) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it.
+When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. [Set up a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance to save cost when you aren't planning to use it.
### Costs might accrue before resource deletion
machine-learning Concept Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-dashboard.md
Last updated 11/09/2022
-# Assess AI systems by using the Responsible AI dashboard (preview)
+# Assess AI systems by using the Responsible AI dashboard
Implementing Responsible AI in practice requires rigorous engineering. But rigorous engineering can be tedious, manual, and time-consuming without the right tooling and infrastructure.
machine-learning Concept Responsible Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai.md
Title: What is Responsible AI (preview)
+ Title: What is Responsible AI
description: Learn what Responsible AI is and how to use it with Azure Machine Learning to understand models, protect data, and control the model lifecycle.
#Customer intent: As a data scientist, I want to learn what Responsible AI is and how I can use it in Azure Machine Learning.
-# What is Responsible AI (preview)?
+# What is Responsible AI?
[!INCLUDE [dev v1](../../includes/machine-learning-dev-v1.md)]
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
These rule collections are described in more detail in [What are some Azure Fire
| **\*.anaconda.org** | Used to get repo data. | | **pypi.org** | Used to list dependencies from the default index, if any, and the index isn't overwritten by user settings. If the index is overwritten, you must also allow **\*.pythonhosted.org**. | | **cloud.r-project.org** | Used when installing CRAN packages for R development. |
+ | **ghcr.io**</br>**pkg-containers.githubusercontent.com** | Used by the Custom Applications feature on a compute instance to pull images from the GitHub Container Registry (ghcr.io). For example, the RStudio Workbench image is hosted here. |
| **\*pytorch.org** | Used by some examples based on PyTorch. | | **\*.tensorflow.org** | Used by some examples based on Tensorflow. | | **\*vscode.dev**</br>**\*vscode-unpkg.net**</br>**\*vscode-cdn.net**</br>**\*vscodeexperiments.azureedge.net**</br>**default.exp-tas.com** | Required to access vscode.dev (Visual Studio Code for the Web) |
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
This guide assumes you don't have a managed identity, a storage account or an on
pip install --pre azure-mgmt-authorization
```

-Install them with the following code:

# [User-assigned (Python)](#tab/user-identity-python)

* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Previously updated : 10/19/2022
Last updated : 11/10/2022

# Create and manage an Azure Machine Learning compute instance
In this article, you learn how to:
* [Create](#create) a compute instance * [Manage](#manage) (start, stop, restart, delete) a compute instance
-* [Create a schedule](#schedule-automatic-start-and-stop-preview) to automatically start and stop the compute instance (preview)
+* [Create a schedule](#schedule-automatic-start-and-stop) to automatically start and stop the compute instance
You can also [use a setup script (preview)](how-to-customize-compute-instance.md) to create the compute instance with your own custom environment.
Where the file *create-instance.yml* is:
* Enable virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). You can also select __No public IP__ (preview) to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
* Assign the compute instance to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of-preview).
* Provision with a setup script (preview) - for more information about how to create and use a setup script, see [Customize the compute instance with a script](how-to-customize-compute-instance.md).
- * Add schedule (preview). Schedule times for the compute instance to automatically start and/or shutdown. See [schedule details](#schedule-automatic-start-and-stop-preview) below.
+ * Add schedule. Schedule times for the compute instance to automatically start and/or shutdown. See [schedule details](#schedule-automatic-start-and-stop) below.
* Enable auto-stop (preview). Configure a compute instance to automatically shut down if it's inactive. For more information, see [configure auto-stop](#configure-auto-stop-preview).
SSH access is disabled by default. SSH access can't be changed after creation.
## Configure auto-stop (preview)
+> [!IMPORTANT]
+> Items marked (preview) below are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ To avoid getting charged for a compute instance that is switched on but inactive, you can configure auto-stop. A compute instance is considered inactive if the below conditions are met:
You can also create your own custom Azure policy. For example, if the below poli
## Create on behalf of (preview)
+> [!IMPORTANT]
+> Items marked (preview) below are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with: * Studio, using the [Advanced settings](?tabs=azure-studio#advanced-settings)
The data scientist can start, stop, and restart the compute instance. They can u
* RStudio
* Integrated notebooks
-## Schedule automatic start and stop (preview)
+## Schedule automatic start and stop
Define multiple schedules for auto-shutdown and auto-start. For instance, create a schedule to start at 9 AM and stop at 6 PM from Monday-Thursday, and a second schedule to start at 9 AM and stop at 4 PM for Friday. You can create a total of four schedules per compute instance. Schedules can also be defined for [create on behalf of](#create-on-behalf-of-preview) compute instances. You can create a schedule that creates the compute instance in a stopped state. Stopped compute instances are useful when you create a compute instance on behalf of another user.
+Before a scheduled shutdown, users see a notification alerting them that the compute instance is about to shut down. At that point, the user can choose to dismiss the upcoming shutdown event, for example, if they're in the middle of using the compute instance.
### Create a schedule in studio

1. [Fill out the form](?tabs=azure-studio#create).
Where the file *create-instance.yml* is:
:::code language="yaml" source="~/azureml-examples-main/cli/resources/compute/instance-schedule.yml":::
+### Create a schedule with SDK
++
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.constants import TimeZone
+from azure.ai.ml.entities import ComputeInstance, ComputeSchedules, ComputeStartStopSchedule, RecurrencePattern, RecurrenceTrigger
+from azure.identity import DefaultAzureCredential
+from dateutil import tz
+import datetime
+# Enter details of your AML workspace
+subscription_id = "<guid>"
+resource_group = "sample-rg"
+workspace = "sample-ws"
+# get a handle to the workspace
+ml_client = MLClient(
+ DefaultAzureCredential(), subscription_id, resource_group, workspace
+)
+ci_minimal_name = "sampleCI"
+mytz = tz.gettz("Asia/Kolkata")
+now = datetime.datetime.now(tz = mytz)
+starttime = now + datetime.timedelta(minutes=25)
+triggers = RecurrenceTrigger(frequency="day", interval=1, schedule=RecurrencePattern(hours=17, minutes=30))
+myschedule = ComputeStartStopSchedule(start_time=starttime, time_zone=TimeZone.INDIA_STANDARD_TIME, trigger=triggers, action="Stop")
+com_sch = ComputeSchedules(compute_start_stop=[myschedule])
+ci_minimal = ComputeInstance(name=ci_minimal_name, schedules=com_sch)
+ml_client.begin_create_or_update(ci_minimal)
+```
+ ### Create a schedule with a Resource Manager template

You can schedule the automatic start and stop of a compute instance by using a Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance).
Following is a sample policy to default a shutdown schedule at 10 PM PST.
## Assign managed identity (preview)
+> [!IMPORTANT]
+> Items marked (preview) below are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ You can assign a system- or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance. Or use a common user-assigned managed identity to permit access to a specific storage account. You can create a compute instance with a managed identity from Azure ML studio:
arm_access_token = get_access_token_msi("https://management.azure.com")
## Add custom applications such as RStudio (preview)
+> [!IMPORTANT]
+> Items marked (preview) below are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ You can set up other applications, such as RStudio, when creating a compute instance. Follow these steps in studio to set up a custom application on your compute instance:

1. Fill out the form to [create a new compute instance](?tabs=azure-studio#create)
Access the custom applications that you set up in studio:
Start, stop, restart, and delete a compute instance. A compute instance doesn't automatically scale down, so make sure to stop the resource to prevent ongoing charges. Stopping a compute instance deallocates it. Then start it again when you need it. While stopping the compute instance stops the billing for compute hours, you'll still be billed for disk, public IP, and standard load balancer.
-You can [create a schedule](#schedule-automatic-start-and-stop-preview) for the compute instance to automatically start and stop based on a time and day of week.
+You can [create a schedule](#schedule-automatic-start-and-stop) for the compute instance to automatically start and stop based on a time and day of week.
> [!TIP]
> The compute instance has a 120GB OS disk. If you run out of disk space, [use the terminal](how-to-access-terminal.md) to clear at least 1-2 GB before you stop or restart the compute instance. Please do not stop the compute instance by issuing sudo shutdown from the terminal. The temp disk size on a compute instance depends on the VM size chosen and is mounted on /mnt.
To create a compute instance, you'll need permissions for the following actions:
* [Access the compute instance terminal](how-to-access-terminal.md)
* [Create and manage files](how-to-manage-files.md)
* [Update the compute instance to the latest VM image](concept-vulnerability-management.md#compute-instance)
-* [Submit a training job](v1/how-to-set-up-training-targets.md)
+* [Submit a training job](v1/how-to-set-up-training-targets.md)
machine-learning How To Customize Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-customize-compute-instance.md
Last updated 05/04/2022
-# Customize the compute instance with a script (preview)
-
-> [!IMPORTANT]
-> Setup scripts are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Customize the compute instance with a script
Use a setup script for an automated way to customize and configure a compute instance at provisioning time. Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#training-compute-targets) or for an [inference target](concept-compute-target.md#compute-targets-for-inference). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
-As an administrator, you can write a customization script to be used to provision all compute instances in the workspace according to your requirements.
+As an administrator, you can write a customization script to be used to provision all compute instances in the workspace according to your requirements. You can configure your setup script as a Creation script, which will run once when the compute instance is created. Or you can configure it as a Startup script, which will run every time the compute instance is started (including initial creation).
Some examples of what you can do in a setup script:
Once you store the script, specify it during creation of your compute instance:
1. [Fill out the form](how-to-create-manage-compute-instance.md?tabs=azure-studio#create).
1. On the second page of the form, open **Show advanced settings**.
1. Turn on **Provision with setup script**.
+1. Select either **Creation script** or **Startup script** tab.
1. Browse to the shell script you saved. Or upload a script from your computer.
1. Add command arguments as needed.
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
store = AzureDataLakeGen2Datastore(
name="", description="", account_name="",
- file_system=""
+ filesystem=""
) ml_client.create_or_update(store)
ml_client.create_or_update(store)
- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job) - [Create data assets](how-to-create-data-assets.md#create-data-assets)-- [Data administration](how-to-administrate-data-authentication.md#data-administration)
+- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning How To Interactive Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md
+
+ Title: Interact with your jobs (debug and monitor)
+
+description: Debug or monitor your Machine Learning job as it runs on AzureML compute with your training application of choice.
Last updated : 03/15/2022
+#Customer intent: I'm a data scientist with ML knowledge in the machine learning space, looking to build ML models using data in Azure Machine Learning with full control of the model training including debugging and monitoring of live jobs.
++
+# Debug jobs and monitor training progress (preview)
+Machine learning model training is usually an iterative process and requires significant experimentation. With the Azure Machine Learning interactive job experience, data scientists can use the Azure Machine Learning Python SDKv2, Azure Machine Learning CLIv2, or Azure Machine Learning studio to access the container where their job is running. Once the job container is accessed, users can iterate on training scripts, monitor training progress, or debug the job remotely, as they typically do on their local machines. Jobs can be interacted with via different training applications including **JupyterLab, TensorBoard, VS Code** or by connecting to the job container directly via **SSH**.
+
+Interactive training is supported on **Azure Machine Learning Compute Clusters** and **Azure Arc-enabled Kubernetes Cluster**.
+
+## Prerequisites
+- Review [getting started with training on Azure Machine Learning](./how-to-train-model.md).
+- To use **VS Code**, [follow this guide](how-to-setup-vs-code.md) to set up the Azure Machine Learning extension.
+- Make sure your job environment has the `openssh-server` and `ipykernel ~=6.0` packages installed (all Azure Machine Learning curated training environments have these packages installed by default).
+- Interactive applications can't be enabled on distributed training runs where the distribution type is anything other than PyTorch, TensorFlow, or MPI. Custom distributed training setup (configuring multi-node training without using the above distribution frameworks) isn't currently supported.
++
+## Interact with your job container
+
+By specifying interactive applications at job creation, you can connect directly to the container on the compute node where your job is running. Once you have access to the job container, you can test or debug your job in the exact same environment where it would run. You can also use VS Code to attach to the running process and debug as you would locally.
+
+### Enable during job submission
+# [Azure Machine Learning Studio](#tab/ui)
+1. Create a new job from the left navigation pane in the studio portal.
++
+2. Choose `Compute cluster` or `Attached compute` (Kubernetes) as the compute type, choose the compute target, and specify how many nodes you need in `Instance count`.
+
+ :::image type="content" source="./media/interactive-jobs/select-compute.png" alt-text="Screenshot of selecting a compute location for a job.":::
+
+3. Follow the wizard to choose the environment you want to start the job.
+
+
+4. In `Job settings` step, add your training code (and input/output data) and reference it in your command to make sure it's mounted to your job.
+
+ :::image type="content" source="./media/interactive-jobs/sleep-command.png" alt-text="Screenshot of reviewing a drafted job and completing the creation.":::
+
+ You can put `sleep <specific time>` at the end of your command to specify the amount of time you want to reserve the compute resource. The format follows:
+ * sleep 1s
+ * sleep 1m
+ * sleep 1h
+ * sleep 1d
+
+ You can also use the `sleep infinity` command, which keeps the job alive indefinitely.
+
+ > [!NOTE]
+ > If you use `sleep infinity`, you will need to manually [cancel the job](./how-to-interactive-jobs.md#end-job) to let go of the compute resource (and stop billing).
+
+5. Select the training applications you want to use to interact with the job.
+
+ :::image type="content" source="./media/interactive-jobs/select-training-apps.png" alt-text="Screenshot of selecting a training application for the user to use for a job.":::
+
+6. Review and create the job.
++
+# [Python SDK](#tab/python)
+1. Define the interactive services you want to use for your job. Make sure to replace `your compute name` with your own value. If you want to use your own custom environment, follow the examples in [this tutorial](how-to-manage-environments-v2.md) to create a custom environment.
+
+ Note that you have to import the `JobService` class from the `azure.ai.ml.entities` package to configure interactive services via the SDKv2.
+
+ ```python
+ command_job = command(
+ code="./src", # local path where the code is stored
+ command="python main.py", # you can add a command like "sleep 1h" to keep the compute resource reserved after the script finishes running
+ environment="AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu@latest",
+ compute="<name-of-compute>",
+ services={
+ "My_jupyterlab": JobService(
+ job_service_type="jupyter_lab",
+ nodes="all" # For distributed jobs, use the `nodes` property to pick which node you want to enable interactive services on. If `nodes` are not selected, by default, interactive applications are only enabled on the head node. Values are "all", or compute node index (for ex. "0", "1" etc.)
+ ),
+ "My_vscode": JobService(
+ job_service_type="vs_code",
+ nodes="all"
+ ),
+ "My_tensorboard": JobService(
+ job_service_type="tensor_board",
+ nodes="all",
+ properties={
+ "logDir": "output/tblogs" # relative path of Tensorboard logs (same as in your training script)
+ }
+ ),
+ "My_ssh": JobService(
+ job_service_type="ssh",
+ nodes="all",
+ properties={
+ "sshPublicKeys": "<add-public-key>"
+ }
+ ),
+ }
+ )
+
+ # submit the command
+ returned_job = ml_client.jobs.create_or_update(command_job)
+ ```
+
+ The `services` section specifies the training applications you want to interact with.
+
+ You can put `sleep <specific time>` at the end of your command to specify the amount of time you want to reserve the compute resource. The format follows:
+ * sleep 1s
+ * sleep 1m
+ * sleep 1h
+ * sleep 1d
+
+ You can also use the `sleep infinity` command, which keeps the job alive indefinitely.
+
+ > [!NOTE]
+ > If you use `sleep infinity`, you will need to manually [cancel the job](./how-to-interactive-jobs.md#end-job) to let go of the compute resource (and stop billing).
+
+2. Submit your training job. For more details on how to train with the Python SDKv2, check out this [article](./how-to-train-model.md).
+
+# [Azure CLI](#tab/azurecli)
+
+1. Create a job yaml `job.yaml` with below sample content. Make sure to replace `your compute name` with your own value. If you want to use custom environment, follow the examples in [this tutorial](how-to-manage-environments-v2.md) to create a custom environment.
+ ```yaml
+ code: src
+ command:
+ python train.py
+ # you can add a command like "sleep 1h" to keep the compute resource reserved after the script finishes running.
+ environment: azureml:AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu:41
+ compute: azureml:<your compute name>
+
+ services:
+ my_vs_code:
+ job_service_type: vs_code
+ nodes: all # For distributed jobs, use the `nodes` property to pick which node you want to enable interactive services on. If `nodes` are not selected, by default, interactive applications are only enabled on the head node. Values are "all", or compute node index (for ex. "0", "1" etc.)
+ my_tensor_board:
+ job_service_type: tensor_board
+ properties:
+ logDir: "output/tblogs" # relative path of Tensorboard logs (same as in your training script)
+ nodes: all
+ my_jupyter_lab:
+ job_service_type: jupyter_lab
+ nodes: all
+ my_ssh:
+ job_service_type: ssh
+ properties:
+ sshPublicKeys: <paste the entire pub key content>
+ nodes: all
+ ```
+ The `services` section specifies the training applications you want to interact with.
+
+ You can put `sleep <specific time>` at the end of the command to specify the amount of time you want to reserve the compute resource. The format follows:
+ * sleep 1s
+ * sleep 1m
+ * sleep 1h
+ * sleep 1d
+
+ You can also use the `sleep infinity` command, which keeps the job alive indefinitely.
+
+ > [!NOTE]
+ > If you use `sleep infinity`, you will need to manually [cancel the job](./how-to-interactive-jobs.md#end-job) to let go of the compute resource (and stop billing).
+
+2. Run the command `az ml job create --file <path to your job yaml file> --workspace-name <your workspace name> --resource-group <your resource group name> --subscription <sub-id>` to submit your training job. For more details on running a job via CLIv2, check out this [article](./how-to-train-model.md).
+++
+### Connect to endpoints
+# [Azure Machine Learning Studio](#tab/ui)
+To interact with your running job, click the button **Debug and monitor** on the job details page.
+++
+Clicking an application in the panel opens a new tab for that application. You can access the applications only when they are in **Running** status and only the **job owner** is authorized to access the applications. If you're training on multiple nodes, you can pick the specific node you would like to interact with.
++
+It might take a few minutes to start the job and the training applications specified during job creation.
+
+# [Python SDK](#tab/python)
+- Once the job is submitted, you can use `ml_client.jobs.show_services("<job name>", <compute node index>)` to view the interactive service endpoints.
+
+- To connect via SSH to the container where the job is running, run the command `az ml job connect-ssh --name <job-name> --node-index <compute node index> --private-key-file-path <path to private key>`. To set up the Azure Machine Learning CLIv2, follow this [guide](./how-to-configure-cli.md).
+
+You can find the reference documentation for the SDKv2 [here](/sdk/azure/ml).
+
+You can access the applications only when they are in **Running** status and only the **job owner** is authorized to access the applications. If you're training on multiple nodes, you can pick the specific node you would like to interact with by passing in the node index.
+
+# [Azure CLI](#tab/azurecli)
+- When the job is **running**, run the command `az ml job show-services --name <job name> --node-index <compute node index>` to get the URL to the applications. The endpoint URL appears under `services` in the output. Note that for VS Code, you must copy and paste the provided URL in your browser.
+
+- To connect via SSH to the container where the job is running, run the command `az ml job connect-ssh --name <job-name> --node-index <compute node index> --private-key-file-path <path to private key>`.
+
+You can find the reference documentation for these commands [here](/cli/azure/ml).
+
+You can access the applications only when they are in **Running** status and only the **job owner** is authorized to access the applications. If you're training on multiple nodes, you can pick the specific node you would like to interact with by passing in the node index.
+++
+### Interact with the applications
+When you click on the endpoints to interact with your job, you're taken to the user container under your working directory, where you can access your code, inputs, outputs, and logs. If you run into any issues while connecting to the applications, the interactive capability and application logs can be found in **system_logs->interactive_capability** under the **Outputs + logs** tab.
++
+- You can open a terminal from Jupyter Lab and start interacting within the job container. You can also directly iterate on your training script with Jupyter Lab.
+
+ :::image type="content" source="./media/interactive-jobs/jupyter-lab.png" alt-text="Screenshot of interactive jobs Jupyter lab content panel.":::
+
+- You can also interact with the job container within VS Code. To attach a debugger to a job during job submission and pause execution, see [Attach a debugger to a job](./how-to-interactive-jobs.md#attach-a-debugger-to-a-job).
+
+ :::image type="content" source="./media/interactive-jobs/vs-code-open.png" alt-text="Screenshot of interactive jobs VS Code panel when first opened. This shows the sample python file that was created to print two lines.":::
+
+- If you have logged TensorFlow events for your job, you can use TensorBoard to monitor the metrics while your job is running.
+
+ :::image type="content" source="./media/interactive-jobs/tensorboard-open.png" alt-text="Screenshot of interactive jobs tensorboard panel when first opened. This information will vary depending upon customer data":::
+
+### End job
+Once you're done with the interactive training, you can go to the job details page to cancel the job, which will release the compute resource. Alternatively, use `az ml job cancel -n <your job name>` in the CLI or `ml_client.jobs.cancel("<job name>")` in the SDK.
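+
+For example, a minimal sketch of the CLI path, assuming a job named `my-job` and workspace defaults set via `az configure`:
+
+```azurecli
+# Cancel the job; this releases the reserved compute and stops billing for it.
+az ml job cancel --name my-job
+```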
++
+## Attach a debugger to a job
+To submit a job with a debugger attached and the execution paused, you can use debugpy and VS Code (`debugpy` must be installed in your job environment).
+
+1. During job submission (either through the UI, the CLIv2, or the SDKv2), use the debugpy command to run your Python script. For example, a sample command that uses debugpy to attach the debugger to a TensorFlow training script is sketched after this list (`tfevents.py` can be replaced with the name of your training script).
+
+
+2. Once the job has been submitted, [connect to VS Code](./how-to-interactive-jobs.md#connect-to-endpoints), and click on the built-in debugger.
+
+ :::image type="content" source="./media/interactive-jobs/open-debugger.png" alt-text="Screenshot of interactive jobs location of open debugger on the left side panel":::
+
+3. Use the "Remote Attach" debug configuration to attach to the submitted job and pass in the path and port you configured in your job submission command. You can also find this information on the job details page.
+
+ :::image type="content" source="./media/interactive-jobs/debug-path-and-port.png" alt-text="Screenshot of interactive jobs completed jobs":::
+
+ :::image type="content" source="./media/interactive-jobs/remote-attach.png" alt-text="Screenshot of interactive jobs add a remote attach button":::
+
+4. Set breakpoints and walk through your job execution as you would in your local debugging workflow.
+
+ :::image type="content" source="./media/interactive-jobs/set-breakpoints.png" alt-text="Screenshot of location of an example breakpoint that is set in the Visual Studio Code editor":::
++
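+As referenced in step 1, a minimal sketch of a job command that runs a training script under debugpy is shown below. The script name `tfevents.py`, the port `5678`, and listening on all interfaces are illustrative placeholders:
+
+```bash
+# Start the script under debugpy, listening for a debugger on port 5678,
+# and pause execution until a client (VS Code) attaches.
+python -m debugpy --listen 0.0.0.0:5678 --wait-for-client tfevents.py
+```
+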
+> [!NOTE]
+> If you use debugpy to start your job, your job will **not** execute unless you attach the debugger in VS Code and execute the script. If this is not done, the compute will be reserved until the job is [cancelled](./how-to-interactive-jobs.md#end-job).
+
+## Next steps
+++ Learn more about [how and where to deploy a model](./how-to-deploy-managed-online-endpoints.md).
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
accuracy = accuracy_score(y_test, y_pred)
``` > [!TIP]
-> If you are using Machine Learning pipelines, like for instance [Scikit-Learn pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), use the `autolog` functionality of that flavor for logging models. Models are automatically logged when the `fit()` method is called on the pipeline object. The notebook [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/v1/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb) demostrates how to log a model with preprocessing using pipelines.
+> If you are using Machine Learning pipelines, like for instance [Scikit-Learn pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), use the `autolog` functionality of that flavor for logging models. Models are automatically logged when the `fit()` method is called on the pipeline object. The notebook [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/v1/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb) demonstrates how to log a model with preprocessing using pipelines.
## Logging models with a custom signature, environment or samples
mlflow.xgboost.log_model(model,
> [!NOTE] > * `log_models=False` is configured in `autolog`. This prevents MLflow from automatically logging the model, as it is done manually later.
-> * `infer_signature` is a convenient method to try to infer the signature directly from inputs and outpus.
+> * `infer_signature` is a convenient method to try to infer the signature directly from inputs and outputs.
> * `mlflow.utils.environment._mlflow_conda_env` is a private method in the MLflow SDK and it may change in the future. This example uses it just for the sake of simplicity, but it must be used with caution, or you can generate the YAML definition manually as a Python dictionary.

## Logging models with a different behavior in the predict method
A solution to this scenario is to implement machine learning pipelines that move
## Logging custom models
-MLflow provides support for a variety of [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors) including FastAI, MXNet Gluon, PyTorch, TensorFlow, XGBoost, CatBoost, h2o, Keras, LightGBM, MLeap, ONNX, Prophet, spaCy, Spark MLLib, Scikit-Learn, and statsmodels. However, they may be times where you need to change how a flavor works, log a model not natively supported by MLflow or even log a model that uses multiple elements from different frameworks. For those cases, you may need to create a custom model flavor.
+MLflow provides support for a variety of [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors) including FastAI, MXNet Gluon, PyTorch, TensorFlow, XGBoost, CatBoost, h2o, Keras, LightGBM, MLeap, ONNX, Prophet, spaCy, Spark MLLib, Scikit-Learn, and statsmodels. However, there may be times where you need to change how a flavor works, log a model not natively supported by MLflow or even log a model that uses multiple elements from different frameworks. For those cases, you may need to create a custom model flavor.
For these types of models, MLflow introduces a flavor called `pyfunc` (standing for Python function). Basically, this flavor allows you to log any object you want as a model, as long as it satisfies two conditions:
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md
Title: Model interpretability (preview)
+ Title: Model interpretability
description: Learn how your machine learning model makes predictions during training and inferencing by using the Azure Machine Learning CLI and Python SDK.
Last updated 11/04/2022
-# Model interpretability (preview)
+# Model interpretability
This article describes methods you can use for model interpretability in Azure Machine Learning.
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md
Low-Priority VMs have a single quota separate from the dedicated quota value, wh
## Schedule compute instances
-When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. [Set up a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop-preview) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it.
+When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. [Set up a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance to save cost when you aren't planning to use it.
## Use reserved instances
machine-learning How To Manage Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-synapse-spark-pool.md
description: Learn how to attach and manage Spark pools with Azure Synapse -+
machine-learning How To Responsible Ai Insights Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-insights-sdk-cli.md
There are two output ports:
-## How to generate a Responsible AI scorecard?
+## How to generate a Responsible AI scorecard (preview)
The configuration stage requires you to use your domain expertise around the problem to set your desired target values on model performance and fairness metrics.
machine-learning How To Responsible Ai Insights Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-insights-ui.md
After you've finished configuring your experiment, select **Create** to start
To learn how to view and use your Responsible AI dashboard, see [Use the Responsible AI dashboard in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md).
-## Generate Responsible AI scorecard (preview)
+## How to generate Responsible AI scorecard (preview)
Once you've created a dashboard, you can use a no-code UI in Azure Machine Learning studio to customize and generate a Responsible AI scorecard. This enables you to share key insights for responsible deployment of your model, such as fairness and feature importance, with non-technical and technical stakeholders. Similar to creating a dashboard, you can use the following steps to access the scorecard generation wizard:
machine-learning How To Schedule Pipeline Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-pipeline-job.md
-# Schedule machine learning pipeline jobs (preview)
+# Schedule machine learning pipeline jobs
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
You can use a service principal for Azure CLI commands. For more information, se
-The service principal can also be used to authenticate to the Azure Machine Learning [REST API](/rest/api/azureml/). You use the Azure Active Directory [client credentials grant flow](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md), which allow service-to-service calls for headless authentication in automated workflows.
+The service principal can also be used to authenticate to the Azure Machine Learning [REST API](/rest/api/azureml/). You use the Azure Active Directory [client credentials grant flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md), which allows service-to-service calls for headless authentication in automated workflows.
> [!IMPORTANT] > If you are currently using Azure Active Directory Authentication Library (ADAL) to get credentials, we recommend that you [Migrate to the Microsoft Authentication Library (MSAL)](../active-directory/develop/msal-migration.md). ADAL support ended June 30, 2022.
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
description: Learn how to submit standalone and pipeline Spark jobs in Azure Machine Learning -+
machine-learning Interactive Data Wrangling With Apache Spark Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml.md
description: Learn how to use Apache Spark to wrangle data with Azure Machine Learning -+
To create and configure a Managed (Automatic) Spark compute in an open notebook:
The Notebooks UI also provides options for Spark session configuration, for the Managed (Automatic) Spark compute. To configure a Spark session:
+1. Select **Configure session** at the bottom of the screen.
1. Select a version of **Apache Spark** from the dropdown menu.
1. Select **Instance type** from the dropdown menu.
1. Input a Spark **Session timeout** value, in minutes.
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
This article lists the curated environments with latest framework versions in Az
### Azure Container for PyTorch (ACPT) (preview)
-**Name**: AzureML-ACPT-pytorch-1.11-py38-cuda11.5-gpu
+**Name**: AzureML-ACPT-pytorch-1.12-py39-cuda11.6-gpu
**Description**: The Azure Curated Environment for PyTorch is our latest PyTorch curated environment. It is optimized for large, distributed deep learning workloads and comes pre-packaged with the best of Microsoft technologies for accelerated training, e.g., OnnxRuntime Training (ORT), DeepSpeed, MSCCL, etc. The following configurations are supported:

| Environment Name | OS | GPU Version | Python Version | PyTorch Version | ORT-training Version | DeepSpeed Version | torch-ort Version |
| -- | -- | -- | -- | -- | -- | -- | -- |
-| AzureML-ACPT-pytorch-1.11-py38-cuda11.5-gpu | Ubuntu 20.04 | cu115 | 3.8 | 1.11.0 | 1.11.1 | 0.7.1 | 1.11.0 |
-| AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu | Ubuntu 20.04 | cu113 | 3.8 | 1.11.0 | 1.11.1 | 0.7.1 | 1.11.0 |
+| AzureML-ACPT-pytorch-1.12-py39-cuda11.6-gpu | Ubuntu 20.04 | cu116 | 3.9 | 1.13.1 | 1.12.1 | 0.7.3 | 1.13.1 |
+| AzureML-ACPT-pytorch-1.12-py38-cuda11.6-gpu | Ubuntu 20.04 | cu116 | 3.8 | 1.12.0 | 1.12.0 | 0.7.3 | 1.12.0 |
+| AzureML-ACPT-pytorch-1.11-py38-cuda11.5-gpu | Ubuntu 20.04 | cu115 | 3.8 | 1.11.0 | 1.11.1 | 0.7.3 | 1.11.0 |
+| AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu | Ubuntu 20.04 | cu113 | 3.8 | 1.11.0 | 1.11.1 | 0.7.3 | 1.11.0 |
### PyTorch
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-monitor-datasets.md
Limitations and known issues for data drift monitors:
| Feature type | Data type | Condition | Limitations |
| -- | -- | -- | -- |
- | Categorical | string, bool, int, float | The number of unique values in the feature is less than 100 and less than 5% of the number of rows. | Null is treated as its own category. |
+ | Categorical | string | The number of unique values in the feature is less than 100 and less than 5% of the number of rows. | Null is treated as its own category. |
| Numerical | int, float | The values in the feature are of a numerical data type and do not meet the condition for a categorical feature. | Feature dropped if >15% of values are null. |

* When you have created a data drift monitor but cannot see data on the **Dataset monitors** page in Azure Machine Learning studio, try the following.
Limitations and known issues for data drift monitors:
* Head to the [Azure Machine Learning studio](https://ml.azure.com) or the [Python notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datadrift-tutorial/datadrift-tutorial.ipynb) to set up a dataset monitor. * See how to set up data drift on [models deployed to Azure Kubernetes Service](how-to-enable-data-collection.md).
-* Set up dataset drift monitors with [Azure Event Grid](../how-to-use-event-grid.md).
+* Set up dataset drift monitors with [Azure Event Grid](../how-to-use-event-grid.md).
marketplace Analytics Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-prerequisites.md
The Azure AD application you created in the Azure portal needs to be linked to y
## Generate an Azure AD token
-You need to Generate an Azure AD token using the Application (client) ID. This ID helps to uniquely identify your client application in the Microsoft identity platform and the client secret from the previous step. For the steps to generate an Azure AD token, see [Service to service calls using client credentials (shared secret or certificate)](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md).
+You need to generate an Azure AD token using the Application (client) ID, which uniquely identifies your client application in the Microsoft identity platform, and the client secret from the previous step. For the steps to generate an Azure AD token, see [Service to service calls using client credentials (shared secret or certificate)](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
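+
+As a rough illustration of that flow, the token request is an HTTP POST like the following `curl` sketch. The tenant ID, client ID, client secret, and especially the `scope` value are placeholders; use the scope (or resource) required by the API you're calling:
+
+```bash
+curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
+  -d "grant_type=client_credentials" \
+  -d "client_id=<application-client-id>" \
+  -d "client_secret=<client-secret>" \
+  -d "scope=<resource-uri>/.default"
+```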
> [!NOTE] > The token is valid for one hour.
marketplace Azure Ad Free Or Trial Landing Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-ad-free-or-trial-landing-page.md
To get started, follow the instructions for [registering a new application](../a
If you intend to query the Microsoft Graph API, [configure your new application to access web APIs](../active-directory/develop/quickstart-configure-app-access-web-apis.md). When you select the API permissions for this application, the default of **User.Read** is enough to gather basic information about the user to make the onboarding process smooth and automatic. Do not request any API permissions labeled **needs admin consent**, as this will block all non-administrator users from visiting your landing page.
-If you do require elevated permissions as part of your onboarding or provisioning process, consider using the [incremental consent](../active-directory/azuread-dev/azure-ad-endpoint-comparison.md) functionality of Azure AD so that all users sent from the marketplace are able to interact initially with the landing page.
+If you do require elevated permissions as part of your onboarding or provisioning process, consider using the [incremental consent](../active-directory/develop/permissions-consent-overview.md#consent) functionality of Azure AD so that all users sent from the marketplace are able to interact initially with the landing page.
## Use a code sample as a starting point
marketplace Azure Ad Transactable Saas Landing Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-ad-transactable-saas-landing-page.md
To get started, follow the instructions for [registering a new application](../a
If you intend to query the Microsoft Graph API, [configure your new application to access web APIs](../active-directory/develop/quickstart-configure-app-access-web-apis.md). When you select the API permissions for this application, the default of **User.Read** is enough to gather basic information about the buyer to make the onboarding process smooth and automatic. Do not request any API permissions labeled **needs admin consent**, as this will block all non-administrator users from visiting your landing page.
-If you require elevated permissions as part of your onboarding or provisioning process, consider using the [incremental consent](../active-directory/azuread-dev/azure-ad-endpoint-comparison.md) functionality of Azure AD so that all buyers sent from the marketplace are able to interact initially with the landing page.
+If you require elevated permissions as part of your onboarding or provisioning process, consider using the [incremental consent](../active-directory/develop/permissions-consent-overview.md#consent) functionality of Azure AD so that all buyers sent from the marketplace are able to interact initially with the landing page.
## Use a code sample as a starting point
marketplace Azure App Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-apis.md
To use the Microsoft Store submission API, you must associate an Azure AD applic
Before you call any of the methods in the Partner Center submission API, you must first obtain an Azure AD access token that you pass to the **Authorization** header of each method in the API. After you obtain an access token, you have 60 minutes to use it before it expires. After the token expires, you can refresh the token so you can continue to use it in future calls to the API.
-To obtain the access token, follow the instructions in [Service to Service Calls Using Client Credentials](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md) to send an `HTTP POST` to the `https://login.microsoftonline.com/<tenant_id>/oauth2/token` endpoint. Here is a sample request:
+To obtain the access token, follow the instructions in [Service to Service Calls Using Client Credentials](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) to send an `HTTP POST` to the `https://login.microsoftonline.com/<tenant_id>/oauth2/token` endpoint. Here is a sample request:
```json
marketplace Submission Api Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/submission-api-onboard.md
To use the Partner Center submission API, you must associate an Azure AD applica
Before you call any of the methods in the Partner Center submission API, you must first obtain an Azure AD access token to pass to the **Authorization** header of each method in the API. An access token expires 60 minutes after issuance. After that, you can refresh it so you can use it in future calls to the API.
-To obtain the access token, follow the instructions in [Service to Service Calls Using Client Credentials](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md) to send an `HTTP POST` to the `https://login.microsoftonline.com/<tenant_id>/oauth2/token` endpoint. Here is a sample request:
+To obtain the access token, follow the instructions in [Service to Service Calls Using Client Credentials](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) to send an `HTTP POST` to the `https://login.microsoftonline.com/<tenant_id>/oauth2/token` endpoint. Here is a sample request:
```json
POST https://login.microsoftonline.com/<tenant_id>/oauth2/token HTTP/1.1
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
ms.devlang: java -+ Last updated 08/15/2022
payment-hsm Support Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/support-guide.md
Microsoft will work with Thales to ensure that customers meet the prerequisites
The HSM base firmware installed is Thales payShield10K base software version 1.4a 1.8.3 with the Premium Package license. Versions below 1.4a 1.8.3 are not supported. Customers must ensure that they only upgrade to a firmware version that meets their compliance requirements.
+The Premium Package license included with Azure payment HSM provides the following features:
+- Premium Key Management
+- Magnetic Stripe Issuing
+- Magnetic Stripe Transaction Processing
+- EMV Chip, Contactless & Mobile Issuing
+- EMV Transaction Processing
+- Premium Data Protection
+- Remote payShield Manager
+- Hosted HSM
+ Customers are responsible for applying payShield security patches and upgrading payShield firmware for their provisioned HSMs, as needed. If customers have questions or require assistance, they should work with Thales support. Microsoft is responsible for applying payShield security patches to unallocated HSMs.

## Microsoft support
-Microsoft will provide support for hardware issues, networking issues, and provisioning issues.
-
-Explore the range of Azure support options and choose the plan that best fits at [Microsoft Support Plans](https://azure.microsoft.com/support/plans/). Customers should understand initial response time, listed at [Support scope and responsiveness](https://azure.microsoft.com/support/plans/response/).
+Microsoft will provide support for hardware issues, networking issues, and provisioning issues. Enterprise customers should contact their CSAM to find out the details of their support contract.
Microsoft support can be contacted by creating a support ticket through the Azure portal:
Depending on the nature of your issue or query, you may need to contact Microsof
- Learn more about [Azure Payment HSM](overview.md) - See some common [deployment scenarios](deployment-scenarios.md) - Learn about [Certification and compliance](certification-compliance.md)-- Read the [frequently asked questions](faq.yml)
+- Read the [frequently asked questions](faq.yml)
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-java.md
ms.devlang: java-+ Last updated 09/27/2022
purview Concept Business Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-business-glossary.md
The same term can also imply multiple business objects. It is important that eac
## Custom attributes
-Microsoft Purview supports eight out-of-the-box attributes for any business glossary term:
+Microsoft Purview supports these out-of-the-box attributes for any business glossary term:
- Name (mandatory)
- Nickname
- Status
Microsoft Purview supports eight out-of-the-box attributes for any business glos
- Resources
- Parent term
-These attributes cannot be edited or deleted. However, these attributes are not sufficient to completely define a term in an organization. To solve this problem, Microsoft Purview provides a feature where you can define custom attributes for your glossary.
+These attributes cannot be edited or deleted, but only the Name is mandatory to create a glossary term.
+However, these attributes are not sufficient to completely define a term in an organization. To solve this problem, Microsoft Purview provides a feature where you can define custom attributes for your glossary.
## Term templates
purview How To Policies Data Owner Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-arc-sql-server.md
This guide covers how a data owner can delegate authoring policies in Microsoft
[!INCLUDE [Access policies Arc enabled SQL Server pre-requisites](./includes/access-policies-prerequisites-arc-sql-server.md)]

## Microsoft Purview configuration

### Register data sources in Microsoft Purview

Register each data source with Microsoft Purview to later define access policies.
Register each data source with Microsoft Purview to later define access policies
1. **Select a collection** to put this registration in.
-1. Enable Data Use Management. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management]
-(./how-to-enable-data-use-management.md)
+1. Enable Data Use Management. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
1. Upon enabling Data Use Management, Microsoft Purview will automatically capture the **Application ID** of the App Registration related to this Arc-enabled SQL server. Come back to this screen and hit the refresh button on the side of it to refresh, in case the association between the Arc-enabled SQL server and the App Registration changes in the future.
purview How To Policies Data Owner Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-azure-sql-db.md
This guide covers how a data owner can delegate authoring policies in Microsoft
[!INCLUDE [Access policies Azure SQL Database pre-requisites](./includes/access-policies-prerequisites-azure-sql-db.md)]

## Microsoft Purview configuration

### Register the data sources in Microsoft Purview

The Azure SQL Database data source needs to be registered first with Microsoft Purview before creating access policies. You can follow these guides:
purview How To Policies Data Owner Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-resource-group.md
In this guide we cover how to register an entire resource group or subscription
(*) Only the *SQL Performance monitoring* and *Security auditing* actions are fully supported for SQL-type data sources. The *Read* action needs a workaround described later in this guide. The *Modify* action is not currently supported for SQL-type data sources.

## Microsoft Purview configuration

### Register the subscription or resource group for Data Use Management

The subscription or resource group needs to be registered with Microsoft Purview to later define access policies.
purview How To Policies Data Owner Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-storage.md
This guide covers how a data owner can delegate in Microsoft Purview management
[!INCLUDE [Azure Storage specific pre-requisites](./includes/access-policies-prerequisites-storage.md)]

## Microsoft Purview configuration

### Register the data sources in Microsoft Purview for Data Use Management

The Azure Storage resources need to be registered first with Microsoft Purview to later define access policies.
purview How To Policies Devops Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-arc-sql-server.md
This how-to guide covers how to provision access from Microsoft Purview to Arc-e
[!INCLUDE [Access policies Arc enabled SQL Server pre-requisites](./includes/access-policies-prerequisites-arc-sql-server.md)]

## Microsoft Purview configuration

### Register data sources in Microsoft Purview

The Arc-enabled SQL Server data source needs to be registered first with Microsoft Purview, before policies can be created.
purview How To Policies Devops Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-azure-sql-db.md
This how-to guide covers how to provision access from Microsoft Purview to Azure
[!INCLUDE [Access policies Azure SQL Database pre-requisites](./includes/access-policies-prerequisites-azure-sql-db.md)]

## Microsoft Purview Configuration

### Register the data sources in Microsoft Purview

The Azure SQL Database data source needs to be registered first with Microsoft Purview, before access policies can be created. You can follow these guides:
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Previously updated : 11/04/2022 Last updated : 11/10/2022 # What's available in the Microsoft Purview governance portal?
For more information, see our [introduction to Data Sharing](concept-data-share.
## Data Policy

Microsoft Purview Data Policy is a set of central, cloud-based experiences that help you manage access to data sources and datasets securely and at scale.
-Benefits:
-* Structure and simplify the process of granting/revoking access.
-* Reduce the effort of access provisioning.
-* Access decision in Microsoft data systems has negligible latency penalty.
-* Enhanced security:
- - Easier to review access/revoke it in a central vs. distributed access provisioning model.
- - Reduced need for privileged accounts to configure access.
- - Support Principle of Least Privilege (give people the appropriate level of access, limiting to the minimum permissions and the least data objects).
+- Manage access to data sources from a single pane of glass, cloud-based experience
+- Introduces a new data-plane permission model that is external to data sources
+- Seamless integration with Microsoft Purview Data Map and Catalog helps search for data assets and grant access only to what is required via fine-grained policies
+- Based on role definitions that are simple and abstracted (for example, Read and Modify)
+- At-scale access provisioning
For more information, see our introductory guides:

* [Data owner access policies](concept-policies-data-owner.md) (preview): Provision fine-grained to broad access to users and groups via intuitive authoring experience.
Discovering and understanding data sources and their use is the primary purpose
At the same time, users can contribute to the catalog by tagging, documenting, and annotating data sources that have already been registered. They can also register new data sources, which are then discovered, understood, and consumed by the community of catalog users. Lastly, Microsoft Purview Data Policy app leverages the metadata in the Data Map, providing a superior solution to keep your data secure.
+* Structure and simplify the process of granting/revoking access.
+* Reduce the effort of access provisioning.
+* Access decision in Microsoft data systems has negligible latency penalty.
+* Enhanced security:
+ - Easier to review access/revoke it in a central vs. distributed access provisioning model.
+ - Reduced need for privileged accounts to configure access.
+ - Support Principle of Least Privilege (give people the appropriate level of access, limiting to the minimum permissions and the least data objects).
## Next steps
purview Troubleshoot Policy Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-policy-distribution.md
Title: Troubleshoot distribution of Microsoft Purview access policies
-description: Learn how to troubleshoot the enforcement of access policies that were created in Microsoft Purview
+description: Learn how to troubleshoot the communication of access policies that were created in Microsoft Purview and need to be enforced in data sources
Last updated 11/09/2022
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-In this tutorial, learn how to programmatically fetch access policies that were created in Microsoft Purview. With this you can troubleshoot the communication of policies between Microsoft Purview, where policies are created and updated, and the data sources, on which these policies are enforced.
+In this tutorial, learn how to programmatically fetch access policies that were created in Microsoft Purview. With this, you can troubleshoot the communication of policies between Microsoft Purview, where policies are created and updated, and the data sources, where these policies need to be enforced.
To get the necessary context about Microsoft Purview policies, see concept guides listed in [next-steps](#next-steps).
In this example, the delta pull communicates the event that the policy on the re
## Policy constructs
-There are 3 top-level policy constructs used within the full pull (/policyElements) and delta pull (/policyEvents) requests: PolicySet, Policy and AttributeRule.
+There are 3 top-level policy constructs used within the responses to the full pull (/policyElements) and delta pull (/policyEvents) requests: Policy, PolicySet and AttributeRule.
-### PolicySet
+### Policy
-PolicySet associates Policy to a resource scope. Purview policy decision compute starts with a list of PolicySets. PolicySet evaluation triggers evaluation of Policy referenced in the PolicySet.
+Policy specifies the decision the data source must enforce (permit vs. deny) when an Azure AD principal attempts access via a client, provided the request context attributes satisfy the attribute predicates specified in the policy (for example, scope, requested action, and so on). Evaluation of the Policy triggers evaluation of the AttributeRules referenced in the Policy.
|member|value|type|cardinality|description|
|--|--|--|--|--|
PolicySet associates Policy to a resource scope. Purview policy decision compute
|kind| |string|1||
|version|1|number|1||
|updatedAt| |string|1|String representation of time in yyyy-MM-ddTHH:mm:ss.fffffffZ Ex: "2022-01-11T09:55:52.6472858Z"|
-|preconditionRules| |array[Object:Rule]|0..1||
-|policyRefs| |array[string]|1|List of policy IDs|
+|preconditionRules| |array[Object:Rule]|0..1|All the rules are 'anded'|
+|decisionRules| |array[Object:DecisionRule]|1||
-### Policy
+### PolicySet
-Policy specifies decision that should be emitted if the policy is applicable for the request provided request context attributes satisfy attribute predicates specified in the policy. Evaluation of policy triggers evaluation of AttributeRules referenced in the Policy.
+PolicySet associates an array of Policy IDs to a resource scope where they need to be enforced.
|member|value|type|cardinality|description|
|--|--|--|--|--|
Policy specifies decision that should be emitted if the policy is applicable for
|kind| |string|1||
|version|1|number|1||
|updatedAt| |string|1|String representation of time in yyyy-MM-ddTHH:mm:ss.fffffffZ Ex: "2022-01-11T09:55:52.6472858Z"|
-|preconditionRules| |array[Object:Rule]|0..1|All the rules are 'anded'|
-|decisionRules| |array[Object:DecisionRule]|1||
+|preconditionRules| |array[Object:Rule]|0..1||
+|policyRefs| |array[string]|1|List of policy IDs|
### AttributeRule
remote-rendering Configure Model Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/configure-model-conversion.md
By forcing a component to `NONE`, it's guaranteed that the output mesh doesn't h
These formats are allowed for the respective components:
-| :::no-loc text="Vertex"::: component | Supported formats (bold = default) |
-|:--|:|
-|position| **32_32_32_FLOAT**, 16_16_16_16_FLOAT |
-|color0| **8_8_8_8_UNSIGNED_NORMALIZED**, NONE |
-|color1| 8_8_8_8_UNSIGNED_NORMALIZED, **NONE**|
-|normal| **8_8_8_8_SIGNED_NORMALIZED**, 16_16_16_16_FLOAT, NONE |
-|tangent| **8_8_8_8_SIGNED_NORMALIZED**, 16_16_16_16_FLOAT, NONE |
-|binormal| **8_8_8_8_SIGNED_NORMALIZED**, 16_16_16_16_FLOAT, NONE |
-|texcoord0| **32_32_FLOAT**, 16_16_FLOAT, NONE |
-|texcoord1| **32_32_FLOAT**, 16_16_FLOAT, NONE |
+| :::no-loc text="Vertex"::: component | Supported formats (bold = default) | Usage in materials |
+|:--|:--|:--|
+|position| **32_32_32_FLOAT**, 16_16_16_16_FLOAT | Vertex position, must always be present. |
+|color0| **8_8_8_8_UNSIGNED_NORMALIZED**, NONE | Vertex colors. See `useVertexColor` property in both [Color materials](../../overview/features/color-materials.md) and [PBR materials](../../overview/features/pbr-materials.md), and `vertexMix` in [Color materials](../../overview/features/color-materials.md). |
+|color1| 8_8_8_8_UNSIGNED_NORMALIZED, **NONE**| Unused, leave it as **NONE**. |
+|normal| **8_8_8_8_SIGNED_NORMALIZED**, 16_16_16_16_FLOAT, NONE | Used for lighting in [PBR materials](../../overview/features/pbr-materials.md). |
+|tangent| **8_8_8_8_SIGNED_NORMALIZED**, 16_16_16_16_FLOAT, NONE | Used for lighting with normal maps in [PBR materials](../../overview/features/pbr-materials.md). |
+|binormal| **8_8_8_8_SIGNED_NORMALIZED**, 16_16_16_16_FLOAT, NONE | Used for lighting with normal maps in [PBR materials](../../overview/features/pbr-materials.md). |
+|texcoord0| **32_32_FLOAT**, 16_16_FLOAT, NONE | First slot of texture coordinates. Individual textures (albedo, normal map, ...) can either use slot 0 or 1, which is defined in the source file. |
+|texcoord1| **32_32_FLOAT**, 16_16_FLOAT, NONE | Second slot of texture coordinates. Individual textures (albedo, normal map, ...) can either use slot 0 or 1, which is defined in the source file. |
#### Supported component formats
remote-rendering Late Stage Reprojection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/late-stage-reprojection.md
Using the illustration above, the following transform is applied in :::no-loc te
### :::no-loc text="Local pose mode":::
-In this mode, the reprojection is split into two distinct steps: In the first step, the remote content is reprojected into local pose space, that is, the space that the local content is rendered with on VR/AR devices by default. After that, the local content is rendered on top of this pre-transformed image using the usual local pose. In the second step, the combined result is forwarded to the OS for the final reprojection. Since this second reprojection incurs only a small delta - in fact the same delta that would be used if ARR was not present - the distortion artifacts on local content are mitigated significantly.
+In this mode, the reprojection is split into two distinct steps: In the first step, the remote content is reprojected into local pose space, that is, the space that the local content is rendered with on VR/AR devices by default. After that, the local content is rendered on top of this pre-transformed image using the usual local pose. In the second step, the combined result is forwarded to the OS for the final reprojection. Since this second reprojection incurs only a small delta - in fact the same delta that would be used if ARR wasn't present - the distortion artifacts on local content are mitigated significantly.
Accordingly, the illustration looks like this:
Conceptually, this mode can be compared to conventional cloud-streaming applicat
### Performance and quality considerations
-The choice of the pose mode has visual quality and performance implications. The additional runtime cost on the client side for doing the extra reprojection in :::no-loc text="Local pose mode"::: on a HoloLens 2 device amounts to about 1 millisecond per frame of GPU time. This extra cost needs to be put into consideration if the client application is already close to the frame budget of 16 milliseconds. On the other hand, there are types of applications with either no local content or local content that is not prone to distortion artifacts. In those cases :::no-loc text="Local pose mode"::: doesn't gain any visual benefit because the quality of the remote content reprojection is unaffected.
+The choice of the pose mode has visual quality and performance implications. The extra runtime cost on the client side for doing the extra reprojection in :::no-loc text="Local pose mode"::: on a HoloLens 2 device amounts to about 1 millisecond per frame of GPU time. This extra cost needs to be put into consideration if the client application is already close to the frame budget of 16 milliseconds. On the other hand, there are types of applications with either no local content or local content that isn't prone to distortion artifacts. In those cases :::no-loc text="Local pose mode"::: doesn't gain any visual benefit because the quality of the remote content reprojection is unaffected.
The general advice would thus be to test the modes on a per use case basis and see whether the gain in visual quality justifies the extra performance overhead. It's also possible to toggle the mode dynamically, for instance enable local mode only when important UIs are shown.
public static void InitRemoteManager(Camera camera)
} ```
-If `PoseMode.Remote` is specified, the graphics binding will be initialized with offscreen proxy textures and all rendering will be redirected from the Unity scene's main camera to a proxy camera. This code path is only recommended for usage if runtime pose mode changes to `PoseMode.Remote` are required. If no pose mode is specified, the ARR Unity runtime will select an appropriate default depending on the current platform.
+If `PoseMode.Remote` is specified, the graphics binding will be initialized with offscreen proxy textures, and all rendering will be redirected from the Unity scene's main camera to a proxy camera. This code path is only recommended for usage if runtime pose mode changes to `PoseMode.Remote` are required. If no pose mode is specified, the ARR Unity runtime will select an appropriate default depending on the current platform.
> [!WARNING]
-> The proxy camera redirection might be incompatible with other Unity extensions, which expect scene rendering to take place with the main camera. The proxy camera can be retrieved via the `RemoteManagerUnity.ProxyCamera` property if it needs to be queried or registered elsewhere.
+> The proxy camera redirection might be incompatible with other Unity extensions, which expect scene rendering to take place with the main camera. The proxy camera can be retrieved via the `RemoteManagerUnity.ProxyCamera` property if it needs to be queried or registered elsewhere. Specifically for the `Cinemachine` plugin, refer to this troubleshooting entry: [The Unity `Cinemachine` plugin doesn't work in Remote pose mode](../../resources/troubleshoot.md#the-unity-cinemachine-plugin-doesnt-work-in-remote-pose-mode).
-If `PoseMode.Local` or `PoseMode.Passthrough` is used instead, the graphics binding won't be initialized with offscreen proxy textures and a fast path using the Unity scene's main camera to render will be used. If the respective use case requires remote pose mode at runtime, `PoseMode.Remote` should be specified on `RemoteManagerUnity` initialization. Directly rendering with Unity's main camera is more efficient and can prevent issues with other Unity extensions. Therefore, it's recommended to use the non-proxy rendering path.
+If `PoseMode.Local` or `PoseMode.Passthrough` is used instead, the graphics binding won't be initialized with offscreen proxy textures, and a fast path using the Unity scene's main camera to render will be used. If the respective use case requires remote pose mode at runtime, `PoseMode.Remote` should be specified on `RemoteManagerUnity` initialization. Directly rendering with Unity's main camera is more efficient and can prevent issues with other Unity extensions. Therefore, it's recommended to use the non-proxy rendering path.
## Next steps
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
Previously updated : 07/27/2022 Last updated : 11/09/2022 #Customer intent: As an IT administrator, I want to learn about Azure Route Server and what I can use it for.
search Search Howto Index Changed Deleted Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-changed-deleted-blobs.md
Previously updated : 09/09/2022 Last updated : 11/09/2022 # Change and delete detection using indexers for Azure Storage in Azure Cognitive Search
For this deletion detection approach, Cognitive Search depends on the [native bl
+ You must use the preview REST API (`api-version=2020-06-30-Preview`), or the indexer Data Source configuration in the Azure portal, to configure support for soft delete.
++ [Blob versioning](../storage/blobs/versioning-overview.md) must not be enabled in the storage account. Otherwise, native soft delete isn't supported by design.

### Configure native soft delete

In Blob storage, when enabling soft delete per the requirements, set the retention policy to a value that's much higher than your indexer interval schedule. If there's an issue running the indexer, or if you have a large number of documents to index, there's plenty of time for the indexer to eventually process the soft deleted blobs. Azure Cognitive Search indexers will only delete a document from the index if it processes the blob while it's in a soft deleted state.
sentinel Normalization About Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-parsers.md
Each schema has a standard set of filtering parameters documented in the relevan
- [Network Session](network-normalization-schema.md#filtering-parser-parameters) - [Web Session](web-normalization-schema.md#filtering-parser-parameters)
-Every schema that supports filtering parameters supports at least the `starttime` and `enttime` parameters and using them is often critical for optimizing performance.
+Every schema that supports filtering parameters supports at least the `starttime` and `endtime` parameters and using them is often critical for optimizing performance.
For an example of using filtering parsers, see [Unifying parsers](#unifying-parsers) above.
service-bus-messaging Enable Dead Letter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-dead-letter.md
Title: Enable dead lettering for Azure Service Bus queues and subscriptions description: This article explains how to enable dead lettering for queues and subscriptions by using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript) Previously updated : 04/20/2021 - Last updated : 11/09/2022 # Enable dead lettering on message expiration for Azure Service Bus queues and subscriptions
az servicebus topic subscription create \
To **enable the dead lettering on message expiration setting for a subscription to a topic**, use the [`az servicebus topic subscription update`](/cli/azure/servicebus/topic/subscription#az-servicebus-topic-subscription-update) command with `--enable-dead-lettering-on-message-expiration` set to `true`.

```azurecli-interactive
-az servicebus topic subscription create \
+az servicebus topic subscription update \
    --resource-group myresourcegroup \
    --namespace-name mynamespace \
    --topic-name mytopic \
az servicebus topic subscription create \
    --enable-dead-lettering-on-message-expiration true
```
+> [!NOTE]
+> If you specify a queue or topic by using the `--forward-dead-lettered-messages-to` parameter, Service Bus automatically forwards dead-lettered messages to that queue or topic. Here's an example: `az servicebus queue create --resource-group mysbusrg --namespace-name mysbusns --name myqueue --enable-dead-lettering-on-message-expiration true --forward-dead-lettered-messages-to myqueue2`.
+
## Using Azure PowerShell

To **create a queue with dead lettering on message expiration enabled**, use the [`New-AzServiceBusQueue`](/powershell/module/az.servicebus/new-azservicebusqueue) command with `-DeadLetteringOnMessageExpiration` set to `$True`.
Set-AzServiceBusSubscription -ResourceGroup myresourcegroup `
-SubscriptionObj $subscription
```
+> [!NOTE]
+> If you specify a queue or topic by using the `-ForwardDeadLetteredMessagesTo` parameter, Service Bus automatically forwards dead-lettered messages to that queue or topic.
+
## Using Azure Resource Manager template

To **create a queue with dead lettering on message expiration enabled**, set `deadLetteringOnMessageExpiration` in the queue properties section to `true`. For more information, see [Microsoft.ServiceBus namespaces/queues template reference](/azure/templates/microsoft.servicebus/namespaces/queues?tabs=json).
To **create a subscription for a topic with dead lettering on message expiration
}
```
+> [!NOTE]
+> If you specify a queue or topic for the `forwardDeadLetteredMessagesTo` property, Service Bus automatically forwards dead-lettered messages to that queue or topic.
## Next steps

Try the samples in the language of your choice to explore Azure Service Bus features.
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
If you're new to the service, see [Service Bus overview](service-bus-messaging-o
- **Visual Studio 2022**. The sample application makes use of new features that were introduced in C# 10. You can still use the Service Bus client library with previous C# language versions, but the syntax may vary. To use the latest syntax, we recommend that you install .NET 6.0 or higher and set the language version to `latest`. If you're using Visual Studio, versions before Visual Studio 2022 aren't compatible with the tools needed to build C# 10 projects.

## [Connection String](#tab/connection-string)

## [Passwordless](#tab/passwordless)

[!INCLUDE [service-bus-create-queue-portal](./includes/service-bus-create-queue-portal.md)]
service-bus-messaging Service Bus Dotnet How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md
If you're new to the service, see [Service Bus overview](service-bus-messaging-o
- **Visual Studio 2022**. The sample application makes use of new features that were introduced in C# 10. You can still use the Service Bus client library with previous C# language versions, but the syntax may vary. To use the latest syntax, we recommend that you install .NET 6.0 or higher and set the language version to `latest`. If you're using Visual Studio, versions before Visual Studio 2022 aren't compatible with the tools needed to build C# 10 projects.

## [Connection String](#tab/connection-string)

## [Passwordless](#tab/passwordless)

[!INCLUDE [service-bus-create-topic-subscription-portal](./includes/service-bus-create-topic-subscription-portal.md)]
service-health Impacted Resources Outage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/impacted-resources-outage.md
+
+ Title: Impacted resources support for outages
+description: This article details what is communicated to users and where they can view information about their impacted resources.
+ Last updated : 11/9/2022++
+# Impacted Resources for Azure Outages
+
+[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) helps customers view any health events that impact their Subscriptions and Tenants. The Service Issues blade on Service Health shows any ongoing problems in Azure services that are impacting your resources. You can understand when the issue began, and what services and regions are impacted. Previously, the Potential Impact tab on the Service Issues blade was within the details of an incident. It showed any resources under a customer's Subscriptions or Tenants that may be impacted by an outage, and their resource health signal to help customers evaluate impact.
+
+**In support of the impacted resource experience, Service Health has enabled a new feature to:**
+
+- Replace "Potential Impact" tab with "Impacted Resources" tab on Service Issues.
+- Display resources that are confirmed to be impacted by an outage.
+- Display resources that are not confirmed to be impacted by an outage but could be impacted because they fall under a service or region that is confirmed to be impacted by an outage.
+- Display the Resource Health status of both confirmed and potentially impacted resources, showing the availability of each resource.
+
+This article details what is communicated to users and where they can view information about their impacted resources.
+
+>[!Note]
>This feature will be enabled for users in phases. Initially, only selected subscription-level customers will see the experience, and it will gradually expand to 100% of subscription customers. In the future, this capability will be live for tenant-level customers.
+
+## Impacted Resources for Outages on the Service Health Portal
+
+The impacted resources tab under Azure portal -> Service Health -> Service Issues displays resources that are Confirmed to be impacted by an outage and resources that could Potentially be impacted by an outage. Below is an example of the impacted resources tab for an incident on Service Issues with Confirmed and Potential impact resources.
++
+##### Service Health provides the following information to users whose resources are impacted by an outage:
+
+|Column |Description |
+|||
+|Resource Name|Name of resource|
+|Resource Health|Health status of a resource at that point in time|
+|Impact Type|Tells customers if their resource is confirmed to be impacted or potentially impacted|
+|Resource Type|Type of resource impacted (for example, Virtual Machines)|
+|Resource Group|Resource group which contains the impacted resource|
+|Location|Location which contains the impacted resource|
+|Subscription ID|Unique ID for the subscription that contains the impacted resource|
+|Subscription Name|Subscription name for the subscription which contains the impacted resource|
+|Tenant ID|Unique ID for the tenant that contains the impacted resource|
+
+## Resource Name
+
+This column shows the name of the resource. The resource name is a clickable link that opens the Resource Health page for the resource.
+
+It will be plain text only if there is no Resource Health signal available for the resource.
+
+## Impact Type
+
+This column displays the values "Confirmed" or "Potential":
+
+- *Confirmed*: Resource that was confirmed to be impacted by an outage. Customers should check the Summary section to make sure customer action items (if any) are taken to remediate the issue.
+- *Potential*: Resource that is not confirmed to be impacted by an outage but could potentially be impacted because it is under a service or region that is impacted by an outage. Customers are advised to look at the resource health and make sure everything is working as planned.
+
+## Resource Health
+
+The health status listed under **[Resource Health](../service-health/resource-health-overview.md)** refers to the status of a given resource at that point in time.
+
+- A health status of available means your resource is healthy but it may have been affected by the service event at a previous point in time.
+- A health status of degraded or unavailable (caused by a customer-initiated or platform-initiated action) means your resource is impacted, but it may now be healthy and pending a status update.
+
+>[!Note]
+>Not all resources will show a Resource Health status. The status is shown only for resources that have a Resource Health signal available. For resources where the health signal is not available, the status is shown as **N/A** and the corresponding Resource Name value is plain text rather than a clickable link.
+
+## Filters
+
+Customers can filter the results using the following filters:
+
+- Impact type: Confirmed or Potential
+- Subscription ID: All Subscription IDs the user has access to
+- Status: Resource Health status column that shows Available, Unavailable, Degraded, Unknown, N/A
+
+## Export to CSV
+
+The list of impacted resources can be exported to a CSV file by selecting this option.
spring-apps Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/connect-managed-identity-to-azure-sql.md
Last updated 09/26/2022-+ # Use a managed identity to connect Azure SQL Database to an Azure Spring Apps app
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
description: Learn how to bind Azure Cosmos DB to your application in Azure Spri
Previously updated : 10/06/2019 Last updated : 11/09/2022
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
1. Add one of the following dependencies to your application's pom.xml file. Choose the dependency that is appropriate for your API type.
- * API type: NoSQL
+ * API type: NoSQL
- ```xml
- <dependency>
- <groupId>com.azure.spring</groupId>
- <artifactId>spring-cloud-azure-starter-data-cosmos</artifactId>
- <version>4.3.0</version>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-starter-data-cosmos</artifactId>
+ <version>4.3.0</version>
+ </dependency>
+ ```
- * API type: MongoDB
+ * API type: MongoDB
- ```xml
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-data-mongodb</artifactId>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-data-mongodb</artifactId>
+ </dependency>
+ ```
- * API type: Cassandra
+ * API type: Cassandra
- ```xml
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-data-cassandra</artifactId>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-data-cassandra</artifactId>
+ </dependency>
+ ```
- * API type: Azure Table
+ * API type: Azure Table
- ```xml
- <dependency>
- <groupId>com.azure.spring</groupId>
- <artifactId>spring-cloud-azure-starter-storage-blob</artifactId>
- <version>4.3.0</version>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-starter-storage-blob</artifactId>
+ <version>4.3.0</version>
+ </dependency>
+ ```
1. Update the current app by running `az spring app deploy`, or create a new deployment for this change by running `az spring app deployment create`. ## Bind your app to the Azure Cosmos DB
-#### [Service Binding](#tab/Service-Binding)
+### [Service Connector](#tab/Service-Connector)
+
+1. Use the Azure CLI to configure your Spring app to connect to a Cosmos SQL Database with a system-assigned managed identity by using the `az spring connection create` command, as shown in the following example.
+
+ > [!NOTE]
+ > Updating Azure Cosmos DB database settings can take a few minutes to complete.
+
+ ```azurecli
+ az spring connection create cosmos-sql \
+ --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
+ --app $APP_NAME \
+ --deployment $DEPLOYMENT_NAME \
+ --target-resource-group $COSMOSDB_RESOURCE_GROUP \
+ --account $COSMOSDB_ACCOUNT_NAME \
+ --database $DATABASE_NAME \
+ --system-assigned-identity
+ ```
+
+ > [!NOTE]
+ > If you're using [Service Connector](../service-connector/overview.md) for the first time, start by running the command `az provider register --namespace Microsoft.ServiceLinker` to register the Service Connector resource provider.
+ >
+ > If you're using Cosmos Cassandra, use `--key_space` instead of `--database`.
+
+ > [!TIP]
+ > Run the command `az spring connection list-support-types --output table` to get a list of supported target services and authentication methods for Azure Spring Apps. If the `az spring` command isn't recognized by the system, check that you have installed the required extension by running `az extension add --name spring`.
+
+1. Alternately, you can use the Azure portal to configure this connection by completing the following steps. The Azure portal provides the same capabilities as the Azure CLI and provides an interactive experience.
+
+ 1. Select your Azure Spring Apps instance in the Azure portal and select **Apps** from the navigation menu. Choose the app you want to connect and select **Service Connector** on the navigation menu.
+
+ 1. Select **Create**.
+
+ 1. On the **Basics** tab, for service type, select Cosmos DB, then choose a subscription. For API type, select Core (SQL), choose a Cosmos DB account, and a database. For client type, select Java, then select **Next: Authentication**. If you haven't created your database yet, see [Quickstart: Create an Azure Cosmos DB account, database, container, and items from the Azure portal](../cosmos-db/nosql/quickstart-portal.md).
+
+ 1. On the **Authentication** tab, choose **Connection string**. Service Connector automatically retrieves the access key from your Cosmos DB account. Select **Next: Networking**.
+
+ 1. On the **Networking** tab, select **Configure firewall rules to enable access to target service**, then select **Next: Review + Create**.
+
+ 1. On the **Review + Create** tab, wait for the validation to pass and then select **Create**. The creation can take a few minutes to complete.
+
+ 1. Once the connection between your Spring app and your Cosmos DB database has been generated, you can see it on the Service Connector page; select the unfold button to view the configured connection variables.
+
+### [Service Binding](#tab/Service-Binding)
+
+> [!NOTE]
+> We recommend using Service Connector instead of Service Binding to connect your app to your database. Service Binding is going to be deprecated in favor of Service Connector. For instructions, see the Service Connector tab.
Azure Cosmos DB has five different API types that support binding. The following procedure shows how to use them:
Azure Cosmos DB has five different API types that support binding. The following
1. Go to your Azure Spring Apps service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cosmos DB. This application is the same one you updated or deployed in the previous step. 1. Select **Service binding**, and select **Create service binding**. To fill out the form, select:+ * The **Binding type** value **Azure Cosmos DB**. * The API type. * Your database name. * The Azure Cosmos DB account.
- > [!NOTE]
- > If you are using Cassandra, use a key space for the database name.
+ > [!NOTE]
+ > If you are using Cassandra, use a key space for the database name.
1. Restart the application by selecting **Restart** on the application page. 1. To ensure the service is bound correctly, select the binding name and verify its details. The `property` field should be similar to this example:
- ```properties
- spring.cloud.azure.cosmos.endpoint=https://<some account>.documents.azure.com:443
- spring.cloud.azure.cosmos.key=abc******
- spring.cloud.azure.cosmos.database=testdb
- ```
+ ```properties
+ spring.cloud.azure.cosmos.endpoint=https://<some account>.documents.azure.com:443
+ spring.cloud.azure.cosmos.key=abc******
+ spring.cloud.azure.cosmos.database=testdb
+ ```
-#### [Terraform](#tab/Terraform)
+### [Terraform](#tab/Terraform)
The following Terraform script shows how to set up an Azure Spring Apps app with an Azure Cosmos DB account.
spring-apps How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-mysql.md
description: Learn how to bind an Azure Database for MySQL instance to your appl
Previously updated : 09/26/2022 Last updated : 11/09/2022 -+ # Bind an Azure Database for MySQL instance to your application in Azure Spring Apps
With Azure Spring Apps, you can bind select Azure services to your applications
## Bind your app to the Azure Database for MySQL instance
-#### [Service Binding](#tab/Service-Binding)
+### [Service Connector](#tab/Service-Connector)
+
+To configure your Spring app to connect to an Azure Database for MySQL Flexible Server with a system-assigned managed identity, use the `az spring connection create` command, as shown in the following example.
+
+```azurecli
+az spring connection create mysql-flexible \
+ --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
+ --app $APP_NAME \
+ --deployment $DEPLOYMENT_NAME \
+ --target-resource-group $MYSQL_RESOURCE_GROUP \
+ --server $MYSQL_SERVER_NAME \
+ --database $DATABASE_NAME \
+ --system-assigned-identity
+```
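+
+Optionally, you can verify that the connection works. The following is a sketch using the Service Connector validate command; the variable names match the placeholders above, and the connection name placeholder stands in for whatever name was generated or specified when the connection was created:
+
+```azurecli
+az spring connection validate \
+    --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
+    --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
+    --app $APP_NAME \
+    --connection <connection-name>
+```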
+
+### [Service Binding](#tab/Service-Binding)
+
+> [!NOTE]
+> We recommend using Service Connector instead of Service Binding to connect your app to your database. Service Binding is going to be deprecated in favor of Service Connector. For instructions, see the Service Connector tab.
1. Note the admin username and password of your Azure Database for MySQL account.
With Azure Spring Apps, you can bind select Azure services to your applications
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect ```
-#### [Passwordless connection using a managed identity](#tab/Passwordless)
-
-Configure your Spring app to connect to a MySQL Database Flexible Server with a system-assigned managed identity by using the `az spring connection create` command, as shown in the following example.
-
-```azurecli
-az spring connection create mysql-flexible \
- --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
- --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
- --app $APP_NAME \
- --deployment $DEPLOYMENT_NAME \
- --target-resource-group $MYSQL_RESOURCE_GROUP \
- --server $MYSQL_SERVER_NAME \
- --database $DATABASE_NAME \
- --system-assigned-identity
-```
-
-#### [Terraform](#tab/Terraform)
+### [Terraform](#tab/Terraform)
The following Terraform script shows how to set up an Azure Spring Apps app with Azure Database for MySQL.
spring-apps How To Bind Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-postgres.md
Last updated 09/26/2022 + # Bind an Azure Database for PostgreSQL to your application in Azure Spring Apps
Use the following steps to bind your app.
Configure Azure Spring Apps to connect to the PostgreSQL Database with a system-assigned managed identity using the `az spring connection create` command.
+> [!NOTE]
+> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
+ ```azurecli az spring connection create postgres-flexible \ --resource-group $SPRING_APP_RESOURCE_GROUP \
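+
+For reference, the complete command follows the same shape as the MySQL example shown earlier in this document. A minimal sketch, with the remaining variable names assumed by analogy:
+
+```azurecli
+az spring connection create postgres-flexible \
+    --resource-group $SPRING_APP_RESOURCE_GROUP \
+    --service $SPRING_APP_SERVICE_INSTANCE_NAME \
+    --app $APP_NAME \
+    --deployment $DEPLOYMENT_NAME \
+    --target-resource-group $POSTGRES_RESOURCE_GROUP \
+    --server $POSTGRES_SERVER_NAME \
+    --database $DATABASE_NAME \
+    --system-assigned-identity
+```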
spring-apps How To Bind Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-redis.md
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
## Prepare your Java project
-1. Add the following dependency to your project's pom.xml file:
+1. Add the following dependency to your project's *pom.xml* file:
- ```xml
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-data-redis-reactive</artifactId>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-data-redis-reactive</artifactId>
+ </dependency>
+ ```
-1. Remove any `spring.redis.*` properties from the `application.properties` file
+1. Remove any `spring.redis.*` properties from the *application.properties* file.
1. Update the current deployment using `az spring app update` or create a new deployment using `az spring app deployment create`. ## Bind your app to the Azure Cache for Redis
-#### [Service Binding](#tab/Service-Binding)
+### [Service Connector](#tab/Service-Connector)
+
+1. Use the Azure CLI to configure your Spring app to connect to a Redis database with an access key using the `az spring connection create` command, as shown in the following example.
+
+ ```azurecli
+ az spring connection create redis \
+ --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
+ --app $APP_NAME \
+ --deployment $DEPLOYMENT_NAME \
+ --target-resource-group $REDIS_RESOURCE_GROUP \
+ --server $REDIS_SERVER_NAME\
+ --database $REDIS_DATABASE_NAME \
+ --secret
+ ```
+
+ > [!NOTE]
+ > If you're using [Service Connector](../service-connector/overview.md) for the first time, start by running the command `az provider register --namespace Microsoft.ServiceLinker` to register the Service Connector resource provider.
+ >
+ > If you're using Redis Enterprise, use the `az spring connection create redis-enterprise` command instead.
+
+ > [!TIP]
+ > Run the command `az spring connection list-support-types --output table` to get a list of supported target services and authentication methods for Azure Spring Apps. If the `az spring` command isn't recognized by the system, check that you have installed the required extension by running `az extension add --name spring`.
+
+1. Alternately, you can use the Azure portal to configure this connection by completing the following steps. The Azure portal provides the same capabilities as the Azure CLI and provides an interactive experience.
+
+ 1. Select your Azure Spring Apps instance in the Azure portal and then select **Apps** from the navigation menu. Choose the app you want to connect and then select **Service Connector** on the navigation menu.
+
+ 1. Select **Create**.
+
+ 1. On the **Basics** tab, for service type, select Cache for Redis. Choose a subscription and a Redis cache server. Fill in the Redis database name ("0" in this example) and under client type, select Java. Select **Next: Authentication**.
+
+ 1. On the **Authentication** tab, choose **Connection string**. Service Connector will automatically retrieve the access key from your Redis database account. Select **Next: Networking**.
+
+ 1. On the **Networking** tab, select **Configure firewall rules to enable access to target service**, then select **Review + Create**.
+
+ 1. On the **Review + Create** tab, wait for the validation to pass and then select **Create**. The creation can take a few minutes to complete.
+
+ 1. Once the connection between your Spring app and your Redis database has been generated, you can see it on the Service Connector page; select the unfold button to view the configured connection variables.
+
+### [Service Binding](#tab/Service-Binding)
+
+> [!NOTE]
+> We recommend using Service Connector instead of Service Binding to connect your app to your database. Service Binding is going to be deprecated in favor of Service Connector. For instructions, see the Service Connector tab.
1. Go to your Azure Spring Apps service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cache for Redis. This application is the same one you updated or deployed in the previous step.
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
1. To ensure the service binding is correct, select the binding name and verify its details. The `property` field should look like this:
- ```properties
- spring.redis.host=some-redis.redis.cache.windows.net
- spring.redis.port=6380
- spring.redis.password=abc******
- spring.redis.ssl=true
- ```
+ ```properties
+ spring.redis.host=some-redis.redis.cache.windows.net
+ spring.redis.port=6380
+ spring.redis.password=abc******
+ spring.redis.ssl=true
+ ```
-#### [Terraform](#tab/Terraform)
+### [Terraform](#tab/Terraform)
The following Terraform script shows how to set up an Azure Spring Apps app with Azure Cache for Redis.
spring-apps How To Start Stop Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-start-stop-service.md
This article shows you how to start or stop your Azure Spring Apps service instance. > [!NOTE]
-> Stop and start is currently under preview and we do not recommend this feature for production.
+> You can stop and start your Azure Spring Apps service instance to help you save costs, but you shouldn't stop and start a running instance for service recovery.
Your applications running in Azure Spring Apps may not need to run continuously. For example, an application may not need to run continuously if you have a service instance that's used only during business hours. There may be times when Azure Spring Apps is idle and running only the system components.
You can reduce the active footprint of Azure Spring Apps by reducing the running
To reduce your costs further, you can completely stop your Azure Spring Apps service instance. All user apps and system components will be stopped. However, all your objects and network settings will be saved so you can restart your service instance and pick up right where you left off. > [!NOTE]
-> The state of a stopped Azure Spring Apps service instance is preserved for up to 90 days during preview. If your cluster is stopped for more than 90 days, the cluster state cannot be recovered. The maximum stop time may change after preview.
+> The state of a stopped Azure Spring Apps service instance is preserved for up to 90 days. If your cluster is stopped for more than 90 days, you can't recover the cluster state.
You can only start, view, or delete a stopped Azure Spring Apps service instance. You must start your service instance before performing any update operation, such as creating or scaling an app.
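You can stop and start the service instance from the Azure CLI. A minimal sketch, assuming the `spring` extension is installed (`az extension add --name spring`) and using placeholder names:

```azurecli
# Stop the service instance. All apps and system components stop, but state is preserved.
az spring stop --name <service-instance-name> --resource-group <resource-group>

# Start the service instance again to pick up where you left off.
az spring start --name <service-instance-name> --resource-group <resource-group>
```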
static-web-apps Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-overview.md
The following constraints apply to all API backends:
- Route rules for APIs only support [redirects](configuration.md#defining-routes) and [securing routes with roles](configuration.md#securing-routes-with-roles). - Only HTTP requests are supported for APIs. WebSocket, for example, is not supported. - The maximum duration of each API request is 45 seconds.-- Network isolated backends are not supported.
+- When you bring your own API, an application must be deployed to your static web app before requests to the `api` route will resolve correctly.
## Next steps
storage Anonymous Read Access Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-client.md
- Title: Access public containers and blobs anonymously with .NET-
-description: Use the Azure Storage client library for .NET to access public containers and blobs anonymously.
----- Previously updated : 02/16/2022------
-# Access public containers and blobs anonymously with .NET
-
-Azure Storage supports optional public read access for containers and blobs. Clients can access public containers and blobs anonymously by using the Azure Storage client libraries, as well as by using other tools and utilities that support data access to Azure Storage.
-
-This article shows how to access a public container or blob from .NET. For information about configuring anonymous read access on a container, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md). For information about preventing all anonymous access to a storage account, see [Prevent anonymous public read access to containers and blobs](anonymous-read-access-prevent.md).
-
-A client that accesses containers and blobs anonymously can use constructors that do not require credentials. The following examples show a few different ways to reference containers and blobs anonymously.
-
-> [!IMPORTANT]
-> Any firewall rules that are in effect for the storage account apply even when public access is enabled for a container.
-
-## Create an anonymous client object
-
-You can create a new service client object for anonymous access by providing the Blob storage endpoint for the account. However, you must also know the name of a container in that account that's available for anonymous access.
-
-# [\.NET v12 SDK](#tab/dotnet)
--
-# [\.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-public static void CreateAnonymousBlobClient()
-{
- // Create the client object using the Blob storage endpoint for your account.
- CloudBlobClient blobClient = new CloudBlobClient(
- new Uri(@"https://storagesamples.blob.core.windows.net"));
-
- // Get a reference to a container that's available for anonymous access.
- CloudBlobContainer container = blobClient.GetContainerReference("sample-container");
-
- // Read the container's properties.
- // Note this is only possible when the container supports full public read access.
- container.FetchAttributes();
- Console.WriteLine(container.Properties.LastModified);
- Console.WriteLine(container.Properties.ETag);
-}
-```
---
-## Reference a container anonymously
-
-If you have the URL to a container that is anonymously available, you can use it to reference the container directly.
-
-# [\.NET v12 SDK](#tab/dotnet)
--
-# [\.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-public static void ListBlobsAnonymously()
-{
- // Get a reference to a container that's available for anonymous access.
- CloudBlobContainer container = new CloudBlobContainer(
- new Uri(@"https://storagesamples.blob.core.windows.net/sample-container"));
-
- // List blobs in the container.
- // Note this is only possible when the container supports full public read access.
- foreach (IListBlobItem blobItem in container.ListBlobs())
- {
- Console.WriteLine(blobItem.Uri);
- }
-}
-```
---
-## Reference a blob anonymously
-
-If you have the URL to a blob that is available for anonymous access, you can reference the blob directly using that URL:
-
-# [\.NET v12 SDK](#tab/dotnet)
--
-# [\.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-public static void DownloadBlobAnonymously()
-{
- CloudBlockBlob blob = new CloudBlockBlob(
- new Uri(@"https://storagesamples.blob.core.windows.net/sample-container/logfile.txt"));
- blob.DownloadToFile(@"C:\Temp\logfile.txt", FileMode.Create);
-}
-```
---
-## Next steps
--- [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md)-- [Prevent anonymous public read access to containers and blobs](anonymous-read-access-prevent.md)-- [Authorizing access to Azure Storage](../common/authorize-data-access.md)
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
Previously updated : 10/28/2022 Last updated : 11/09/2022 -+ ms.devlang: azurecli
ms.devlang: azurecli
Azure Storage supports optional anonymous public read access for containers and blobs. By default, anonymous access to your data is never permitted. Unless you explicitly enable anonymous access, all requests to a container and its blobs must be authorized. When you configure a container's public access level setting to permit anonymous access, clients can read data in that container without authorizing the request. > [!WARNING]
-> When a container is configured for public access, any client can read data in that container. Public access presents a potential security risk, so if your scenario does not require it, Microsoft recommends that you disallow it for the storage account. For more information, see [Prevent anonymous public read access to containers and blobs](anonymous-read-access-prevent.md).
+> When a container is configured for public access, any client can read data in that container. Public access presents a potential security risk, so if your scenario does not require it, we recommend that you disallow it for the storage account.
-This article describes how to configure anonymous public read access for a container and its blobs. For information about how to access blob data anonymously from a client application, see [Access public containers and blobs anonymously with .NET](anonymous-read-access-client.md).
+This article describes how to configure anonymous public read access for a container and its blobs. For information about how to remediate anonymous access for optimal security, see one of these articles:
+
+- [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md)
+- [Remediate anonymous public read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md)
## About anonymous public read access Public access to your data is always prohibited by default. There are two separate settings that affect public access:
-1. **Allow public access for the storage account.** By default, a storage account allows a user with the appropriate permissions to enable public access to a container. Blob data is not available for public access unless the user takes the additional step to explicitly configure the container's public access setting.
+1. **Allow public access for the storage account.** By default, an Azure Resource Manager storage account allows a user with the appropriate permissions to enable public access to a container. Blob data is not available for public access unless the user takes the additional step to explicitly configure the container's public access setting.
1. **Configure the container's public access setting.** By default, a container's public access setting is disabled, meaning that authorization is required for every request to the container or its data. A user with the appropriate permissions can modify a container's public access setting to enable anonymous access only if anonymous access is allowed for the storage account. The following table summarizes how both settings together affect public access for a container.
By default, a storage account is configured to allow a user with the appropriate
Keep in mind that public access to a container is always turned off by default and must be explicitly configured to permit anonymous requests. Regardless of the setting on the storage account, your data will never be available for public access unless a user with appropriate permissions takes this additional step to enable public access on the container.
-Disallowing public access for the storage account prevents anonymous access to all containers and blobs in that account. When public access is disallowed for the account, it is not possible to configure the public access setting for a container to permit anonymous access. For improved security, Microsoft recommends that you disallow public access for your storage accounts unless your scenario requires that users access blob resources anonymously.
+Disallowing public access for the storage account overrides the public access settings for all containers in that storage account, preventing anonymous access to blob data in that account. When public access is disallowed for the account, it is not possible to configure the public access setting for a container to permit anonymous access, and any future anonymous requests to that account will fail. Before changing this setting, be sure to understand the impact on client applications that may be accessing data in your storage account anonymously. For more information, see [Prevent anonymous public read access to containers and blobs](anonymous-read-access-prevent.md).
> [!IMPORTANT]
-> Disallowing public access for a storage account overrides the public access settings for all containers in that storage account. When public access is disallowed for the storage account, any future anonymous requests to that account will fail. Before changing this setting, be sure to understand the impact on client applications that may be accessing data in your storage account anonymously. For more information, see [Prevent anonymous public read access to containers and blobs](anonymous-read-access-prevent.md).
+> After anonymous public access is disallowed for a storage account, clients that use the anonymous bearer challenge will find that Azure Storage returns a 403 error (Forbidden) rather than a 401 error (Unauthorized). We recommend that you make all containers private to mitigate this issue. For more information on modifying the public access setting for containers, see [Set the public access level for a container](anonymous-read-access-configure.md#set-the-public-access-level-for-a-container).
+
+Allowing or disallowing blob public access requires version 2019-04-01 or later of the Azure Storage resource provider. For more information, see [Azure Storage Resource Provider REST API](/rest/api/storagerp/).
+
+### Permissions for disallowing public access
+
+To set the **AllowBlobPublicAccess** property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.Storage/storageAccounts/write** action. Built-in roles with this action include:
+
+- The Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role
+- The Azure Resource Manager [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role
+- The [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role
+
+Role assignments must be scoped to the level of the storage account or higher to permit a user to disallow public access for the storage account. For more information about role scope, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md).
+
+Be careful to restrict assignment of these roles only to those administrative users who require the ability to create a storage account or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../../role-based-access-control/best-practices.md).
+
+These roles do not provide access to data in a storage account via Azure Active Directory (Azure AD). However, they include the **Microsoft.Storage/storageAccounts/listkeys/action**, which grants access to the account access keys. With this permission, a user can use the account access keys to access all data in a storage account.
+
+The **Microsoft.Storage/storageAccounts/listkeys/action** itself grants data access via the account keys, but does not grant a user the ability to change the **AllowBlobPublicAccess** property for a storage account. For users who need to access data in your storage account but should not have the ability to change the storage account's configuration, consider assigning roles such as [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor), [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader), or [Reader and Data Access](../../role-based-access-control/built-in-roles.md#reader-and-data-access).
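+
+For illustration, such a role can be assigned at the storage account scope with the Azure CLI. A minimal sketch; the principal ID and resource identifiers are placeholders:
+
+```azurecli
+az role assignment create \
+    --role "Storage Blob Data Reader" \
+    --assignee "<user-or-service-principal-id>" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+```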
+
+> [!NOTE]
+> The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create storage accounts and manage account configuration. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
+
+### Set the storage account's AllowBlobPublicAccess property
To allow or disallow public access for a storage account, configure the account's **AllowBlobPublicAccess** property. This property is available for all storage accounts that are created with the Azure Resource Manager deployment model. For more information, see [Storage account overview](../common/storage-account-overview.md).
To allow or disallow public access for a storage account in the Azure portal, fo
To allow or disallow public access for a storage account with PowerShell, install [Azure PowerShell version 4.4.0](https://www.powershellgallery.com/packages/Az/4.4.0) or later. Next, configure the **AllowBlobPublicAccess** property for a new or existing storage account.
-The following example creates a storage account and explicitly sets the **AllowBlobPublicAccess** property to **true**. It then updates the storage account to set the **AllowBlobPublicAccess** property to **false**. The example also retrieves the property value in each case. Remember to replace the placeholder values in brackets with your own values:
+The following example creates a storage account and explicitly sets the **AllowBlobPublicAccess** property to **false**. Remember to replace the placeholder values in brackets with your own values:
```powershell $rgName = "<resource-group>" $accountName = "<storage-account>" $location = "<location>"
-# Create a storage account with AllowBlobPublicAccess set to true (or null).
+# Create a storage account with AllowBlobPublicAccess set to false.
New-AzStorageAccount -ResourceGroupName $rgName ` -Name $accountName ` -Location $location ` -SkuName Standard_GRS `
- -AllowBlobPublicAccess $true
-
-# Read the AllowBlobPublicAccess property for the newly created storage account.
-(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).AllowBlobPublicAccess
-
-# Set AllowBlobPublicAccess set to false
-Set-AzStorageAccount -ResourceGroupName $rgName `
- -Name $accountName `
-AllowBlobPublicAccess $false
-# Read the AllowBlobPublicAccess property.
+# Read the AllowBlobPublicAccess property for the newly created storage account.
(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).AllowBlobPublicAccess ```
Set-AzStorageAccount -ResourceGroupName $rgName `
To allow or disallow public access for a storage account with Azure CLI, install Azure CLI version 2.9.0 or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli). Next, configure the **allowBlobPublicAccess** property for a new or existing storage account.
-The following example creates a storage account and explicitly sets the **allowBlobPublicAccess** property to **true**. It then updates the storage account to set the **allowBlobPublicAccess** property to **false**. The example also retrieves the property value in each case. Remember to replace the placeholder values in brackets with your own values:
+The following example creates a storage account and explicitly sets the **allowBlobPublicAccess** property to **false**. Remember to replace the placeholder values in brackets with your own values:
```azurecli-interactive az storage account create \
az storage account create \
--resource-group <resource-group> \ --kind StorageV2 \ --location <location> \
- --allow-blob-public-access true
-
-az storage account show \
- --name <storage-account> \
- --resource-group <resource-group> \
- --query allowBlobPublicAccess \
- --output tsv
-
-az storage account update \
- --name <storage-account> \
- --resource-group <resource-group> \
--allow-blob-public-access false az storage account show \
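To change the setting on an existing account, update the property in place. A minimal sketch with placeholder values:

```azurecli
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --allow-blob-public-access false
```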
When a container is configured for anonymous public access, requests to read blo
Allowing or disallowing blob public access requires version 2019-04-01 or later of the Azure Storage resource provider. For more information, see [Azure Storage Resource Provider REST API](/rest/api/storagerp/).
-The examples in this section showed how to read the **AllowBlobPublicAccess** property for the storage account to determine if public access is currently allowed or disallowed. To learn more about how to verify that an account's public access setting is configured to prevent anonymous access, see [Remediate anonymous public access](anonymous-read-access-prevent.md#remediate-anonymous-public-access).
+The examples in this section showed how to read the **AllowBlobPublicAccess** property for the storage account to determine if public access is currently allowed or disallowed. To learn more about how to verify that an account's public access setting is configured to prevent anonymous access, see [Remediate anonymous public access for the storage account](anonymous-read-access-prevent.md#remediate-anonymous-public-access-for-the-storage-account).
## Set the public access level for a container
The following example creates a container with public access disabled, and then
# Set variables. $rgName = "<resource-group>" $accountName = "<storage-account>"- # Get context object. $storageAccount = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName $ctx = $storageAccount.Context- # Create a new container with public access setting set to Off. $containerName = "<container>" New-AzStorageContainer -Name $containerName -Permission Off -Context $ctx- # Read the container's public access setting. Get-AzStorageContainerAcl -Container $containerName -Context $ctx- # Update the container's public access setting to Container. Set-AzStorageContainerAcl -Container $containerName -Permission Container -Context $ctx- # Read the container's public access setting. Get-AzStorageContainerAcl -Container $containerName -Context $ctx ```
az storage container create \
--public-access off \ --account-key <account-key> \ --auth-mode key- az storage container show-permission \ --name <container-name> \ --account-name <account-name> \ --account-key <account-key> \ --auth-mode key- az storage container set-permission \ --name <container-name> \ --account-name <account-name> \ --public-access container \ --account-key <account-key> \ --auth-mode key- az storage container show-permission \ --name <container-name> \ --account-name <account-name> \
The following example uses PowerShell to get the public access setting for all c
```powershell $rgName = "<resource-group>" $accountName = "<storage-account>"- $storageAccount = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName $ctx = $storageAccount.Context- Get-AzStorageContainer -Context $ctx | Select Name, PublicAccess ```
Get-AzStorageContainer -Context $ctx | Select Name, PublicAccess
- [Prevent anonymous public read access to containers and blobs](anonymous-read-access-prevent.md) - [Access public containers and blobs anonymously with .NET](anonymous-read-access-client.md)-- [Authorizing access to Azure Storage](../common/authorize-data-access.md)
+- [Authorizing access to Azure Storage](../common/authorize-data-access.md)
storage Anonymous Read Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-overview.md
+
+ Title: Overview of remediating anonymous public read access for blob data
+
+description: Learn how to remediate anonymous public read access to blob data for both Azure Resource Manager and classic storage accounts.
+++++ Last updated : 11/09/2022+++
+ms.devlang: azurecli
++
+# Overview: Remediating anonymous public read access for blob data
+
+Azure Storage supports optional anonymous public read access for containers and blobs. By default, anonymous access to your data is never permitted. Unless you explicitly enable anonymous access, all requests to a container and its blobs must be authorized. We recommend that you disable anonymous public access for all of your storage accounts.
+
+This article provides an overview of how to remediate anonymous public access for your storage accounts.
+
+> [!WARNING]
+> Anonymous public access presents a security risk. We recommend that you take the actions described in the following section to remediate public access for all of your storage accounts, unless your scenario specifically requires anonymous access.
+
+## Recommendations for remediating anonymous public access
+
+To remediate anonymous public access, first determine whether your storage account uses the Azure Resource Manager deployment model or the classic deployment model. For more information, see [Resource Manager and classic deployment](../../azure-resource-manager/management/deployment-models.md).
+
+### Azure Resource Manager accounts
+
+If your storage account is using the Azure Resource Manager deployment model, then you can remediate public access by setting the account's **AllowBlobPublicAccess** property to **False**. After you set the **AllowBlobPublicAccess** property to **False**, all requests for blob data to that storage account will require authorization, regardless of the public access setting for any individual container.
+
+To learn more about how to remediate public access for Azure Resource Manager accounts, see [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
+
+### Classic accounts
+
+If your storage account is using the classic deployment model, then you can remediate public access by setting each container's public access property to **Private**. To learn more about how to remediate public access for classic storage accounts, see [Remediate anonymous public read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md).
+
+### Scenarios requiring anonymous access
+
+If your scenario requires that certain containers need to be available for public access, then you should move those containers and their blobs into separate storage accounts that are reserved only for public access. You can then disallow public access for any other storage accounts using the recommendations provided in [Recommendations for remediating anonymous public access](#recommendations-for-remediating-anonymous-public-access).
+
+For information on how to configure containers for public access, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).
+
+## Next steps
+
+- [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md)
+- [Remediate anonymous public read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md)
storage Anonymous Read Access Prevent Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent-classic.md
+
+ Title: Remediate anonymous public read access to blob data (classic deployments)
+
+description: Learn how to prevent anonymous requests against a classic storage account by disabling anonymous public access to containers.
+++++ Last updated : 11/09/2022++++++
+# Remediate anonymous public read access to blob data (classic deployments)
+
+Azure Blob Storage supports optional anonymous public read access to containers and blobs. However, anonymous access may present a security risk. We recommend that you disable anonymous access for optimal security. Disallowing public access helps to prevent data breaches caused by undesired anonymous access.
+
+By default, public access to your blob data is always prohibited. However, the default configuration for a classic storage account permits a user with appropriate permissions to configure public access to containers and blobs in a storage account. To prevent public access to a classic storage account, you must configure each container in the account to block public access.
+
+If your storage account is using the classic deployment model, we recommend that you [migrate](../../virtual-machines/migration-classic-resource-manager-overview.md#migration-of-storage-accounts) to the Azure Resource Manager deployment model as soon as possible. After you migrate your account, you can configure it to disallow anonymous public access at the account level. For information about how to disallow anonymous public access for an Azure Resource Manager account, see [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
+
+If you cannot migrate your classic storage accounts at this time, then you should remediate public access to those accounts now by setting all containers to be private. This article describes how to remediate access to the containers in a classic storage account.
+
+Azure Storage accounts that use the classic deployment model will be retired on August 31, 2024. For more information, see [Azure classic storage accounts will be retired on 31 August 2024](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024/).
+
+> [!WARNING]
+> Anonymous public access presents a security risk. We recommend that you take the actions described in the following section to remediate public access for all of your classic storage accounts, unless your scenario specifically requires anonymous access.
+
+## Block anonymous access to containers
+
+To remediate anonymous access for a classic storage account, set the public access level for each container in the account to **Private**.
+
+# [Azure portal](#tab/portal)
+
+To remediate public access for one or more containers in the Azure portal, follow these steps:
+
+1. Navigate to your storage account overview in the Azure portal.
+1. Under **Data storage** on the menu blade, select **Blob containers**.
+1. Select the containers for which you want to set the public access level.
+1. Use the **Change access level** button to display the public access settings.
+1. Select **Private (no anonymous access)** from the **Public access level** dropdown, and then select **OK** to apply the change to the selected containers.
+
+ :::image type="content" source="media/anonymous-read-access-prevent-classic/configure-public-access-container.png" alt-text="Screenshot showing how to set public access level in the portal." lightbox="media/anonymous-read-access-prevent-classic/configure-public-access-container.png":::
+
+# [PowerShell](#tab/powershell)
+
+To remediate anonymous access for one or more containers with PowerShell, call the [Set-AzStorageContainerAcl](/powershell/module/az.storage/set-azstoragecontaineracl) command. Authorize this operation by passing in your account key, a connection string, or a shared access signature (SAS). The [Set Container ACL](/rest/api/storageservices/set-container-acl) operation that sets the container's public access level does not support authorization with Azure AD. For more information, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations).
+
+The following example updates a container's anonymous access setting to make the container private. Remember to replace the placeholder values in brackets with your own values:
+
+```powershell
+# Set variables.
+$rgName = "<resource-group>"
+$accountName = "<storage-account>"
+$containerName = "<container-name>"
+
+# Get context object.
+$storageAccount = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName
+$ctx = $storageAccount.Context
+
+# Read the container's public access setting.
+Get-AzStorageContainerAcl -Container $containerName -Context $ctx
+
+# Update the container's public access setting to Off.
+Set-AzStorageContainerAcl -Container $containerName -Permission Off -Context $ctx
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To remediate anonymous access for one or more containers with Azure CLI, call the [az storage container set permission](/cli/azure/storage/container#az-storage-container-set-permission) command. Authorize this operation by passing in your account key, a connection string, or a shared access signature (SAS). The [Set Container ACL](/rest/api/storageservices/set-container-acl) operation that sets the container's public access level does not support authorization with Azure AD. For more information, see [Permissions for calling blob and queue data operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-calling-data-operations).
+
+The following example updates a container's anonymous access setting to make the container private. Remember to replace the placeholder values in brackets with your own values:
+
+```azurecli-interactive
+# Read the container's public access setting.
+az storage container show-permission \
+ --name <container-name> \
+ --account-name <account-name> \
+ --account-key <account-key> \
+ --auth-mode key
+
+# Update the container's public access setting to Off.
+az storage container set-permission \
+ --name <container-name> \
+ --account-name <account-name> \
+ --public-access off \
+ --account-key <account-key> \
+ --auth-mode key
+```
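+
+To remediate several containers at once in a single account, the same commands can be combined in a shell loop. This is an illustrative sketch, assuming key-based authorization and placeholder values:
+
+```azurecli
+# Set every container in the account to private.
+for name in $(az storage container list \
+    --account-name <account-name> \
+    --account-key <account-key> \
+    --auth-mode key \
+    --query "[].name" --output tsv)
+do
+    az storage container set-permission \
+        --name "$name" \
+        --account-name <account-name> \
+        --public-access off \
+        --account-key <account-key> \
+        --auth-mode key
+done
+```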
+++
+## Check the public access setting for a set of containers
+
+It is possible to check which containers in one or more storage accounts are configured for public access by listing the containers and checking the public access setting. This approach is a practical option when a storage account does not contain a large number of containers, or when you are checking the setting across a small number of storage accounts. However, performance may suffer if you attempt to enumerate a large number of containers.
+
+The following example uses PowerShell to get the public access setting for all containers in a storage account. Remember to replace the placeholder values in brackets with your own values:
+
+```powershell
+$rgName = "<resource-group>"
+$accountName = "<storage-account>"
+
+$storageAccount = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName
+$ctx = $storageAccount.Context
+
+Get-AzStorageContainer -Context $ctx | Select Name, PublicAccess
+```
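+
+An equivalent check is possible with the Azure CLI. A sketch, assuming key-based authorization and placeholder values:
+
+```azurecli
+az storage container list \
+    --account-name <account-name> \
+    --account-key <account-key> \
+    --auth-mode key \
+    --query "[].{Name:name, PublicAccess:properties.publicAccess}" \
+    --output table
+```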
+
+## Sample script for bulk remediation
+
+The following sample PowerShell script runs against all classic storage accounts in a subscription and sets the public access setting for the containers in those accounts to **Private**.
+
+> [!CAUTION]
+> Running this script against storage accounts with very large numbers of containers may require significant resources and take a long time. If you have a storage account with a very large number of containers, you may wish to devise a different approach for remediating public access.
+
+```powershell
+# This script runs against all classic storage accounts in a single subscription
+# and sets containers to private.
+
+## IMPORTANT ##
+# Running this script requires a connected account through the previous version
+# of Azure PowerShell. Use the following command to install:
+# Install-Module Azure -scope CurrentUser -force
+#
+# Once installed, you will need to connect with:
+# Add-AzureAccount
+#
+# This command may fail if there are modules installed that conflict with it.
+# One known conflicting module is AzureRm.Accounts
+# You will need to remove conflicting modules using the following:
+# Remove-Module -name <name>
+#
+# The Azure PowerShell module assumes a current subscription when enumerating
+# storage accounts. You can set the current subscription with:
+# Select-AzureSubscription -subscriptionId <subscriptionId>
+#
+# Get-AzureSubscription lists all subscriptions available to the Azure
+# module. Not all subscriptions listed under your name in the portal may
+# appear here. If a subscription does not appear, you may need to use
+# the portal to remediate public access for those accounts.
+# After you have selected your subscription, verify that it is current
+# by running:
+# Get-AzureSubscription -current
+#
+# After the current subscription is set, you can run this script, change
+# to another subscription after it completes, and then run again as necessary.
+## END IMPORTANT##
+
+# Standard operation will enumerate all accounts and check for containers with public
+# access, then allow the user to decide whether or not to disable the setting.
+
+# Run with BypassConfirmation=$true if you wish to remove permissions from all containers
+# without individual confirmation
+
+# Run with BypassArmUpgrade=$true if you wish to skip the prompt to upgrade your storage
+# account to the Azure Resource Manager deployment model. All accounts must be upgraded by 31 August 2024.
+
+param(
+ [boolean]$BypassConfirmation=$false,
+ [boolean]$BypassArmUpgrade=$false
+)
+
+#Do not change this
+$convertAccounts = $false
+
+foreach($classicAccount in Get-AzureStorageAccount)
+{
+ $enumerate = $false
+
+ if(!$BypassArmUpgrade)
+ {
+ write-host "Classic Storage Account" $classicAccount.storageAccountname "found"
+ $confirmation = read-host "Convert to ARM? [y/n]:"
+ }
+ if(($confirmation -eq 'y') -and (!$BypassArmUpgrade))
+ {
+ write-host "Conversion selected"
+ $convertAccounts = $true
+ }
+ else
+ {
+ write-host $classicAccount.StorageAccountName "conversion not selected. Searching for public containers..."
+ $enumerate = $true
+ }
+
+ if($enumerate)
+ {
+ foreach($container in get-azurestoragecontainer -context (get-azurestorageaccount -storageaccountname $classicAccount.StorageAccountName).context)
+ {
+ # Only containers that currently allow public access need remediation.
+ if($container.PublicAccess -ne 'Off')
+ {
+ if(!$BypassConfirmation)
+ {
+ $selection = read-host $container.Name $container.PublicAccess "access found, Make private?[y/n]:"
+ }
+ if(($selection -eq 'y') -or ($BypassConfirmation))
+ {
+ write-host "Removing permissions from" $container.name "container on storage account" $classicaccount.StorageAccountName
+ try
+ {
+ Set-AzureStorageContainerAcl -context $classicAccount.context -name $container.name -Permission Off
+ write-host "Success!"
+ }
+ catch
+ {
+ $_
+ }
+ }
+ else
+ {
+ write-host "Skipping..."
+ }
+ }
+ }
+ }
+}
+if($convertAccounts)
+{
+ write-host "Converting accounts to ARM is the preferred method; however, there are some caveats."
+ write-host "The preferred method would be to use the portal to perform the conversions and then "
+ write-host "run the ARM script against them. For more information on converting a classic account"
+ write-host "to an ARM account, please see:"
+ write-host "https://learn.microsoft.com/en-us/azure/virtual-machines/migration-classic-resource-manager-overview"
+}
+write-host "Script complete"
+```
+
+## See also
+
+- [Overview: Remediating anonymous public read access for blob data](anonymous-read-access-overview.md)
+- [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md)
storage Anonymous Read Access Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent.md
Title: Prevent anonymous public read access to containers and blobs
+ Title: Remediate anonymous public read access to blob data (Azure Resource Manager deployments)
description: Learn how to analyze anonymous requests against a storage account and how to prevent anonymous access for the entire storage account or for an individual container.
Previously updated : 10/28/2022 Last updated : 11/09/2022 -+
-# Prevent anonymous public read access to containers and blobs
+# Remediate anonymous public read access to blob data (Azure Resource Manager deployments)
-This article describes how to use a DRAG (Detection-Remediation-Audit-Governance) framework to continuously manage public access for your storage accounts.
+Azure Blob Storage supports optional anonymous public read access to containers and blobs. However, anonymous access may present a security risk. We recommend that you disable anonymous access for optimal security. Disallowing public access helps to prevent data breaches caused by undesired anonymous access.
-Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data, but may also present a security risk. It's important to manage anonymous access judiciously and to understand how to evaluate anonymous access to your data. Operational complexity, human error, or malicious attack against data that is publicly accessible can result in costly data breaches. Microsoft recommends that you enable anonymous access only when necessary for your application scenario.
+By default, public access to your blob data is always prohibited. However, the default configuration for an Azure Resource Manager storage account permits a user with appropriate permissions to configure public access to containers and blobs in a storage account. You can disallow all public access to an Azure Resource Manager storage account, regardless of the public access setting for an individual container, by setting the **AllowBlobPublicAccess** property on the storage account to **False**.
-By default, public access to your blob data is always prohibited. However, the default configuration for a storage account permits a user with appropriate permissions to configure public access to containers and blobs in a storage account. For enhanced security, you can disallow all public access to storage account, regardless of the public access setting for an individual container. Disallowing public access to the storage account prevents a user from enabling public access for a container in the account. Microsoft recommends that you disallow public access to a storage account unless your scenario requires it. Disallowing public access helps to prevent data breaches caused by undesired anonymous access.
-
-When you disallow public blob access for the storage account, Azure Storage rejects all anonymous requests to that account. After public access is disallowed for an account, containers in that account cannot be subsequently configured for public access. Any containers that have already been configured for public access will no longer accept anonymous requests. For more information, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).
+After you disallow public blob access for the storage account, Azure Storage rejects all anonymous requests to that account. Disallowing public access to a storage account prevents users from subsequently configuring public access for containers in that account. Any containers that have already been configured for public access will no longer accept anonymous requests.
> [!WARNING]
-> When a container is configured for public access, any client can read data in that container. Public access presents a potential security risk, so if your scenario does not require it, Microsoft recommends that you disallow it for the storage account.
+> When a container is configured for public access, any client can read data in that container. Public access presents a potential security risk, so if your scenario does not require it, we recommend that you disallow it for the storage account.
+
+## Remediation for Azure Resource Manager versus classic storage accounts
+
+This article describes how to use a DRAG (Detection-Remediation-Audit-Governance) framework to continuously manage public access for storage accounts that are using the Azure Resource Manager deployment model. All general-purpose v2 storage accounts, premium block blob storage accounts, premium file share accounts, and Blob Storage accounts use the Azure Resource Manager deployment model. Some older general-purpose v1 accounts and premium page blob accounts may use the classic deployment model.
+
+If your storage account is using the classic deployment model, we recommend that you migrate to the Azure Resource Manager deployment model as soon as possible. Azure Storage accounts that use the classic deployment model will be retired on August 31, 2024. For more information, see [Azure classic storage accounts will be retired on 31 August 2024](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024/).
+
+If you cannot migrate your classic storage accounts at this time, then you should remediate public access to those accounts now. To learn how to remediate public access for classic storage accounts, see [Remediate anonymous public read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md). For more information about Azure deployment models, see [Resource Manager and classic deployment](../../azure-resource-manager/management/deployment-models.md).
+
+## About anonymous public read access
+
+Anonymous public access to your data is always prohibited by default. There are two separate settings that affect public access:
+
+1. **Allow public access for the storage account.** By default, a storage account allows a user with the appropriate permissions to enable public access to a container. Blob data is not available for public access unless the user takes the additional step to explicitly configure the container's public access setting.
+1. **Configure the container's public access setting.** By default, a container's public access setting is disabled, meaning that authorization is required for every request to the container or its data. A user with the appropriate permissions can modify a container's public access setting to enable anonymous access only if anonymous access is allowed for the storage account. You can inspect both settings with the sketch that follows this list.
+
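+A quick way to inspect both settings is with the Az PowerShell module. The following is a minimal sketch; the placeholder values in angle brackets are assumptions to replace with your own:
+
+```azurepowershell
+# Account-level setting: $null or True means users can enable public access on containers.
+$account = Get-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>"
+$account.AllowBlobPublicAccess
+
+# Container-level setting: PublicAccess is Off, Blob, or Container for each container.
+Get-AzStorageContainer -Context $account.Context | Select-Object Name, PublicAccess
+```
+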
+The following table summarizes how both settings together affect public access for a container.
+
+| | Public access level for the container is set to Private (default setting) | Public access level for the container is set to Container | Public access level for the container is set to Blob |
+|--|--|--|--|
+| **Public access is disallowed for the storage account** | **Recommended.** No public access to any container in the storage account. | No public access to any container in the storage account. The storage account setting overrides the container setting. | No public access to any container in the storage account. The storage account setting overrides the container setting. |
+| **Public access is allowed for the storage account (default setting)** | No public access to this container (default configuration). | **Not recommended.** Public access is permitted to this container and its blobs. | **Not recommended.** Public access is permitted to blobs in this container, but not to the container itself. |
+
+When anonymous public access is permitted for a storage account and configured for a specific container, the service accepts a request to read a blob in that container even when the request carries no *Authorization* header, and returns the blob's data in the response.
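+
+The following sketch shows what such a request looks like; the account, container, and blob names are hypothetical, and the call succeeds only when both settings permit public access:
+
+```azurepowershell
+# Anonymous GET against a blob; no credentials or Authorization header are supplied.
+$uri = "https://mystorageacct.blob.core.windows.net/public-container/hello.txt"
+(Invoke-WebRequest -Uri $uri -Method Get).Content
+```
+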
## Detect anonymous requests from client applications

When you disallow public read access for a storage account, you risk rejecting requests to containers and blobs that are currently configured for public access. Disallowing public access for a storage account overrides the public access settings for individual containers in that storage account. When public access is disallowed for the storage account, any future anonymous requests to that account will fail.
-To understand how disallowing public access may affect client applications, Microsoft recommends that you enable logging and metrics for that account and analyze patterns of anonymous requests over an interval of time. Use metrics to determine the number of anonymous requests to the storage account, and use logs to determine which containers are being accessed anonymously.
+To understand how disallowing public access may affect client applications, we recommend that you enable logging and metrics for that account and analyze patterns of anonymous requests over an interval of time. Use metrics to determine the number of anonymous requests to the storage account, and use logs to determine which containers are being accessed anonymously.
### Monitor anonymous requests with Metrics Explorer
You can also configure an alert rule to notify you when a certain number of anon
Azure Storage logs capture details about requests made against the storage account, including how a request was authorized. You can analyze the logs to determine which containers are receiving anonymous requests.
-To log requests to your Azure Storage account in order to evaluate anonymous requests, you can use Azure Storage logging in Azure Monitor (preview). For more information, see [Monitor Azure Storage](./monitor-blob-storage.md).
+To log requests to your Azure Storage account in order to evaluate anonymous requests, you can use Azure Storage logging in Azure Monitor. For more information, see [Monitor Azure Storage](./monitor-blob-storage.md).
Azure Storage logging in Azure Monitor supports using log queries to analyze log data. To query logs, you can use an Azure Log Analytics workspace. To learn more about log queries, see [Tutorial: Get started with Log Analytics queries](../../azure-monitor/logs/log-analytics-tutorial.md).
-> [!NOTE]
-> The preview of Azure Storage logging in Azure Monitor is supported only in the Azure public cloud. Government clouds do not support logging for Azure Storage with Azure Monitor.
#### Create a diagnostic setting in the Azure portal

To log Azure Storage data with Azure Monitor and analyze it with Azure Log Analytics, you must first create a diagnostic setting that indicates what types of requests and for which storage services you want to log data. To create a diagnostic setting in the Azure portal, follow these steps:

1. Create a new Log Analytics workspace in the subscription that contains your Azure Storage account. After you configure logging for your storage account, the logs will be available in the Log Analytics workspace. For more information, see [Create a Log Analytics workspace in the Azure portal](../../azure-monitor/logs/quick-create-workspace.md).
1. Navigate to your storage account in the Azure portal.
-1. In the Monitoring section, select **Diagnostic settings (preview)**.
+1. In the Monitoring section, select **Diagnostic settings**.
1. Select **Blob** to log requests made against Blob storage.
1. Select **Add diagnostic setting**.
1. Provide a name for the diagnostic setting.
To log Azure Storage data with Azure Monitor and analyze it with Azure Log Analy
After you create the diagnostic setting, requests to the storage account are subsequently logged according to that setting. For more information, see [Create diagnostic setting to collect resource logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-For a reference of fields available in Azure Storage logs in Azure Monitor, see [Resource logs (preview)](./monitor-blob-storage-reference.md#resource-logs-preview).
+For a reference of fields available in Azure Storage logs in Azure Monitor, see [Resource logs](./monitor-blob-storage-reference.md#resource-logs).
#### Query logs for anonymous requests
StorageBlobLogs
You can also configure an alert rule based on this query to notify you about anonymous requests. For more information, see [Create, view, and manage log alerts using Azure Monitor](../../azure-monitor/alerts/alerts-log.md).
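
You can also run a log query from PowerShell. The following is a minimal sketch using the Az.OperationalInsights module; the workspace ID is a placeholder, and the projected columns are assumptions to adjust for your environment:

```azurepowershell
# Run a Log Analytics query that surfaces anonymous requests against Blob storage.
$query = "StorageBlobLogs | where AuthenticationType == 'Anonymous' | project TimeGenerated, AccountName, Uri"
$results = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-id>" -Query $query
$results.Results
```
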
-## Remediate anonymous public access
+## Remediate anonymous public access for the storage account
+
+After you have evaluated anonymous requests to containers and blobs in your storage account, you can take action to remediate public access for the whole account by setting the account's **AllowBlobPublicAccess** property to **False**.
+
+The public access setting for a storage account overrides the individual settings for containers in that account. When you disallow public access for a storage account, any containers that are configured to permit public access are no longer accessible anonymously. If you've disallowed public access for the account, you do not also need to disable public access for individual containers.
+
+If your scenario requires that certain containers need to be available for public access, then you should move those containers and their blobs into separate storage accounts that are reserved for public access. You can then disallow public access for any other storage accounts.
+
+> [!IMPORTANT]
+> After anonymous public access is disallowed for a storage account, clients that use the anonymous bearer challenge will find that Azure Storage returns a 403 error (Forbidden) rather than a 401 error (Unauthorized). We recommend that you make all containers private to mitigate this issue. For more information on modifying the public access setting for containers, see [Set the public access level for a container](anonymous-read-access-configure.md#set-the-public-access-level-for-a-container).
+
+Remediating blob public access requires version 2019-04-01 or later of the Azure Storage resource provider. For more information, see [Azure Storage Resource Provider REST API](/rest/api/storagerp/).
+
+### Permissions for disallowing public access
+
+To set the **AllowBlobPublicAccess** property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.Storage/storageAccounts/write** action. Built-in roles with this action include:
+
+- The Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role
+- The Azure Resource Manager [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role
+- The [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role
+
+Role assignments must be scoped to the level of the storage account or higher to permit a user to disallow public access for the storage account. For more information about role scope, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md).
+
+Be careful to restrict assignment of these roles only to those administrative users who require the ability to create a storage account or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../../role-based-access-control/best-practices.md).
+
+These roles do not provide access to data in a storage account via Azure Active Directory (Azure AD). However, they include the **Microsoft.Storage/storageAccounts/listkeys/action**, which grants access to the account access keys. With this permission, a user can use the account access keys to access all data in a storage account.
+
+The **Microsoft.Storage/storageAccounts/listkeys/action** itself grants data access via the account keys, but does not grant a user the ability to change the **AllowBlobPublicAccess** property for a storage account. For users who need to access data in your storage account but should not have the ability to change the storage account's configuration, consider assigning roles such as [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor), [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader), or [Reader and Data Access](../../role-based-access-control/built-in-roles.md#reader-and-data-access).
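+
+For example, a role assignment scoped to a single storage account might look like the following sketch; the user name and the placeholder values in angle brackets are assumptions:
+
+```azurepowershell
+# Grant read access to blob data without granting rights to change account configuration.
+New-AzRoleAssignment -SignInName "user@contoso.com" `
+    -RoleDefinitionName "Storage Blob Data Reader" `
+    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+```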
+
+> [!NOTE]
+> The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create storage accounts and manage account configuration. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
+
+### Set the storage account's AllowBlobPublicAccess property to False
+
+To disallow public access for a storage account, set the account's **AllowBlobPublicAccess** property to **False**. This property is available for all storage accounts that are created with the Azure Resource Manager deployment model. For more information, see [Storage account overview](../common/storage-account-overview.md).
+
+The **AllowBlobPublicAccess** property is not set for a storage account by default and does not return a value until you explicitly set it. The storage account permits public access when the property value is either **null** or **true**.
+
+> [!IMPORTANT]
+> Disallowing public access for a storage account overrides the public access settings for all containers in that storage account. When public access is disallowed for the storage account, any future anonymous requests to that account will fail. Before changing this setting, be sure to understand the impact on client applications that may be accessing data in your storage account anonymously by following the steps outlined in [Detect anonymous requests from client applications](#detect-anonymous-requests-from-client-applications).
+
+# [Azure portal](#tab/portal)
+
+To disallow public access for a storage account in the Azure portal, follow these steps:
+
+1. Navigate to your storage account in the Azure portal.
+1. Locate the **Configuration** setting under **Settings**.
+1. Set **Blob public access** to **Disabled**.
+
+ :::image type="content" source="media/anonymous-read-access-prevent/blob-public-access-portal.png" alt-text="Screenshot showing how to disallow blob public access for account":::
+
+# [PowerShell](#tab/powershell)
+
+To disallow public access for a storage account with PowerShell, install [Azure PowerShell version 4.4.0](https://www.powershellgallery.com/packages/Az/4.4.0) or later. Next, configure the **AllowBlobPublicAccess** property for a new or existing storage account.
+
+The following example creates a storage account and explicitly sets the **AllowBlobPublicAccess** property to **false**. Remember to replace the placeholder values in brackets with your own values:
+
+```powershell
+$rgName = "<resource-group>"
+$accountName = "<storage-account>"
+$location = "<location>"
+
+# Create a storage account with AllowBlobPublicAccess set to false.
+New-AzStorageAccount -ResourceGroupName $rgName `
+ -Name $accountName `
+ -Location $location `
+ -SkuName Standard_GRS `
+ -AllowBlobPublicAccess $false
+
+# Read the AllowBlobPublicAccess property for the newly created storage account.
+(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).AllowBlobPublicAccess
+```
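+
+The preceding example creates a new account. For an existing account, a sketch like the following should work with the same Az module; it reuses the variables defined above:
+
+```powershell
+# Disallow public access on an existing storage account.
+Set-AzStorageAccount -ResourceGroupName $rgName `
+    -Name $accountName `
+    -AllowBlobPublicAccess $false
+
+# A $null value also permits public access, so check the property explicitly.
+$account = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName
+$null -eq $account.AllowBlobPublicAccess -or $account.AllowBlobPublicAccess -eq $true
+```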
+
+# [Azure CLI](#tab/azure-cli)
+
+To disallow public access for a storage account with Azure CLI, install Azure CLI version 2.9.0 or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli). Next, configure the **allowBlobPublicAccess** property for a new or existing storage account.
+
+The following example creates a storage account and explicitly sets the **allowBlobPublicAccess** property to **false**. Remember to replace the placeholder values in brackets with your own values:
+
+```azurecli-interactive
+az storage account create \
+ --name <storage-account> \
+ --resource-group <resource-group> \
+ --kind StorageV2 \
+ --location <location> \
+ --allow-blob-public-access false
+
+az storage account show \
+ --name <storage-account> \
+ --resource-group <resource-group> \
+ --query allowBlobPublicAccess \
+ --output tsv
+```
-After you have evaluated anonymous requests to containers and blobs in your storage account, you can take action to limit or prevent public access. If some containers in your storage account may need to be available for public access, then you can configure the public access setting for each container in your storage account. This option provides the most granular control over public access. For more information, see [Set the public access level for a container](anonymous-read-access-configure.md#set-the-public-access-level-for-a-container).
+# [Template](#tab/template)
-For enhanced security, you can disallow public access for the whole storage account. The public access setting for a storage account overrides the individual settings for containers in that account. When you disallow public access for a storage account, any containers that are configured to permit public access are no longer accessible anonymously. For more information, see [Allow or disallow public read access for a storage account](anonymous-read-access-configure.md#allow-or-disallow-public-read-access-for-a-storage-account).
+To disallow public access for a storage account with a template, create a template with the **AllowBlobPublicAccess** property set to **false**. The following steps describe how to create a template in the Azure portal.
-If your scenario requires that certain containers need to be available for public access, it may be advisable to move those containers and their blobs into storage accounts that are reserved for public access. You can then disallow public access for any other storage accounts.
+1. In the Azure portal, choose **Create a resource**.
+1. In **Search the Marketplace**, type **template deployment**, and then press **ENTER**.
+1. Choose **Template deployment (deploy using custom templates)**, choose **Create**, and then choose **Build your own template in the editor**.
+1. In the template editor, paste in the following JSON to create a new account and set the **AllowBlobPublicAccess** property to **false**. Remember to replace the placeholders in angle brackets with your own values.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {
+ "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'template')]"
+ },
+ "resources": [
+ {
+ "name": "[variables('storageAccountName')]",
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-06-01",
+ "location": "<location>",
+ "properties": {
+ "allowBlobPublicAccess": false
+ },
+ "dependsOn": [],
+ "sku": {
+ "name": "Standard_GRS"
+ },
+ "kind": "StorageV2",
+ "tags": {}
+ }
+ ]
+ }
+ ```
+
+1. Save the template.
+1. Specify the resource group, then choose the **Review + create** button to deploy the template and create a storage account with the **allowBlobPublicAccess** property configured.
+++
+> [!NOTE]
+> Disallowing public access for a storage account does not affect any static websites hosted in that storage account. The **$web** container is always publicly accessible.
+>
+> After you update the public access setting for the storage account, it may take up to 30 seconds before the change is fully propagated.
+
+## Sample script for bulk remediation
+
+The following sample PowerShell script runs against all Azure Resource Manager storage accounts in a subscription and, after prompting you for confirmation, sets the **AllowBlobPublicAccess** property for those accounts to **False**.
+
+```azurepowershell
+<#
+.SYNOPSIS
+Finds storage accounts in a subscription where AllowBlobPublicAccess is True or null.
+
+.DESCRIPTION
+This script runs against all Azure Resource Manager storage accounts in a subscription
+and sets the "AllowBlobPublicAccess" property to False.
+
+Standard operation will enumerate all accounts where the setting is enabled and allow the
+user to decide whether or not to disable the setting.
+
+Classic storage accounts will require individual adjustment of containers to remove public
+access, and will not be affected by this script.
+
+Run with BypassConfirmation=$true if you wish to disallow public access on all Azure Resource Manager
+storage accounts without individual confirmation.
+
+You will need access to the subscription to run the script.
+
+.PARAMETER BypassConfirmation
+Set this to $true to skip confirmation of changes. Not recommended.
+
+.PARAMETER SubscriptionId
+The subscription ID of the subscription to check.
+
+.PARAMETER ReadOnly
+Set this parameter so that the script makes no changes to any subscriptions and only reports affected accounts.
+
+.PARAMETER NoSignin
+Set this parameter so that no sign-in occurs -- you must sign in first. Use this if you're invoking this script repeatedly for multiple subscriptions and want to avoid being prompted to sign in for each subscription.
+
+.OUTPUTS
+This command produces only STDOUT output (not standard PowerShell objects) with information about affected accounts.
+#>
+param(
+ [boolean]$BypassConfirmation=$false,
+ [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
+ [String] $SubscriptionId,
+ [switch] $ReadOnly, # Use this if you don't want to make changes, but want to get information about affected accounts
+ [switch] $NoSignin # Use this if you are already signed in and don't want to be prompted again
+)
+
+begin {
+ if ( ! $NoSignin.IsPresent ) {
+ login-azaccount | out-null
+ }
+}
+
+process {
+ Write-Host "NOTE: If you are using OAuth authorization on a storage account, disabling public access at the account level may interfere with authorization."
+
+ try {
+ select-azsubscription -subscriptionid $SubscriptionId -erroraction stop | out-null
+ } catch {
+ write-error "Unable to access select subscription '$SubscriptionId' as the signed in user -- ensure that you have access to this subscription." -erroraction stop
+ }
+
+ foreach ($account in Get-AzStorageAccount)
+ {
+ if($null -eq $account.AllowBlobPublicAccess -or $account.AllowBlobPublicAccess -eq $true)
+ {
+ Write-host "Account:" $account.StorageAccountName " is not disallowing public access."
+
+ if ( ! $ReadOnly.IsPresent ) {
+ if(!$BypassConfirmation)
+ {
+ $confirmation = Read-Host "Do you wish to disallow public access? [y/n]"
+ }
+ if($BypassConfirmation -or $confirmation -eq 'y')
+ {
+ try
+ {
+ set-AzStorageAccount -Name $account.StorageAccountName -ResourceGroupName $account.ResourceGroupName -AllowBlobPublicAccess $false
+ Write-Host "Success!"
+ }
+ catch
+ {
+ Write-output $_
+ }
+ }
+ }
+ }
+ elseif($account.AllowBlobPublicAccess -eq $false)
+ {
+ Write-Host "Account:" $account.StorageAccountName " has public access disabled, no action required."
+ }
+ else
+ {
+ Write-Host "Account:" $account.StorageAccountName ". Error, please manually investigate."
+ }
+ }
+}
+
+end {
+ Write-Host "Script complete"
+}
+```
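+
+As a usage sketch, you could invoke the script once per subscription; the file name Remediate-PublicAccess.ps1 is hypothetical:
+
+```azurepowershell
+# Sign in once, then report on (without changing) affected accounts in every visible subscription.
+Connect-AzAccount | Out-Null
+Get-AzSubscription | ForEach-Object {
+    .\Remediate-PublicAccess.ps1 -SubscriptionId $_.Id -NoSignin -ReadOnly
+}
+```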
+
+## Verify that anonymous access has been remediated
+
+To verify that you've remediated anonymous access for a storage account, you can test that anonymous access to a blob is not permitted, that modifying a container's public access setting is not permitted, and that it's not possible to create a container with anonymous access enabled.
### Verify that public access to a blob is not permitted
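
A quick check is to attempt an anonymous read and confirm that it fails. The following is a sketch with hypothetical account, container, and blob names:

```azurepowershell
# An anonymous GET should now fail with an HTTP error (403 or 404) instead of returning data.
try {
    Invoke-WebRequest -Uri "https://mystorageacct.blob.core.windows.net/sample-container/blob1.txt"
}
catch {
    $_.Exception.Response.StatusCode
}
```
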
resources
| project subscriptionId, resourceGroup, name, allowBlobPublicAccess
```
-The following image shows the results of a query across a subscription. Note that for storage accounts where the **AllowBlobPublicAccess** property has been explicitly set, it appears in the results as **true** or **false**. If the **AllowBlobPublicAccess** property has not been set for a storage account, it appears as blank (or null) in the query results.
+The following image shows the results of a query across a subscription. Note that for storage accounts where the **AllowBlobPublicAccess** property has been explicitly set, it appears in the results as **true** or **false**. If the **AllowBlobPublicAccess** property has not been set for a storage account, it appears as blank (or **null**) in the query results.
:::image type="content" source="media/anonymous-read-access-prevent/check-public-access-setting-accounts.png" alt-text="Screenshot showing query results for public access setting across storage accounts":::
The following image shows the error that occurs if you try to create a storage a
:::image type="content" source="media/anonymous-read-access-prevent/deny-policy-error.png" alt-text="Screenshot showing the error that occurs when creating a storage account in violation of policy":::
-## Permissions for allowing or disallowing public access
-
-To set the **AllowBlobPublicAccess** property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.Storage/storageAccounts/write** or **Microsoft.Storage/storageAccounts/\*** action. Built-in roles with this action include:
--- The Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role-- The Azure Resource Manager [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role-- The [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role-
-These roles do not provide access to data in a storage account via Azure Active Directory (Azure AD). However, they include the **Microsoft.Storage/storageAccounts/listkeys/action**, which grants access to the account access keys. With this permission, a user can use the account access keys to access all data in a storage account.
-
-Role assignments must be scoped to the level of the storage account or higher to permit a user to allow or disallow public access for the storage account. For more information about role scope, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md).
-
-Be careful to restrict assignment of these roles only to those who require the ability to create a storage account or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../../role-based-access-control/best-practices.md).
-
-> [!NOTE]
-> The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create and manage storage accounts. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
## Next steps

-- [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md)
-- [Access public containers and blobs anonymously with .NET](anonymous-read-access-client.md)
+- [Overview: Remediating anonymous public read access for blob data](anonymous-read-access-overview.md)
+- [Remediate anonymous public read access to blob data (classic deployments)](anonymous-read-access-prevent-classic.md)
storage Archive Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-cost-estimation.md
description: Learn how to calculate the cost of storing and maintaining data in the archive storage tier. Previously updated : 11/02/2022 Last updated : 11/09/2022
This scenario assumes an initial ingest of 2,000,000 files totaling 102,400 GB i
<td>$0.00001</td> <td>$0.00001</td> </tr>
- <tr bgcolor="beige">
- <td>Cost to write (transactions * price of a write operation)</td>
- <td>$20.00</td>
- <td>$0.00</td>
- <td>$0.00</td>
- <td>$20.00</td>
+ <tr>
+ <td><strong>Cost to write (transactions * price of a write operation)</strong></td>
+ <td><strong>$20.00</strong></td>
+ <td><strong>$0.00</strong></td>
+ <td><strong>$0.00</strong></td>
+ <td><strong>$20.00</strong></td>
</tr> <tr> <td>Total file size (GB)</td>
This scenario assumes an initial ingest of 2,000,000 files totaling 102,400 GB i
<td>$0.00099</td> <td>$0.00099</td> </tr>
- <tr bgcolor="beige">
- <td>Cost to store (file size * data price)</td>
- <td>$101.38</td>
- <td>$101.38</td>
- <td>$101.38</td>
- <td>$1,216.51</td>
+ <tr>
+ <td><strong>Cost to store (file size * data price)</strong></td>
+ <td><strong>$101.38</strong></td>
+ <td><strong>$101.38</strong></td>
+ <td><strong>$101.38</strong></td>
+ <td><strong>$1,216.51</strong></td>
</tr> <tr> <td>Data retrieval size</td>
This scenario assumes an initial ingest of 2,000,000 files totaling 102,400 GB i
<td>$0.0005</td> <td>$0.0005</td> </tr>
- <tr bgcolor="beige">
- <td>Cost to rehydrate (cost to retrieve + cost to read)</td>
- <td>$30.48</td>
- <td>$30.48</td>
- <td>$30.48</td>
- <td>$365.76</td>
+ <tr>
+ <td><strong>Cost to rehydrate (cost to retrieve + cost to read)</strong></td>
+ <td><strong>$30.48</strong></td>
+ <td><strong>$30.48</strong></td>
+ <td><strong>$30.48</strong></td>
+ <td><strong>$365.76</strong></td>
</tr> <tr> <td><strong>Total cost</strong></td>
This scenario assumes an initial ingest of 2,000,000 files totaling 102,400 GB i
</tr> </table>
+> [!TIP]
+> To view these costs over 12 months, open the **One-Time Backup** tab of this [workbook](https://azure.github.io/Storage/docs/backup-and-archive/azure-archive-storage-cost-estimation/azure-archive-storage-cost-estimation.xlsx). You can modify the values in that worksheet to estimate your costs.
## Scenario: Continuous tiering
This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in si
<td>$0.00001</td> <td>$0.00001</td> </tr>
- <tr bgcolor="beige">
- <td>Cost to write (transactions * price of a write operation)</td>
- <td>$2.00</td>
- <td>$2.00</td>
- <td>$2.00</td>
- <td>$24.00</td>
+ <tr>
+ <td><strong>Cost to write (transactions * price of a write operation)</strong></td>
+ <td><strong>$2.00</strong></td>
+ <td><strong>$2.00</strong></td>
+ <td><strong>$2.00</strong></td>
+ <td><strong>$24.00</strong></td>
</tr> <tr> <td>Total file size (GB)</td>
This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in si
<td>$0.00099</td> <td>$0.00099</td> </tr>
- <tr bgcolor="beige">
- <td>Cost to store (file size * data price)</td>
- <td>$10.14</td>
- <td>$20.28</td>
- <td>$30.41</td>
- <td>$790.73</td>
+ <tr>
+ <td><strong>Cost to store (file size * data price)</strong></td>
+ <td><strong>$10.14</strong></td>
+ <td><strong>$20.28</strong></td>
+ <td><strong>$30.41</strong></td>
+ <td><strong>$790.73</strong></td>
</tr> <tr> <td>Price of data retrieval</td>
This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in si
<td>$0.0005</td> <td>$0.0005</td> </tr>
- <tr bgcolor="beige">
- <td>Cost to rehydrate (cost to retrieve + cost to read)</td>
- <td>$3.05</td>
- <td>$6.10</td>
- <td>$9.14</td>
- <td>$237.74</td>
+ <tr>
+ <td><strong>Cost to rehydrate (cost to retrieve + cost to read)</strong></td>
+ <td><strong>$3.05</strong></td>
+ <td><strong>$6.10</strong></td>
+ <td><strong>$9.14</strong></td>
+ <td><strong>$237.74</strong></td>
</tr> <tr> <td><strong>Total cost</strong></td>
This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in si
</tr> </table>
+> [!TIP]
+> To view these costs over 12 months, open the **Continuous Tiering** tab of this [workbook](https://azure.github.io/Storage/docs/backup-and-archive/azure-archive-storage-cost-estimation/azure-archive-storage-cost-estimation.xlsx). You can modify the values in that worksheet to estimate your costs.
-## Archive versus cool
+## Archive versus cool
-Archive storage is the lowest cost tier. However, it can take up to 15 hours to rehydrate 10 GiB files. To learn more, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md). The archive tier might not be the best fit if your workloads must read data quickly. The cool tier offers a near real-time read latency with a lower price than that the hot tier. Understanding your access requirements will help you to choose between the cool and archive tiers.
+Archive storage is the lowest cost tier. However, it can take up to 15 hours to rehydrate 10 GiB files. To learn more, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md). The archive tier might not be the best fit if your workloads must read data quickly. The cool tier offers a near real-time read latency with a lower price than the hot tier. Understanding your access requirements will help you to choose between the cool and archive tiers.
The following table compares the cost of archive storage with the cost of cool storage by using the [Sample prices](#sample-prices) that appear in this article. This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in size to archive. It also assumes one read each month of about 10% of stored capacity (1,024 GB), and 10% of total transactions (20,000). <br><br>
The following table compares the cost of archive storage with the cost of cold s
<td>$0.00001</td> <td>$0.00001</td> </tr>
- <tr bgcolor="beige">
- <td>Cost to write (transactions * price of a write operation)</td>
- <td>$2.00</td>
- <td>$2.00</td>
+ <tr>
+ <td><strong>Cost to write (transactions * price of a write operation)</strong></td>
+ <td><strong>$2.00</strong></td>
+ <td><strong>$2.00</strong></td>
</tr> <tr> <td>Total file size (GB)</td>
The following table compares the cost of archive storage with the cost of cold s
<td>$0.00099</td> <td>$0.0152</td> </tr>
- <tr bgcolor="beige">
- <td>Cost to store (file size * data price)</td>
- <td>$10.14</td>
- <td>$155.65</td>
+ <tr>
+ <td><strong>Cost to store (file size * data price)</strong></td>
+ <td><strong>$10.14</strong></td>
+ <td><strong>$155.65</strong></td>
</tr> <tr> <td>Data retrieval size</td>
The following table compares the cost of archive storage with the cost of cold s
<td>$0.0005</td> <td>$0.000001</td> </tr>
- <tr bgcolor="beige">
- <td>Cost to rehydrate (cost to retrieve + cost to read)</td>
- <td>$30.48</td>
- <td>$10.26</td>
+ <tr>
+ <td><strong>Cost to rehydrate (cost to retrieve + cost to read)</strong></td>
+ <td><strong>$30.48</strong></td>
+ <td><strong>$10.26</strong></td>
</tr> <tr> <td><strong>Monthly cost</strong></td>
The following table compares the cost of archive storage with the cost of cold s
</tr> </table>
+> [!TIP]
+> To view these costs over 12 months, open the **Cool vs Archive** tab of this [workbook](https://azure.github.io/Storage/docs/backup-and-archive/azure-archive-storage-cost-estimation/azure-archive-storage-cost-estimation.xlsx). You can modify the values in that worksheet to estimate your costs.
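+
+As a sanity check on the archive column above, the arithmetic can be reproduced directly. In the following sketch, the retrieval price of $0.02 per GB is an assumption inferred from the totals in the table:
+
+```azurepowershell
+# Reproduce the monthly archive costs from the table (sample prices).
+$writeCost     = 200000 * 0.00001                  # $2.00
+$storeCost     = 10240 * 0.00099                   # ~$10.14 per month
+$rehydrateCost = (1024 * 0.02) + (20000 * 0.0005)  # $20.48 + $10.00 = $30.48
+"{0:N2} write, {1:N2} store, {2:N2} rehydrate" -f $writeCost, $storeCost, $rehydrateCost
+```
+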
+ The following chart shows the impact on monthly spending given various read percentages. This chart assumes a monthly ingest of 1,000,000 files totaling 10,240 GB in size. For example, the second pair of bars assumes that workloads read 100,000 files (**10%** of 1,000,000 files) and 1,024 GB (**10%** of 10,240 GB). Assuming the sample pricing, the estimated monthly cost of cool storage is **$175.99** and the estimated monthly cost of archive storage is **$90.62**.
storage Blob Containers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-portal.md
To create a container in the [Azure portal](https://portal.azure.com), follow th
1. In the navigation pane for the storage account, scroll to the **Data storage** section and select **Containers**.
1. Within the **Containers** pane, select the **+ Container** button to open the **New container** pane.
1. Within the **New Container** pane, provide a **Name** for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character. For more information about container and blob names, see [Naming and referencing containers, blobs, and metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-1. Set the **Public access level** for the container. The default level is **Private (no anonymous access)**. Read the article to learn how to [configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md?tabs=portal).
+1. Set the **Public access level** for the container. The recommended level is **Private (no anonymous access)**. For information about preventing anonymous public access to blob data, see [Overview: Remediating anonymous public read access for blob data](anonymous-read-access-overview.md).
1. Select **Create** to create the container.

:::image type="content" source="media/blob-containers-portal/create-container-sml.png" alt-text="Screenshot showing how to create a container within the Azure portal." lightbox="media/blob-containers-portal/create-container-lrg.png":::
Azure Active Directory (Azure AD) offers optimum security for Blob Storage resou
You can read about the assignment of roles at [Assign Azure roles using the Azure portal](assign-azure-role-data-access.md?tabs=portal).
-### Enable anonymous public read access
-
-Although anonymous read access for containers is supported, it's disabled by default. All access requests must require authorization until anonymous access is explicitly enabled. After anonymous access is enabled, any client will be able to read data within that container without authorizing the request.
-
-Read about enabling public access level in the [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md?tabs=portal) article.
### Generate a shared access signature

A shared access signature (SAS) provides temporary, secure, delegated access to a client who wouldn't normally have permissions. A SAS gives you granular control over how a client can access your data. For example, you can specify which resources are available to the client. You can also limit the types of operations that the client can perform, and specify the duration.
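
As an illustration, the following sketch creates a read-only SAS for a container with the Az module; the account and container names are hypothetical, and your signed-in identity needs permission to request a user delegation key:

```azurepowershell
# Create a user delegation SAS that grants read access to a container for one hour.
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -UseConnectedAccount
New-AzStorageContainerSASToken -Context $ctx `
    -Name "sample-container" `
    -Permission r `
    -ExpiryTime (Get-Date).AddHours(1)
```
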
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
Azure Data Lake Storage Gen2 implements an access control model that supports bo
## About ACLs
-You can associate a [security principal](../../role-based-access-control/overview.md#security-principal) with an access level for files and directories. Each association is captured as an entry in an *access control list (ACL)*. Each file and directory in your storage account has an access control list. When a security principal attempts an operation on a file or directory, An ACL check determines whether that security principal (user, group, service principal, or managed identity) has the correct permission level to perform the operation.
+You can associate a [security principal](../../role-based-access-control/overview.md#security-principal) with an access level for files and directories. Each association is captured as an entry in an *access control list (ACL)*. Each file and directory in your storage account has an access control list. When a security principal attempts an operation on a file or directory, an ACL check determines whether that security principal (user, group, service principal, or managed identity) has the correct permission level to perform the operation.
> [!NOTE]
> ACLs apply only to security principals in the same tenant, and they don't apply to users who use Shared Key or shared access signature (SAS) token authentication. That's because no identity is associated with the caller and therefore security principal permission-based authorization cannot be performed.
The umask for Azure Data Lake Storage Gen2 is a constant value that is set to 007.
No. Access control via ACLs is enabled for a storage account as long as the Hierarchical Namespace (HNS) feature is turned ON.
-If HNS is turned OFF, the Azure Azure RBAC authorization rules still apply.
+If HNS is turned OFF, the Azure RBAC authorization rules still apply.
### What is the best way to apply ACLs?
A GUID is shown if the entry represents a user and that user doesn't exist in Az
When you define ACLs for service principals, it's important to use the Object ID (OID) of the *service principal* for the app registration that you created. It's important to note that registered apps have a separate service principal in the specific Azure AD tenant. Registered apps have an OID that's visible in the Azure portal, but the *service principal* has another (different) OID.
-To get the OID for the service principal that corresponds to an app registration, you can use the `az ad sp show` command. Specify the Application ID as the parameter. Here's an example on obtaining the OID for the service principal that corresponds to an app registration with App ID = 18218b12-1895-43e9-ad80-6e8fc1ea88ce. Run the following command in the Azure CLI:
+To get the OID for the service principal that corresponds to an app registration, you can use the `az ad sp show` command. Specify the Application ID as the parameter. Here's an example of obtaining the OID for the service principal that corresponds to an app registration with App ID = 18218b12-1895-43e9-ad80-6e8fc1ea88ce. Run the following command in the Azure CLI:
```azurecli
az ad sp show --id 18218b12-1895-43e9-ad80-6e8fc1ea88ce --query objectId
```
When you have the correct OID for the service principal, go to the Storage Explo
No. A container does not have an ACL. However, you can set the ACL of the container's root directory. Every container has a root directory, and it shares the same name as the container. For example, if the container is named `my-container`, then the root directory is named `my-container/`.
-The Azure Storage REST API does contain an operation named [Set Container ACL](/rest/api/storageservices/set-container-acl), but that operation cannot be used to set the ACL of a container or the root directory of a container. Instead, that operation is used to indicate whether blobs in a container [may be accessed publicly](anonymous-read-access-configure.md).
+The Azure Storage REST API does contain an operation named [Set Container ACL](/rest/api/storageservices/set-container-acl), but that operation cannot be used to set the ACL of a container or the root directory of a container. Instead, that operation is used to indicate whether blobs in a container may be accessed with an anonymous request. We recommend requiring authorization for all requests to blob data. For more information, see [Overview: Remediating anonymous public read access for blob data](anonymous-read-access-overview.md).
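+
+As an illustration, the root directory ACL can be managed with the Az.Storage Data Lake cmdlets. The following is a sketch with hypothetical names; the object ID placeholder is an assumption:
+
+```azurepowershell
+# Add a read/execute ACL entry for a security principal on the container's root directory.
+$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -UseConnectedAccount
+$acl = (Get-AzDataLakeGen2Item -Context $ctx -FileSystem "my-container").ACL
+$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType user -EntityId "<object-id>" -Permission "r-x" -InputObject $acl
+Update-AzDataLakeGen2Item -Context $ctx -FileSystem "my-container" -Acl $acl
+```
+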
### Where can I learn more about POSIX access control model?
storage Data Lake Storage Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-known-issues.md
The ability to apply ACL changes recursively from parent directory to child item
## Access control lists (ACL) and anonymous read access
-If [anonymous read access](./anonymous-read-access-configure.md) has been granted to a container, then ACLs have no effect on that container or the files in that container. This only affects read requests. Write requests will still honor the ACLs.
+If [anonymous read access](./anonymous-read-access-overview.md) has been granted to a container, then ACLs have no effect on that container or the files in that container. This only affects read requests. Write requests will still honor the ACLs. We recommend requiring authorization for all requests to blob data.
<a id="known-issues-tools"></a>
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/security-recommendations.md
Microsoft Defender for Cloud periodically analyzes the security state of your Az
| Recommendation | Comments | Defender for Cloud |
|-|-|--|
| Use Azure Active Directory (Azure AD) to authorize access to blob data | Azure AD provides superior security and ease of use over Shared Key for authorizing requests to Blob storage. For more information, see [Authorize access to data in Azure Storage](../common/authorize-data-access.md). | - |
-| Keep in mind the principal of least privilege when assigning permissions to an Azure AD security principal via Azure RBAC | When assigning a role to a user, group, or application, grant that security principal only those permissions that are necessary for them to perform their tasks. Limiting access to resources helps prevent both unintentional and malicious misuse of your data. | - |
+| Keep in mind the principle of least privilege when assigning permissions to an Azure AD security principal via Azure RBAC | When assigning a role to a user, group, or application, grant that security principal only those permissions that are necessary for them to perform their tasks. Limiting access to resources helps prevent both unintentional and malicious misuse of your data. | - |
| Use a user delegation SAS to grant limited access to blob data to clients | A user delegation SAS is secured with Azure Active Directory (Azure AD) credentials and also by the permissions specified for the SAS. A user delegation SAS is analogous to a service SAS in terms of its scope and function, but offers security benefits over the service SAS. For more information, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json). | - |
| Secure your account access keys with Azure Key Vault | Microsoft recommends using Azure AD to authorize requests to Azure Storage. However, if you must use Shared Key authorization, then secure your account keys with Azure Key Vault. You can retrieve the keys from the key vault at runtime, instead of saving them with your application. For more information about Azure Key Vault, see [Azure Key Vault overview](../../key-vault/general/overview.md). | - |
| Regenerate your account keys periodically | Rotating the account keys periodically reduces the risk of exposing your data to malicious actors. | - |
| Disallow Shared Key authorization | When you disallow Shared Key authorization for a storage account, Azure Storage rejects all subsequent requests to that account that are authorized with the account access keys. Only secured requests that are authorized with Azure AD will succeed. For more information, see [Prevent Shared Key authorization for an Azure Storage account](../common/shared-key-authorization-prevent.md). | - |
-| Keep in mind the principal of least privilege when assigning permissions to a SAS | When creating a SAS, specify only those permissions that are required by the client to perform its function. Limiting access to resources helps prevent both unintentional and malicious misuse of your data. | - |
+| Keep in mind the principle of least privilege when assigning permissions to a SAS | When creating a SAS, specify only those permissions that are required by the client to perform its function. Limiting access to resources helps prevent both unintentional and malicious misuse of your data. | - |
| Have a revocation plan in place for any SAS that you issue to clients | If a SAS is compromised, you will want to revoke that SAS as soon as possible. To revoke a user delegation SAS, revoke the user delegation key to quickly invalidate all signatures associated with that key. To revoke a service SAS that is associated with a stored access policy, you can delete the stored access policy, rename the policy, or change its expiry time to a time that is in the past. For more information, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md). | - |
| If a service SAS is not associated with a stored access policy, then set the expiry time to one hour or less | A service SAS that is not associated with a stored access policy cannot be revoked. For this reason, limiting the expiry time so that the SAS is valid for one hour or less is recommended. | - |
-| Disable anonymous public read access to containers and blobs | Anonymous public read access to a container and its blobs grants read-only access to those resources to any client. Avoid enabling public read access unless your scenario requires it. To learn how to disable anonymous public access for a storage account, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md). | - |
+| Disable anonymous public read access to containers and blobs | Anonymous public read access to a container and its blobs grants read-only access to those resources to any client. Avoid enabling public read access unless your scenario requires it. To learn how to disable anonymous public access for a storage account, see [Overview: Remediating anonymous public read access for blob data](anonymous-read-access-overview.md). | - |
## Networking
storage Static Website Content Delivery Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/static-website-content-delivery-network.md
You can enable Azure CDN for your static website directly from your storage acco
If you no longer want to cache an object in Azure CDN, you can take one of the following steps:

-- Make the container private instead of public. For more information, see [Manage anonymous read access to containers and blobs](./anonymous-read-access-configure.md).
+- Make the container private instead of public. For more information, see [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
- Disable or delete the CDN endpoint by using the Azure portal.
- Modify your hosted service to no longer respond to requests for the object.
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
using Azure.Storage.Blobs.Specialized;
## Connect to Blob Storage
-To connect to Blob Storage, create an instance of the [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) class. This object is your starting point. You can use it to operate on the blob service instance and it's containers. You can create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) by using an account access key, a shared access signature (SAS), or by using an Azure Active Directory (Azure AD) authorization token.
+To connect to Blob Storage, create an instance of the [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) class. This object is your starting point. You can use it to operate on the blob service instance and its containers. You can create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) by using an account access key, a shared access signature (SAS), or by using an Azure Active Directory (Azure AD) authorization token.
To learn more about each of these authorization mechanisms, see [Authorize access to data in Azure Storage](../common/authorize-data-access.md).
public static void GetBlobServiceClientAzureAD(ref BlobServiceClient blobService
blobServiceClient = new BlobServiceClient(new Uri(blobUri), credential); }-
-```
-
-#### Connect anonymously
-
-If you explicitly enable anonymous access, then your code can create connect to Blob Storage without authorize your request. You can create a new service client object for anonymous access by providing the Blob storage endpoint for the account. However, you must also know the name of a container in that account that's available for anonymous access. To learn how to enable anonymous access, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).
-
-```csharp
-public static void CreateAnonymousBlobClient()
-{
- // Create the client object using the Blob storage endpoint for your account.
- BlobServiceClient blobServiceClient = new BlobServiceClient
- (new Uri(@"https://storagesamples.blob.core.windows.net/"));
-
- // Get a reference to a container that's available for anonymous access.
- BlobContainerClient container = blobServiceClient.GetBlobContainerClient("sample-container");
-
- // Read the container's properties.
- // Note this is only possible when the container supports full public read access.
- Console.WriteLine(container.GetProperties().Value.LastModified);
- Console.WriteLine(container.GetProperties().Value.ETag);
-}
-```
-
-Alternatively, if you have the URL to a container that is anonymously available, you can use it to reference the container directly.
-
-```csharp
-public static void ListBlobsAnonymously()
-{
- // Get a reference to a container that's available for anonymous access.
- BlobContainerClient container = new BlobContainerClient
- (new Uri(@"https://storagesamples.blob.core.windows.net/sample-container"));
-
- // List blobs in the container.
- // Note this is only possible when the container supports full public read access.
- foreach (BlobItem blobItem in container.GetBlobs())
- {
- Console.WriteLine(container.GetBlockBlobClient(blobItem.Name).Uri);
- }
-}
```

## Build your application
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
To generate and manage SAS tokens, see any of these articles:
- [Create a service SAS for a container or blob](sas-service-create.md)
-## Connect anonymously
-
-If you explicitly enable anonymous access, then you can connect to Blob Storage without authorization for your request. You can create a new BlobServiceClient object for anonymous access by providing the Blob storage endpoint for the account. This requires you to know the account and container names. To learn how to enable anonymous access, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).
-- The `dotenv` package is used to read your storage account name from a `.env` file. This file should not be checked into source control.
Each type of resource is represented by one or more associated JavaScript clients:
storage Storage Blob Static Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website.md
If you set up [redundancy in a secondary region](../common/storage-redundancy.md
You can modify the public access level of the **$web** container, but making this modification has no impact on the primary static website endpoint because these files are served through anonymous access requests. That means public (read-only) access to all files.
-The following screenshot shows the public access level setting in the Azure portal:
-
-![Screenshot showing how to set public access level in the portal](./media/anonymous-read-access-configure/configure-public-access-container.png)
While the primary static website endpoint isn't affected, a change to the public access level does impact the primary blob service endpoint. For example, if you change the public access level of the **$web** container from **Private (no anonymous access)** to **Blob (anonymous read access for blobs only)**, then the level of public access to the primary static website endpoint `https://contosoblobaccount.z22.web.core.windows.net/index.html` doesn't change. However, the public access to the primary blob service endpoint `https://contosoblobaccount.blob.core.windows.net/$web/index.html` does change from private to public. Now users can open that file by using either of these two endpoints.
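As a rough illustration of that last point (a minimal sketch; the account and file names come from the example above), a client can read the same file anonymously through the blob service endpoint once the container's access level is **Blob**:

```csharp
using System;
using Azure.Storage.Blobs;

public static class StaticWebsiteAccessExample
{
    public static void ReadIndexAnonymously()
    {
        // No credential is passed: with the $web container set to
        // Blob (anonymous read access for blobs only), this read succeeds.
        var blobClient = new BlobClient(
            new Uri("https://contosoblobaccount.blob.core.windows.net/$web/index.html"));

        Console.WriteLine(blobClient.DownloadContent().Value.Content.ToString());
    }
}
```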
-Disabling public access on a storage account doesn't affect static websites that are hosted in that storage account. For more information, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).
+Disabling public access on a storage account doesn't affect static websites that are hosted in that storage account. For more information, see [Remediate anonymous public read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
## Mapping a custom domain to a static website URL
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The following table describes whether a feature is supported in a standard gener
| [Access tier - archive](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Access tier - cool](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Access tier - hot](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Anonymous public access](anonymous-read-access-configure.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; |
| [Azure Active Directory security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> |
| [Blob inventory](blob-inventory.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
| [Blob index tags](storage-manage-find-blobs.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
The following table describes whether a feature is supported in a standard gener
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Object replication for block blobs](object-replication-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Prevent anonymous public access](anonymous-read-access-prevent.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; |
| [Soft delete for blobs](./soft-delete-blob-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; |
The following table describes whether a feature is supported in a premium block
| [Access tier - archive](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Access tier - cool](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Access tier - hot](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Anonymous public access](anonymous-read-access-configure.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Active Directory security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> |
| [Blob inventory](blob-inventory.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
| [Blob index tags](storage-manage-find-blobs.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
The following table describes whether a feature is supported in a premium block
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
| [Object replication for block blobs](object-replication-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Prevent anonymous public access](anonymous-read-access-prevent.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Soft delete for blobs](./soft-delete-blob-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; |
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
Title: "Quickstart: Azure Blob Storage library - .NET"
description: In this quickstart, you will learn how to use the Azure Blob Storage client library for .NET to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
Previously updated : 10/06/2021
Last updated : 11/09/2022
ms.devlang: csharp
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
Title: "Quickstart: Azure Blob Storage library - Java"
description: In this quickstart, you learn how to use the Azure Blob Storage client library for Java to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
Last updated 10/24/2022
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
ms.devlang: javascript

# Quickstart: Manage blobs with JavaScript SDK in Node.js
The preceding code cleans up the resources the app created by removing the entir
Step through the code in your debugger and check your [Azure portal](https://portal.azure.com) throughout the process. Check to see that the container is being created. You can open the blob inside the container and view the contents.
-## Use the storage emulator
-
-This quickstart created a container and blob on the Azure cloud. You can also use the Azure Blob storage npm package to create these resources locally on the [Azure Storage emulator](../common/storage-use-emulator.md) for development and testing.
-
## Clean up

1. When you're done with this quickstart, delete the `blob-quickstart` directory.
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
ms.devlang: python

# Quickstart: Azure Blob Storage client library for Python
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
The following table describes the options that Azure Storage offers for authoriz
| Azure artifact | Shared Key (storage account key) | Shared access signature (SAS) | Azure Active Directory (Azure AD) | On-premises Active Directory Domain Services | Anonymous public read access | Storage Local Users |
|--|--|--|--|--|--|--|
-| Azure Blobs | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../blobs/authorize-access-azure-active-directory.md) | Not supported | [Supported](../blobs/anonymous-read-access-configure.md) | [Supported, only for SFTP](../blobs/secure-file-transfer-protocol-support-how-to.md) |
+| Azure Blobs | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../blobs/authorize-access-azure-active-directory.md) | Not supported | [Supported but not recommended](../blobs/anonymous-read-access-overview.md) | [Supported, only for SFTP](../blobs/secure-file-transfer-protocol-support-how-to.md) |
| Azure Files (SMB) | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | Not supported | [Supported, only with AAD Domain Services](../files/storage-files-active-directory-overview.md) | [Supported, credentials must be synced to Azure AD](../files/storage-files-active-directory-overview.md) | Not supported | Supported |
| Azure Files (REST) | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | Not supported | Not supported | Not supported | Not supported |
| Azure Queues | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../queues/authorize-access-azure-active-directory.md) | Not supported | Not supported | Not supported |
Each authorization option is briefly described below:
- **On-premises Active Directory Domain Services (AD DS, or on-premises AD DS) authentication** for Azure Files. Azure Files supports identity-based authorization over SMB through AD DS. Your AD DS environment can be hosted in on-premises machines or in Azure VMs. SMB access to Files is supported using AD DS credentials from domain joined machines, either on-premises or in Azure. You can use a combination of Azure RBAC for share level access control and NTFS DACLs for directory/file level permission enforcement. For more information about Azure Files authentication using domain services, see the [overview](../files/storage-files-active-directory-overview.md).
-- **Anonymous public read access** for containers and blobs. When anonymous access is configured, then clients can read blob data without authorization. For more information, see [Manage anonymous read access to containers and blobs](../blobs/anonymous-read-access-configure.md).
-
- You can disallow anonymous public read access for a storage account. When anonymous public read access is disallowed, then users cannot configure containers to enable anonymous access, and all requests must be authorized. For more information, see [Prevent anonymous public read access to containers and blobs](../blobs/anonymous-read-access-prevent.md).
+- **Anonymous public read access** for blob data is supported, but not recommended. When anonymous access is configured, clients can read blob data without authorization. We recommend that you disable anonymous access for all of your storage accounts (a short audit sketch follows this list). For more information, see [Overview: Remediating anonymous public read access for blob data](../blobs/anonymous-read-access-overview.md).
- **Storage Local Users** can be used to access blobs with SFTP or files with SMB. Storage Local Users support container level permissions for authorization. See [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](../blobs/secure-file-transfer-protocol-support-how-to.md) for more information on how Storage Local Users can be used with SFTP.
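Here's the audit sketch mentioned above: a hedged example (the account name is a placeholder) that lists each container's public access level with the .NET client library, so you can find containers that still allow anonymous reads:

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class AnonymousAccessAudit
{
    public static void ListContainerAccessLevels(string accountName)
    {
        var serviceClient = new BlobServiceClient(
            new Uri($"https://{accountName}.blob.core.windows.net"),
            new DefaultAzureCredential());

        foreach (BlobContainerItem container in serviceClient.GetBlobContainers())
        {
            // PublicAccess is null for private containers.
            PublicAccessType level = container.Properties.PublicAccess ?? PublicAccessType.None;
            Console.WriteLine($"{container.Name}: {level}");
        }
    }
}
```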
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
> [!NOTE]
> A new pricing plan is now available for Microsoft Defender for Cloud that charges you according to the number of storage accounts that you protect (per-storage account).
>
-> In the legacy pricing plan, the cost increases according to the number of analyzed transactions in the storage account (per-transaction). The new per-storage plan fixes costs per storage account, but accounts with an exceptionally high transaction volume incur an overage charge.
+> In the legacy pricing plan, the cost increases according to the number of analyzed transactions in the storage account (per-transaction). The new per-storage account plan fixes costs per storage account, but accounts with an exceptionally high transaction volume incur an overage charge.
> > For details about the pricing plans, see [Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
Learn more about the [benefits, features, and limitations of Defender for Storag
|Protected storage types:|[Blob Storage](../blobs/storage-blobs-introduction.md) (Standard/Premium StorageV2, Block Blobs) <br>[Azure Files](../files/storage-files-introduction.md) (over REST API and SMB)<br>[Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) (Standard/Premium accounts with hierarchical namespaces enabled)|
|Clouds:|:::image type="icon" source="../../defender-for-cloud/media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="../../defender-for-cloud/media/icons/yes-icon.png"::: Azure Government (Only for per-transaction plan)<br>:::image type="icon" source="../../defender-for-cloud/media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="../../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts|
-## Set up Microsoft Defender for Storage for the per-storage pricing plan
+## Set up Microsoft Defender for Storage for the per-storage account pricing plan
> [!NOTE]
-> You can only enable the per-storage pricing plan at the subscription level.
+> You can only enable the per-storage account pricing plan at the subscription level.
-With the Defender for Storage per-storage pricing plan, you can configure Microsoft Defender for Storage on your subscriptions in several ways. When the plan is enabled at the subscription level, Microsoft Defender for Storage is automatically enabled for all your existing and new storage accounts created under that subscription.
+With the Defender for Storage per-storage account pricing plan, you can configure Microsoft Defender for Storage on your subscriptions in several ways. When the plan is enabled at the subscription level, Microsoft Defender for Storage is automatically enabled for all your existing and new storage accounts created under that subscription.
You can configure Microsoft Defender for Storage on your subscriptions in several ways:
You can configure Microsoft Defender for Storage on your subscriptions in severa
- [Bicep template](#bicep-template)
- [ARM template](#arm-template)
- [Terraform template](#terraform-template)
-- [PowerShell](#powershell)
-- [Azure CLI](#azure-cli)
- [REST API](#rest-api)

### Azure portal
-To enable Microsoft Defender for Storage at the subscription level with the per-storage plan using the Azure portal:
+To enable Microsoft Defender for Storage at the subscription level with the per-storage account plan using the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com/).
To disable the plan, select **Off** for Defender for Storage in the Defender pla
### Bicep template
-To enable Microsoft Defender for Storage at the subscription level with the per-storage plan using [Bicep](../../azure-resource-manager/bicep/overview.md), add the following to your Bicep template:
+To enable Microsoft Defender for Storage at the subscription level with the per-storage account plan using [Bicep](../../azure-resource-manager/bicep/overview.md), add the following to your Bicep template:
```bicep
resource symbolicname 'Microsoft.Security/pricings@2022-03-01' = {
Learn more about the [Bicep template AzAPI reference](/azure/templates/microsoft
### ARM template
-To enable Microsoft Defender for Storage at the subscription level with the per-storage plan using an ARM template, add this JSON snippet to the resources section of your ARM template:
+To enable Microsoft Defender for Storage at the subscription level with the per-storage account plan using an ARM template, add this JSON snippet to the resources section of your ARM template:
```json
{
Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.s
### Terraform template
-To enable Microsoft Defender for Storage at the subscription level with the per-storage plan using a Terraform template, add this code snippet to your template with your subscription ID as the `parent_id` value:
+To enable Microsoft Defender for Storage at the subscription level with the per-storage account plan using a Terraform template, add this code snippet to your template with your subscription ID as the `parent_id` value:
```terraform
resource "azapi_resource" "symbolicname" {
To disable the plan, set the `pricingTier` property value to `Free` and remove t
Learn more about the [Terraform template AzAPI reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-terraform).
-### PowerShell
-
-To enable Microsoft Defender for Storage at the subscription level with the per-storage plan using PowerShell:
-
-1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps.md).
-1. Use the `Connect-AzAccount` cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps.md).
-1. Use these commands to register your subscription to the Microsoft Defender for Cloud Resource Provider:
-
- ```powershell
- Set-AzContext -Subscription <subscriptionId>
- Register-AzResourceProvider -ProviderNamespace 'Microsoft.Security'
- ```
-
- Replace `<subscriptionId>` with your subscription ID.
-
-1. Enable Microsoft Defender for Storage for your subscription with the `Set-AzSecurityPricing` cmdlet:
-
- ```powershell
- Set-AzSecurityPricing -Name "StorageAccounts" -PricingTier "Standard" -subPlan "PerStorageAccount"
- ```
-
-> [!TIP]
-> You can use the [`GetAzSecurityPricing` (Az_Security)](/powershell/module/az.security/get-azsecuritypricing.md) to see all of the Defender for Cloud plans that are enabled for the subscription.
-
-To disable the plan, set the `-PricingTier` property value to `Free` and remove the `subPlan` parameter.
-
-Learn more about the [using PowerShell with Microsoft Defender for Cloud](../../defender-for-cloud/powershell-onboarding.md).
-
-### Azure CLI
-
-To enable Microsoft Defender for Storage at the subscription level with the per-storage plan using Azure CLI:
-
-1. If you don't have it already, [install the Azure CLI](/cli/azure/install-azure-cli).
-1. Use the `az login` command to sign in to your Azure account. Learn more about [signing in to Azure with Azure CLI](/cli/azure/authenticate-azure-cli).
-1. Use these commands to set the subscription ID and name:
-
- ```azurecli
- az account set --subscription "<subscriptionId or name>"
- ```
-
- Replace `<subscriptionId>` with your subscription ID.
-
-1. Enable Microsoft Defender for Storage for your subscription with the `az security pricing create` command:
-
- ```azurecli
- az security pricing create -n StorageAccounts --tier "standard" --subPlan "PerStorageAccount"
- ```
-
-> [!TIP]
-> You can use the [`az security pricing show`](/cli/azure/security/pricing#az-security-pricing-show) command to see all of the Defender for Cloud plans that are enabled for the subscription.
-
-To disable the plan, set the `-tier` property value to `free` and remove the `subPlan` parameter.
-
-Learn more about the [az security pricing create](/cli/azure/security/pricing#az-security-pricing-create) command.
-
### REST API
-To enable Microsoft Defender for Storage at the subscription level with the per-storage plan using the Microsoft Defender for Cloud REST API, create a PUT request with this endpoint and body:
+To enable Microsoft Defender for Storage at the subscription level with the per-storage account plan using the Microsoft Defender for Cloud REST API, create a PUT request with this endpoint and body:
```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2022-03-01
```
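As a hedged sketch of calling that endpoint from code (the request body shape is assumed from the plan settings discussed in this article), you can acquire an ARM token with `Azure.Identity` and send the PUT with `HttpClient`:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

public static class DefenderPricingExample
{
    public static async Task EnablePerStorageAccountPlanAsync(string subscriptionId)
    {
        // The caller needs rights to change Defender plans on the subscription.
        var credential = new DefaultAzureCredential();
        AccessToken token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token.Token);

        string url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
                     "/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2022-03-01";
        // Body shape assumed: Standard tier with the PerStorageAccount sub-plan.
        string body = @"{ ""properties"": { ""pricingTier"": ""Standard"", ""subPlan"": ""PerStorageAccount"" } }";

        HttpResponseMessage response = await http.PutAsync(
            url, new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode);
    }
}
```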
You can configure Microsoft Defender for Storage on your subscriptions in severa
- [Bicep template](#bicep-template-1)
- [ARM template](#arm-template-1)
- [Terraform template](#terraform-template-1)
-- [PowerShell](#powershell-1)
-- [Azure CLI](#azure-cli-1)
+- [PowerShell](#powershell)
+- [Azure CLI](#azure-cli)
- [REST API](#rest-api-1)

#### Bicep template
To enable Microsoft Defender for Storage at the subscription level with the per-
1. Enable Microsoft Defender for Storage for your subscription with the `Set-AzSecurityPricing` cmdlet:

    ```powershell
- Set-AzSecurityPricing -Name "StorageAccounts" -PricingTier "Standard" -subPlan "PerTransaction"
+ Set-AzSecurityPricing -Name "StorageAccounts" -PricingTier "Standard"
    ```

> [!TIP]
> You can use the [`Get-AzSecurityPricing`](/powershell/module/az.security/get-azsecuritypricing) cmdlet to see all of the Defender for Cloud plans that are enabled for the subscription.
-To disable the plan, set the `-PricingTier` property value to `Free` and remove the `subPlan` parameter.
+To disable the plan, set the `-PricingTier` property value to `Free`.
Learn more about the [using PowerShell with Microsoft Defender for Cloud](../../defender-for-cloud/powershell-onboarding.md).
To enable Microsoft Defender for Storage at the subscription level with the per-
1. Enable Microsoft Defender for Storage for your subscription with the `az security pricing create` command:

    ```azurecli
- az security pricing create -n StorageAccounts --tier "standard" --subPlan "PerTransaction"
+ az security pricing create -n StorageAccounts --tier "standard"
    ```

> [!TIP]
> You can use the [`az security pricing show`](/cli/azure/security/pricing#az-security-pricing-show) command to see all of the Defender for Cloud plans that are enabled for the subscription.
-To disable the plan, set the `-tier` property value to `free` and remove the `subPlan` parameter.
+To disable the plan, set the `-tier` property value to `free`.
Learn more about the [`az security pricing create`](/cli/azure/security/pricing#az-security-pricing-create) command.
You can configure Microsoft Defender for Storage on your accounts in several way
- [Azure portal](#azure-portal-1)
- [ARM template](#arm-template-2)
-- [PowerShell](#powershell-2)
-- [Azure CLI](#azure-cli-2)
+- [PowerShell](#powershell-1)
+- [Azure CLI](#azure-cli-1)
#### Azure portal
Learn more about the [az security atp storage](/cli/azure/security/atp/storage#a
## FAQ - Microsoft Defender for Storage pricing plans
-### Can I switch from an existing per-transaction plan to the per-storage plan?
+### Can I switch from an existing per-transaction plan to the per-storage account plan?
-Yes, you can migrate to the per-storage plan from the Azure portal or all the other supported enablement methods. To migrate to the per-storage plan, [enable the per-storage plan at the subscription level](#set-up-microsoft-defender-for-storage-for-the-per-storage-pricing-plan).
+Yes, you can migrate to the per-storage account plan from the Azure portal or all the other supported enablement methods. To migrate to the per-storage account plan, [enable the per-storage account plan at the subscription level](#set-up-microsoft-defender-for-storage-for-the-per-storage-account-pricing-plan).
-### Can I return to the per-transaction plan after switching to the per-storage plan?
+### Can I return to the per-transaction plan after switching to the per-storage account plan?
-Yes, you can enable the per-transaction to migrate back from the per-storage plan using all enablement methods except for the Azure portal.
+Yes, you can enable the per-transaction plan to migrate back from the per-storage account plan using any enablement method except the Azure portal.
### Will you continue supporting the per-transaction plan?

Yes, you can [enable the per-transaction plan](#set-up-microsoft-defender-for-storage-for-the-per-transaction-pricing-plan) from all the enablement methods, except for the Azure portal.
-### Can I exclude specific storage accounts from protections in the per-storage plan?
+### Can I exclude specific storage accounts from protections in the per-storage account plan?
-No, you can only enable the per-storage pricing plan for each subscription. All storage accounts in the subscription are protected.
+No, you can only enable the per-storage account pricing plan for each subscription. All storage accounts in the subscription are protected.
-### How long does it take for the per-storage plan to be enabled?
+### How long does it take for the per-storage account plan to be enabled?
-When you enable Microsoft Defender for Storage at the subscription level for the per-storage or per-transaction plans, it takes up to 24 hours for the plan to be enabled.
+When you enable Microsoft Defender for Storage at the subscription level for the per-storage account or per-transaction plans, it takes up to 24 hours for the plan to be enabled.
-### Is there any difference in the feature set of the per-storage plan compared to the legacy per-transaction plan?
+### Is there any difference in the feature set of the per-storage account plan compared to the legacy per-transaction plan?
-No. Both the per-storage and per-transaction plans include the same features. The only difference is the pricing plan.
+No. Both the per-storage account and per-transaction plans include the same features. The only difference is the pricing plan.
### How can I estimate the cost of the pricing plans?
To estimate the cost of each of the pricing plans for your environment, we creat
## Next steps - Check out the [alerts for Azure Storage](../../defender-for-cloud/alerts-reference.md#alerts-azurestorage)-- Learn about the [features and benefits of Defender for Storage](../../defender-for-cloud/defender-for-storage-introduction.md)
+- Learn about the [features and benefits of Defender for Storage](../../defender-for-cloud/defender-for-storage-introduction.md)
storage Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/migrate-azure-credentials.md
Last updated 07/28/2022

# Migrate an application to use passwordless connections with Azure services
storage Multiple Identity Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/multiple-identity-scenarios.md
Last updated 09/23/2022

# Configure passwordless connections between multiple Azure apps and services
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
You can also configure an alert rule to notify you when a certain number of requ
Azure Storage logs capture details about requests made against the storage account, including how a request was authorized. You can analyze the logs to determine which clients are authorizing requests with Shared Key or a SAS token.
-To log requests to your Azure Storage account in order to evaluate how they are authorized, you can use Azure Storage logging in Azure Monitor (preview). For more information, see [Monitor Azure Storage](../blobs/monitor-blob-storage.md).
+To log requests to your Azure Storage account in order to evaluate how they are authorized, you can use Azure Storage logging in Azure Monitor. For more information, see [Monitor Azure Storage](../blobs/monitor-blob-storage.md).
Azure Storage logging in Azure Monitor supports using log queries to analyze log data. To query logs, you can use an Azure Log Analytics workspace. To learn more about log queries, see [Tutorial: Get started with Log Analytics queries](../../azure-monitor/logs/log-analytics-tutorial.md).
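As a hedged sketch of running such a query from code (the `StorageBlobLogs` table and its `AuthenticationType` column are assumed from the Azure Storage resource log schema, and the workspace ID is a placeholder), the `Azure.Monitor.Query` library can run KQL against a Log Analytics workspace:

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

public static class SharedKeyUsageQuery
{
    public static void CountRequestsByAuthType(string workspaceId)
    {
        var client = new LogsQueryClient(new DefaultAzureCredential());

        // Summarize the last week of blob requests by how they were authorized.
        Response<LogsQueryResult> response = client.QueryWorkspace(
            workspaceId,
            "StorageBlobLogs | summarize Count = count() by AuthenticationType",
            new QueryTimeRange(TimeSpan.FromDays(7)));

        foreach (LogsTableRow row in response.Value.Table.Rows)
        {
            Console.WriteLine($"{row["AuthenticationType"]}: {row["Count"]}");
        }
    }
}
```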
az storage container create \
    --auth-mode key
```
-> [!NOTE]
-> Anonymous requests are not authorized and will proceed if you have configured the storage account and container for anonymous public read access. For more information, see [Configure anonymous public read access for containers and blobs](../blobs/anonymous-read-access-configure.md).
-
### Check the Shared Key access setting for multiple accounts

To check the Shared Key access setting across a set of storage accounts with optimal performance, you can use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph Explorer, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../../governance/resource-graph/first-query-portal.md).
storage Storage Auth Aad App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-aad-app.md
public async Task<IActionResult> Blob()
}
```
-Consent is the process of a user granting authorization to an application to access protected resources on their behalf. The Microsoft identity platform supports incremental consent, meaning that an application can request a minimum set of permissions initially and request more permissions over time as needed. When your code requests an access token, specify the scope of permissions that your app needs. For more information about incremental consent, see [Incremental and dynamic consent](../../active-directory/azuread-dev/azure-ad-endpoint-comparison.md#incremental-and-dynamic-consent).
+Consent is the process of a user granting authorization to an application to access protected resources on their behalf. The Microsoft identity platform supports incremental consent, meaning that an application can request a minimum set of permissions initially and request more permissions over time as needed. When your code requests an access token, specify the scope of permissions that your app needs. For more information about incremental consent, see [Incremental and dynamic consent](../../active-directory/develop/permissions-consent-overview.md#consent).
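For example, here's a minimal public-client sketch with MSAL.NET (the client ID is a placeholder, and the article's own sample uses a web app rather than this console-style flow) that requests only the Azure Storage scope it currently needs:

```csharp
using System.Threading.Tasks;
using Microsoft.Identity.Client;

public static class IncrementalConsentExample
{
    public static async Task<string> GetStorageTokenAsync()
    {
        IPublicClientApplication app = PublicClientApplicationBuilder
            .Create("<application-client-id>")   // placeholder app registration
            .WithRedirectUri("http://localhost")
            .Build();

        // Ask only for the scope this operation needs; more scopes can be
        // requested later as the app grows (incremental consent).
        string[] scopes = { "https://storage.azure.com/user_impersonation" };
        AuthenticationResult result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
        return result.AccessToken;
    }
}
```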
## View and run the completed sample
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
Every request to Azure Storage must be authorized. Azure Storage supports the fo
- **Azure AD authorization over SMB for Azure Files.** Azure Files supports identity-based authorization over SMB (Server Message Block) through either Azure Active Directory Domain Services (Azure AD DS) or on-premises Active Directory Domain Services (preview). Your domain-joined Windows VMs can access Azure file shares using Azure AD credentials. For more information, see [Overview of Azure Files identity-based authentication support for SMB access](../files/storage-files-active-directory-overview.md) and [Planning for an Azure Files deployment](../files/storage-files-planning.md#identity).
- **Authorization with Shared Key.** The Azure Storage Blob, Files, Queue, and Table services support authorization with Shared Key. A client using Shared Key authorization passes a header with every request that is signed using the storage account access key. For more information, see [Authorize with Shared Key](/rest/api/storageservices/authorize-with-shared-key).
- **Authorization using shared access signatures (SAS).** A shared access signature (SAS) is a string containing a security token that can be appended to the URI for a storage resource. The security token encapsulates constraints such as permissions and the interval of access. For more information, see [Using Shared Access Signatures (SAS)](storage-sas-overview.md).
-- **Anonymous access to containers and blobs.** A container and its blobs may be publicly available. When you specify that a container or blob is public, anyone can read it anonymously; no authentication is required. For more information, see [Manage anonymous read access to containers and blobs](../blobs/anonymous-read-access-configure.md).
- **Active Directory Domain Services with Azure NetApp Files.** Azure NetApp Files features such as SMB volumes, dual-protocol volumes, and NFSv4.1 Kerberos volumes are designed to be used with AD DS. For more information, refer to [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](../../azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md) or learn how to [Configure ADDS LDAP over TLS for Azure NetApp Files](../../azure-netapp-files/configure-ldap-over-tls.md).

## Encryption
storage Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-minimum-version.md
For information about how to specify a particular version of TLS when sending a
When you enforce a minimum TLS version for your storage account, you risk rejecting requests from clients that are sending data with an older version of TLS. To understand how configuring the minimum TLS version may affect client applications, Microsoft recommends that you enable logging for your Azure Storage account and analyze the logs after an interval of time to detect what versions of TLS client applications are using.
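To test how such clients behave, you can pin a client to a single TLS version; here's a hedged .NET sketch (the account name is a placeholder) that routes the blob client through an `HttpClient` restricted to TLS 1.2:

```csharp
using System;
using System.Net.Http;
using System.Security.Authentication;
using Azure.Core.Pipeline;
using Azure.Identity;
using Azure.Storage.Blobs;

public static class TlsPinnedClientExample
{
    public static BlobServiceClient CreateTls12Client(string accountName)
    {
        // Offer only TLS 1.2; the account rejects the connection if its
        // required minimum TLS version is newer than what the client offers.
        var handler = new HttpClientHandler { SslProtocols = SslProtocols.Tls12 };
        var options = new BlobClientOptions
        {
            Transport = new HttpClientTransport(new HttpClient(handler)),
        };

        return new BlobServiceClient(
            new Uri($"https://{accountName}.blob.core.windows.net"),
            new DefaultAzureCredential(),
            options);
    }
}
```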
-To log requests to your Azure Storage account and determine the TLS version used by the client, you can use Azure Storage logging in Azure Monitor (preview). For more information, see [Monitor Azure Storage](../blobs/monitor-blob-storage.md).
+To log requests to your Azure Storage account and determine the TLS version used by the client, you can use Azure Storage logging in Azure Monitor. For more information, see [Monitor Azure Storage](../blobs/monitor-blob-storage.md).
Azure Storage logging in Azure Monitor supports using log queries to analyze log data. To query logs, you can use an Azure Log Analytics workspace. To learn more about log queries, see [Tutorial: Get started with Log Analytics queries](../../azure-monitor/logs/log-analytics-tutorial.md).
To create a policy with a Deny effect for a minimum TLS version that is less tha
After you create the policy with the Deny effect and assign it to a scope, a user cannot create a storage account with a minimum TLS version that is older than 1.2. Nor can a user make any configuration changes to an existing storage account that currently requires a minimum TLS version that is older than 1.2. Attempting to do so results in an error. The required minimum TLS version for the storage account must be set to 1.2 to proceed with account creation or configuration.
-The following image shows the error that occurs if you try to create a storage account with the minimum TLS version set to TLS 1.0 (the default for a new account) when a policy with a Deny effect requires that the minimum TLS version be set to TLS 1.2.
+The following image shows the error that occurs if you try to create a storage account with the minimum TLS version set to TLS 1.0 (the default for a new account) when a policy with a Deny effect requires that the minimum TLS version is set to TLS 1.2.
:::image type="content" source="media/transport-layer-security-configure-minimum-version/deny-policy-error.png" alt-text="Screenshot showing the error that occurs when creating a storage account in violation of policy":::
storage Storage Files Active Directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md
Title: Overview - Azure Files identity-based authorization
-description: Azure Files supports identity-based authentication over SMB (Server Message Block) through Active Directory Domain Services (AD DS). Your domain-joined Windows virtual machines (VMs) can then access Azure file shares using Azure AD credentials.
+description: Azure Files supports identity-based authentication over SMB (Server Message Block) with Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (Azure AD DS), and Azure Active Directory (Azure AD) Kerberos for hybrid identities.
Previously updated : 11/03/2022
Last updated : 11/09/2022
This article focuses on how Azure file shares can use domain services, either on-premises or in Azure, to support identity-based access to Azure file shares over SMB. Enabling identity-based access for your Azure file shares allows you to replace existing file servers with Azure file shares without replacing your existing directory service, maintaining seamless user access to shares.
-To learn how to enable on-premises Active Directory Domain Services authentication for Azure file shares, see [Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md).
-
-To learn how to enable Azure AD DS authentication for Azure file shares, see [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md).
-
-To learn how to enable Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
-
## Applies to

| File share type | SMB | NFS |
|-|:-:|:-:|
To learn how to enable Azure Active Directory (Azure AD) Kerberos authentication
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |

## Glossary
-It's helpful to understand some key terms relating to identity-based authentication over SMB for Azure file shares:
+It's helpful to understand some key terms relating to identity-based authentication for Azure file shares:
- **Kerberos authentication**
It's helpful to understand some key terms relating to identity-based authenticat
- **Hybrid identities**
- [Hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) are on-premises AD identities that are synced to the cloud.
+ [Hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) are identities in AD DS that are synced to Azure AD using Azure AD Connect.
## Common use cases
-Identity-based authentication and support for Windows ACLs on Azure Files is best leveraged for the following use cases:
+Identity-based authentication with Azure Files can be useful in a variety of scenarios:
### Replace on-premises file servers
Deprecating and replacing scattered on-premises file servers is a common problem
### Lift and shift applications to Azure
-When you lift and shift applications to the cloud, you want to keep the same authentication model for your data. As we extend the identity-based access control experience to Azure file shares, it eliminates the need to change your application to modern auth methods and expedite cloud adoption. Azure file shares provide the option to integrate with either Azure AD DS or on-premises AD DS for authentication. If your plan is to be 100% cloud native and minimize the efforts managing cloud infrastructures, Azure AD DS would be a better fit as a fully managed domain service. If you need full compatibility with AD DS capabilities, you may want to consider extending your AD DS environment to cloud by self-hosting domain controllers on VMs. Either way, we provide the flexibility to choose the domain service that best suits your business needs.
+When you lift and shift applications to the cloud, you want to keep the same authentication model for your data. Extending the identity-based access control experience to Azure file shares eliminates the need to change your application to use modern auth methods and expedites cloud adoption. Azure file shares provide the option to integrate with either Azure AD DS or on-premises AD DS for authentication. If your plan is to be 100% cloud native and minimize the effort of managing cloud infrastructure, Azure AD DS might be a better fit as a fully managed domain service. If you need full compatibility with AD DS capabilities, you might want to consider extending your AD DS environment to the cloud by self-hosting domain controllers on VMs. Either way, we provide the flexibility to choose the domain service that best suits your business needs.
### Backup and disaster recovery (DR)
-If you are keeping your primary file storage on-premises, Azure file shares can serve as an ideal storage for backup or DR, to improve business continuity. You can use Azure file shares to back up your data from existing file servers, while preserving Windows DACLs. For DR scenarios, you can configure an authentication option to support proper access control enforcement at failover.
+If you're keeping your primary file storage on-premises, Azure file shares can serve as an ideal storage for backup or DR, to improve business continuity. You can use Azure file shares to back up your data from existing file servers while preserving Windows DACLs. For DR scenarios, you can configure an authentication option to support proper access control enforcement at failover.
## Supported scenarios
-This section summarizes the supported Azure file shares authentication scenarios for Azure AD DS, on-premises AD DS, and Azure AD Kerberos for hybrid identities. We recommend selecting the domain service that you adopted for your client environment for integration with Azure Files. If you have AD DS already set up on-premises or in Azure where your devices are domain-joined to your AD, you should use AD DS for Azure file shares authentication. Similarly, if you've already adopted Azure AD DS, you should use that for authenticating to Azure file shares.
+This section summarizes the supported Azure file shares authentication scenarios for Azure AD DS, on-premises AD DS, and Azure AD Kerberos for hybrid identities. We recommend selecting the domain service that you adopted for your client environment for integration with Azure Files. If you have AD DS already set up on-premises or on a VM in Azure where your devices are domain-joined to your AD, you should use AD DS for Azure file shares authentication.
- **On-premises AD DS authentication:** On-premises AD DS-joined or Azure AD DS-joined Windows machines can access Azure file shares over SMB with on-premises Active Directory credentials that are synced to Azure AD. Your client must have line of sight to your AD DS.
- **Azure AD DS authentication:** Azure AD DS-joined Windows machines can access Azure file shares with Azure AD credentials over SMB.
This section summarizes the supported Azure file shares authentication scenarios
### Restrictions

-- On-premises AD DS authentication and Azure AD DS authentication don't support assigning share-level permissions to computer accounts (machine accounts) using Azure RBAC because computer accounts can't be synced to Azure AD. You can either [use a default share-level permission](storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities) to allow computer accounts to access the share, or consider using a service logon account instead.
+- None of the authentication methods support assigning share-level permissions to computer accounts (machine accounts) using Azure RBAC, because computer accounts can't be synced to Azure AD. If you want to allow a computer account to access Azure file shares using identity-based authentication, [use a default share-level permission](storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities) or consider using a service logon account instead.
- Neither on-premises AD DS authentication nor Azure AD DS authentication is supported against Azure AD-joined devices or Azure AD-registered devices.
- Identity-based authentication isn't supported with Network File System (NFS) shares.
Identity-based authentication for Azure Files offers several benefits over using
## How it works
-Azure file shares use the Kerberos protocol to authenticate with either on-premises AD DS or Azure AD DS. When an identity associated with a user or application running on a client attempts to access data in Azure file shares, the request is sent to the domain service, either AD DS or Azure AD DS, to authenticate the identity. If authentication is successful, it returns a Kerberos token. The client sends a request that includes the Kerberos token and Azure file shares use that token to authorize the request. Azure file shares only receive the Kerberos token, not access credentials.
+Azure file shares use the Kerberos protocol to authenticate with an AD source. When an identity associated with a user or application running on a client attempts to access data in Azure file shares, the request is sent to the AD source to authenticate the identity. If authentication is successful, it returns a Kerberos token. The client sends a request that includes the Kerberos token, and Azure file shares use that token to authorize the request. Azure file shares only receive the Kerberos token, not access credentials.
-Before you can enable identity-based authentication on Azure file shares, you must first set up your domain environment.
+Before you can enable identity-based authentication on your storage account, you must first set up your domain environment.
### AD DS
-For on-premises AD DS authentication, you must set up your AD domain controllers and domain join your machines or VMs. You can host your domain controllers on Azure VMs or on-premises. Either way, your domain joined clients must have line of sight to the domain service, so they must be within the corporate network or virtual network (VNET) of your domain service.
+For on-premises AD DS authentication, you must set up your AD domain controllers and domain join your machines or VMs. You can host your domain controllers on Azure VMs or on-premises. Either way, your domain-joined clients must have line of sight to the domain service, so they must be within the corporate network or virtual network (VNET) of your domain service.
The following diagram depicts on-premises AD DS authentication to Azure file shares over SMB. The on-premises AD DS must be synced to Azure AD using Azure AD Connect sync or Azure AD Connect cloud sync. Only [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) that exist in both on-premises AD DS and Azure AD can be authenticated and authorized for Azure file share access. This is because the share-level permission is configured against the identity represented in Azure AD, whereas the directory/file-level permission is enforced with that in AD DS. Make sure that you configure the permissions correctly against the same hybrid user.
The following diagram represents the workflow for Azure AD DS authentication to
### Azure AD Kerberos for hybrid identities
-Enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring access control lists (ACLs) and permissions might require line-of-sight to the domain controller.
+Enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring Windows ACLs and permissions might require line-of-sight to the domain controller.
-For more information on this feature, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
+For more information, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
## Access control
-Azure Files enforces authorization on user access to both the share and the directory/file levels. Share-level permission assignment can be performed on Azure AD users or groups managed through Azure RBAC. With Azure RBAC, the credentials you use for file access should be available or synced to Azure AD. You can assign Azure built-in roles like Storage File Data SMB Share Reader to users or groups in Azure AD to grant read access to an Azure file share.
+Azure Files enforces authorization on user access to both the share and the directory/file levels. Share-level permission assignment can be performed on Azure AD users or groups managed through Azure RBAC. With Azure RBAC, the credentials you use for file access should be available or synced to Azure AD. You can assign Azure built-in roles like Storage File Data SMB Share Reader to users or groups in Azure AD to grant access to an Azure file share.
At the directory/file level, Azure Files supports preserving, inheriting, and enforcing [Windows ACLs](/windows/win32/secauthz/access-control-lists) just like any Windows file server. You can choose to keep Windows ACLs when copying data over SMB between your existing file share and your Azure file shares. Whether you plan to enforce authorization or not, you can use Azure file shares to back up ACLs along with your data.

### Enable identity-based authentication
-You can enable identity-based authentication with either Azure AD DS or on-premises AD DS for Azure file shares on your new and existing storage accounts. Only one domain service can be used for file access authentication on the storage account, which applies to all file shares in the account. Detailed guidance on setting up your file shares for authentication with Azure AD DS in our article [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md) and guidance for on-premises AD DS in our other article, [Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md).
+You can enable identity-based authentication on your new and existing storage accounts using one of three AD sources: AD DS, Azure AD DS, and Azure AD Kerberos for hybrid identities. Only one AD source can be used for file access authentication on the storage account, which applies to all file shares in the account.
+
+To learn how to enable on-premises Active Directory Domain Services authentication for Azure file shares, see [Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md).
+
+To learn how to enable Azure AD DS authentication for Azure file shares, see [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md).
+
+To learn how to enable Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
### Configure share-level permissions for Azure Files
-Once either Azure AD DS or on-premises AD DS authentication is enabled, you can use Azure built-in roles or configure custom roles for Azure AD identities and assign access rights to any file shares in your storage accounts. The assigned permission allows the granted identity to get access to the share only, nothing else, not even the root directory. You still need to separately configure directory or file-level permissions for Azure file shares.
+Once you've enabled an AD source on your storage account, you can use Azure built-in RBAC roles, or configure custom roles for Azure AD identities and assign access rights to any file shares in your storage accounts. The assigned permission allows the granted identity to get access to the share only, nothing else, not even the root directory. You still need to separately configure directory and file-level permissions for Azure file shares.
### Configure directory or file-level permissions for Azure Files
For more information about Azure Files and identity-based authentication over SM
- [Planning for an Azure Files deployment](storage-files-planning.md)
- [Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md)
- [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md)
+- [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md)
- [FAQ](storage-files-faq.md)
storage Storage Files Identity Ad Ds Assign Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md
Title: Control access to Azure file shares - on-premises AD DS authentication
-description: Learn how to assign permissions to an Active Directory Domain Services identity that represents your Azure storage account. This allows you to control access with identity-based authentication.
+description: Learn how to assign permissions to an Active Directory Domain Services identity that represents your Azure storage account. This allows you to control user access with identity-based authentication.
Previously updated : 11/03/2022
Last updated : 11/09/2022
ms.devlang: azurecli
-# Part two: assign share-level permissions to an identity
+# Assign share-level permissions to an identity
Once you've enabled an Active Directory (AD) source for your storage account, you must configure share-level permissions in order to get access to your file share. There are two ways you can assign share-level permissions. You can assign them to [specific Azure AD users/groups](#share-level-permissions-for-specific-azure-ad-users-or-groups), and you can assign them to all authenticated identities as a [default share-level permission](#share-level-permissions-for-all-authenticated-identities).
Share-level permissions must be assigned to the Azure AD identity representing t
> [!TIP]
> Optional: Customers who want to migrate SMB server share-level permissions to RBAC permissions can use the `Move-OnPremSharePermissionsToAzureFileShare` PowerShell cmdlet to migrate directory and file-level permissions from on-premises to Azure. This cmdlet evaluates the groups of a particular on-premises file share, then writes the appropriate users and groups to the Azure file share using the three RBAC roles. You provide the information for the on-premises share and the Azure file share when invoking the cmdlet.
-You can use the Azure portal, Azure PowerShell module, or Azure CLI to assign the built-in roles to the Azure AD identity of a user for granting share-level permissions.
+You can use the Azure portal, Azure PowerShell, or Azure CLI to assign the built-in roles to the Azure AD identity of a user to grant share-level permissions.
> [!IMPORTANT] > Share-level permissions can take up to three hours to take effect once assigned. Please wait for the permissions to sync before connecting to your file share using your credentials.
You could also assign permissions to all authenticated Azure AD users and specif
## Next steps
-Now that you've assigned share-level permissions, you must configure directory and file-level permissions. Continue to the next article.
-
-[Part three: configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md)
+Now that you've assigned share-level permissions, you must [configure directory and file-level permissions](storage-files-identity-ad-ds-configure-permissions.md).
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
Title: Control what a user can do at the directory and file level - Azure Files
-description: Learn how to configure Windows ACLs for directory and file level permissions for AD DS authentication to Azure file shares, allowing you to take advantage of granular access control.
+description: Learn how to configure Windows ACLs for directory and file level permissions for Active Directory authentication to Azure file shares, allowing you to take advantage of granular access control.
Previously updated : 11/08/2022 Last updated : 11/09/2022 +
-# Part three: configure directory and file level permissions over SMB
+# Configure directory and file-level permissions over SMB
-Before you begin this article, make sure you've completed the previous article, [Assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md), to ensure that your share-level permissions are in place with Azure role-based access control (RBAC).
+Before you begin this article, make sure you've read [Assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md) to ensure that your share-level permissions are in place with Azure role-based access control (RBAC).
After you assign share-level permissions, you must first connect to the Azure file share using the storage account key and then configure Windows access control lists (ACLs), also known as NTFS permissions, at the root, directory, or file level. While share-level permissions act as a high-level gatekeeper that determines whether a user can access the share, Windows ACLs operate at a more granular level to control what operations the user can do at the directory or file level.
-Both share-level and file/directory level permissions are enforced when a user attempts to access a file/directory, so if there's a difference between either of them, only the most restrictive one will be applied. For example, if a user has read/write access at the file level, but only read at a share level, then they can only read that file. The same would be true if it was reversed: if a user had read/write access at the share-level, but only read at the file-level, they can still only read the file.
+Both share-level and file/directory-level permissions are enforced when a user attempts to access a file/directory, so if the two differ, only the most restrictive one will be applied. For example, if a user has read/write access at the file level but only read at the share level, then they can only read that file. The same is true in reverse: if a user has read/write access at the share level but only read at the file level, they can still only read the file.
## Applies to | File share type | SMB | NFS |
Use Windows File Explorer to grant full permission to all directories and files
## Next steps
-Now that the feature is enabled and configured, continue to the next article to learn how to mount your Azure file share from a domain-joined VM.
-
-[Part four: mount a file share from a domain-joined VM](storage-files-identity-ad-ds-mount-file-share.md)
+Now that the feature is enabled and configured, you can [mount a file share from a domain-joined VM](storage-files-identity-ad-ds-mount-file-share.md).
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Title: Enable AD DS authentication to Azure file shares
+ Title: Enable AD DS authentication for Azure file shares
description: Learn how to enable Active Directory Domain Services authentication over SMB for Azure file shares. Your domain-joined Windows virtual machines can then access Azure file shares by using AD DS credentials. Previously updated : 10/24/2022 Last updated : 11/09/2022
-# Part one: enable AD DS authentication for your Azure file shares
+# Enable AD DS authentication for Azure file shares
This article describes the process for enabling Active Directory Domain Services (AD DS) authentication on your storage account. After enabling the feature, you must configure your storage account and your AD DS to use AD DS credentials for authenticating to your Azure file share.
AzureStorageID:<yourStorageSIDHere>
## Next steps
-You've now successfully enabled the feature on your storage account. To use the feature, you must assign share-level permissions for users and groups. Continue to the next section.
-
-[Part two: assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md)
+You've now successfully enabled the feature on your storage account. To use the feature, you must [assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md).
storage Storage Files Identity Ad Ds Mount File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md
Previously updated : 09/27/2022 Last updated : 11/09/2022 +
-# Part four: mount a file share from a domain-joined VM
+# Mount a file share from a domain-joined VM
-Before you begin this article, make sure you complete the previous article, [configure directory and file level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
+Before you begin this article, make sure you've read [Configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
-The process described in this article verifies that your SMB file share and access permissions are set up correctly and that you can access an Azure file share from a domain-joined VM. Share-level role assignment can take some time to take effect.
+The process described in this article verifies that your SMB file share and access permissions are set up correctly and that you can access an Azure file share from a domain-joined VM. Remember that share-level role assignment can take some time to take effect.
-Sign in to the client by using the credentials that you granted permissions to, as shown in the following image.
-
-![Screenshot showing Azure AD sign-in screen for user authentication](media/storage-files-aad-permissions-and-mounting/azure-active-directory-authentication-dialog.png)
+Sign in to the client using the credentials of the identity that you granted permissions to.
## Applies to | File share type | SMB | NFS |
if ($connectTestResult.TcpTestSucceeded) {
If you run into issues mounting with AD DS credentials, refer to [Unable to mount Azure Files with AD credentials](storage-troubleshoot-windows-file-connection-problems.md#unable-to-mount-azure-files-with-ad-credentials) for guidance.
-If mounting your file share succeeded, then you've successfully enabled and configured on-premises AD DS authentication for your Azure file share.
- ## Next steps
-If the identity you created in AD DS to represent the storage account is in a domain or OU that enforces password rotation, continue to the next article for instructions on updating your password:
-
-[Update the password of your storage account identity in AD DS](storage-files-identity-ad-ds-update-password.md)
+If the identity you created in AD DS to represent the storage account is in a domain or OU that enforces password rotation, you might need to [update the password of your storage account identity in AD DS](storage-files-identity-ad-ds-update-password.md).
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
Enabling AD DS authentication for your Azure file shares allows you to authentic
Follow these steps to set up Azure Files for AD DS authentication:
-1. [Part one: enable AD DS authentication on your storage account](storage-files-identity-ad-ds-enable.md)
+1. [Enable AD DS authentication on your storage account](storage-files-identity-ad-ds-enable.md)
-1. [Part two: assign share-level permissions to the Azure AD identity (a user, group, or service principal) that is in sync with the target AD identity](storage-files-identity-ad-ds-assign-permissions.md)
+1. [Assign share-level permissions to the Azure AD identity (a user, group, or service principal) that is in sync with the target AD identity](storage-files-identity-ad-ds-assign-permissions.md)
-1. [Part three: configure Windows ACLs over SMB for directories and files](storage-files-identity-ad-ds-configure-permissions.md)
+1. [Configure Windows ACLs over SMB for directories and files](storage-files-identity-ad-ds-configure-permissions.md)
-1. [Part four: mount an Azure file share to a VM joined to your AD DS](storage-files-identity-ad-ds-mount-file-share.md)
+1. [Mount an Azure file share to a VM joined to your AD DS](storage-files-identity-ad-ds-mount-file-share.md)
1. [Update the password of your storage account identity in AD DS](storage-files-identity-ad-ds-update-password.md)
Identities used to access Azure file shares must be synced to Azure AD to enforc
## Next steps
-To enable on-premises AD DS authentication for your Azure file share, continue to the next article:
-
-[Part one: enable AD DS authentication for your account](storage-files-identity-ad-ds-enable.md)
+To get started, you must [enable AD DS authentication for your storage account](storage-files-identity-ad-ds-enable.md).
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
description: Learn how to enable identity-based Kerberos authentication for hybr
Previously updated : 11/07/2022 Last updated : 11/10/2022 # Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files
-This article focuses on enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring Windows access control lists (ACLs) and permissions might require line-of-sight to the domain controller.
+This article focuses on enabling and configuring Azure Active Directory (Azure AD) for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD DS identities that are synced to Azure AD. This allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring Windows access control lists (ACLs) and permissions for a user or group might require line-of-sight to the domain controller.
For more information on supported options and considerations, see [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md). For more information about Azure AD Kerberos, see [Deep dive: How Azure AD Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889).
+> [!IMPORTANT]
+> You can only use one AD source for identity-based authentication with Azure Files. If Azure AD Kerberos authentication for hybrid identities doesn't fit your requirements, you can use [on-premises Active Directory Domain Services (AD DS)](storage-files-identity-auth-active-directory-enable.md) or [Azure Active Directory Domain Services (Azure AD DS)](storage-files-identity-auth-active-directory-domain-service-enable.md) instead. The configuration steps are different for each method.
+ ## Applies to | File share type | SMB | NFS | |-|:-:|:-:|
For more information on supported options and considerations, see [Overview of A
## Prerequisites
-Before you enable Azure AD over SMB for Azure file shares, make sure you've completed the following prerequisites.
+Before you enable Azure AD Kerberos authentication over SMB for Azure file shares, make sure you've completed the following prerequisites.
> [!NOTE]
-> Your Azure storage account can't authenticate with both Azure AD and a second method like AD DS or Azure AD DS. You can only use one AD source. If you've already chosen another AD source for your storage account, you must disable it before enabling Azure AD Kerberos.
+> Your Azure storage account can't authenticate with both Azure AD and a second method like AD DS or Azure AD DS. If you've already chosen another AD source for your storage account, you must disable it before enabling Azure AD Kerberos.
The Azure AD Kerberos functionality for hybrid identities is only available on the following operating systems:
storage Storage Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-introduction.md
# What is Azure Files? Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview), [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System), and [Azure Files REST API](/rest/api/storageservices/file-service-rest-api). Azure file shares can be mounted concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS clients. NFS Azure file shares are accessible from Linux or macOS clients. Additionally, SMB Azure file shares can be cached on Windows servers with [Azure File Sync](../file-sync/file-sync-introduction.md) for fast access near where the data is being used.
-Here are some videos on the common use cases of Azure Files:
-* [Replace your file server with a serverless Azure file share](https://sec.ch9.ms/ch9/3358/0addac01-3606-4e30-ad7b-f195f3ab3358/ITOpsTalkAzureFiles_high.mp4)
+Here are some videos on common use cases for Azure Files:
+* [Replace your file server with a serverless Azure file share](https://youtu.be/H04e9AgbcSc)
* [Getting started with FSLogix profile containers on Azure Files in Azure Virtual Desktop leveraging AD authentication](https://www.youtube.com/embed/9S5A1IJqfOQ) To get started using Azure Files, see [Quickstart: Create and use an Azure file share](storage-how-to-use-files-portal.md).
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/security-recommendations.md
Microsoft Defender for Cloud periodically analyzes the security state of your Az
| Recommendation | Comments | Defender for Cloud | |-|-|--| | Use Azure Active Directory (Azure AD) to authorize access to queue data | Azure AD provides superior security and ease of use over Shared Key authorization for authorizing requests to Queue Storage. For more information, see [Authorize access to data in Azure Storage](../common/authorize-data-access.md). | - |
-| Keep in mind the principal of least privilege when assigning permissions to an Azure AD security principal via Azure RBAC | When assigning a role to a user, group, or application, grant that security principal only those permissions that are necessary for them to perform their tasks. Limiting access to resources helps prevent both unintentional and malicious misuse of your data. | - |
+| Keep in mind the principle of least privilege when assigning permissions to an Azure AD security principal via Azure RBAC | When assigning a role to a user, group, or application, grant that security principal only those permissions that are necessary for them to perform their tasks. Limiting access to resources helps prevent both unintentional and malicious misuse of your data. | - |
| Secure your account access keys with Azure Key Vault | Microsoft recommends using Azure AD to authorize requests to Azure Storage. However, if you must use Shared Key authorization, then secure your account keys with Azure Key Vault. You can retrieve the keys from the key vault at runtime, instead of saving them with your application. | - | | Regenerate your account keys periodically | Rotating the account keys periodically reduces the risk of exposing your data to malicious actors. | - |
-| Keep in mind the principal of least privilege when assigning permissions to a SAS | When creating a SAS, specify only those permissions that are required by the client to perform its function. Limiting access to resources helps prevent both unintentional and malicious misuse of your data. | - |
+| Keep in mind the principle of least privilege when assigning permissions to a SAS | When creating a SAS, specify only those permissions that are required by the client to perform its function. Limiting access to resources helps prevent both unintentional and malicious misuse of your data. | - |
| Have a revocation plan in place for any SAS that you issue to clients | If a SAS is compromised, you will want to revoke that SAS as soon as possible. To revoke a user delegation SAS, revoke the user delegation key to quickly invalidate all signatures associated with that key. To revoke a service SAS that is associated with a stored access policy, you can delete the stored access policy, rename the policy, or change its expiry time to a time that is in the past. For more information, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md). | - | | If a service SAS is not associated with a stored access policy, then set the expiry time to one hour or less | A service SAS that is not associated with a stored access policy cannot be revoked. For this reason, limiting the expiry time so that the SAS is valid for one hour or less is recommended. | - |
stream-analytics Stream Analytics Job Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-reliability.md
Previously updated : 11/07/2022 Last updated : 11/10/2022
Part of being a fully managed service is the capability to introduce new service
## How do Azure paired regions address this concern?
-Stream Analytics guarantees jobs in paired regions are updated in separate batches. The deployment of an update to Stream Analytics would not occur at the same time in a set of paired regions. As a result there is a sufficient time gap between the updates to identify potential issues and remediate them.
+Stream Analytics guarantees that jobs in paired regions are updated in separate batches. Each batch contains one or more regions, which may be updated concurrently. The Stream Analytics service ensures that any new update passes through rigorous internal rings to maintain the highest quality, and it proactively monitors many signals after deploying to each batch to gain confidence that no bugs were introduced. The deployment of an update to Stream Analytics doesn't occur at the same time in a set of paired regions. As a result, there is a sufficient time gap between the updates to identify potential issues and remediate them.
The article on **[availability and paired regions](../availability-zones/cross-region-replication-azure.md)** has the most up-to-date information on which regions are paired.
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
The following table describes the built-in roles and the scopes at which they ca
|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace |Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without additional permissions.|Workspace |Synapse Compute Operator |Submit Spark jobs and notebooks and view logs.  Includes canceling Spark jobs submitted by any user. Requires additional use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime|
-|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for notebooks and pipeline runs. Includes ability to list and view details of serverless SQL pools, Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace |
+|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for pipeline runs and completed notebooks. Includes ability to list and view details of serverless SQL pools, Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace |
|Synapse Credential User|Runtime and configuration-time use of secrets within credentials and linked services in activities like pipeline runs. To run pipelines, this role is required, scoped to the workspace system identity. </br></br>_Scoped to a credential, permits access to data via a linked service that is protected by the credential (may also require compute use permission) </br>Allows execution of pipelines protected by the workspace system identity credential_|Workspace </br>Linked Service</br>Credential |Synapse Linked Data Manager|Creation and management of managed private endpoints, linked services, and credentials. Can create managed private endpoints that use linked services protected by credentials|Workspace| |Synapse User|List and view details of SQL pools, Apache Spark pools, Integration runtimes, and published linked services and credentials. Doesn't include other published code artifacts.  Can create new artifacts but can't run or publish without additional permissions. </br></br>_Can list and read Spark pools, Integration runtimes._|Workspace, Spark pool</br>Linked service </br>Credential|
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
Commit changes to a KQL script to the Git repo|Requires Git permissions on the r
APACHE SPARK POOLS| Create an Apache Spark pool|Azure Owner or Contributor on the workspace| Monitor Apache Spark applications| Synapse User|read
-View the logs for notebook and job execution |Synapse Monitoring Operator|
+View the logs for completed notebook and job execution |Synapse Monitoring Operator|
Cancel any notebook or Spark job running on an Apache Spark pool|Synapse Compute Operator on the Apache Spark pool.|bigDataPools/useCompute Create a notebook or job definition|Synapse User or </br>Azure Owner or Contributor, or Reader on the workspace</br></br> *Additional permissions are required to run, publish, or commit changes*|read</br></br></br></br></br> List and open a published notebook or job definition, including reviewing saved outputs|Synapse Artifact User or Synapse Monitoring Operator on the workspace|artifacts/read
synapse-analytics Sql Data Warehouse Manage Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
Title: Monitor your dedicated SQL pool workload using DMVs
+ Title: Monitor your dedicated SQL pool workload using DMVs
description: Learn how to monitor your Azure Synapse Analytics dedicated SQL pool workload and query execution using DMVs. -++ Last updated : 11/09/2022 + - Previously updated : 11/15/2021-- # Monitor your Azure Synapse Analytics dedicated SQL pool workload using DMVs
-This article describes how to use Dynamic Management Views (DMVs) to monitor your workload including investigating query execution in SQL pool.
+This article describes how to use Dynamic Management Views (DMVs) to monitor your workload, including investigating query execution, in a dedicated SQL pool.
## Permissions
GRANT VIEW DATABASE STATE TO myuser;
## Monitor connections
-All logins to your data warehouse are logged to [sys.dm_pdw_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-sessions-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). This DMV contains the last 10,000 logins. The session_id is the primary key and is assigned sequentially for each new logon.
+All logins to your data warehouse are logged to [sys.dm_pdw_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-sessions-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). This DMV contains the last 10,000 logins. The `session_id` is the primary key and is assigned sequentially for each new login.
```sql -- Other Active Connections
SELECT * FROM sys.dm_pdw_exec_sessions where status <> 'Closed' and session_id <
## Monitor query execution
-All queries executed on SQL pool are logged to [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). This DMV contains the last 10,000 queries executed. The request_id uniquely identifies each query and is the primary key for this DMV. The request_id is assigned sequentially for each new query and is prefixed with QID, which stands for query ID. Querying this DMV for a given session_id shows all queries for a given logon.
+All queries executed on SQL pool are logged to [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). This DMV contains the last 10,000 queries executed. The `request_id` uniquely identifies each query and is the primary key for this DMV. The `request_id` is assigned sequentially for each new query and is prefixed with QID, which stands for query ID. Querying this DMV for a given `session_id` shows all queries for a given login.
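For example, a minimal sketch of listing every query issued in one session (the session ID below is a placeholder; substitute a value returned by `sys.dm_pdw_exec_sessions`):

```sql
-- All queries executed in a given session (SID#### is a placeholder session_id)
SELECT request_id, submit_time, status, command
FROM sys.dm_pdw_exec_requests
WHERE session_id = 'SID####'
ORDER BY submit_time;
```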
-> [!NOTE]
+> [!NOTE]
> Stored procedures use multiple Request IDs. Request IDs are assigned in sequential order. Here are steps to follow to investigate query execution plans and times for a particular query.
ORDER BY submit_time DESC;
SELECT TOP 10 * FROM sys.dm_pdw_exec_requests ORDER BY total_elapsed_time DESC; ``` From the preceding query results, **note the Request ID** of the query that you would like to investigate. Queries in the **Suspended** state can be queued due to a large number of active running queries. These queries also appear in [sys.dm_pdw_waits](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-waits-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). In that case, look for waits such as UserConcurrencyResourceType. For information on concurrency limits, see [Memory and concurrency limits](memory-concurrency-limits.md) or [Resource classes for workload management](resource-classes-for-workload-management.md). Queries can also wait for other reasons, such as object locks. If your query is waiting for a resource, see [Investigating queries waiting for resources](#monitor-waiting-queries) further down in this article.
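As a quick check, a sketch like the following lists any waits recorded for a suspended query (the Request ID is a placeholder):

```sql
-- Waits recorded for a specific query (QID#### is a placeholder Request ID)
SELECT session_id, request_id, type, state, object_type, object_name, request_time
FROM sys.dm_pdw_waits
WHERE request_id = 'QID####';
```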
-To simplify the lookup of a query in the [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) table, use [LABEL](/sql/t-sql/queries/option-clause-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to assign a comment to your query, which can be looked up in the sys.dm_pdw_exec_requests view.
+To simplify the lookup of a query in the [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) table, use [LABEL](/sql/t-sql/queries/option-clause-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to assign a comment to your query, which can be looked up in the `sys.dm_pdw_exec_requests` view.
```sql -- Query with Label
ORDER BY step_index;
When a DSQL plan is taking longer than expected, the cause can be a complex plan with many DSQL steps or just one step taking a long time. If the plan is many steps with several move operations, consider optimizing your table distributions to reduce data movement. The [Table distribution](sql-data-warehouse-tables-distribute.md) article explains why data must be moved to solve a query. The article also explains some distribution strategies to minimize data movement.
-To investigate further details about a single step, the *operation_type* column of the long-running query step and note the **Step Index**:
+To investigate further details about a single step, check the `operation_type` column of the long-running query step and note the **Step Index** (a lookup sketch follows the list below):
* Proceed with Step 3 for **SQL operations**: OnOperation, RemoteOperation, ReturnOperation.
* Proceed with Step 4 for **Data Movement operations**: ShuffleMoveOperation, BroadcastMoveOperation, TrimMoveOperation, PartitionMoveOperation, MoveOperation, CopyOperation.
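A minimal sketch for finding the long-running step and its `operation_type` (the Request ID is a placeholder):

```sql
-- Rank the steps of a query by elapsed time (QID#### is a placeholder Request ID)
SELECT step_index, operation_type, status, total_elapsed_time, row_count
FROM sys.dm_pdw_request_steps
WHERE request_id = 'QID####'
ORDER BY total_elapsed_time DESC;
```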
SELECT * FROM sys.dm_pdw_dms_workers
WHERE request_id = 'QID####' AND step_index = 2; ```
-* Check the *total_elapsed_time* column to see if a particular distribution is taking significantly longer than others for data movement.
-* For the long-running distribution, check the *rows_processed* column to see if the number of rows being moved from that distribution is significantly larger than others. If so, this finding might indicate skew of your underlying data. One cause for data skew is distributing on a column with many NULL values (whose rows will all land in the same distribution). Prevent slow queries by avoiding distribution on these types of columns or filtering your query to eliminate NULLs when possible.
+* Check the `total_elapsed_time` column to see if a particular distribution is taking significantly longer than others for data movement.
+* For the long-running distribution, check the `rows_processed` column to see if the number of rows being moved from that distribution is significantly larger than others. If so, this finding might indicate skew of your underlying data. One cause for data skew is distributing on a column with many NULL values (whose rows will all land in the same distribution). Prevent slow queries by avoiding distribution on these types of columns or filtering your query to eliminate NULLs when possible.
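One hedged way to spot such skew is to aggregate the rows moved per distribution for the suspect step; a heavily lopsided result suggests skewed underlying data (the Request ID and step index are placeholders):

```sql
-- Compare rows moved per distribution for one data movement step
SELECT distribution_id, SUM(rows_processed) AS rows_moved
FROM sys.dm_pdw_dms_workers
WHERE request_id = 'QID####' AND step_index = 2
GROUP BY distribution_id
ORDER BY rows_moved DESC;
```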
If the query is running, you can use [DBCC PDW_SHOWEXECUTIONPLAN](/sql/t-sql/database-console-commands/dbcc-pdw-showexecutionplan-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to retrieve the SQL Server estimated plan from the SQL Server plan cache for the currently running SQL Step within a particular distribution.
If you discover that your query is not making progress because it is waiting for
-- Replace request_id with value from Step 1. SELECT waits.session_id,
- waits.request_id,
+ waits.request_id,
requests.command, requests.status,
- requests.start_time,
+ requests.start_time,
waits.type, waits.state, waits.object_type,
If the query is actively waiting on resources from another query, then the state
## Monitor tempdb
-Tempdb is used to hold intermediate results during query execution. High utilization of the tempdb database can lead to slow query performance. For every DW100c configured, 399 GB of tempdb space is allocated (DW1000c would have 3.99 TB of total tempdb space). Below are tips for monitoring tempdb usage and for decreasing tempdb usage in your queries.
+The `tempdb` database is used to hold intermediate results during query execution. High utilization of the `tempdb` database can lead to slow query performance. For every DW100c configured, 399 GB of `tempdb` space is allocated (DW1000c would have 3.99 TB of total `tempdb` space). Below are tips for monitoring `tempdb` usage and for decreasing `tempdb` usage in your queries.
-### Monitoring tempdb with views
+### Monitor tempdb with views
-To monitor tempdb usage, first install the [microsoft.vw_sql_requests](https://github.com/Microsoft/sql-data-warehouse-samples/blob/master/solutions/monitoring/scripts/views/microsoft.vw_sql_requests.sql) view from the [Microsoft Toolkit for SQL pool](https://github.com/Microsoft/sql-data-warehouse-samples/tree/master/solutions/monitoring). You can then execute the following query to see the tempdb usage per node for all executed queries:
+To monitor `tempdb` usage, first install the [microsoft.vw_sql_requests](https://github.com/Microsoft/sql-data-warehouse-samples/blob/master/solutions/monitoring/scripts/views/microsoft.vw_sql_requests.sql) view from the [Microsoft Toolkit for SQL pool](https://github.com/Microsoft/sql-data-warehouse-samples/tree/master/solutions/monitoring). You can then execute the following query to see the `tempdb` usage per node for all executed queries:
```sql -- Monitor tempdb
WHERE DB_NAME(ssu.database_id) = 'tempdb'
ORDER BY sr.request_id; ```
-If you have a query that is consuming a large amount of memory or have received an error message related to the allocation of tempdb, it could be due to a very large [CREATE TABLE AS SELECT (CTAS)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse) or [INSERT SELECT](/sql/t-sql/statements/insert-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) statement running that is failing in the final data movement operation. This can usually be identified as a ShuffleMove operation in the distributed query plan right before the final INSERT SELECT. Use [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to monitor ShuffleMove operations.
+> [!NOTE]
+> Data Movement uses a hidden database called `QTABLE`. When that database is filled, the query will also return an error message about `tempdb` being out of space. Details about `QTABLE` are not returned in the above query.
-The most common mitigation is to break your CTAS or INSERT SELECT statement into multiple load statements so the data volume will not exceed the 2TB per node tempdb limit (when at or above DW500c). You can also scale your cluster to a larger size which will spread the tempdb size across more nodes reducing the tempdb on each individual node.
+If you have a query that is consuming a large amount of memory or have received an error message related to the allocation of `tempdb`, it could be due to a very large [CREATE TABLE AS SELECT (CTAS)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse) or [INSERT SELECT](/sql/t-sql/statements/insert-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) statement running that is failing in the final data movement operation. This can usually be identified as a ShuffleMove operation in the distributed query plan right before the final INSERT SELECT. Use [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to monitor ShuffleMove operations.
-In addition to CTAS and INSERT SELECT statements, large, complex queries running with insufficient memory can spill into tempdb causing queries to fail. Consider running with a larger [resource class](resource-classes-for-workload-management.md) to avoid spilling into tempdb.
+The most common mitigation is to break your CTAS or INSERT SELECT statement into multiple load statements so that the data volume won't exceed the 399 GB per DW100c `tempdb` limit. You can also scale your cluster to a larger size to increase how much `tempdb` space you have.
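As an illustration only (the table and column names here are hypothetical), a single oversized INSERT...SELECT could be split into smaller, date-bounded batches:

```sql
-- Hypothetical example: load one quarter at a time instead of the whole table
INSERT INTO dbo.FactSales
SELECT * FROM dbo.FactSales_Staging
WHERE SaleDate >= '2022-01-01' AND SaleDate < '2022-04-01';

INSERT INTO dbo.FactSales
SELECT * FROM dbo.FactSales_Staging
WHERE SaleDate >= '2022-04-01' AND SaleDate < '2022-07-01';
```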
+
+In addition to CTAS and INSERT SELECT statements, large, complex queries running with insufficient memory can spill into `tempdb` causing queries to fail. Consider running with a larger [resource class](resource-classes-for-workload-management.md) to avoid spilling into `tempdb`.
## Monitor memory
GROUP BY t.pdw_node_id, nod.[type]
The following query provides an estimate of the progress of your load. The query only shows files currently being processed. ```sql -- To track bytes and files SELECT r.command,
ORDER BY
## Monitor query blockings
-The following query provides the top 500 blocked queries in the environment.
+The following query provides the top 500 blocked queries in the environment.
```sql --Collect the top blocking
-SELECT
+SELECT
TOP 500 waiting.request_id AS WaitingRequestId, waiting.object_type AS LockRequestType, waiting.object_name AS ObjectLockRequestName, waiting.request_time AS ObjectLockRequestTime, blocking.session_id AS BlockingSessionId, blocking.request_id AS BlockingRequestId
-FROM
+FROM
sys.dm_pdw_waits waiting INNER JOIN sys.dm_pdw_waits blocking
- ON waiting.object_type = blocking.object_type
- AND waiting.object_name = blocking.object_name
-WHERE
- waiting.state = 'Queued'
+ ON waiting.object_type = blocking.object_type
+ AND waiting.object_name = blocking.object_name
+WHERE
+ waiting.state = 'Queued'
AND blocking.state = 'Granted'
-ORDER BY
+ORDER BY
ObjectLockRequestTime ASC;
-
-```
+```
## Retrieve query text from waiting and blocking queries The following query provides the query text and identifier for the waiting and blocking queries to make troubleshooting easier. ```sql -- To retrieve query text from waiting and blocking queries SELECT waiting.session_id AS WaitingSessionId, waiting.request_id AS WaitingRequestId,
- COALESCE(waiting_exec_request.command,waiting_exec_request.command2) AS WaitingExecRequestText,
+ COALESCE(waiting_exec_request.command,waiting_exec_request.command2) AS WaitingExecRequestText,
blocking.session_id AS BlockingSessionId, blocking.request_id AS BlockingRequestId, COALESCE(blocking_exec_request.command,blocking_exec_request.command2) AS BlockingExecRequestText,
SELECT waiting.session_id AS WaitingSessionId,
waiting.object_type AS Blocking_Object_Type, waiting.type AS Lock_Type, waiting.request_time AS Lock_Request_Time,
- datediff(ms, waiting.request_time, getdate())/1000.0 AS Blocking_Time_sec
+ datediff(ms, waiting.request_time, getdate())/1000.0 AS Blocking_Time_sec
FROM sys.dm_pdw_waits waiting INNER JOIN sys.dm_pdw_waits blocking ON waiting.object_type = blocking.object_type
- AND waiting.object_name = blocking.object_name
+ AND waiting.object_name = blocking.object_name
INNER JOIN sys.dm_pdw_exec_requests blocking_exec_request ON blocking.request_id = blocking_exec_request.request_id
- INNER JOIN sys.dm_pdw_exec_requests waiting_exec_request
+ INNER JOIN sys.dm_pdw_exec_requests waiting_exec_request
ON waiting.request_id = waiting_exec_request.request_id WHERE waiting.state = 'Queued' AND blocking.state = 'Granted'
ORDER BY Lock_Request_Time DESC;
## Next steps
-For more information about DMVs, see [System views](../sql/reference-tsql-system-views.md).
+- For more information about DMVs, see [System views](../sql/reference-tsql-system-views.md).
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
Previously updated : 11/02/2022 Last updated : 11/09/2022
This is the list of known limitations for Azure Synapse Link for SQL.
* Source table row size can't exceed 7,500 bytes. For tables where variable-length columns are stored off-row, a 24-byte pointer is stored in the main record. * Tables enabled for Azure Synapse Link for SQL can have a maximum of 1,020 columns (not 1,024). * While a database can have multiple links enabled, a given table can't belong to multiple links.
-* When a database owner doesn't have a mapped log in, Azure Synapse link for SQL will run into an error when enabling a link connection. User can set database owner to a valid user with the `ALTER AUTHORIZATION` command to fix this issue.
+* When a database owner doesn't have a mapped login, Azure Synapse Link for SQL will run into an error when enabling a link connection. To fix this issue, set the database owner to a valid user with the `ALTER AUTHORIZATION` command, as in the sketch below.
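  For example, a minimal sketch (the database and user names are placeholders):

  ```sql
  -- Set the database owner to a valid user so the link connection can be enabled
  ALTER AUTHORIZATION ON DATABASE::[YourDatabase] TO [ValidUser];
  ```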
* If the source table contains computed columns or columns with data types that aren't supported by Azure Synapse Analytics dedicated SQL pools, these columns won't be replicated to Azure Synapse Analytics. Unsupported columns include: * image * text
This is the list of known limitations for Azure Synapse Link for SQL.
* geometry * geography * A maximum of 5,000 tables can be added to a single link connection.
-* When a source column is of type datetime2(7) or time(7), the last digit will be truncated when data is replicated to Azure Synapse Analytics.
* The following table DDL operations aren't allowed on source tables when they are enabled for Azure Synapse Link for SQL. All other DDL operations are allowed, but they won't be replicated to Azure Synapse Analytics. * Switch Partition * Add/Drop/Alter Column
This is the list of known limitations for Azure Synapse Link for SQL.
* System tables can't be replicated. * The security configuration from the source database will **NOT** be reflected in the target dedicated SQL pool. * Enabling Azure Synapse Link for SQL will create a new schema called `changefeed`. Don't use this schema, as it is reserved for system use.
-* Source tables with collations that are unsupported by Synapse SQL dedicated pool, such as UTF8 and certain Japanese collations, canΓÇÖt be replicated. Here's the [supported collations in Synapse SQL Pool](../sql/reference-collation-types.md).
-* Single row updates (including off-page storage) of > 370MB are not supported.
+* Azure Synapse Link for SQL will **NOT** work and can't be enabled if your database contains a schema or user named `changefeed`.
+* Source tables with collations that are unsupported by dedicated SQL pools, such as UTF8 and certain Japanese collations, can't be replicated. Here's the [supported collations in Synapse SQL Pool](../sql/reference-collation-types.md).
+ * Additionally, some Thai language collations are currently not supported by Azure Synapse Link for SQL. These unsupported collations include:
+ * Thai100CaseInsensitiveAccentInsensitiveKanaSensitive
+ * Thai100CaseInsensitiveAccentSensitiveSupplementaryCharacters
+ * Thai100CaseSensitiveAccentInsensitiveKanaSensitive
+ * Thai100CaseSensitiveAccentInsensitiveKanaSensitiveWidthSensitiveSupplementaryCharacters
+ * Thai100CaseSensitiveAccentSensitiveKanaSensitive
+ * Thai100CaseSensitiveAccentSensitiveSupplementaryCharacters
+ * ThaiCaseSensitiveAccentInsensitiveWidthSensitive
+* Single row updates (including off-page storage) of > 370 MB are not supported.
### Azure SQL DB only * Azure Synapse Link for SQL isn't supported on Free, Basic or Standard tier with fewer than 100 DTUs.
This is the list of known limitations for Azure Synapse Link for SQL.
* Service principal isn't supported for authenticating to source Azure SQL DB, so when creating the Azure SQL DB linked service, choose SQL authentication, user-assigned managed identity (UAMI), or system-assigned managed identity (SAMI). * If the Azure SQL Database logical server has both a SAMI and UAMI configured, Azure Synapse Link will use SAMI. * Azure Synapse Link can't be enabled on the secondary database once a GeoDR failover has happened if the secondary database has a different name from the primary database.
-* If you enabled Azure Synapse Link for SQL on your database as an Microsoft Azure Active Directory (Azure AD) user, Point-in-time restore (PITR) will fail. PITR will only work when you enable Azure Synapse Link for SQL on your database as a SQL user.
+* If you enabled Azure Synapse Link for SQL on your database as a Microsoft Azure Active Directory (Azure AD) user, Point-in-time restore (PITR) will fail. PITR will only work when you enable Azure Synapse Link for SQL on your database as a SQL user.
* If you create a database as an Azure AD user and enable Azure Synapse Link for SQL, a SQL authentication user (even one in the sysadmin role, for example) won't be able to disable/make changes to Azure Synapse Link for SQL artifacts. However, another Azure AD user will be able to enable/disable Azure Synapse Link for SQL on the same database. Similarly, if you create a database as an SQL authentication user, enabling/disabling Azure Synapse Link for SQL as an Azure AD user won't work. * When enabling Azure Synapse Link for SQL on your Azure SQL Database, you should ensure that aggressive log truncation is disabled.
This is the list of known limitations for Azure Synapse Link for SQL.
> Azure Synapse Link for SQL is not supported on databases that are also using Azure SQL Managed Instance Link. Be aware that in these scenarios, when the managed instance transitions to read-write mode, you may encounter transaction log full issues. ## Known issues
-### Deleting an Azure Synapse Analytics workspace with a running link could cause log in source database to fill
-* Applies To - Azure SQL Database and SQL Server 2022
-* Issue - When you delete an Azure Synapse Analytics workspace it is possible that running links might not be stopped, which will cause the source database to think that the link is still operational and could lead to the log filling and not being truncated.
+
+### Deleting an Azure Synapse Analytics workspace with a running link could cause the transaction log in the source database to fill
+
+* Applies To - Azure Synapse Link for Azure SQL Database and SQL Server 2022
+* Issue - When you delete an Azure Synapse Analytics workspace, it's possible that running links might not be stopped. The source database will then consider the link still operational, which can prevent the transaction log from being truncated and cause it to fill.
* Resolution - There are two possible resolutions to this situation: 1. Stop any running links prior to deleting the Azure Synapse Analytics workspace. 1. Manually clean up the link definition in the source database.
- 1. Find the table_group_id for the link(s) that need to be stopped using the following query:
+ 1. Find the `table_group_id` for the link(s) that need to be stopped using the following query:
```sql SELECT table_group_id, workspace_id, synapse_workgroup_name FROM [changefeed].[change_feed_table_groups]
This is the list of known limitations for Azure Synapse Link for SQL.
```sql EXEC sys.sp_change_feed_disable_db
-### DateTime2(7) and Time(7) Could Cause Snapshot Hang
-* Applies To - Azure SQL Database
-* Issue - One of the preview limitations with the data types DateTime2(7) and Time(7) is the loss of precision (only 6 digits are supported). When certain database settings are turned on (`NUMERIC_ROUNDABORT`, `ANSI_WARNINGS`, and `ARITHABORT`), the snapthot process can hang, requiring a database failover to recover.
-* Resolution - To resolve this situation, take the following steps:
-1. Turn off all three database settings.
- ```sql
- ALTER DATABASE <logical_database_name> SET NUMERIC_ROUNDABORT OFF
- ALTER DATABASE <logical_database_name> SET ANSI_WARNINGS OFF
- ALTER DATABASE <logical_database_name> SET ARITHABORT OFF
- ```
-1. Run the following query to verify that the settings are in fact turned off.
- ```sql
- SELECT name, is_numeric_roundabort_on, is_ansi_warnings_on, is_arithabort_on
- FROM sys.databases
- WHERE name = 'logical_database_name'
- ```
-1. Open an Azure support ticket requesting a database failover. Alternately, you could change the Service Level Objective (SLO) of your database instead of opening a ticket.
+### Trying to re-enable change feed on a table that was recently disabled will show an error. This is uncommon behavior.
+
+* Applies To - Azure Synapse Link for Azure SQL Database and SQL Server 2022
+* Issue - When you try to enable a table that was recently disabled, whose metadata hasn't yet been cleaned up and whose state is still marked as DISABLED, an error will be thrown stating `A table can only be enabled once among all table groups`.
+* Resolution - Wait some time for the disable-table system procedure to complete, and then try to enable the table again.
+
+### Attempting to enable Synapse Link on a database imported using SSDT or SQLPackage Import/Export and Extract/Deploy operations
+
+* Applies To - Azure Synapse Link for Azure SQL Database and SQL Server 2022
+* Issue - For SQL databases enabled with Azure Synapse Link, when you use SSDT Import/Export or Extract/Deploy operations to import or set up a new database, the `changefeed` schema and user don't get excluded in the new database. However, the tables for the change feed *are* ignored by DacFx because they're marked as `is_ms_shipped=1` in `sys.objects`, and those objects are never included in SSDT Import/Export and Extract/Deploy operations. When enabling Synapse Link on the imported/deployed database, the system stored procedure `sys.sp_change_feed_enable_db` fails if the `changefeed` user and schema already exist. This issue will also be encountered if you have created a user or schema named `changefeed` that isn't related to the Synapse Link change feed capability.
+* Resolution -
+ * Manually drop the empty `changefeed` schema and `changefeed` user, as shown in the sketch after this list. Then, Synapse Link can be enabled successfully on the imported/deployed database.
+ * If you have defined a custom schema or user named `changefeed` in your database that is not related to Azure Synapse Link, and you do not intend to use Azure Synapse Link for SQL, it is not necessary to drop your `changefeed` schema or user.
+ * If you have defined a custom schema or user named `changefeed` in your database, this database currently can't participate in Azure Synapse Link for SQL.
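A minimal cleanup sketch, assuming the leftover `changefeed` schema is empty and the `changefeed` user was created by the import/deploy operation rather than by you:

```sql
-- Remove the leftover artifacts so sys.sp_change_feed_enable_db can succeed
DROP SCHEMA [changefeed];
DROP USER [changefeed];
```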
## Next steps
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
description: Learn about the new features and documentation improvements for Azu
Previously updated : 10/31/2022 Last updated : 11/09/2022
The following table lists the features of Azure Synapse Analytics that are curre
| **Data flow improvements to Data Preview** | To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). | | **Distribution Advisor**| The Distribution Advisor is a new preview feature in Azure Synapse dedicated SQL pools Gen2 that analyzes queries and recommends the best distribution strategies for tables to improve query performance. For more information, see [Distribution Advisor in Azure Synapse SQL](sql/distribution-advisor.md).| | **Distributed Deep Neural Network Training** | Learn more about new distributed training libraries like Horovod, Petastorm, TensorFlow, and PyTorch in [Deep learning tutorials](./machine-learning/concept-deep-learning.md). |
-| **Embed ADX dashboards** | Azure Data Explorer dashboards be [embedded in an iFrame and hosted in third party apps](/azure/data-explorer/kusto/api/monaco/host-web-ux-in-iframe). |
+| **Embed ADX dashboards** | Azure Data Explorer dashboards can be [embedded in an IFrame and hosted in third-party apps](/azure/data-explorer/kusto/api/monaco/host-web-ux-in-iframe). |
| **Ingest data from Azure Stream Analytics into Synapse Data Explorer** | You can now use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster using the Azure portal or an ARM template. For more information on this preview feature, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector). | | **Multi-column distribution in dedicated SQL pools** | You can now Hash Distribute tables on multiple columns for a more even distribution of the base table, reducing data skew over time and improving query performance. For more information on opting-in to the preview, see [CREATE TABLE distribution options](/sql/t-sql/statements/create-table-azure-sql-data-warehouse#TableDistributionOptions) or [CREATE TABLE AS SELECT distribution options](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse#table-distribution-options).| | **Time-To-Live in managed virtual network (VNet)** | Reserve compute for the time-to-live (TTL) in managed virtual network TTL period, saving time and improving efficiency. For more information on this preview, see [Announcing public preview of Time-To-Live (TTL) in managed virtual network](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879).|
The following table lists the features of Azure Synapse Analytics that have tran
| April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator RBAC (role-based access control) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).| | March 2022 | **Flowlets** | Flowlets help you design portions of new data flow logic, or extract portions of an existing data flow, and save them as separate artifacts inside your Synapse workspace. Then, you can reuse these Flowlets inside other data flows. To learn more, review the [Flowlets GA announcement blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Flowlets in mapping data flow](../data-factory/concepts-data-flow-flowlet.md). | | March 2022 | **Change Feed connectors** | Changed data capture (CDC) feed data flow source transformations for Azure Cosmos DB, Azure Blob Storage, ADLS Gen1, ADLS Gen2, and Common Data Model (CDM) are now generally available. By simply checking a box, you can tell ADF to manage a checkpoint automatically for you and only read the latest rows that were updated or inserted since the last pipeline run. To learn more, review the [Change Feed connectors GA preview blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-azure-data-lake-storage.md).|
-| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools, as well as the dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022. |
+| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools and dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022. |
| March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading and writing data, with examples and known issues](./spark/data-sources/apache-spark-cdm-connector.md). | | November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md), and see the sketch after this table.| | October 2021 | **Synapse RBAC Roles** | [Synapse role-based access control (RBAC) roles are now generally available](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-rbac). Learn more about [Synapse RBAC roles](./security/synapse-workspace-synapse-rbac-roles.md) and [Azure Synapse role-based access control (RBAC) using PowerShell](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/retrieve-azure-synapse-role-based-access-control-rbac/ba-p/3466419#:~:text=Synapse%20RBAC%20is%20used%20to%20manage%20who%20can%3A,job%20execution%2C%20review%20job%20output%2C%20and%20execution%20logs.).|
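To make the PREDICT row above concrete, here's a rough sketch of scoring rows with the T-SQL `PREDICT` function from Python. The model, table, and column names are hypothetical, the connection string is a placeholder, and it assumes an ONNX model is already stored in the pool (the `WITH` clause must match your model's output schema).

```python
import pyodbc

conn = pyodbc.connect("<dedicated SQL pool connection string>")  # placeholder

# Score rows with a model stored in a table by using the T-SQL PREDICT function.
rows = conn.cursor().execute("""
SELECT d.*, p.Score
FROM PREDICT(
        MODEL = (SELECT Model FROM dbo.Models WHERE Id = 'my-onnx-model'),
        DATA = dbo.PatientData AS d,
        RUNTIME = ONNX)
WITH (Score FLOAT) AS p;
""").fetchall()

for row in rows:
    print(row)
```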
This section summarizes new Azure Synapse Analytics community opportunities and
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| October 2022 | **Azure Synapse MVP Corner** | Read October highlights from the Microsoft Azure Synapse MVP blog series in this month's [Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-october-2022/ba-p/3668048).|
+| September 2022 | **Azure Synapse MVP Corner** | Read September highlights from the Microsoft Azure Synapse MVP blog series in this month's [Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-september-2022/ba-p/3643960).|
| May 2022 | **Azure Synapse influencer program** | Sign up for our free [Azure Synapse Influencers program](https://aka.ms/synapseinfluencers) and get connected with a community of Synapse users who are dedicated to helping others achieve more with cloud analytics. Register now for our next [Synapse Influencers Ask the Experts session](https://aka.ms/synapseinfluencers/#events). It's free to attend and everyone is welcome to participate and join the discussion on Synapse-related topics. You can [watch past recorded Ask the Experts events](https://aka.ms/ATE-RecordedSessions) on the [Azure Synapse YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g). | | March 2022 | **Azure Synapse Analytics and Microsoft MVP YouTube video series** | In a joint activity between the Azure Synapse product team and the Microsoft MVP community, a new [YouTube MVP video series about Azure Synapse features](https://www.youtube.com/playlist?list=PLzUAjXZBFU9MEK2trKw_PGk4o4XrOzw4H) has launched. See more at the [Azure Synapse Analytics YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g).|
This section summarizes recent quality-of-life and feature improvements for Azure Synapse Analytics.
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| September 2022 | **Synapse CI/CD for publishing workspace artifacts** | Integrating Synapse Studio with a source control system such as [Azure DevOps Git](https://dev.azure.com/) or [GitHub](https://github.com/) is one of Synapse Studio's most popular collaboration features, providing [source control for Azure Synapse](cicd/source-control.md). The Visual Studio Marketplace offers a [Synapse workspace deployment task](https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy) to automate publishing.|
| July 2022 | **Synapse Notebooks compatibility with IPython** | The official kernel for Jupyter notebooks is IPython, and it's now supported in Synapse Notebooks. For more information, see [Synapse Notebooks is now fully compatible with IPython](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_14).| | July 2022 | **Mssparkutils now has session.stop() method** | A new API `mssparkutils.session.stop()` has been added to the mssparkutils package. This feature comes in handy when there are multiple sessions running against the same Spark pool. The new API is available for Scala and Python (see the sketch after this table). To learn more, see [Stop an interactive session](spark/microsoft-spark-utilities.md#stop-an-interactive-session).| | May 2022 | **Updated Azure Synapse Analyzer Report** | Learn about the new features in [version 2.0 of the Synapse Analyzer report](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/updated-synapse-analyzer-report-workload-management-and-ability/ba-p/3580269).|
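For reference, calling the new API looks like the minimal sketch below; it assumes it runs inside a Synapse notebook, where `mssparkutils` is preinstalled.

```python
# Inside a Synapse notebook, mssparkutils is available via notebookutils.
from notebookutils import mssparkutils

# ... run your Spark work here ...

# Explicitly release the interactive session so other sessions against the
# same Spark pool can reclaim its capacity instead of waiting for idle timeout.
mssparkutils.session.stop()
```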
This section summarizes new guidance and sample project resources for Azure Synapse Analytics.
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| November 2022 | **Synapse Spark Delta Time Travel** | Delta Lake [time travel enables point-in-time query snapshots and can even roll back erroneous updates](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-spark-delta-time-travel/ba-p/3646789) (see the sketch after this table). |
| September 2022 | **What is the difference between Synapse dedicated SQL pool (formerly SQL DW) and Serverless SQL pool?** | Understand dedicated vs serverless pools and their concurrency. Read more at [basic concepts of dedicated SQL pools and serverless SQL pools](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/understand-synapse-dedicated-sql-pool-formerly-sql-dw-and/ba-p/3594628).| | September 2022 | **Reading Delta Lake in dedicated SQL Pool** | [Sample script](https://github.com/microsoft/Azure_Synapse_Toolbox/tree/master/TSQL_Queries/Delta%20Lake) to import Delta Lake files directly into the dedicated SQL Pool and support features like time-travel. For an explanation, see [Reading Delta Lake in dedicated SQL Pool](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/reading-delta-lake-in-dedicated-sql-pool/ba-p/3571053).| | September 2022 | **Azure Synapse Customer Success Engineering blog series** | The new [Azure Synapse Customer Success Engineering blog series](https://aka.ms/synapsecseblog) launches with a detailed introduction to [Building the Lakehouse - Implementing a Data Lake Strategy with Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/building-the-lakehouse-implementing-a-data-lake-strategy-with/ba-p/3612291).|
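As a quick illustration of Delta time travel in Synapse Spark, here's a sketch with a hypothetical table path; the `RESTORE` statement assumes a Delta Lake version that supports it.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
delta_path = "abfss://data@<account>.dfs.core.windows.net/tables/orders"  # placeholder

# Point-in-time snapshot: read the table as of an earlier version...
df_v0 = spark.read.format("delta").option("versionAsOf", 0).load(delta_path)

# ...or as of a timestamp.
df_ts = (
    spark.read.format("delta")
    .option("timestampAsOf", "2022-10-01")
    .load(delta_path)
)

# Roll back erroneous updates by restoring an earlier version of the table.
spark.sql(f"RESTORE TABLE delta.`{delta_path}` TO VERSION AS OF 0")
```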
Azure Data Explorer (ADX) is a fast and highly scalable data exploration service
| September 2022 | **Kafka support for Protobuf format** | The [ADX Kafka sink connector](https://www.confluent.io/hub/microsoftcorporation/kafka-sink-azure-kusto) leverages the Kafka Connect framework and provides an adapter to ingest data from Kafka in JSON, Avro, String, and now the [Protobuf format](https://developers.google.com/protocol-buffers) in the latest update (see the sketch after this table). Read more about [Ingesting Protobuf data from Kafka to Azure Data Explorer](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/ingesting-protobuf-data-from-kafka-to-azure-data-explorer/ba-p/3595793). | | September 2022 | **Funnel visuals** | [Funnel is the latest visual added to Azure Data Explorer dashboards](/azure/data-explorer/dashboard-customize-visuals#funnel), following feedback from customers. | | September 2022 | **.NET and Node.js support in Sample App Generator** | The [Azure Data Explorer (ADX) sample app generator wizard](https://dataexplorer.azure.com/oneclick/generatecode?sourceType=file&programingLang=C) is a tool that allows you to [create a working app to ingest and query your data](/azure/data-explorer/sample-app-generator-wizard) in your preferred programming language. Now, generating sample apps in .NET and Node.js is supported, along with the previously available options, Java and Python. |
-| August 2022 | **Embed ADX dashboards** | The ADX web UI and dashboards be [embedded in an iFrame and hosted in third party apps](/azure/data-explorer/kusto/api/monaco/host-web-ux-in-iframe). |
+| August 2022 | **Embed ADX dashboards** | The ADX web UI and dashboards can be [embedded in an IFrame and hosted in third-party apps](/azure/data-explorer/kusto/api/monaco/host-web-ux-in-iframe). |
| August 2022 | **Free cluster upgrade option** | You can now [upgrade your Azure Data Explorer free cluster to a full cluster](/azure/data-explorer/start-for-free-upgrade), which removes the storage limitation and gives you more capacity to grow your data. | | August 2022 | **Analyze fresh ADX data from Excel pivot table** | Now you can [use fresh and unlimited volumes of ADX data (Kusto) from your favorite analytic tool, Excel pivot tables](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/use-fresh-and-unlimited-volume-of-adx-data-kusto-from-your/ba-p/3588894). MDX queries generated by the pivot are sent to the Kusto backend as KQL statements that aggregate the data as the pivot needs and return the results to Excel.| | August 2022 | **Query results - color by value** | Highlight unique data at a glance in query results to visually group rows that share identical values for a specific column. Use **Explore results** and **Color by value** to [apply color to rows based on the selected column](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_14).|
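For the Kafka Protobuf row above, registering the sink connector with a Protobuf value converter through the Kafka Connect REST API might look like the sketch below. The property keys, connector class, and endpoints are assumptions drawn from the connector's public documentation and Confluent's Protobuf converter; verify them against the connector docs before use.

```python
import json
import requests

# Illustrative connector registration; property names and values are assumptions.
connector = {
    "name": "adx-sink",
    "config": {
        "connector.class": "com.microsoft.azure.kusto.kafka.connect.sink.KustoSinkConnector",
        "topics": "telemetry",
        # Deserialize Protobuf-encoded record values via Confluent's converter.
        "value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
        "value.converter.schema.registry.url": "http://schema-registry:8081",
        "kusto.ingestion.url": "https://ingest-<cluster>.<region>.kusto.windows.net",
        "kusto.tables.topics.mapping": json.dumps(
            [{"topic": "telemetry", "db": "mydb", "table": "Telemetry", "format": "json"}]
        ),
        "aad.auth.appid": "<app-id>",
        "aad.auth.appkey": "<app-secret>",
        "aad.auth.authority": "<tenant-id>",
    },
}

# Post the registration to a Kafka Connect worker (placeholder endpoint).
resp = requests.post("http://localhost:8083/connectors", json=connector, timeout=30)
resp.raise_for_status()
```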
This section summarizes recent improvements and features in SQL pools in Azure Synapse Analytics.
| May 2022 | **Automatic character column length calculation for serverless SQL pools** | It's no longer necessary to define character column lengths for serverless SQL pools in the data lake. You can get optimal query performance [without having to define the schema](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_4), because the serverless SQL pool will use automatically calculated average column lengths and cardinality estimation. | | April 2022 | **Cross-subscription restore for Azure Synapse SQL GA** | With the PowerShell `Az.Sql` module 3.8 update, the [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase) cmdlet can be used for cross-subscription restore of dedicated SQL pools. To learn more, see [Restore a dedicated SQL pool to a different subscription](sql-data-warehouse/sql-data-warehouse-restore-active-paused-dw.md#restore-an-existing-dedicated-sql-pool-formerly-sql-dw-to-a-different-subscription-through-powershell). This feature is now generally available for dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in a Synapse workspace. [What's the difference?](https://aka.ms/dedicatedSQLpooldiff)| | April 2022 | **Recover SQL pool from dropped server or workspace** | With the PowerShell Restore cmdlets in `Az.Sql` and `Az.Synapse` modules, you can now restore from a deleted server or workspace without filing a support ticket. For more information, see [Restore a dedicated SQL pool from a deleted Azure Synapse workspace](backuprestore/restore-sql-pool-from-deleted-workspace.md) or [Restore a standalone dedicated SQL pool (formerly SQL DW) from a deleted server](backuprestore/restore-sql-pool-from-deleted-workspace.md), depending on your scenario. |
-| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools, as well as the dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022.|
+| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools and dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022.|
| March 2022 | **Parallel execution for CETAS** | Better performance for [CREATE TABLE AS SELECT](sql/develop-tables-cetas.md) (CETAS) and subsequent SELECT statements is now made possible by the use of parallel execution plans. For examples, see [Better performance for CETAS and subsequent SELECTs](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_7).| ## Learn more
time-series-insights Time Series Insights Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-authentication-and-authorization.md
Request a token for Azure Time Series Insights using C# and the Azure Identity c
### App registration
-* Developers may use the [Microsoft Authentication Library](../active-directory/develop/msal-overview.md) (MSAL) to obtain tokens for app registrations.
+* Use the [Microsoft Authentication Library](../active-directory/develop/msal-overview.md) (MSAL) to obtain tokens for app registrations.
MSAL can be used in many application scenarios, including, but not limited to:
For sample C# code showing how to acquire a token as an app registration and query data from a Gen2 environment, view the sample app on [GitHub](https://github.com/Azure-Samples/Azure-Time-Series-Insights/blob/master/gen2-sample/csharp-tsi-gen2-sample/DataPlaneClientSampleApp/Program.cs). > [!IMPORTANT]
-> If you are using [Azure Active Directory Authentication Library (ADAL)](../active-directory/azuread-dev/active-directory-authentication-libraries.md) read about [migrating to MSAL](../active-directory/develop/msal-net-migration.md).
+> If you are using Azure Active Directory Authentication Library (ADAL), [migrate to MSAL](../active-directory/develop/msal-net-migration.md).
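The linked sample is C#, but the same client-credentials flow in MSAL's Python library makes a useful sketch (an illustration, not the article's sample; the Time Series Insights scope shown is an assumption, and the tenant and app values are placeholders):

```python
import msal

# Placeholders: substitute your own tenant and app registration values.
app = msal.ConfidentialClientApplication(
    client_id="<app-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# Assumed data-plane scope for Azure Time Series Insights.
result = app.acquire_token_for_client(scopes=["https://api.timeseries.azure.com//.default"])

if "access_token" in result:
    token = result["access_token"]  # use as the Authorization: Bearer <token> header
else:
    raise RuntimeError(result.get("error_description", "token acquisition failed"))
```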
## Common headers and parameters
virtual-machines Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/availability.md
Previously updated : 03/08/2021 Last updated : 10/18/2022
Site Recovery can manage replication for:
## Next steps - [Create a virtual machine in an availability zone](./linux/create-cli-availability-zone.md) - [Create a virtual machine in an availability set](./linux/tutorial-availability-sets.md)-- [Create a virtual machine scale set](../virtual-machine-scale-sets/quick-create-portal.md)
+- [Create a virtual machine scale set](../virtual-machine-scale-sets/quick-create-portal.md)
virtual-machines Dcv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcv2-series.md
Last updated 02/20/2020 +
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Update-AzVM -VM $vm -ResourceGroupName $resourceGroupName
# [Azure portal](#tab/portal)
-> [!IMPORTANT]
-> Premium SSD v2 managed disks can only be deployed and managed in the Azure portal from the following link: [https://portal.azure.com/?feature.premiumv2=true#home](https://portal.azure.com/?feature.premiumv2=true#home).
-
-1. Sign in to the Azure portal with the following link: [https://portal.azure.com/?feature.premiumv2=true#home](https://portal.azure.com/?feature.premiumv2=true#home).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Navigate to **Virtual machines** and follow the normal VM creation process. 1. On the **Basics** page, select a [supported region](#regional-availability) and set **Availability options** to **Availability zone**. 1. Select one of the zones.
virtual-machines Key Vault Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md
The Key Vault VM extension provides automatic refresh of certificates stored in an Azure key vault.
The Key Vault VM extension supports the following versions of Windows:
+- Windows Server 2022
- Windows Server 2019 - Windows Server 2016 - Windows Server 2012
The Key Vault VM extension is also supported on a custom local VM that is uploaded
## Extension schema
-The following JSON shows the schema for the Key Vault VM extension. The extension does not require protected settings - all its settings are considered public information. The extension requires a list of monitored certificates, polling frequency, and the destination certificate store. Specifically:
+The following JSON shows the schema for the Key Vault VM extension. The extension doesn't require protected settings - all its settings are considered public information. The extension requires a list of monitored certificates, polling frequency, and the destination certificate store. Specifically:
```json {
The JSON configuration for a virtual machine extension must be nested inside the
``` ### Extension Dependency Ordering
-The Key Vault VM extension supports extension ordering if configured. By default the extension reports that it has successfully started as soon as it has started polling. However, it can be configured to wait until it has successfully downloaded the complete list of certificates before reporting a successful start. If other extensions depend on having the full set of certificates install before they start, then enabling this setting will allow those extension to declare a dependency on the Key Vault extension. This will prevent those extensions from starting until all certificates they depend on have been installed. The extension will retry the initial download indefinitely and remain in a `Transitioning` state.
+The Key Vault VM extension supports extension ordering if configured. By default the extension reports that it has successfully started as soon as it has started polling. However, it can be configured to wait until it has successfully downloaded the complete list of certificates before reporting a successful start. If other extensions depend on having the full set of certificates installed before they start, then enabling this setting will allow those extensions to declare a dependency on the Key Vault extension. This will prevent those extensions from starting until all certificates they depend on have been installed. The extension will retry the initial download indefinitely and remain in a `Transitioning` state.
To turn this on, set the following: ```
Please be aware of the following restrictions/requirements:
### Frequently Asked Questions
-* Is there is a limit on the number of observedCertificates you can setup?
+* Is there a limit on the number of observedCertificates you can set up?
No, the Key Vault VM extension doesn't have a limit on the number of observedCertificates. ### Troubleshoot
virtual-machines Hbv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv4-series.md
+
+ Title: HBv4-series - Azure Virtual Machines
+description: Specifications for the HBv4-series VMs.
+++++ Last updated : 11/1/2022+++
+# HBv4-series
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+HBv4-series VMs are optimized for various HPC workloads such as computational fluid dynamics, finite element analysis, frontend and backend EDA, rendering, molecular dynamics, computational geoscience, weather simulation, and financial risk analysis. During preview, HBv4 VMs will feature up to 176 AMD EPYC™ 7004-series (Genoa) CPU cores, 688 GB of RAM, and no simultaneous multithreading. HBv4-series VMs also provide 800 GB/s of DDR5 memory bandwidth and 768 MB L3 cache per VM, up to 12 GB/s (reads) and 7 GB/s (writes) of block device SSD performance, and clock frequencies up to 3.7 GHz.
+
+> [!NOTE]
+> At General Availability, Azure HBv4-series VMs will automatically be upgraded to Genoa-X processors featuring 3D V-Cache. Updates to technical specifications for HBv4 will be posted at that time.
+
+All HBv4-series VMs feature 400 GB/s NDR InfiniBand from NVIDIA Networking to enable supercomputer-scale MPI workloads. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. NDR continues to support features like Adaptive Routing and the Dynamically Connected Transport (DCT). This newest generation of InfiniBand also brings greater support for offload of MPI collectives, optimized real-world latencies due to congestion control intelligence, and enhanced adaptive routing capabilities. These features enhance application performance, scalability, and consistency, and their usage is recommended.
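Because these VMs target MPI workloads, a minimal ping-pong between two ranks is a common way to smoke-test inter-node communication over the fabric. The sketch below is an illustration only (not from the article); it assumes `mpi4py` and an MPI implementation are installed on the nodes.

```python
# Run with, for example: mpirun -np 2 -host node1,node2 python mpi_check.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Ping-pong between ranks 0 and 1 to confirm the nodes can talk over MPI.
if rank == 0:
    comm.send("ping", dest=1, tag=0)
    print("rank 0 received:", comm.recv(source=1, tag=1))
elif rank == 1:
    comm.recv(source=0, tag=0)
    comm.send("pong", dest=0, tag=1)
```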
+
+[Premium Storage](premium-storage-performance.md): Supported\
+[Premium Storage caching](premium-storage-performance.md): Supported\
+[Ultra Disks](disks-types.md#ultra-disks): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/ultra-disk-storage-for-hpc-and-gpu-vms/ba-p/2189312) about availability, usage and performance)\
+[Live Migration](maintenance-and-updates.md): Not Supported\
+[Memory Preserving Updates](maintenance-and-updates.md): Not Supported\
+[VM Generation Support](generation-2.md): Generation 1 and 2\
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Not Supported at preview\
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+<br>
+
+|Size |Physical CPU cores |Processor |Memory (GB) |Memory bandwidth (GB/s) |Base CPU frequency (GHz) |Single-core frequency (GHz, peak) |RDMA performance (GB/s) |MPI support |Temp storage (TB) |Max data disks |Max Ethernet vNICs |
+|-|-|-|-|-|-|-|-|-|-|-|-|
+|Standard_HB176rs_v4 |176 |AMD EPYC Genoa |688 |800 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HB176-144rs_v4|144 |AMD EPYC Genoa |688 |800 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HB176-96rs_v4 |96 |AMD EPYC Genoa |688 |800 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HB176-48rs_v4 |48 |AMD EPYC Genoa |688 |800 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HB176-24rs_v4 |24 |AMD EPYC Genoa |688 |800 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
++++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
+
+For more information on disk types, see [What disk types are available in Azure?](disks-types.md)
++
+## Next steps
+
+- Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).
+- For a higher-level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
+- Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Hx Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hx-series.md
+
+ Title: HX-series - Azure Virtual Machines
+description: Specifications for the HX-series VMs.
+++++ Last updated : 11/01/2022+++
+# HX-series
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+HX-series VMs are optimized for workloads that require significant memory capacity, with twice the memory capacity of HBv4. For example, workloads such as silicon design can use HX-series VMs to enable EDA customers targeting the most advanced manufacturing processes to run their most memory-intensive workloads.
+
+During preview, HX VMs will feature up to 176 AMD EPYC™ 7004-series (Genoa) CPU cores, 1408 GB of RAM, and no simultaneous multithreading. HX-series VMs also provide 800 GB/s of DDR5 memory bandwidth and 768 MB L3 cache per VM, up to 12 GB/s (reads) and 7 GB/s (writes) of block device SSD performance, and clock frequencies up to 3.7 GHz.
+
+> [!NOTE]
+> At General Availability, Azure HX-series VMs will automatically be upgraded to Genoa-X processors featuring 3D V-Cache. Updates to technical specifications for HX will be posted at that time.
+
+All HX-series VMs feature 400 GB/s NDR InfiniBand from NVIDIA Networking to enable supercomputer-scale MPI workloads. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. NDR continues to support features like Adaptive Routing and the Dynamically Connected Transport (DCT). This newest generation of InfiniBand also brings greater support for offload of MPI collectives, optimized real-world latencies due to congestion control intelligence, and enhanced adaptive routing capabilities. These features enhance application performance, scalability, and consistency, and their usage is recommended.
+
+[Premium Storage](premium-storage-performance.md): Supported\
+[Premium Storage caching](premium-storage-performance.md): Supported\
+[Ultra Disks](disks-types.md#ultra-disks): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/ultra-disk-storage-for-hpc-and-gpu-vms/ba-p/2189312) about availability, usage and performance)\
+[Live Migration](maintenance-and-updates.md): Not Supported\
+[Memory Preserving Updates](maintenance-and-updates.md): Not Supported\
+[VM Generation Support](generation-2.md): Generation 1 and 2\
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Not Supported at preview\
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+<br>
+
+|Size |Physical CPU cores |Processor |Memory (GB) |Memory per core (GB) |Memory bandwidth (GB/s) |Base CPU frequency (GHz) |Single-core frequency (GHz, peak) |RDMA performance (GB/s) |MPI support |Temp storage (TB) |Max data disks |Max Ethernet vNICs |
+|-|-|-|-|-|-|-|-|-|-|-|-|-|
+|Standard_HX176rs |176 |AMD EPYC Genoa |1408 |8 |800 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HX176-144rs|144 |AMD EPYC Genoa |1408 |10|800 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HX176-96rs |96 |AMD EPYC Genoa |1408 |15|800 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HX176-48rs |48 |AMD EPYC Genoa |1408 |29|800 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HX176-24rs |24 |AMD EPYC Genoa |1408 |59|800 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+++++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
+
+For more information on disk types, see [What disk types are available in Azure?](disks-types.md)
++
+## Next steps
+
+- Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).
+- For a high-level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
+- Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Share Gallery Community https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-community.md
There are three main ways to share images in an Azure Compute Gallery, depending
There are some limitations for sharing your gallery to the community: - Encrypted images aren't supported.
+- TrustedLaunch and TVMSupported images aren't supported.
+- CVMSupported images aren't supported.
- For the preview, image resources need to be created in the same region as the gallery. For example, if you create a gallery in West US, the image definitions and image versions should be created in West US if you want to make them available during the public preview. - For the preview, you can't share [VM Applications](vm-applications.md) to the community.-- The gallery must be created as a community gallery. For the preview, there is no way to migrate an existing gallery to be a community gallery.-- To find images shared to the community from the Azure portal, you need to go through the VM create or scale set creation pages. You can't search the portal or Azure Marketplace for the images.
+- The gallery must be created as a community gallery. For the preview, there is no way to migrate an existing private gallery to be a community gallery.
+- The image version region in the gallery must be the same as the gallery's home region. Creating a cross-region image version where the home region differs from the gallery's home region isn't supported. However, after the image is in the home region, it can be replicated to other regions.
+- To find images shared to the community from the Azure portal, you need to go through the VM create or scale set creation pages. You can't search the portal or Azure Marketplace for the images.
> [!IMPORTANT] > Microsoft does not provide support for images you share to the community.
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
During the preview:
- You need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter (see the sketch after this list). You can't use an existing gallery; the property can't currently be updated. - TrustedLaunch and ConfidentialVM aren't supported. - PowerShell, Ansible, and Terraform aren't supported at this time.
+- The image version region in the gallery must be the same as the gallery's home region. Creating a cross-region image version where the home region differs from the gallery's home region isn't supported. However, after the image is in the home region, it can be replicated to other regions.
- Not available in Government clouds. - To consume direct shared images in the target subscription, you can find them only on the VM/VMSS creation blade. - **Known issue**: When you create a VM from a direct shared image in the Azure portal, if you select a region, select an image, and then change the region, you'll get an error message, "You can only create VM in the replication regions of this image," even when the image is replicated to that region. To clear the error, select a different region, then switch back to the region you want. If the image is available, the error message should clear.
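The `--permissions groups` CLI flag above maps to the gallery's `sharingProfile.permissions` property. As a rough Python SDK equivalent (a sketch, assuming your `azure-mgmt-compute` version supports the sharing profile; names and values are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The permissions property can't be changed on an existing gallery during the
# preview, so set it when the gallery is created.
poller = client.galleries.begin_create_or_update(
    "myResourceGroup",
    "myDirectSharedGallery",
    {
        "location": "westus",
        "sharing_profile": {"permissions": "Groups"},
    },
)
print(poller.result().name)
```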
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
Sign in to Azure using `az login`.
az login ```
-Create an image definition with TrustedLaunch security type
+Create an image definition with `TrustedLaunch` security type
```azurecli-interactive az sig image-definition create --resource-group MyResourceGroup --location eastus \
For VMs created with trusted launch enabled, you can view the trusted launch configuration.
:::image type="content" source="media/trusted-launch/overview-properties.png" alt-text="Screenshot of the Trusted Launch properties of the VM.":::
-To change the trusted launch configuration, in the left menu, select **Configuration** under the **Settings** section. You can enable or disable Secure Boot and vTPM from the Trusted LaunchSecurity type section. Select Save at the top of the page when you are done.
+To change the trusted launch configuration, in the left menu, under the **Settings** section, select **Configuration**. You can enable or disable Secure Boot and vTPM from the **Security type** section. Select **Save** at the top of the page when you are done.
:::image type="content" source="media/trusted-launch/update.png" alt-text="Screenshot showing check boxes to change the Trusted Launch settings.":::
virtual-machines Image Builder Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-powershell.md
Grant Azure image builder permissions to create images in the specified resource group.
```azurepowershell-interactive $SrcObjParams = @{
- SourceTypePlatformImage = $true
+ PlatformImageSource = $true
Publisher = 'MicrosoftWindowsServer' Offer = 'WindowsServer' Sku = '2019-Datacenter'
Grant Azure image builder permissions to create images in the specified resource group.
```azurepowershell-interactive $ImgCustomParams01 = @{ PowerShellCustomizer = $true
- CustomizerName = 'settingUpMgmtAgtPath'
+ Name = 'settingUpMgmtAgtPath'
RunElevated = $false Inline = @("mkdir c:\\buildActions", "mkdir c:\\buildArtifacts", "echo Azure-Image-Builder-Was-Here > c:\\buildActions\\buildActionsOutput.txt") }
virtual-network-manager Concept Connectivity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-connectivity-configuration.md
In this article, you'll learn about the different types of configurations you can create with Azure Virtual Network Manager.
## Mesh network topology
-A mesh network is a topology in which all the virtual networks in the [network group](concept-network-groups.md) are connected to each other. All virtual networks are connected and can pass traffic bi-directionally to one another. By default, the mesh is a regional mesh, therefore only virtual networks in the same region can communicate with each other. **Global mesh** can be enabled to establish connectivity of virtual networks across all Azure regions. A virtual network can be part of up to two connected groups. Virtual network address spaces can't overlap in a mesh configuration, unlike in virtual network peerings. However, traffic to the specific overlapping subnets will be dropped, since routing is non-deterministic.
+A mesh network is a topology in which all the virtual networks in the [network group](concept-network-groups.md) are connected to each other. All virtual networks are connected and can pass traffic bi-directionally to one another. By default, the mesh is a regional mesh; therefore, only virtual networks in the same region can communicate with each other. **Global mesh** can be enabled to establish connectivity of virtual networks across all Azure regions. A virtual network can be part of up to two connected groups. Virtual network address spaces can overlap in a mesh configuration, unlike in virtual network peerings. However, traffic to the overlapping subnets will be dropped, since routing is non-deterministic.
:::image type="content" source="./media/concept-configuration-types/mesh-topology.png" alt-text="Diagram of a mesh network topology.":::
When you deploy a hub and spoke topology from the Azure portal, the **Use hub as
- Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance. - Learn about [configuration deployments](concept-deployments.md) in Azure Virtual Network Manager.-- Learn how to block network traffic with a [SecurityAdmin configuration](how-to-block-network-traffic-portal.md).
+- Learn how to block network traffic with a [SecurityAdmin configuration](how-to-block-network-traffic-portal.md).
virtual-network-manager How To Configure Cross Tenant Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-cli.md
Title: Configure cross-tenant connection in Azure Virtual Network Manager - CLI
-description: Learn to connect Azure subscriptions in Azure Virtual Network Manager using cross-tenant connections for the management of virtual networks across subscriptions.
+ Title: Configure a cross-tenant connection in Azure Virtual Network Manager Preview - CLI
+description: Learn how to connect Azure subscriptions in Azure Virtual Network Manager by using cross-tenant connections for the management of virtual networks across subscriptions.
Last updated 11/1/2022
-#customerintent: As a cloud admin, in need to manage multi tenants from a single network manager instance. Cross tenant functionality will give me this so I can easily manage all network resources governed by azure virtual network manager
+#customerintent: As a cloud admin, I need to manage multiple tenants from a single network manager so that I can easily manage all network resources governed by Azure Virtual Network Manager.
-# Configure cross-tenant connection in Azure Virtual Network Manager
+# Configure a cross-tenant connection in Azure Virtual Network Manager Preview - CLI
-In this article, youΓÇÖll learn how-to create [cross-tenant connections](concept-cross-tenant.md) in Azure Virtual Network Manager using [Azure CLI](/cli/azure/network/manager/scope-connection). Cross-tenant support allows organizations to use a central Network Manager instance for managing virtual networks across different tenants and subscriptions. First, you'll create the scope connection on the central network manager. Then you'll create the network manager connection on the connecting tenant, and verify connection. Last, you'll add virtual networks from different tenants and verify. Once completed, You can centrally manage the resources of other tenants from a central network manager instance.
+In this article, you'll learn how to create [cross-tenant connections](concept-cross-tenant.md) in Azure Virtual Network Manager by using the [Azure CLI](/cli/azure/network/manager/scope-connection). Cross-tenant support allows organizations to use a central network manager for managing virtual networks across tenants and subscriptions.
+
+First, you'll create the scope connection on the central network manager. Then, you'll create the network manager connection on the connecting tenant and verify the connection. Last, you'll add virtual networks from different tenants and verify. After you complete all the tasks, you can centrally manage the resources of other tenants from your network manager.
> [!IMPORTANT]
-> Azure Virtual Network Manager is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Azure Virtual Network Manager is currently in public preview. We provide this preview version without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites -- Two Azure tenants with virtual networks needing to be managed by Azure Virtual Network Manager Deploy. During the how-to, the tenants will be referred to as follows:
- - **Central management tenant** - The tenant where an Azure Virtual Network Manager instance is installed, and you'll centrally manage network groups from cross-tenant connections.
- - **Target managed tenant** - The tenant containing virtual networks to be managed. This tenant will be connected to the central management tenant.
+- Two Azure tenants with virtual networks that you want to manage through Azure Virtual Network Manager. This article refers to the tenants as follows:
+ - **Central management tenant**: The tenant where an Azure Virtual Network Manager instance is installed, and where you'll centrally manage network groups from cross-tenant connections.
+ - **Target managed tenant**: The tenant that contains virtual networks to be managed. This tenant will be connected to the central management tenant.
- Azure Virtual Network Manager deployed in the central management tenant.-- Required permissions include:
- - Administrator of central management tenant has guest account in target managed tenant.
- - Administrator guest account has *Network Contributor* permissions applied at appropriate scope level(Management group, subscription, or virtual network).
+- These permissions:
+ - The administrator of the central management tenant has a guest account in the target managed tenant.
+ - The administrator guest account has *Network Contributor* permissions applied at the appropriate scope level (management group, subscription, or virtual network).
+
+Need help with setting up permissions? Check out how to [add guest users in the Azure portal](../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md) and how to [assign user roles to resources in the Azure portal](../role-based-access-control/role-assignments-portal.md).
-Need help with setting up permissions? Check out how to [add guest users in the Azure portal](../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md), and how to [assign user roles to resources in Azure portal](../role-based-access-control/role-assignments-portal.md)
+## Create a scope connection within a network manager
-## Create scope connection within network manager
+Creation of the scope connection begins on the central management tenant with a network manager deployed. This is the network manager where you plan to manage all of your resources across tenants.
-Creation of the scope connection begins on the central management tenant with a network manager deployed, which is the network manager where you plan to manage all of your resources across tenants. In this task, you'll set up a scope connection to add a subscription from a target tenant. If you wish to use a management group, you'll modify the `ΓÇôresource-id` argument to look like `/providers/Microsoft.Management/managementGroups/{mgId}`.
+In this task, you set up a scope connection to add a subscription from a target tenant. If you want to use a management group, modify the `--resource-id` argument to look like `/providers/Microsoft.Management/managementGroups/{mgId}`.
```azurecli
-# Create scope connection in network manager in the central management tenant
+# Create a scope connection in the network manager in the central management tenant
az network manager scope-connection create --resource-group "myRG" --network-manager-name "myAVNM" --name "ToTargetManagedTenant" --description "This is a connection to manage resources in the target managed tenant" --resource-id "/subscriptions/13579864-1234-5678-abcd-0987654321ab" --tenant-id "24680975-1234-abcd-56fg-121314ab5643" ```
-## Create network manager connection on subscription in other tenant
-Once the scope connection is created, you'll switch to your target tenant for the network manager connection. During this task, you'll connect the target tenant to the scope connection created previously and verify the connection state.
+## Create a network manager connection on a subscription in another tenant
+
+After you create the scope connection, you switch to your target tenant for the network manager connection. In this task, you connect the target tenant to the scope connection that you created previously. You also verify the connection state.
-1. Enter the following command to connect to the target managed tenant with your administrative account:
+1. Enter the following command to connect to the target managed tenant by using your administrative account:
```azurecli
- # Login to target managed tenant
- # Note: Change the --tenant value to the appropriate tenant ID
+ # Log in to the target managed tenant
+ # Change the --tenant value to the appropriate tenant ID
az login --tenant "12345678-12a3-4abc-5cde-678909876543" ```
- You'll be required to complete authentication with your organization based on your organizations policies.
+
+ You're required to complete authentication with your organization, based on your organization's policies.
-1. Enter the following command to create the cross tenant connection on the central management.
-Set the subscription (note itΓÇÖs the same as the one the connection references in step 1).
+1. Enter the following commands to set the subscription and to create the cross-tenant connection on the central management tenant. The subscription is the same as the one that the connection referenced in the previous step.
```azurecli # Set the Azure subscription az account set --subscription 87654321-abcd-1234-1def-0987654321ab
- # Create cross-tenant connection to central management tenant
+ # Create a cross-tenant connection to the central management tenant
az network manager connection subscription create --connection-name "toCentralManagementTenant" --description "This connection allows management of the tenant by a central management tenant" --network-manager-id "/subscriptions/13579864-1234-5678-abcd-0987654321ab/resourceGroups/myRG/providers/Microsoft.Network/networkManagers/myAVNM" ```
-## Verify the connection state
+## Verify the connection status
-1. Enter the following command to check the connection Status:
+1. Enter the following command to check the connection status:
```azurecli # Check connection status az network manager connection subscription show --name "toCentralManagementTenant" ```
-1. Switch back to the central management tenant, and performing a get on the network manager shows the subscription added via the cross tenant scopes property.
+1. Switch back to the central management tenant. Use the `show` command for the network manager to show the subscription added via the property for cross-tenant scopes:
```azurecli
- # View subscription added to network manager
+ # View the subscription added to the network manager
az network manager show --resource-group myAVNMResourceGroup --name myAVNM ```
-## Add static members to your network group
-In this task, you'll add a cross-tenant virtual network to your network group with static membership. The virtual network subscription used below is the same as referenced when creating connections above.
+## Add static members to a network group
+
+In this task, you add a cross-tenant virtual network to your network group by using static membership. In the following command, the virtual network subscription is the same as the one that you referenced when you created connections earlier.
```azurecli
-# Create network group with static member from target managed tenant
+# Create a network group with a static member from the target managed tenant
az network manager group static-member create --network-group-name "CrossTenantNetworkGroup" --network-manager-name "myAVNM" --resource-group "myAVNMResourceGroup" --static-member-name "targetVnet01" --resource-id="/subscriptions/87654321-abcd-1234-1def-0987654321ab/resourceGroups/myScopeAVNM/providers/Microsoft.Network/virtualNetworks/targetVnet01" ```
-## Delete virtual network manager configurations
+## Delete network manager configurations
-Now that the virtual network is in the network group, configurations will be applied. To remove the static member or cross-tenant resources, use the corresponding delete commands.
+Now that the virtual network is in the network group, configurations are applied. To remove the static member or cross-tenant resources, use the corresponding `delete` commands:
```azurecli
-# Delete static member group
+# Delete the static member group
az network manager group static-member delete --network-group-name "CrossTenantNetworkGroup" --network-manager-name "myAVNM" --resource-group "myRG" --static-member-name "targetVnet01" # Delete scope connections az network manager scope-connection delete --resource-group "myRG" --network-manager-name "myAVNM" --name "ToTargetManagedTenant"
-# Switch to ΓÇÿmanaged tenantΓÇÖ if needed
-#
+# Switch to a managed tenant if needed
az network manager connection subscription delete --name "toCentralManagementTenant" ```
az network manager connection subscription delete --name "toCentralManagementTen
> [!div class="nextstepaction"] -- Learn more about [Security admin rules](concept-security-admins.md).
+- Learn more about [security admin rules](concept-security-admins.md).
-- Learn how to [create a mesh network topology with Azure Virtual Network Manager using the Azure portal](how-to-create-mesh-network.md)
+- Learn how to [create a mesh network topology with Azure Virtual Network Manager by using the Azure portal](how-to-create-mesh-network.md).
-- Check out the [Azure Virtual Network Manager FAQ](faq.md)
+- Check out the [Azure Virtual Network Manager FAQ](faq.md).
virtual-network-manager How To Configure Cross Tenant Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-portal.md
Title: Configure cross-tenant connection in Azure Virtual Network Manager (Preview) - Portal
+ Title: Configure a cross-tenant connection in Azure Virtual Network Manager Preview - Portal
description: Learn how to create cross-tenant connections in Azure Virtual Network Manager to support virtual networks across subscriptions and management groups in different tenants.
Last updated 09/19/2022
-#customerintent: As a cloud admin, in need to manage multiple tenants from a single network manager instance. Cross tenant functionality will give me this so I can easily manage all network resources governed by azure virtual network manager.
+#customerintent: As a cloud admin, I need to manage multiple tenants from a single network manager so that I can easily manage all network resources governed by Azure Virtual Network Manager.
+# Configure a cross-tenant connection in Azure Virtual Network Manager Preview - portal
-# Configure cross-tenant connection in Azure Virtual Network Manager (Preview) - portal
-
-In this article, you'll learn to create [cross-tenant connections](concept-cross-tenant.md) in the Azure portal with Azure Virtual Network Manager. First, you'll create the scope connection on the central network manager. Then you'll create the network manager connection on the connecting tenant, and verify connection. Last, you'll add virtual networks from different tenants to your network group and verify. Once completed, You can centrally manage the resources of other tenants from single network manager instance.
+In this article, you'll learn how to create [cross-tenant connections](concept-cross-tenant.md) in Azure Virtual Network Manager by using the Azure portal. First, you'll create the scope connection on the central network manager. Then, you'll create the network manager connection on the connecting tenant and verify the connection. Last, you'll add virtual networks from different tenants to your network group and verify. After you complete all the tasks, you can centrally manage the resources of other tenants from a single network manager.
> [!IMPORTANT]
-> Azure Virtual Network Manager is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Azure Virtual Network Manager is currently in public preview. We provide this preview version without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites -- Two Azure tenants with virtual networks needing to be managed by an Azure Virtual Network Manager instance. During the how-to, the tenants will be referred to as follows:
- - **Central management tenant** - The tenant where an Azure Virtual Network Manager instance is installed, and you'll centrally manage network groups from cross-tenant connections.
- - **Target managed tenant** - The tenant containing virtual networks to be managed. This tenant will be connected to the central management tenant.
+- Two Azure tenants with virtual networks that you want to manage through Azure Virtual Network Manager. This article refers to the tenants as follows:
+ - **Central management tenant**: The tenant where an Azure Virtual Network Manager instance is installed, and where you'll centrally manage network groups from cross-tenant connections.
+ - **Target managed tenant**: The tenant that contains virtual networks to be managed. This tenant will be connected to the central management tenant.
- Azure Virtual Network Manager deployed in the central management tenant.-- Required permissions include:
- - Administrator of central management tenant has guest account in target managed tenant.
- - Administrator guest account has *Network Contributor* permissions applied at appropriate scope level(Management group, subscription, or virtual network).
-
-Need help with setting up permissions? Check out how to [add guest users in the Azure portal](../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md), and how to [assign user roles to resources in Azure portal](../role-based-access-control/role-assignments-portal.md)
-
-## Create scope connection within network manager
-Creation of the scope connection begins on the central management tenant with a network manager deployed. This is the network manager where you plan to manager all of your resources across tenants. In this task, you'll set up a scope connection to add a subscription from a target tenant.
-1. Go to your Azure Virtual Network Manager instance.
-1. Under **Settings**, select **Cross-tenant connections** and select **Create cross-tenant connection**.
-1. On the **Create a connection** page, enter the connection name and target tenant information, and select **Create** when completed.
-1. Verify the scope connection is listed under **Cross-tenant connections** and the status is **Pending**
-
-## Create network manager connection on subscription in other tenant
-Once the scope connection is created, you'll switch to the target managed tenant, and you'll connect to the target managed tenant by creating another cross-tennant connection in the **Virtual Network Manager** hub.
+- These permissions:
+ - The administrator of the central management tenant has a guest account in the target managed tenant.
+ - The administrator guest account has *Network Contributor* permissions applied at the appropriate scope level (management group, subscription, or virtual network).
+
+Need help with setting up permissions? Check out how to [add guest users in the Azure portal](../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md) and how to [assign user roles to resources in the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+## Create a scope connection within a network manager
+
+Creation of the scope connection begins on the central management tenant with a network manager deployed. This is the network manager where you plan to manage all of your resources across tenants.
+
+In this task, you set up a scope connection to add a subscription from a target tenant:
+
+1. Go to Azure Virtual Network Manager.
+1. Under **Settings**, select **Cross-tenant connections**, and then select **Create cross-tenant connection**.
+
+ :::image type="content" source="media/how-to-configure-cross-tenant-portal/create-cross-tenant-connection.png" alt-text="Screenshot of cross-tenant connections in a network manager.":::
+1. On the **Create a connection** page, enter the connection name and target tenant information, and then select **Create**.
+
+ :::image type="content" source="media/how-to-configure-cross-tenant-portal/create-connection-settings.png" alt-text="Screenshot of settings entered to create a connection.":::
+1. Verify that the scope connection is listed under **Cross-tenant connections** and the status is **Pending**.
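If you prefer the command line, a scope connection can also be created with the Azure CLI. This is a sketch only: it assumes the **virtual-network-manager** CLI extension and its `az network manager scope-connection create` command, and all names and IDs are placeholders.

```azurecli-interactive
# Create a scope connection on the network manager in the central
# management tenant that targets a subscription in the other tenant.
az network manager scope-connection create \
  --resource-group "myResourceGroup" \
  --network-manager-name "myAVNM" \
  --name "ToTargetManagedTenant" \
  --tenant-id "<target-tenant-id>" \
  --resource-id "/subscriptions/<target-subscription-id>"
```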
+
+## Create a network manager connection on a subscription in another tenant
+
+After you create the scope connection, switch to the target managed tenant. Connect to the target managed tenant by creating another cross-tenant connection in the **Virtual Network Manager** hub:
+
1. In the target tenant, search for **virtual network manager** and select **Virtual Network Manager**.
1. Under **Virtual network manager**, select **Cross-tenant connections**.
+
+ :::image type="content" source="media/how-to-configure-cross-tenant-portal/virtual-network-manager-overview.png" alt-text="Screenshot of network managers in Virtual Network Manager on a target tenant.":::
1. Select **Create a connection**.
-1. On the **Create a connection** page, enter the information for your central network manager tenant, and select **Create** when complete.
-## Verify the connection state
-Once both connections are created, it's time to verify the connection on the central management tenant.
+ :::image type="content" source="media/how-to-configure-cross-tenant-portal/create-connection-target.png" alt-text="Screenshot of the pane for cross-tenant connections.":::
+1. On the **Create a connection** page, enter the information for your central management tenant, and then select **Create**.
+
+ :::image type="content" source="media/how-to-configure-cross-tenant-portal/create-connection-settings-target.png" alt-text="Screenshot of settings for creating a cross-tenant connection.":::
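The equivalent CLI step, run while signed in to the target managed tenant, would look something like the following sketch. It assumes the **virtual-network-manager** extension's `az network manager connection subscription create` command; check `--help` for the exact parameter names in your extension version. The connection name and network manager resource ID are placeholders.

```azurecli-interactive
# From the target managed tenant, create the network manager connection
# that points back at the network manager in the central tenant.
az network manager connection subscription create \
  --connection-name "ToCentralManagementTenant" \
  --description "Connection to the central management tenant" \
  --network-manager-id "/subscriptions/<central-subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkManagers/myAVNM"
```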
+
+## Verify the connection status
+
+After you create both connections, it's time to verify the connection on the central management tenant:
+
1. On your central management tenant, select your network manager.
-1. Select **Cross-tenant connections** under **Settings**, and verify your cross-tenant connection is listed as **Connected**.
+1. Select **Cross-tenant connections** under **Settings**, and verify that your cross-tenant connection is listed as **Connected**.
+
+ :::image type="content" source="media/how-to-configure-cross-tenant-portal/verify-status.png" alt-text="Screenshot that shows a cross-connection status of Connected.":::
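You can also check the state from the CLI. This sketch assumes the **virtual-network-manager** extension and that the scope connection resource exposes a `connectionState` property; the names are placeholders.

```azurecli-interactive
# List the scope connections on the central network manager and show
# their connection state (expected value: Connected).
az network manager scope-connection list \
  --resource-group "myResourceGroup" \
  --network-manager-name "myAVNM" \
  --query "[].{name:name, state:connectionState}" --output table
```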
+
+## Add static members to a network group
-## Add static members to your network group
-Now, you'll add virtual networks from both tenants into a static member network group.
+Now, add virtual networks from both tenants into a network group for static members.
> [!NOTE]
-> Currently, cross-tenant connections only support static memberships within a network group. Dynamic membership with Azure Policy is not supported.
+> Currently, cross-tenant connections support only static memberships within a network group. Dynamic membership with Azure Policy is not supported.
1. From your network manager, add a network group if needed.
-1. Select your network group and select **Add virtual networks** under **Manually add members**.
-1. On the **Manually add members** page, select **Tenant:...** next to the search box, select the linked tenant from the list, and select **Apply**.
+1. Select your network group, and then select **Add virtual networks** under **Manually add members**.
+1. On the **Manually add members** page, select **Tenant:...** next to the search box, select the linked tenant from the list, and then select **Apply**.
+
+ :::image type="content" source="media/how-to-configure-cross-tenant-portal/select-target-tenant-network-group.png" alt-text="Screenshot of available tenants to choose for static network group membership.":::
1. To view the available virtual networks from the target managed tenant, select **authenticate** and proceed through the authentication process. If you have multiple Azure accounts, select the account that you're signed in with and that has permissions to the target managed tenant.
-1. Select the VNets to include in the network group and select **Add**.
+1. Select the virtual networks to include in the network group, and then select **Add**.
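For automation, static members can also be added from the CLI. The sketch below assumes the **virtual-network-manager** extension's `az network manager group static-member create` command; the group name, member name, and virtual network resource ID are placeholders.

```azurecli-interactive
# Add a virtual network from the target tenant to the network group
# as a static member.
az network manager group static-member create \
  --resource-group "myResourceGroup" \
  --network-manager-name "myAVNM" \
  --network-group-name "myNetworkGroup" \
  --static-member-name "targetTenantVNet" \
  --resource-id "/subscriptions/<target-subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"
```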
## Verify group members
-In the final step, you'll verify the virtual networks that are now members of the network group.
-1. On the **Overview** page of the network group, select **View group members** and verify the VNets you added manually are listed.
+In the final step, you verify the virtual networks that are now members of the network group.
+
+On the **Overview** page of the network group, select **View group members**. Verify that the virtual networks that you added manually are listed.
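To confirm membership from the command line instead, something like the following sketch should work, again assuming the **virtual-network-manager** extension and placeholder names.

```azurecli-interactive
# List the static members of the network group.
az network manager group static-member list \
  --resource-group "myResourceGroup" \
  --network-manager-name "myAVNM" \
  --network-group-name "myNetworkGroup" \
  --output table
```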
+ :::image type="content" source="media/how-to-configure-cross-tenant-portal/network-group-membership.png" alt-text="Screenshot of network group membership." lightbox="media/how-to-configure-cross-tenant-portal/network-group-membership-thumb.png":::
+
## Next steps
+
In this article, you deployed a cross-tenant connection between two Azure subscriptions. To learn more about using Azure Virtual Network Manager, see:

- [Common use cases for Azure Virtual Network Manager](concept-use-cases.md)
- [Learn to build a secure hub-and-spoke network](tutorial-create-secured-hub-and-spoke.md)
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Deploy a virtual network gateway into the hub virtual network. This virtual netw
1. Go to your Azure Virtual Network Manager instance. This tutorial assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
-1. Select **Network groups** under *Settings*, and then select **+ Add** to create a new network group.
+1. Select **Network groups** under *Settings*, and then select **+ Create** to create a new network group.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-network-group.png" alt-text="Screenshot of add a network group button.":::
-1. On the *Basics* tab, enter the following information:
+1. On the **Create a network group** screen, enter the following information:
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/network-group-basics.png" alt-text="Screenshot of the Basics tab on Create a network group page.":::
Deploy a virtual network gateway into the hub virtual network. This virtual netw
| Name | Enter **myNetworkGroupB** for the network group name. |
| Description | Provide a description about this network group. |
-1. Select **Add** to create the virtual network group.
+1. Select **Create** to create the virtual network group.
1. From the **Network groups** page, select the created network group from above to configure the network group.
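The same network group can be created from the command line. This sketch assumes the **virtual-network-manager** CLI extension's `az network manager group create` command and placeholder resource names.

```azurecli-interactive
# Create the network group that will hold the spoke virtual networks.
az network manager group create \
  --resource-group "myResourceGroup" \
  --network-manager-name "myAVNM" \
  --name "myNetworkGroupB" \
  --description "Network group for the spoke virtual networks"
```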
virtual-network Manage Network Security Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-network-security-group.md
Title: Create, change, or delete an Azure network security group
-description: Learn where to find information about security rules and how to create, change, or delete a network security group.
+description: Learn how to create, change, or delete a network security group (NSG).
- Previously updated : 03/13/2020
Last updated : 11/09/2022
+
# Create, change, or delete a network security group
Security rules in network security groups enable you to filter the type of netwo
## Before you begin

-
-If you don't have one, set up an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Complete one of these tasks before starting the remainder of this article:
+If you don't have an Azure account with an active subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Complete one of these tasks before starting the remainder of this article:
- **Portal users**: Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.

-- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
+- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
+
+ If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to sign in to Azure.
- If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to create a connection with Azure.
+- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
-- **Azure CLI users**: Run the commands via either the [Azure Cloud Shell](https://shell.azure.com/bash), or the Azure CLI running locally. Use Azure CLI version 2.0.28 or later if you're running the Azure CLI locally. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure.
+ If you're running Azure CLI locally, use Azure CLI version 2.0.28 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure.
The account you log into, or connect to Azure with, must be assigned to the [Network contributor role](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) or to a [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that's assigned the appropriate actions listed in [Permissions](#permissions).

## Work with network security groups
-You can create, [view all](#view-all-network-security-groups), [view details of](#view-details-of-a-network-security-group), [change](#change-a-network-security-group), and [delete](#delete-a-network-security-group) a network security group. You can also [associate or dissociate](#associate-or-dissociate-a-network-security-group-to-or-from-a-subnet-or-network-interface) a network security group from a network interface or subnet.
+You can create, [view all](#view-all-network-security-groups), [view details of](#view-details-of-a-network-security-group), [change](#change-a-network-security-group), and [delete](#delete-a-network-security-group) a network security group. You can also associate or dissociate a network security group from [a network interface](#associate-or-dissociate-a-network-security-group-to-or-from-a-network-interface) or [subnet](#associate-or-dissociate-a-network-security-group-to-or-from-a-subnet).
### Create a network security group
-There's a limit to how many network security groups you can create for each Azure location and subscription. To learn more, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).
+There's a limit to how many network security groups you can create for each Azure region and subscription. To learn more, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).
-1. On the [Azure portal](https://portal.azure.com) menu or from the **Home** page, select **Create a resource**.
+# [**Portal**](#tab/network-security-group-portal)
-2. Select **Networking**, then select **Network security group**.
+1. In the search box at the top of the portal, enter *Network security group*. Select **Network security groups** in the search results.
-3. In the **Create network security group** page, under the **Basics** tab, set values for the following settings:
+2. Select **+ Create**.
+
+3. In the **Create network security group** page, under the **Basics** tab, enter or select the following values:
| Setting | Action |
| --- | --- |
- | **Subscription** | Choose your subscription. |
- | **Resource group** | Choose an existing resource group, or select **Create new** to create a new resource group. |
- | **Name** | Enter a unique text string within a resource group. |
- | **Region** | Choose the location you want. |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select an existing resource group, or create a new one by selecting **Create new**. This example uses **myResourceGroup** resource group. |
+ | **Instance details** | |
+ | Network security group name | Enter a name for the network security group you're creating. |
+ | Region | Select the region you want. |
+
+ :::image type="content" source="./media/manage-network-security-group/create-network-security-group.png" alt-text="Screenshot of create network security group in Azure portal.":::
4. Select **Review + create**.

5. After you see the **Validation passed** message, select **Create**.
-#### Commands
+# [**PowerShell**](#tab/network-security-group-powershell)
+
+Use [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) to create a network security group named **myNSG** in the **East US** region. **myNSG** is created in the existing **myResourceGroup** resource group.
+
+```azurepowershell-interactive
+New-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup -Location eastus
+```
+
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to create a network security group named **myNSG** in the existing **myResourceGroup** resource group.
+
+```azurecli-interactive
+az network nsg create --resource-group myResourceGroup --name myNSG
+```
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) |
-| PowerShell | [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) |
+
### View all network security groups
-Go to the [Azure portal](https://portal.azure.com) to view your network security groups. Search for and select **Network security groups**. The list of network security groups appears for your subscription.
+# [**Portal**](#tab/network-security-group-portal)
+
+In the search box at the top of the portal, enter *Network security group*. Select **Network security groups** in the search results to see the list of network security groups in your subscription.
++
+# [**PowerShell**](#tab/network-security-group-powershell)
+
+Use [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup) to list all network security groups in your subscription.
-#### Commands
+```azurepowershell-interactive
+Get-AzNetworkSecurityGroup | format-table Name, Location, ResourceGroupName, ProvisioningState, ResourceGuid
+```
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network nsg list](/cli/azure/network/nsg#az-network-nsg-list) |
-| PowerShell | [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup) |
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network nsg list](/cli/azure/network/nsg#az-network-nsg-list) to list all network security groups in your subscription.
+
+```azurecli-interactive
+az network nsg list --out table
+```
+
+
### View details of a network security group
-1. Go to the [Azure portal](https://portal.azure.com) to view your network security groups. Search for and select **Network security groups**.
+# [**Portal**](#tab/network-security-group-portal)
+
+1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
2. Select the name of your network security group.
-In the menu bar of the network security group, under **Settings**, you can view the **Inbound security rules**, **Outbound security rules**, **Network interfaces**, and **Subnets** that the network security group is associated to.
+Under **Settings**, you can view the **Inbound security rules**, **Outbound security rules**, **Network interfaces**, and **Subnets** that the network security group is associated to.
-Under **Monitoring**, you can enable or disable **Diagnostic settings**. Under **Support + troubleshooting**, you can view **Effective security rules**. To learn more, see [Diagnostic logging for a network security group](virtual-network-nsg-manage-log.md) and [Diagnose a VM network traffic filter problem](diagnose-network-traffic-filter-problem.md).
+Under **Monitoring**, you can enable or disable **Diagnostic settings**. For more information, see [Resource logging for a network security group](virtual-network-nsg-manage-log.md).
+
+Under **Help**, you can view **Effective security rules**. For more information, see [Diagnose a virtual machine network traffic filter problem](diagnose-network-traffic-filter-problem.md).
+
To learn more about the common Azure settings listed, see the following articles:

- [Activity log](../azure-monitor/essentials/platform-logs-overview.md)
- [Access control (IAM)](../role-based-access-control/overview.md)
-- [Tags](../azure-resource-manager/management/tag-resources.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
-- [Locks](../azure-resource-manager/management/lock-resources.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+- [Tags](../azure-resource-manager/management/tag-resources.md)
+- [Locks](../azure-resource-manager/management/lock-resources.md)
- [Automation script](../azure-resource-manager/templates/export-template-portal.md)
-#### Commands
+# [**PowerShell**](#tab/network-security-group-powershell)
+
+Use [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup) to view details of a network security group.
+
+```azurepowershell-interactive
+Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
+```
+
+To learn more about the common Azure settings listed, see the following articles:
+
+- [Activity log](../azure-monitor/essentials/platform-logs-overview.md)
+- [Access control (IAM)](../role-based-access-control/overview.md)
+- [Tags](../azure-resource-manager/management/tag-resources.md)
+- [Locks](../azure-resource-manager/management/lock-resources.md)
+
+# [**Azure CLI**](#tab/network-security-group-cli)
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network nsg show](/cli/azure/network/nsg#az-network-nsg-show) |
-| PowerShell | [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup) |
+Use [az network nsg show](/cli/azure/network/nsg#az-network-nsg-show) to view details of a network security group.
+
+```azurecli-interactive
+az network nsg show --resource-group myResourceGroup --name myNSG
+```
+
+To learn more about the common Azure settings listed, see the following articles:
+
+- [Activity log](../azure-monitor/essentials/platform-logs-overview.md)
+- [Access control (IAM)](../role-based-access-control/overview.md)
+- [Tags](../azure-resource-manager/management/tag-resources.md)
+- [Locks](../azure-resource-manager/management/lock-resources.md)
+
+
### Change a network security group
-1. Go to the [Azure portal](https://portal.azure.com) to view your network security groups. Search for and select **Network security groups**.
+The most common changes to a network security group are:
+- [Associate or dissociate a network security group to or from a network interface](#associate-or-dissociate-a-network-security-group-to-or-from-a-network-interface)
+- [Associate or dissociate a network security group to or from a subnet](#associate-or-dissociate-a-network-security-group-to-or-from-a-subnet)
+- [Create a security rule](#create-a-security-rule)
+- [Delete a security rule](#delete-a-security-rule)
+
+### Associate or dissociate a network security group to or from a network interface
-2. Select the name of the network security group you want to change.
+To associate a network security group to, or dissociate a network security group from a network interface, see [Associate a network security group to, or dissociate a network security group from a network interface](virtual-network-network-interface.md#associate-or-dissociate-a-network-security-group).
-The most common changes are to [add a security rule](#create-a-security-rule), [remove a rule](#delete-a-security-rule), and [associate or dissociate a network security group to or from a subnet or network interface](#associate-or-dissociate-a-network-security-group-to-or-from-a-subnet-or-network-interface).
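If you're scripting the association instead of using the linked article's portal steps, `az network nic update` accepts a `--network-security-group` parameter. The NIC name below is a placeholder.

```azurecli-interactive
# Associate the network security group with a network interface.
az network nic update --resource-group myResourceGroup --name myNIC --network-security-group myNSG

# Dissociate it again by clearing the networkSecurityGroup property
# with the generic --remove argument.
az network nic update --resource-group myResourceGroup --name myNIC --remove networkSecurityGroup
```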
+### Associate or dissociate a network security group to or from a subnet
-#### Commands
+# [**Portal**](#tab/network-security-group-portal)
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network nsg update](/cli/azure/network/nsg#az-network-nsg-update) |
-| PowerShell | [Set-AzNetworkSecurityGroup](/powershell/module/az.network/set-aznetworksecuritygroup) |
+1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
-### Associate or dissociate a network security group to or from a subnet or network interface
+2. Select the name of your network security group, then select **Subnets**.
-To associate a network security group to, or dissociate a network security group from a network interface, see [Associate a network security group to, or dissociate a network security group from a network interface](virtual-network-network-interface.md#associate-or-dissociate-a-network-security-group). To associate a network security group to, or dissociate a network security group from a subnet, see [Change subnet settings](virtual-network-manage-subnet.md#change-subnet-settings).
+To associate a network security group to the subnet, select **+ Associate**, then select your virtual network and the subnet that you want to associate the network security group to. Select **OK**.
++
+To dissociate a network security group from the subnet, select the three dots next to the subnet that you want to dissociate the network security group from, and then select **Dissociate**. Select **Yes**.
++
+# [**PowerShell**](#tab/network-security-group-powershell)
+
+Use [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) to associate or dissociate a network security group to or from a subnet.
+
+```azurepowershell-interactive
+## Place the virtual network configuration into a variable. ##
+$virtualNetwork = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+## Place the network security group configuration into a variable. ##
+$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
+## Update the subnet configuration. ##
+Set-AzVirtualNetworkSubnetConfig -Name mySubnet -VirtualNetwork $virtualNetwork -AddressPrefix 10.0.0.0/24 -NetworkSecurityGroup $networkSecurityGroup
+## Update the virtual network. ##
+Set-AzVirtualNetwork -VirtualNetwork $virtualNetwork
+```
+
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to associate or dissociate a network security group to or from a subnet.
+
+```azurecli-interactive
+az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet --name mySubnet --network-security-group myNSG
+```
+
+
### Delete a network security group

If a network security group is associated to any subnets or network interfaces, it can't be deleted. Dissociate a network security group from all subnets and network interfaces before attempting to delete it.
-1. Go to the [Azure portal](https://portal.azure.com) to view your network security groups. Search for and select **Network security groups**.
+# [**Portal**](#tab/network-security-group-portal)
+
+1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
+
+2. Select the network security group you want to delete.
+
+3. Select **Delete**, then select **Yes** in the confirmation dialog box.
+
+ :::image type="content" source="./media/manage-network-security-group/delete-network-security-group.png" alt-text="Screenshot of delete a network security group in Azure portal.":::
-2. Select the name of the network security group you want to delete.
+# [**PowerShell**](#tab/network-security-group-powershell)
-3. In the network security group's toolbar, select **Delete**. Then select **Yes** in the confirmation dialog box.
+Use [Remove-AzNetworkSecurityGroup](/powershell/module/az.network/remove-aznetworksecuritygroup) to delete a network security group.
-#### Commands
+```azurepowershell-interactive
+Remove-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
+```
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network nsg delete](/cli/azure/network/nsg#az-network-nsg-delete) |
-| PowerShell | [Remove-AzNetworkSecurityGroup](/powershell/module/az.network/remove-aznetworksecuritygroup) |
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network nsg delete](/cli/azure/network/nsg#az-network-nsg-delete) to delete a network security group.
+
+```azurecli-interactive
+az network nsg delete --resource-group myResourceGroup --name myNSG
+```
+
+
## Work with security rules
-A network security group contains zero or more security rules. You can create, [view all](#view-all-security-rules), [view details of](#view-details-of-a-security-rule), [change](#change-a-security-rule), and [delete](#delete-a-security-rule) a security rule.
+A network security group contains zero or more security rules. You can [create](#create-a-security-rule), [view all](#view-all-security-rules), [view details of](#view-details-of-a-security-rule), [change](#change-a-security-rule), and [delete](#delete-a-security-rule) a security rule.
### Create a security rule

There's a limit to how many rules per network security group you can create for each Azure location and subscription. To learn more, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).
-1. Go to the [Azure portal](https://portal.azure.com) to view your network security groups. Search for and select **Network security groups**.
+# [**Portal**](#tab/network-security-group-portal)
+
+1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
2. Select the name of the network security group you want to add a security rule to.
-3. In the network security group's menu bar, choose **Inbound security rules** or **Outbound security rules**.
+3. Select **Inbound security rules** or **Outbound security rules**.
Several existing rules are listed, including some you may not have added. When you create a network security group, several default security rules are created in it. To learn more, see [default security rules](./network-security-groups-overview.md#default-security-rules). You can't delete default security rules, but you can override them with rules that have a higher priority.
-4. <a name="security-rule-settings"></a>Select **Add**. Select or add values for the following settings, and then select **OK**:
+4. <a name="security-rule-settings"></a>Select **+ Add**. Select or add values for the following settings, and then select **Add**:
| Setting | Value | Details |
| - | -- | - |
- | **Source** | One of:<ul><li>**Any**</li><li>**IP Addresses**</li><li>**Service Tag** (inbound security rule) or **VirtualNetwork** (outbound security rule)</li><li>**Application&nbsp;security&nbsp;group**</li></ul> | <p>If you choose **IP Addresses**, you must also specify **Source IP addresses/CIDR ranges**.</p><p>If you choose **Service Tag**, you may also pick a **Source service tag**.</p><p>If you choose **Application security group**, you must also pick an existing application security group. If you choose **Application security group** for both **Source** and **Destination**, the network interfaces within both application security groups must be in the same virtual network.</p> |
- | **Source IP addresses/CIDR ranges** | A comma-delimited list of IP addresses and Classless Interdomain Routing (CIDR) ranges | <p>This setting appears if you change **Source** to **IP Addresses**. You must specify a single value or comma-separated list of multiple values. An example of multiple values is `10.0.0.0/16, 192.188.1.1`. There are limits to the number of values you can specify. For more details, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).</p><p>If the IP address you specify is assigned to an Azure VM, specify its private IP address, not its public IP address. Azure processes security rules after it translates the public IP address to a private IP address for inbound security rules, but before it translates a private IP address to a public IP address for outbound rules. To learn more about public and private IP addresses in Azure, see [IP address types](./ip-services/public-ip-addresses.md).</p> |
- | **Source service tag** | A service tag from the dropdown list | This optional setting appears if you set **Source** to **Service Tag** for an inbound security rule. A service tag is a predefined identifier for a category of IP addresses. To learn more about available service tags, and what each tag represents, see [Service tags](./network-security-groups-overview.md#service-tags). |
+ | **Source** | One of:<ul><li>**Any**</li><li>**IP Addresses**</li><li>**My IP address**</li><li>**Service Tag**</li><li>**Application security group**</li></ul> | <p>If you choose **IP Addresses**, you must also specify **Source IP addresses/CIDR ranges**.</p><p>If you choose **Service Tag**, you must also pick a **Source service tag**.</p><p>If you choose **Application security group**, you must also pick an existing application security group. If you choose **Application security group** for both **Source** and **Destination**, the network interfaces within both application security groups must be in the same virtual network. Learn how to [create an application security group](#create-an-application-security-group).</p> |
+ | **Source IP addresses/CIDR ranges** | A comma-delimited list of IP addresses and Classless Interdomain Routing (CIDR) ranges | <p>This setting appears if you set **Source** to **IP Addresses**. You must specify a single value or comma-separated list of multiple values. An example of multiple values is `10.0.0.0/16, 192.188.1.1`. There are limits to the number of values you can specify. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).</p><p>If the IP address you specify is assigned to an Azure VM, specify its private IP address, not its public IP address. Azure processes security rules after it translates the public IP address to a private IP address for inbound security rules, but before it translates a private IP address to a public IP address for outbound rules. To learn more about IP addresses in Azure, see [Public IP addresses](./ip-services/public-ip-addresses.md) and [Private IP addresses](./ip-services/private-ip-addresses.md).</p> |
+ | **Source service tag** | A service tag from the dropdown list | This setting appears if you set **Source** to **Service Tag** for a security rule. A service tag is a predefined identifier for a category of IP addresses. To learn more about available service tags, and what each tag represents, see [Service tags](../virtual-network/service-tags-overview.md). |
| **Source application security group** | An existing application security group | This setting appears if you set **Source** to **Application security group**. Select an application security group that exists in the same region as the network interface. Learn how to [create an application security group](#create-an-application-security-group). |
- | **Source port ranges** | One of:<ul><li>A single port, such as `80`</li><li>A range of ports, such as `1024-65535`</li><li>A comma-separated list of single ports and/or port ranges, such as `80, 1024-65535`</li><li>An asterisk (`*`) to allow traffic on any port</li></ul> | This setting specifies the ports on which the rule allows or denies traffic. There are limits to the number of ports you can specify. For more details, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits). |
- | **Destination** | One of:<ul><li>**Any**</li><li>**IP Addresses**</li><li>**Service Tag** (outbound security rule) or **VirtualNetwork** (inbound security rule)</li><li>**Application&nbsp;security&nbsp;group**</li></ul> | <p>If you choose **IP addresses**, then also specify **Destination IP addresses/CIDR ranges**.</p><p>If you choose **VirtualNetwork**, traffic is allowed to all IP addresses within the virtual network's address space. **VirtualNetwork** is a service tag.</p><p>If you select **Application security group**, you must then select an existing application security group. Learn how to [create an application security group](#create-an-application-security-group).</p> |
- | **Destination IP addresses/CIDR ranges** | A comma-delimited list of IP addresses and CIDR ranges | <p>This setting appears if you change **Destination** to **IP Addresses**. Similar to **Source** and **Source IP addresses/CIDR ranges**, you can specify single or multiple addresses or ranges. There are limits to the number you can specify. For more details, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).</p><p>If the IP address you specify is assigned to an Azure VM, ensure that you specify its private IP, not its public IP address. Azure processes security rules after it translates the public IP address to a private IP address for inbound security rules, but before Azure translates a private IP address to a public IP address for outbound rules. To learn more about public and private IP addresses in Azure, see [IP address types](./ip-services/public-ip-addresses.md).</p> |
- | **Destination service tag** | A service tag from the dropdown list | This optional setting appears if you change **Destination** to **Service Tag** for an outbound security rule. A service tag is a predefined identifier for a category of IP addresses. To learn more about available service tags, and what each tag represents, see [Service tags](./network-security-groups-overview.md#service-tags). |
+ | **Source port ranges** | One of:<ul><li>A single port, such as `80`</li><li>A range of ports, such as `1024-65535`</li><li>A comma-separated list of single ports and/or port ranges, such as `80, 1024-65535`</li><li>An asterisk (`*`) to allow traffic on any port</li></ul> | This setting specifies the ports on which the rule allows or denies traffic. There are limits to the number of ports you can specify. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits). |
+ | **Destination** | One of:<ul><li>**Any**</li><li>**IP Addresses**</li><li>**Service Tag**</li><li>**Application security group**</li></ul> | <p>If you choose **IP addresses**, you must also specify **Destination IP addresses/CIDR ranges**.</p><p>If you choose **Service Tag**, you must also pick a **Destination service tag**.</p><p>If you choose **Application security group**, you must also select an existing application security group. If you choose **Application security group** for both **Source** and **Destination**, the network interfaces within both application security groups must be in the same virtual network. Learn how to [create an application security group](#create-an-application-security-group).</p> |
+ | **Destination IP addresses/CIDR ranges** | A comma-delimited list of IP addresses and CIDR ranges | <p>This setting appears if you change **Destination** to **IP Addresses**. Similar to **Source** and **Source IP addresses/CIDR ranges**, you can specify single or multiple addresses or ranges. There are limits to the number you can specify. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).</p><p>If the IP address you specify is assigned to an Azure VM, ensure that you specify its private IP, not its public IP address. Azure processes security rules after it translates the public IP address to a private IP address for inbound security rules, but before Azure translates a private IP address to a public IP address for outbound rules. To learn more about IP addresses in Azure, see [Public IP addresses](./ip-services/public-ip-addresses.md) and [Private IP addresses](./ip-services/private-ip-addresses.md).</p> |
+ | **Destination service tag** | A service tag from the dropdown list | This setting appears if you set **Destination** to **Service Tag** for a security rule. A service tag is a predefined identifier for a category of IP addresses. To learn more about available service tags, and what each tag represents, see [Service tags](../virtual-network/service-tags-overview.md). |
| **Destination application security group** | An existing application security group | This setting appears if you set **Destination** to **Application security group**. Select an application security group that exists in the same region as the network interface. Learn how to [create an application security group](#create-an-application-security-group). |
- | **Destination port ranges** | One of:<ul><li>A single port, such as `80`</li><li>A range of ports, such as `1024-65535`</li><li>A comma-separated list of single ports and/or port ranges, such as `80, 1024-65535`</li><li>An asterisk (`*`) to allow traffic on any port</li></ul> | As with **Source port ranges**, you can specify single or multiple ports and ranges. There are limits to the number you can specify. For more details, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits). |
- | **Protocol** | **Any**, **TCP**, **UDP**, or **ICMP** | You may restrict the rule to the Transmission Control Protocol (TCP), User Datagram Protocol (UDP), or Internet Control Message Protocol (ICMP). The default is for the rule to apply to all protocols. |
+ | **Service** | A destination protocol from the dropdown list | This setting specifies the destination protocol and port range for the security rule. You can choose a predefined service, like **RDP**, or choose **Custom** and provide the port range in **Destination port ranges**. |
+ | **Destination port ranges** | One of:<ul><li>A single port, such as `80`</li><li>A range of ports, such as `1024-65535`</li><li>A comma-separated list of single ports and/or port ranges, such as `80, 1024-65535`</li><li>An asterisk (`*`) to allow traffic on any port</li></ul> | As with **Source port ranges**, you can specify single or multiple ports and ranges. There are limits to the number you can specify. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits). |
+ | **Protocol** | **Any**, **TCP**, **UDP**, or **ICMP** | You may restrict the rule to the Transmission Control Protocol (TCP), User Datagram Protocol (UDP), or Internet Control Message Protocol (ICMP). The default is for the rule to apply to all protocols (Any). |
| **Action** | **Allow** or **Deny** | This setting specifies whether this rule allows or denies access for the supplied source and destination configuration. |
| **Priority** | A value between 100 and 4096 that's unique for all security rules within the network security group | Azure processes security rules in priority order. The lower the number, the higher the priority. We recommend that you leave a gap between priority numbers when you create rules, such as 100, 200, and 300. Leaving gaps makes it easier to add rules in the future, so that you can give them higher or lower priority than existing rules. |
| **Name** | A unique name for the rule within the network security group | The name can be up to 80 characters. It must begin with a letter or number, and it must end with a letter, number, or underscore. The name may contain only letters, numbers, underscores, periods, or hyphens. |
- | **Description** | A text description | You may optionally specify a text description for the security rule. The description cannot be longer than 140 characters. |
+ | **Description** | A text description | You may optionally specify a text description for the security rule. The description can't be longer than 140 characters. |
+
+ :::image type="content" source="./media/manage-network-security-group/add-security-rule.png" alt-text="Screenshot of add a security rule to a network security group in Azure portal.":::
+
+# [**PowerShell**](#tab/network-security-group-powershell)
+
+Use [Add-AzNetworkSecurityRuleConfig](/powershell/module/az.network/add-aznetworksecurityruleconfig) to create a network security group rule.
+
+```azurepowershell-interactive
+## Place the network security group configuration into a variable. ##
+$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
+## Create the security rule. ##
+Add-AzNetworkSecurityRuleConfig -Name RDP-rule -NetworkSecurityGroup $networkSecurityGroup `
+-Description "Allow RDP" -Access Allow -Protocol Tcp -Direction Inbound -Priority 300 `
+-SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389
+## Updates the network security group. ##
+Set-AzNetworkSecurityGroup -NetworkSecurityGroup $networkSecurityGroup
+```
-#### Commands
+# [**Azure CLI**](#tab/network-security-group-cli)
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) |
-| PowerShell | [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) |
+Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to create a network security group rule.
+
+```azurecli-interactive
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG --name RDP-rule --priority 300 \
+ --destination-address-prefixes '*' --destination-port-ranges 3389 --protocol Tcp --description "Allow RDP"
+```
+
+
### View all security rules
-A network security group contains zero or more rules. To learn more about the information listed when viewing rules, see [Network security group overview](./network-security-groups-overview.md).
+A network security group contains zero or more rules. To learn more about the information listed when viewing rules, see [Security rules](./network-security-groups-overview.md#security-rules).
-1. Go to the [Azure portal](https://portal.azure.com) to view the rules of a network security group. Search for and select **Network security groups**.
+# [**Portal**](#tab/network-security-group-portal)
+
+1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
2. Select the name of the network security group that you want to view the rules for.
-3. In the network security group's menu bar, choose **Inbound security rules** or **Outbound security rules**.
+3. Select **Inbound security rules** or **Outbound security rules**.
+
+ The list contains any rules you've created and the [default security rules](./network-security-groups-overview.md#default-security-rules) of your network security group.
-The list contains any rules you've created and the network security group's [default security rules](./network-security-groups-overview.md#default-security-rules).
+ :::image type="content" source="./media/manage-network-security-group/view-security-rules.png" alt-text="Screenshot of inbound security rules of a network security group in Azure portal.":::
-#### Commands
+# [**PowerShell**](#tab/network-security-group-powershell)
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network nsg rule list](/cli/azure/network/nsg/rule#az-network-nsg-rule-list) |
-| PowerShell | [Get-AzNetworkSecurityRuleConfig](/powershell/module/az.network/get-aznetworksecurityruleconfig) |
+Use [Get-AzNetworkSecurityRuleConfig](/powershell/module/az.network/get-aznetworksecurityruleconfig) to view security rules of a network security group.
+
+```azurepowershell-interactive
+## Place the network security group configuration into a variable. ##
+$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
+## List security rules of the network security group in a table. ##
+Get-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $networkSecurityGroup | format-table Name, Protocol, Access, Priority, Direction, SourcePortRange, DestinationPortRange, SourceAddressPrefix, DestinationAddressPrefix
+```
+
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network nsg rule list](/cli/azure/network/nsg/rule#az-network-nsg-rule-list) to view security rules of a network security group.
+
+```azurecli-interactive
+az network nsg rule list --resource-group myResourceGroup --nsg-name myNSG
+```
+
+
### View details of a security rule
-1. Go to the [Azure portal](https://portal.azure.com) to view the rules of a network security group. Search for and select **Network security groups**.
+# [**Portal**](#tab/network-security-group-portal)
+
+1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
-2. Select the name of the network security group that you want to view the details of a rule for.
+2. Select the name of the network security group that you want to view the rules for.
-3. In the network security group's menu bar, choose **Inbound security rules** or **Outbound security rules**.
+3. Select **Inbound security rules** or **Outbound security rules**.
4. Select the rule you want to view details for. For an explanation of all settings, see [Security rule settings](#security-rule-settings).

> [!NOTE]
> This procedure only applies to a custom security rule. It doesn't work if you choose a default security rule.
-#### Commands
+ :::image type="content" source="./media/manage-network-security-group/view-security-rule-details.png" alt-text="Screenshot of details of an inbound security rule of a network security group in Azure portal.":::
+
+# [**PowerShell**](#tab/network-security-group-powershell)
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network nsg rule show](/cli/azure/network/nsg/rule#az-network-nsg-rule-show) |
-| PowerShell | [Get-AzNetworkSecurityRuleConfig](/powershell/module/az.network/get-aznetworksecurityruleconfig) |
+Use [Get-AzNetworkSecurityRuleConfig](/powershell/module/az.network/get-aznetworksecurityruleconfig) to view details of a security rule.
+
+```azurepowershell-interactive
+## Place the network security group configuration into a variable. ##
+$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
+## View details of the security rule. ##
+Get-AzNetworkSecurityRuleConfig -Name RDP-rule -NetworkSecurityGroup $networkSecurityGroup
+```
+
+> [!NOTE]
+> This procedure only applies to a custom security rule. It doesn't work if you choose a default security rule.
+
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network nsg rule show](/cli/azure/network/nsg/rule#az-network-nsg-rule-show) to view details of a security rule.
+
+```azurecli-interactive
+az network nsg rule show --resource-group myResourceGroup --nsg-name myNSG --name RDP-rule
+```
+
+> [!NOTE]
+> This procedure only applies to a custom security rule. It doesn't work if you choose a default security rule.
+
+
### Change a security rule
-1. Complete the steps in [View details of a security rule](#view-details-of-a-security-rule).
+# [**Portal**](#tab/network-security-group-portal)
+
+1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
+
+2. Select the name of the network security group that you want to view the rules for.
+
+3. Select **Inbound security rules** or **Outbound security rules**.
+
+4. Select the rule you want to change.
-2. Change the settings as needed, and then select **Save**. For an explanation of all settings, see [Security rule settings](#security-rule-settings).
+5. Change the settings as needed, and then select **Save**. For an explanation of all settings, see [Security rule settings](#security-rule-settings).
+
+ :::image type="content" source="./media/manage-network-security-group/change-security-rule.png" alt-text="Screenshot of change of an inbound security rule details of a network security group in Azure portal.":::
> [!NOTE]
> This procedure only applies to a custom security rule. You aren't allowed to change a default security rule.
-#### Commands
+# [**PowerShell**](#tab/network-security-group-powershell)
+
+Use [Set-AzNetworkSecurityRuleConfig](/powershell/module/az.network/set-aznetworksecurityruleconfig) to update a network security group rule.
+
+```azurepowershell-interactive
+## Place the network security group configuration into a variable. ##
+$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
+## Make changes to the security rule. ##
+Set-AzNetworkSecurityRuleConfig -Name RDP-rule -NetworkSecurityGroup $networkSecurityGroup `
+-Description "Allow RDP" -Access Allow -Protocol Tcp -Direction Inbound -Priority 200 `
+-SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389
+## Updates the network security group. ##
+Set-AzNetworkSecurityGroup -NetworkSecurityGroup $networkSecurityGroup
+```
+
+> [!NOTE]
+> This procedure only applies to a custom security rule. You aren't allowed to change a default security rule.
+
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network nsg rule update](/cli/azure/network/nsg/rule#az-network-nsg-rule-update) to update a network security group rule.
+
+```azurecli-interactive
+az network nsg rule update --resource-group myResourceGroup --nsg-name myNSG --name RDP-rule --priority 200
+```
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network nsg rule update](/cli/azure/network/nsg/rule#az-network-nsg-rule-update) |
-| PowerShell | [Set-AzNetworkSecurityRuleConfig](/powershell/module/az.network/set-aznetworksecurityruleconfig) |
+> [!NOTE]
+> This procedure only applies to a custom security rule. You aren't allowed to change a default security rule.
+
+
### Delete a security rule
-1. Complete the steps in [View details of a security rule](#view-details-of-a-security-rule).
+# [**Portal**](#tab/network-security-group-portal)
+
+1. In the search box at the top of the portal, enter *Network security group* and select **Network security groups** in the search results.
+
+2. Select the name of the network security group that you want to view the rules for.
+
+3. Select **Inbound security rules** or **Outbound security rules**.
+
+4. Select the rules you want to delete.
+
+5. Select **Delete**, and then select **Yes**.
-2. Select **Delete**, and then select **Yes**.
+ :::image type="content" source="./media/manage-network-security-group/delete-security-rule.png" alt-text="Screenshot of delete of an inbound security rule of a network security group in Azure portal.":::
> [!NOTE]
> This procedure only applies to a custom security rule. You aren't allowed to delete a default security rule.
-#### Commands
+# [**PowerShell**](#tab/network-security-group-powershell)
+
+Use [Remove-AzNetworkSecurityRuleConfig](/powershell/module/az.network/remove-aznetworksecurityruleconfig) to delete a security rule from a network security group.
+
+```azurepowershell-interactive
+## Place the network security group configuration into a variable. ##
+$networkSecurityGroup = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
+## Remove the security rule. ##
+Remove-AzNetworkSecurityRuleConfig -Name RDP-rule -NetworkSecurityGroup $networkSecurityGroup
+## Updates the network security group. ##
+Set-AzNetworkSecurityGroup -NetworkSecurityGroup $networkSecurityGroup
+```
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network nsg rule delete](/cli/azure/network/nsg/rule#az-network-nsg-rule-delete) |
-| PowerShell | [Remove-AzNetworkSecurityRuleConfig](/powershell/module/az.network/remove-aznetworksecurityruleconfig) |
+> [!NOTE]
+> This procedure only applies to a custom security rule. You aren't allowed to delete a default security rule.
+
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network nsg rule delete](/cli/azure/network/nsg/rule#az-network-nsg-rule-delete) to delete a security rule from a network security group.
+
+```azurecli-interactive
+az network nsg rule delete --resource-group myResourceGroup --nsg-name myNSG --name RDP-rule
+```
+
+> [!NOTE]
+> This procedure only applies to a custom security rule. You aren't allowed to delete a default security rule.
+
+
## Work with application security groups
An application security group contains zero or more network interfaces. To learn
### Create an application security group
-1. On the [Azure portal](https://portal.azure.com) menu or from the **Home** page, select **Create a resource**.
+# [**Portal**](#tab/network-security-group-portal)
-2. In the search box, enter *Application security group*.
+1. In the search box at the top of the portal, enter *Application security group*. Select **Application security groups** in the search results.
-3. In the **Application security group** page, select **Create**.
+2. Select **+ Create**.
-4. In the **Create an application security group** page, under the **Basics** tab, set values for the following settings:
+3. In the **Create an application security group** page, under the **Basics** tab, enter or select the following values:
| Setting | Action |
| --- | --- |
- | **Subscription** | Choose your subscription. |
- | **Resource group** | Choose an existing resource group, or select **Create new** to create a new resource group. |
- | **Name** | Enter a unique text string within a resource group. |
- | **Region** | Choose the location you want. |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select an existing resource group, or create a new one by selecting **Create new**. This example uses **myResourceGroup** resource group. |
+ | **Instance details** | |
+ | Name | Enter a name for the application security group you're creating. |
+ | Region | Select the region you want to create the application security group in. |
+
+ :::image type="content" source="./media/manage-network-security-group/create-network-security-group.png" alt-text="Screenshot of create an application security group in Azure portal.":::
5. Select **Review + create**.
-6. Under the **Review + create** tab, after you see the **Validation passed** message, select **Create**.
+6. After you see the **Validation passed** message, select **Create**.
+
+# [**PowerShell**](#tab/network-security-group-powershell)
+
+Use [New-AzApplicationSecurityGroup](/powershell/module/az.network/new-azapplicationsecuritygroup) to create an application security group.
-#### Commands
+```azurepowershell-interactive
+New-AzApplicationSecurityGroup -ResourceGroupName myResourceGroup -Name myASG -Location eastus
+```
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network asg create](/cli/azure/network/asg#az-network-asg-create) |
-| PowerShell | [New-AzApplicationSecurityGroup](/powershell/module/az.network/new-azapplicationsecuritygroup) |
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network asg create](/cli/azure/network/asg#az-network-asg-create) to create an application security group.
+
+```azurecli-interactive
+az network asg create --resource-group myResourceGroup --name myASG --location eastus
+```
++ ### View all application security groups
-Go to the [Azure portal](https://portal.azure.com) to view your application security groups. Search for and select **Application security groups**. The Azure portal displays a list of your application security groups.
+# [**Portal**](#tab/network-security-group-portal)
+
+In the search box at the top of the portal, enter *Application security group*. Select **Application security groups** in the search results. The Azure portal displays a list of your application security groups.
++
+# [**PowerShell**](#tab/network-security-group-powershell)
+
+Use [Get-AzApplicationSecurityGroup](/powershell/module/az.network/get-azapplicationsecuritygroup) to list all application security groups in your Azure subscription.
+
+```azurepowershell-interactive
+Get-AzApplicationSecurityGroup | Format-Table Name, ResourceGroupName, Location
+```
+
+# [**Azure CLI**](#tab/network-security-group-cli)
-#### Commands
+Use [az network asg list](/cli/azure/network/asg#az-network-asg-list) to list all application security groups in a resource group.
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network asg list](/cli/azure/network/asg#az-network-asg-list) |
-| PowerShell | [Get-AzApplicationSecurityGroup](/powershell/module/az.network/get-azapplicationsecuritygroup) |
+```azurecli-interactive
+az network asg list --resource-group myResourceGroup --out table
+```
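If you omit `--resource-group`, [az network asg list](/cli/azure/network/asg#az-network-asg-list) returns every application security group in the subscription, matching the PowerShell example:

```azurecli-interactive
az network asg list --out table
```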
++ ### View details of a specific application security group
-1. Go to the [Azure portal](https://portal.azure.com) to view an application security group. Search for and select **Application security groups**.
+# [**Portal**](#tab/network-security-group-portal)
+
+1. In the search box at the top of the portal, enter *Application security group*. Select **Application security groups** in the search results.
+
+2. Select the application security group that you want to view the details of.
+
+# [**PowerShell**](#tab/network-security-group-powershell)
-2. Select the name of the application security group that you want to view the details of.
+Use [Get-AzApplicationSecurityGroup](/powershell/module/az.network/get-azapplicationsecuritygroup) to view the details of an application security group.
-#### Commands
+```azurepowershell-interactive
+Get-AzApplicationSecurityGroup -ResourceGroupName myResourceGroup -Name myASG
+```
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network asg show](/cli/azure/network/asg#az-network-asg-show) |
-| PowerShell | [Get-AzApplicationSecurityGroup](/powershell/module/az.network/get-azapplicationsecuritygroup) |
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network asg show](/cli/azure/network/asg#az-network-asg-show) to view details of an application security group.
+
+```azurecli-interactive
+az network asg show --resource-group myResourceGroup --name myASG
+```
++ ### Change an application security group
-1. Go to the [Azure portal](https://portal.azure.com) to view an application security group. Search for and select **Application security groups**.
+# [**Portal**](#tab/network-security-group-portal)
-2. Select the name of the application security group that you want to change.
+1. In the search box at the top of the portal, enter *Application security group*. Select **Application security groups** in the search results.
-3. Select **change** next to the setting that you want to modify. For example, you can add or remove **Tags**, or you can change the **Resource group** or **Subscription**.
+2. Select the application security group that you want to change.
- > [!NOTE]
- > You can't change the location.
+Select **move** next to **Resource group** or **Subscription** to change the resource group or subscription, respectively.
+
+Select **Edit** next to **Tags** to add or remove tags. To learn more, see [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md).
++
+> [!NOTE]
+> You can't change the location of an application security group.
+
+Select **Access control (IAM)** to assign or remove permissions to the application security group.
- In the menu bar, you can also select **Access control (IAM)**. In the **Access control (IAM)** page, you can assign or remove permissions to the application security group.
+# [**PowerShell**](#tab/network-security-group-powershell)
-#### Commands
+> [!NOTE]
+> You can't change an application security group using PowerShell.
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network asg update](/cli/azure/network/asg#az-network-asg-update) |
-| PowerShell | No PowerShell cmdlet |
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network asg update](/cli/azure/network/asg#az-network-asg-update) to update the tags for an application security group.
+
+```azurecli-interactive
+az network asg update --resource-group myResourceGroup --name myASG --tags Dept=Finance
+```
+
+> [!NOTE]
+> You can't change the resource group, subscription, or location of an application security group using the Azure CLI.
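You can, however, script permission changes from the CLI. The following is a minimal sketch that uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create); the user principal is a hypothetical placeholder, and the built-in **Network Contributor** role is one reasonable choice:

```azurecli-interactive
# Get the resource ID of the application security group.
asgId=$(az network asg show --resource-group myResourceGroup --name myASG --query id --out tsv)

# Assign the role, scoped to the application security group.
# Replace user@contoso.com with a real user, group, or service principal.
az role assignment create --assignee user@contoso.com --role "Network Contributor" --scope $asgId
```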
++ ### Delete an application security group You can't delete an application security group if it contains any network interfaces. To remove all network interfaces from the application security group, either change the network interface settings or delete the network interfaces. To learn more, see [Add or remove from application security groups](virtual-network-network-interface.md#add-or-remove-from-application-security-groups) or [Delete a network interface](virtual-network-network-interface.md#delete-a-network-interface).
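Before you delete a group, you can check whether any network interfaces still reference it. The following is a sketch that uses a JMESPath query and assumes the network interfaces are in **myResourceGroup**:

```azurecli-interactive
# List network interfaces with an IP configuration that references the myASG application security group.
az network nic list --resource-group myResourceGroup \
    --query "[?ipConfigurations[?applicationSecurityGroups[?ends_with(id, 'myASG')]]].name" \
    --out table
```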
-1. Go to the [Azure portal](https://portal.azure.com) to manage your application security groups. Search for and select **Application security groups**.
+# [**Portal**](#tab/network-security-group-portal)
-2. Select the name of the application security group that you want to delete.
+1. In the search box at the top of the portal, enter *Application security group*. Select **Application security groups** in the search results.
+
+2. Select the application security group you want to delete.
3. Select **Delete**, and then select **Yes** to delete the application security group.
-#### Commands
+ :::image type="content" source="./media/manage-network-security-group/delete-application-security-group.png" alt-text="Screenshot of delete application security group in Azure portal.":::
++
+# [**PowerShell**](#tab/network-security-group-powershell)
+
+Use [Remove-AzApplicationSecurityGroup](/powershell/module/az.network/remove-azapplicationsecuritygroup) to delete an application security group.
+
+```azurepowershell-interactive
+Remove-AzApplicationSecurityGroup -ResourceGroupName myResourceGroup -Name myASG
+```
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network asg delete](/cli/azure/network/asg#az-network-asg-delete) |
-| PowerShell | [Remove-AzApplicationSecurityGroup](/powershell/module/az.network/remove-azapplicationsecuritygroup) |
+# [**Azure CLI**](#tab/network-security-group-cli)
+
+Use [az network asg delete](/cli/azure/network/asg#az-network-asg-delete) to delete an application security group.
+
+```azurecli-interactive
+az network asg delete --resource-group myResourceGroup --name myASG
+```
++ ## Permissions
To perform tasks on network security groups, security rules, and application security
## Next steps -- Create a network or application security group using [PowerShell](powershell-samples.md) or [Azure CLI](cli-samples.md) sample scripts, or Azure [Resource Manager templates](template-samples.md)
+- Add or remove [a network interface to or from an application security group](./virtual-network-network-interface.md?tabs=network-interface-portal#add-or-remove-from-application-security-groups).
- Create and assign [Azure Policy definitions](./policy-reference.md) for virtual networks
virtual-network Network Security Groups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-groups-overview.md
na Previously updated : 09/08/2020 Last updated : 11/10/2022 -+ # Network security groups
You can use an Azure network security group to filter network traffic between Azure resources in an Azure virtual network. A network security group contains [security rules](#security-rules) that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
-This article describes properties of a network security group rule, the [default security rules](#default-security-rules) that are applied, and the rule properties that you can modify to create an [augmented security rule](#augmented-security-rules).
+This article describes the properties of a network security group rule, the [default security rules](#default-security-rules) that are applied, and the rule properties that you can modify to create an [augmented security rule](#augmented-security-rules).
## <a name="security-rules"></a> Security rules
A network security group contains zero, or as many rules as desired, within Azur
|Name|A unique name within the network security group.| |Priority | A number between 100 and 4096. Rules are processed in priority order, with lower numbers processed before higher numbers, because lower numbers have higher priority. Once traffic matches a rule, processing stops. As a result, any rules that exist with lower priorities (higher numbers) that have the same attributes as rules with higher priorities aren't processed.| |Source or destination| Any, or an individual IP address, classless inter-domain routing (CIDR) block (10.0.0.0/24, for example), service tag, or application security group. If you specify an address for an Azure resource, specify the private IP address assigned to the resource. Network security groups are processed after Azure translates a public IP address to a private IP address for inbound traffic, and before Azure translates a private IP address to a public IP address for outbound traffic. Fewer security rules are needed when you specify a range, a service tag, or application security group. The ability to specify multiple individual IP addresses and ranges (you can't specify multiple service tags or application groups) in a rule is referred to as [augmented security rules](#augmented-security-rules). Augmented security rules can only be created in network security groups created through the Resource Manager deployment model. You can't specify multiple IP addresses and IP address ranges in network security groups created through the classic deployment model.|
-|Protocol | TCP, UDP, ICMP, ESP, AH, or Any. The ESP and AH protocols are not currently available via the Azure Portal but can be used via ARM templates. |
+|Protocol | TCP, UDP, ICMP, ESP, AH, or Any. The ESP and AH protocols aren't currently available via the Azure portal but can be used via ARM templates. |
|Direction| Whether the rule applies to inbound, or outbound traffic.| |Port range |You can specify an individual or range of ports. For example, you could specify 80 or 10000-10005. Specifying ranges enables you to create fewer security rules. Augmented security rules can only be created in network security groups created through the Resource Manager deployment model. You can't specify multiple ports or port ranges in the same security rule in network security groups created through the classic deployment model. | |Action | Allow or deny |
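As an illustration of an augmented security rule, the following sketch creates a single inbound rule that covers two source prefixes and three destination port ranges; the network security group name and rule values are hypothetical:

```azurecli-interactive
# One augmented rule replaces what would otherwise be several separate rules.
az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
    --name allow-web --priority 200 --direction Inbound --access Allow --protocol Tcp \
    --source-address-prefixes 10.0.0.0/24 192.168.1.4 \
    --destination-port-ranges 80 443 10000-10005
```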
Existing connections may not be interrupted when you remove a security rule that
+Modifying network security group rules will only affect the new connections that are formed. When a new rule is created or an existing rule is updated in a network security group, it will only apply to new flows and new connections. Existing workflow connections aren't updated with the new rules.
+ There are limits to the number of security rules you can create in a network security group. For details, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits). ### <a name="default-security-rules"></a> Default security rules
Application security groups enable you to configure network security as a natura
If you created your Azure subscription prior to November 15, 2017, in addition to being able to use SMTP relay services, you can send email directly over TCP port 25. If you created your subscription after November 15, 2017, you may not be able to send email directly over port 25. The behavior of outbound communication over port 25 depends on the type of subscription you have, as follows:
- - **Enterprise Agreement**: For VMs that are deployed in standard Enterprise Agreement subscriptions, the outbound SMTP connections on TCP port 25 will not be blocked. However, there is no guarantee that external domains will accept the incoming emails from the VMs. If your emails are rejected or filtered by the external domains, you should contact the email service providers of the external domains to resolve the problems. These problems are not covered by Azure support.
+ - **Enterprise Agreement**: For VMs that are deployed in standard Enterprise Agreement subscriptions, the outbound SMTP connections on TCP port 25 won't be blocked. However, there's no guarantee that external domains will accept the incoming emails from the VMs. If your emails are rejected or filtered by the external domains, you should contact the email service providers of the external domains to resolve the problems. These problems aren't covered by Azure support.
- For Enterprise Dev/Test subscriptions, port 25 is blocked by default. It is possible to have this block removed. To request to have the block removed, go to the Cannot send email (SMTP-Port 25) section of the Diagnose and Solve blade in the Azure Virtual Network resource in the Azure portal and run the diagnostic. This will exempt the qualified enterprise dev/test subscriptions automatically.
+ For Enterprise Dev/Test subscriptions, port 25 is blocked by default. It's possible to have this block removed. To request to have the block removed, go to the **Can't send email (SMTP-Port 25)** section of the **Diagnose and Solve** settings page for the Azure Virtual Network resource in the Azure portal and run the diagnostic. This will exempt the qualified enterprise dev/test subscriptions automatically.
After the subscription is exempted from this block and the VMs are stopped and restarted, all VMs in that subscription are exempted going forward. The exemption applies only to the subscription requested and only to VM traffic that is routed directly to the internet.
Application security groups enable you to configure network security as a natura
* If you've never created a network security group, you can complete a quick [tutorial](tutorial-filter-network-traffic.md) to get some experience creating one. * If you're familiar with network security groups and need to manage them, see [Manage a network security group](manage-network-security-group.md). * If you're having communication problems and need to troubleshoot network security groups, see [Diagnose a virtual machine network traffic filter problem](diagnose-network-traffic-filter-problem.md).
-* Learn how to enable [network security group flow logs](../network-watcher/network-watcher-nsg-flow-logging-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) to analyze network traffic to and from resources that have an associated network security group.
+* Learn how to enable [network security group flow logs](../network-watcher/network-watcher-nsg-flow-logging-portal.md) to analyze network traffic to and from resources that have an associated network security group.
vpn-gateway Vpn Gateway Peering Gateway Transit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-peering-gateway-transit.md
Previously updated : 04/28/2021 Last updated : 11/09/2022
This article helps you configure gateway transit for virtual network peering. [Virtual network peering](../virtual-network/virtual-network-peering-overview.md) seamlessly connects two Azure virtual networks, merging the two virtual networks into one for connectivity purposes. [Gateway transit](../virtual-network/virtual-network-peering-overview.md#gateways-and-on-premises-connectivity) is a peering property that lets one virtual network use the VPN gateway in the peered virtual network for cross-premises or VNet-to-VNet connectivity. The following diagram shows how gateway transit works with virtual network peering.
-![Gateway transit diagram](./media/vpn-gateway-peering-gateway-transit/gatewaytransit.png)
-In the diagram, gateway transit allows the peered virtual networks to use the Azure VPN gateway in Hub-RM. Connectivity available on the VPN gateway, including S2S, P2S, and VNet-to-VNet connections, applies to all three virtual networks. The transit option is available for peering between the same, or different deployment models. If you are configuring transit between different deployment models, the hub virtual network and virtual network gateway must be in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), not the classic deployment model.
+In the diagram, gateway transit allows the peered virtual networks to use the Azure VPN gateway in Hub-RM. Connectivity available on the VPN gateway, including S2S, P2S, and VNet-to-VNet connections, applies to all three virtual networks. The transit option is available for peering between the same, or different deployment models. If you're configuring transit between different deployment models, the hub virtual network and virtual network gateway must be in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), not the classic deployment model.
> In hub-and-spoke network architecture, gateway transit allows spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network. Routes to the gateway-connected virtual networks or on-premises networks will propagate to the routing tables for the peered virtual networks using gateway transit. You can disable the automatic route propagation from the VPN gateway. Create a routing table with the "**Disable BGP route propagation**" option, and associate the routing table to the subnets to prevent the route distribution to those subnets. For more information, see [Virtual network routing table](../virtual-network/manage-route-table.md).
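For example, a route table that blocks gateway route propagation can be created and associated from the Azure CLI; the route table and subnet names in this sketch are hypothetical:

```azurecli-interactive
# Create a route table that doesn't accept routes propagated from the VPN gateway.
az network route-table create --resource-group myResourceGroup --name mySpokeRouteTable \
    --disable-bgp-route-propagation true

# Associate the route table with a subnet in the spoke virtual network.
az network vnet subnet update --resource-group myResourceGroup --vnet-name Spoke-RM \
    --name mySubnet --route-table mySpokeRouteTable
```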
Before you begin, verify that you have the following virtual networks and permis
### <a name="vnet"></a>Virtual networks
-|VNet|Deployment model| Virtual network gateway|
-|||||
-| Hub-RM| [Resource Manager](./tutorial-site-to-site-portal.md)| [Yes](tutorial-create-gateway-portal.md)|
-| Spoke-RM | [Resource Manager](./tutorial-site-to-site-portal.md)| No |
-| Spoke-Classic | [Classic](vpn-gateway-howto-site-to-site-classic-portal.md#CreatVNet) | No |
+| VNet | Deployment model | Virtual network gateway |
+||--||
+| Hub-RM | [Resource Manager](./tutorial-site-to-site-portal.md) | [Yes](tutorial-create-gateway-portal.md) |
+| Spoke-RM | [Resource Manager](./tutorial-site-to-site-portal.md) | No |
+| Spoke-Classic | [Classic](vpn-gateway-howto-site-to-site-classic-portal.md#CreatVNet) | No |
### <a name="permissions"></a>Permissions
In this scenario, the virtual networks are both in the Resource Manager deployme
* Traffic forwarded from remote virtual network: **Allow** * Virtual network gateway: **Use this virtual network's gateway**
- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-vnet.png" alt-text="Screenshot shows add peering.":::
+ :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-vnet.png" alt-text="Screenshot shows add peering." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-vnet.png":::
1. On the same page, continue on to configure the values for the **Remote virtual network**.
In this scenario, the virtual networks are both in the Resource Manager deployme
* Traffic forwarded from remote virtual network: **Allow** * Virtual network gateway: **Use the remote virtual network's gateway**
- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-remote.png" alt-text="Screenshot shows values for remote virtual network.":::
+ :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-remote.png" alt-text="Screenshot shows values for remote virtual network." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-remote.png":::
1. Select **Add** to create the peering. 1. Verify the peering status as **Connected** on both virtual networks.
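The same pair of peerings can also be scripted with [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create). This sketch assumes both virtual networks are Resource Manager VNets in **myResourceGroup**:

```azurecli-interactive
# Hub side: allow the spoke to use this virtual network's gateway.
az network vnet peering create --resource-group myResourceGroup --name HubToSpoke \
    --vnet-name Hub-RM --remote-vnet Spoke-RM \
    --allow-vnet-access --allow-forwarded-traffic --allow-gateway-transit

# Spoke side: use the remote (hub) virtual network's gateway.
az network vnet peering create --resource-group myResourceGroup --name SpokeToHub \
    --vnet-name Spoke-RM --remote-vnet Hub-RM \
    --allow-vnet-access --allow-forwarded-traffic --use-remote-gateways
```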
If the peering was already created, you can modify the peering for transit.
1. Navigate to the virtual network. Select **Peerings** and select the peering that you want to modify.
- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-modify.png" alt-text="Screenshot shows select peerings.":::
+ :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-modify.png" alt-text="Screenshot shows select peerings." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-modify.png":::
1. Update the VNet peering.
If the peering was already created, you can modify the peering for transit.
* Traffic forwarded to virtual network: **Allow** * Virtual network gateway: **Use remote virtual network's gateway**
- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/modify-peering-settings.png" alt-text="Screenshot shows modify peering gateway.":::
+ :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/modify-peering-settings.png" alt-text="Screenshot shows modify peering gateway." lightbox="./media/vpn-gateway-peering-gateway-transit/modify-peering-settings.png":::
1. **Save** the peering settings.
For this configuration, you only need to configure the **Hub-RM** virtual networ
* Virtual network gateway: **Use this virtual network's gateway** * Remote virtual network: **Classic**
- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-classic.png" alt-text="Add peering page for Spoke-Classic":::
+ :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-classic.png" alt-text="Add peering page for Spoke-Classic" lightbox="./media/vpn-gateway-peering-gateway-transit/peering-classic.png":::
1. Verify the subscription is correct, then select the virtual network from the dropdown. 1. Select **Add** to add the peering. 1. Verify the peering status as **Connected** on the Hub-RM virtual network.
-For this configuration, you do not need to configure anything on the **Spoke-Classic** virtual network. Once the status shows **Connected**, the spoke virtual network can use the connectivity through the VPN gateway in the hub virtual network.
+For this configuration, you don't need to configure anything on the **Spoke-Classic** virtual network. Once the status shows **Connected**, the spoke virtual network can use the connectivity through the VPN gateway in the hub virtual network.
### <a name="ps-different"></a>PowerShell sample
web-application-firewall Afds Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/afds-overview.md
WAF customers can choose to run from one of the actions when a request matches a
- **Block:** The request is blocked and WAF sends a response to the client without forwarding the request to the back-end. - **Log:** Request is logged in the WAF logs and WAF continues evaluating lower priority rules. - **Redirect:** WAF redirects the request to the specified URI. The URI specified is a policy level setting. Once configured, all requests that match the **Redirect** action will be sent to that URI.
+- **Anomaly score:** This is the default action for Default Rule Set (DRS) 2.0 or later, and isn't applicable to the Bot Manager ruleset. The total anomaly score is increased incrementally when a rule with this action is matched.
## WAF rules
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
DRS is enabled by default in Detection mode in your WAF policies. You can disabl
Sometimes you might need to omit certain request attributes from a WAF evaluation. A common example is Active Directory-inserted tokens that are used for authentication. You may configure an exclusion list for a managed rule, rule group, or for the entire rule set. For more information, see [Web Application Firewall (WAF) with Front Door exclusion lists](./waf-front-door-exclusion.md).
-By default, DRS blocks requests that trigger the rules. Additionally, custom rules can be configured in the same WAF policy if you wish to bypass any of the pre-configured rules in the Default Rule Set.
+By default, DRS versions 2.0 and above use anomaly scoring when a request matches a rule. DRS versions earlier than 2.0 block requests that trigger the rules. Additionally, custom rules can be configured in the same WAF policy if you wish to bypass any of the pre-configured rules in the Default Rule Set.
Custom rules are always applied before rules in the Default Rule Set are evaluated. If a request matches a custom rule, the corresponding rule action is applied. The request is either blocked or passed through to the back-end. No other custom rules or the rules in the Default Rule Set are processed. You can also remove the Default Rule Set from your WAF policies.
web-application-firewall Ag Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/ag-overview.md
The Application Gateway WAF can be configured to run in the following two modes:
The Azure web application firewall (WAF) engine is the component that inspects traffic and determines whether a request includes a signature that represents a potential attack. When you use CRS 3.2 or later, your WAF runs the new [WAF engine](waf-engine.md), which gives you higher performance and an improved set of features. When you use earlier versions of the CRS, your WAF runs on an older engine. New features will only be available on the new Azure WAF engine.
+### WAF actions
+
+WAF customers can choose which action runs when a request matches a rule's conditions. The following actions are supported:
+
+* Allow: The request passes through the WAF and is forwarded to the back-end. No lower priority rules can block this request. Allow actions are only applicable to the Bot Manager ruleset, and aren't applicable to the Core Rule Set.
+* Block: The request is blocked and WAF sends a response to the client without forwarding the request to the back-end.
+* Log: Request is logged in the WAF logs and WAF continues evaluating lower priority rules.
+* Anomaly score: This is the default action for the CRS ruleset, where the total anomaly score is incremented when a rule with this action is matched. Anomaly scoring isn't applicable to the Bot Manager ruleset.
+ ### Anomaly Scoring mode OWASP has two modes for deciding whether to block traffic: Traditional mode and Anomaly Scoring mode.
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
# Web Application Firewall CRS rule groups and rules
-Application Gateway web application firewall (WAF) protects web applications from common vulnerabilities and exploits. This is done through rules that are defined based on the OWASP core rule sets 3.2, 3.1, 3.0, or 2.2.9. These rules can be disabled on a rule-by-rule basis. This article contains the current rules and rule sets offered. In the rare occasion that a published ruleset needs to be updated, it will be documented here.
+Application Gateway web application firewall (WAF) protects web applications from common vulnerabilities and exploits. This is done through rules that are defined based on the OWASP core rule sets 3.2, 3.1, 3.0, or 2.2.9. Rules can be disabled on a rule-by-rule basis, or you can set specific actions per individual rule. This article contains the current rules and rule sets offered. On the rare occasion that a published ruleset needs to be updated, it will be documented here.
## Core rule sets
The WAF protects against the following web vulnerabilities:
- Bots, crawlers, and scanners - Common application misconfigurations (for example, Apache and IIS)
+CRS is enabled by default in Detection mode in your WAF policies. You can disable or enable individual rules within the Core Rule Set to meet your application requirements. You can also set specific actions per rule. The CRS supports block, log, and anomaly score actions. The Bot Manager ruleset supports the allow, block, and log actions.
+
+Sometimes you might need to omit certain request attributes from a WAF evaluation. A common example is Active Directory-inserted tokens that are used for authentication. You can configure exclusions to apply when specific WAF rules are evaluated, or to apply globally to the evaluation of all WAF rules. Exclusion rules apply to your whole web application. For more information, see [Web Application Firewall (WAF) with Application Gateway exclusion lists](application-gateway-waf-configuration.md).
+
+By default, CRS versions 3.2 and above use anomaly scoring when a request matches a rule; CRS 3.1 and below block matching requests. Additionally, custom rules can be configured in the same WAF policy if you wish to bypass any of the pre-configured rules in the Core Rule Set.
+
+Custom rules are always applied before rules in the Core Rule Set are evaluated. If a request matches a custom rule, the corresponding rule action is applied. The request is either blocked or passed through to the back-end. No other custom rules or the rules in the Core Rule Set are processed.
+
+### Anomaly scoring
+
+When you use CRS, your WAF is configured to use anomaly scoring by default. Traffic that matches any rule isn't immediately blocked, even when your WAF is in prevention mode. Instead, the OWASP rule sets define a severity for each rule: Critical, Error, Warning, or Notice. The severity affects a numeric value for the request, which is called the anomaly score:
+
+| Rule severity | Value contributed to anomaly score |
+|-|-|
+| Critical | 5 |
+| Error | 4 |
+| Warning | 3 |
+| Notice | 2 |
+
+If the anomaly score is 5 or greater, WAF blocks the request.
+
+For example, a single *Critical* rule match is enough for the WAF to block a request, because the overall anomaly score is 5. However, one *Warning* rule match only increases the anomaly score by 3, which isn't enough by itself to block the traffic, although two *Warning* matches in the same request would add up to 6 and be blocked. For more information, see [Anomaly Scoring mode](ag-overview.md#anomaly-scoring-mode).
### OWASP CRS 3.2 CRS 3.2 includes 14 rule groups, as shown in the following table. Each group contains multiple rules, which can be disabled. The ruleset is based on OWASP CRS version 3.2.0.
web-application-firewall Application Gateway Web Application Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-web-application-firewall-portal.md
Previously updated : 10/28/2022 Last updated : 11/10/2022 #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway with Web Application Firewall so I can protect my applications.
# Tutorial: Create an application gateway with a Web Application Firewall using the Azure portal
-This tutorial shows you how to use the Azure portal to create an Application Gateway with a Web Application Firewall (WAF). The WAF uses [OWASP](https://owasp.org/www-project-modsecurity-core-rule-set/) rules to protect your application. These rules include protection against attacks such as SQL injection, cross-site scripting attacks, and session hijacks. After creating the application gateway, you test it to make sure it's working correctly. With Azure Application Gateway, you direct your application web traffic to specific resources by assigning listeners to ports, creating rules, and adding resources to a backend pool. For the sake of simplicity, this tutorial uses a simple setup with a public front-end IP, a basic listener to host a single site on this application gateway, two virtual machines used for the backend pool, and a basic request routing rule.
+This tutorial shows you how to use the Azure portal to create an Application Gateway with a Web Application Firewall (WAF). The WAF uses [OWASP](https://owasp.org/www-project-modsecurity-core-rule-set/) rules to protect your application. These rules include protection against attacks such as SQL injection, cross-site scripting attacks, and session hijacks. After creating the application gateway, you test it to make sure it's working correctly. With Azure Application Gateway, you direct your application web traffic to specific resources by assigning listeners to ports, creating rules, and adding resources to a backend pool. For the sake of simplicity, this tutorial uses a simple setup with a public front-end IP, a basic listener to host a single site on this application gateway, two Linux virtual machines used for the backend pool, and a basic request routing rule.
In this tutorial, you learn how to:
In this example, you'll use virtual machines as the target backend. You can eith
To do this, you'll:
-1. Create two new VMs, *myVM* and *myVM2*, to be used as backend servers.
-2. Install IIS on the virtual machines to verify that the application gateway was created successfully.
+1. Create two new Linux VMs, *myVM* and *myVM2*, to be used as backend servers.
+2. Install NGINX on the virtual machines to verify that the application gateway was created successfully.
3. Add the backend servers to the backend pool. ### Create a virtual machine
-1. On the Azure portal, select **Create a resource**. The **New** window appears.
-2. Select **Windows Server 2019 Datacenter** in the **Popular** list. The **Create a virtual machine** page appears.<br>Application Gateway can route traffic to any type of virtual machine used in its backend pool. In this example, you use a Windows Server 2019 Datacenter.
+1. On the Azure portal, select **Create a resource**. The **Create a resource** window appears.
+2. Under **Virtual machine**, select **Create**.
3. Enter these values in the **Basics** tab for the following virtual machine settings: - **Resource group**: Select **myResourceGroupAG** for the resource group name. - **Virtual machine name**: Enter *myVM* for the name of the virtual machine.
+ - **Image**: Ubuntu Server 20.04 LTS - Gen2.
+ - **Authentication type**: Password
- **Username**: Enter a name for the administrator username. - **Password**: Enter a password for the administrator password. - **Public inbound ports**: Select **None**.
To do this, you'll:
1. On the **Review + create** tab, review the settings, correct any validation errors, and then select **Create**. 1. Wait for the virtual machine creation to complete before continuing.
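If you'd rather script the VM creation, a rough CLI equivalent follows; the virtual network and subnet names are hypothetical placeholders for the ones you created earlier, and the password is for illustration only:

```azurecli-interactive
az vm create --resource-group myResourceGroupAG --name myVM \
    --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest \
    --vnet-name myVNet --subnet myBackendSubnet \
    --admin-username azureuser --admin-password '<YourSecurePassword123>' \
    --nsg-rule NONE
```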
-### Install IIS for testing
+### Install NGINX for testing
-In this example, you install IIS on the virtual machines only to verify Azure created the application gateway successfully.
+In this example, you install NGINX on the virtual machines only to verify Azure created the application gateway successfully.
-1. Open [Azure PowerShell](../../cloud-shell/quickstart-powershell.md). To do so, select **Cloud Shell** from the top navigation bar of the Azure portal and then select **PowerShell** from the drop-down list.
+1. Open a Bash Cloud Shell. To do so, select the **Cloud Shell** icon from the top navigation bar of the Azure portal and then select **Bash** from the drop-down list.
- :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-extension.png" alt-text="Screenshot of accessing PowerShell from Portal Cloud shell.":::
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/bash-shell.png" alt-text="Screenshot showing the Bash Cloud Shell.":::
-2. Set the location parameter for your environment, and then run the following command to install IIS on the virtual machine:
+2. Run the following command to install NGINX on the virtual machine:
- ```azurepowershell-interactive
- $location = 'east us'
+ ```azurecli-interactive
+ az vm extension set \
+ --publisher Microsoft.Azure.Extensions \
+ --version 2.0 \
+ --name CustomScript \
+ --resource-group myResourceGroupAG \
+ --vm-name myVM \
+ --settings '{ "fileUris": ["https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/application-gateway/iis/install_nginx.sh"], "commandToExecute": "./install_nginx.sh" }'
+ ```
- Set-AzVMExtension `
- -ResourceGroupName myResourceGroupAG `
- -ExtensionName IIS `
- -VMName myVM `
- -Publisher Microsoft.Compute `
- -ExtensionType CustomScriptExtension `
- -TypeHandlerVersion 1.4 `
- -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
- -Location $location
- ```
-
-3. Create a second virtual machine and install IIS by using the steps that you previously completed. Use *myVM2* for the virtual machine name and for the **VMName** setting of the **Set-AzVMExtension** cmdlet.
+3. Create a second virtual machine and install NGINX by using the steps that you previously completed. Use *myVM2* for the virtual machine name and for the **--vm-name** setting of the command, as shown below.
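For reference, only the `--vm-name` value changes on the second run:

```azurecli-interactive
az vm extension set \
  --publisher Microsoft.Azure.Extensions \
  --version 2.0 \
  --name CustomScript \
  --resource-group myResourceGroupAG \
  --vm-name myVM2 \
  --settings '{ "fileUris": ["https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/application-gateway/iis/install_nginx.sh"], "commandToExecute": "./install_nginx.sh" }'
```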
### Add backend servers to backend pool
In this example, you install IIS on the virtual machines only to verify Azure cr
## Test the application gateway
-Although IIS isn't required to create the application gateway, you installed it to verify whether Azure successfully created the application gateway. Use IIS to test the application gateway:
+Although NGINX isn't required to create the application gateway, you installed it to verify whether Azure successfully created the application gateway. Use the web service to test the application gateway:
1. Find the public IP address for the application gateway on its **Overview** page. :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-record-ag-address.png" alt-text="Screenshot of Application Gateway public IP address on the Overview page.":::