Updates from: 11/04/2022 02:13:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Api Connector Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/api-connector-samples.md
Previously updated : 07/16/2021 Last updated : 11/03/2022
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
If you want to get the `family_name` and `given_name` claims from Azure AD, you
1. Select **Add optional claim**.
1. For the **Token type**, select **ID**.
1. Select the optional claims to add, `family_name` and `given_name`.
-1. Select **Add**. If **Turn on the Microsoft Graph email permission (required for claims to appear in token)** appears, enable it, and then select **Add** again.
+1. Select **Add**. If **Turn on the Microsoft Graph profile permission (required for claims to appear in token)** appears, enable it, and then select **Add** again.
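If you prefer to configure these claims in the application manifest instead of the portal, the shape is roughly as follows. This is a sketch of the `optionalClaims` manifest section only; the `essential` values shown are illustrative defaults:

```json
{
  "optionalClaims": {
    "idToken": [
      { "name": "family_name", "essential": false },
      { "name": "given_name", "essential": false }
    ]
  }
}
```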
## [Optional] Verify your app authenticity
active-directory-b2c Sign In Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/sign-in-options.md
Title: Sign-in options supported by Azure AD B2C
-description: Learn about the options for sign-up and sign-in you can use with Azure Active Directory B2C, including username and password, email, phone, or federation with social or external identity providers.
+description: Learn about the sign-up and sign-in options you can use with Azure Active Directory B2C, including username and password, email, phone, or federation with social or external identity providers.
Previously updated : 05/10/2021 Last updated : 11/03/2022
active-directory-b2c User Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-overview.md
Previously updated : 04/08/2021 Last updated : 11/03/2022
In Azure AD B2C, there are two ways to provide identity user experiences:
* **User flows** are predefined, built-in, configurable policies that we provide so you can create sign-up, sign-in, and policy editing experiences in minutes.
-* **Custom policies** enable you to create your own user journeys for complex identity experience scenarios.
+* **Custom policies** enable you to create your own user journeys for complex identity experience scenarios that are not supported by user flows. Azure AD B2C uses custom policies to provide extensibility.
The following screenshot shows the user flow settings UI, versus custom policy configuration files.
Each user journey is defined by a policy. You can build as many or as few polici
![Diagram showing an example of a complex user journey enabled by IEF](media/user-flow-overview/custom-policy-diagram.png)
-A custom policy is defined by several XML files that refer to each other in a hierarchical chain. The XML elements define the claims schema, claims transformations, content definitions, claims providers, technical profiles, user journey orchestration steps, and other aspects of the identity experience.
+A custom policy is defined by multiple XML files that refer to each other in a hierarchical chain. The XML elements define the claims schema, claims transformations, content definitions, claims providers, technical profiles, user journey orchestration steps, and other aspects of the identity experience.
The powerful flexibility of custom policies is most appropriate when you need to build complex identity scenarios. Developers configuring custom policies must define the trusted relationships in careful detail, including metadata endpoints and exact claims exchange definitions, and must configure secrets, keys, and certificates as needed by each identity provider.
active-directory-b2c User Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-migration.md
Previously updated : 04/27/2021 Last updated : 11/03/2022
After premigration of the accounts is complete, your custom policy and REST API
To see an example custom policy and REST API, see the [seamless user migration sample](https://aka.ms/b2c-account-seamless-migration) on GitHub.
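The sample defines the exact API contract, but for orientation, a B2C-compatible REST API typically signals a failed credential check with a conflict response in the following shape; the `userMessage` text here is illustrative:

```json
{
  "version": "1.0.0",
  "status": 409,
  "userMessage": "Your password is incorrect. Please try again."
}
```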
-![Flowchart diagram of the seamless migration approach to user migration](./media/user-migration/diagram-01-seamless-migration.png)<br />*Diagram: Seamless migration flow*
## Security
The seamless migration approach uses your own custom REST API to validate a user
Not all information in the legacy identity provider should be migrated to your Azure AD B2C directory. Identify the appropriate set of user attributes to store in Azure AD B2C before migrating.
-- **DO** store in Azure AD B2C
+- **DO** store in Azure AD B2C:
  - Username, password, email addresses, phone numbers, membership numbers/identifiers.
  - Consent markers for privacy policy and end-user license agreements.
-- **DO NOT** store in Azure AD B2C
+- **DON'T** store in Azure AD B2C:
  - Sensitive data like credit card numbers, social security numbers (SSN), medical records, or other data regulated by government or industry compliance bodies.
  - Marketing or communication preferences, user behaviors, and insights.
active-directory Concept Certificate Based Authentication Mobile Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-mobile-android.md
Previously updated : 10/05/2022 Last updated : 10/27/2022
Certain Exchange ActiveSync applications on Android 5.0 (Lollipop) or later are
To determine if your email application supports Azure AD CBA, contact your application developer.
+## Support for certificates on hardware security key (preview)
+
+Certificates can be provisioned in external devices like hardware security keys along with a PIN to protect private key access. Azure AD supports CBA with YubiKey.
+
+### Advantages of certificates on hardware security key
+
+Security keys with certificates:
+
+- Have the roaming nature of a security key, which allows users to use the same certificate on different devices
+- Are hardware-secured with a PIN, which makes them phishing-resistant
+- Provide multifactor authentication with a PIN as a second factor to access the private key of the certificate
+- Satisfy the industry requirement for MFA on a separate device
+- Help future-proof deployments, because multiple credentials can be stored on the key, including Fast Identity Online 2 (FIDO2) keys
+
+### Azure AD CBA on Android mobile
+
+Android needs a middleware application to support smart cards or security keys with certificates. To support YubiKeys with Azure AD CBA, the YubiKey Android SDK has been integrated into the Microsoft broker code, which can be used through the latest MSAL.
+
+### Azure AD CBA on Android mobile with YubiKey
+
+Because Azure AD CBA with YubiKey on Android mobile is enabled through the latest MSAL, the YubiKey Authenticator app isn't required for Android support.
+
+Steps to test YubiKey on Microsoft apps on Android:
+
+1. Install the latest Microsoft Authenticator app.
+1. Open Outlook and plug in your YubiKey.
+1. Select **Add account** and enter your user principal name (UPN).
+1. Click **Continue**. A dialog should immediately pop up asking for permission to access your YubiKey. Click **OK**.
+1. Select **Use Certificate or smart card**. A custom certificate picker will appear.
+1. Select the certificate associated with the user's account. Click **Continue**.
+1. Enter the PIN to access YubiKey and select **Unlock**.
+
+The user should be successfully logged in and redirected to the Outlook homepage.
+
+>[!NOTE]
+>For a smooth CBA flow, plug in YubiKey as soon as the application is opened and accept the consent dialog from YubiKey before selecting the link **Use Certificate or smart card**.
+
+### Troubleshoot certificates on hardware security key
+
+#### What will happen if the user has certificates both on the Android device and YubiKey?
+
+- If the user has certificates on both the Android device and the YubiKey, and the YubiKey is plugged in before the user selects **Use Certificate or smart card**, the user is shown the certificates on the YubiKey.
+- If the YubiKey isn't plugged in before the user selects **Use Certificate or smart card**, the user is shown all the certificates on the device. The user can **Cancel** the certificate picker, plug in the YubiKey, and restart the CBA process with the YubiKey.
+
+#### My YubiKey is locked after the PIN was typed incorrectly three times. How do I fix it?
+
+- Users see a dialog informing them that too many PIN attempts have been made. This dialog also pops up during subsequent attempts to select **Use Certificate or smart card**.
+- [YubiKey Manager](https://www.yubico.com/support/download/yubikey-manager/) can reset a YubiKey's PIN.
+
+#### I've installed Microsoft Authenticator but still don't see an option to do certificate-based authentication with YubiKey
+
+Uninstall the Company Portal app, install Microsoft Authenticator, and then reinstall Company Portal.
+
+#### Does Azure AD CBA support YubiKey via NFC?
+
+Currently, this feature only supports YubiKey over USB, not NFC. We're working to add support for NFC.
+
+#### After CBA fails, selecting the CBA option again in the 'Other ways to sign in' link on the error page fails
+
+This issue happens because of certificate caching. We're working on a fix to clear the cache. As a workaround, the user can select **Cancel**, restart the sign-in flow, choose a new certificate, and sign in successfully.
+
+#### Azure AD CBA with YubiKey is failing. What information would help debug the issue?
+
+1. Open Microsoft Authenticator app, click the three dots icon in the top right corner and select **Send Feedback**.
+1. Click **Having Trouble?**.
+1. For **Select an option**, select **Add or sign into an account**.
+1. Describe any details you want to add.
+1. Click the send arrow in the top right corner. Note the code provided in the dialog that appears.
+
+### Known issues
+
+- Sometimes, plugging in the YubiKey, granting permission via the permission dialog, and clicking **Use Certificate or smart card** will still take the user to the on-device CBA picker (instead of the smart card CBA picker). The user needs to cancel out of the picker, unplug the key, and plug it back in before attempting to sign in again.
+- With the Most Recently Used (MRU) feature, after a user authenticates with CBA, the MRU authentication method is set to CBA. Because the user is taken directly into the CBA flow, there may not be enough time to accept the Android USB consent dialog. As a workaround, the user needs to remove and replug the YubiKey, accept the consent dialog from the YubiKey, then select the back button and try again to complete the CBA flow.
+- Azure AD CBA with YubiKey on the latest Outlook and Teams fails at times. This could be due to a keyboard configuration change when the YubiKey is plugged in. To work around the issue:
+ - Plug in YubiKey as soon as the application is opened.
+ - Accept the consent dialog from YubiKey before selecting the link **Use Certificate or smart card**.
+
+### Supported platforms
+
+- Applications using the latest Microsoft Authentication Library (MSAL) or Microsoft Authenticator can do CBA
+- Microsoft first-party apps with the latest MSAL libraries or Microsoft Authenticator can do CBA
+
+#### Supported operating systems
+
+|Operating system | Certificate on-device/Derived PIV | Smart cards |
+|:-|::|::|
+| Android | &#x2705; | Supported vendors only|
+
+#### Supported browsers
+
+|Operating system | Chrome certificate on-device | Chrome smart card | Safari certificate on-device | Safari smart card | Edge certificate on-device | Edge smart card |
+|:-|::|::|::|::|::|::|
+| Android | &#x2705; | &#10060;|N/A | N/A | &#10060; | &#10060;|
+
+### Security key providers
+
+|Provider | Android |
+|:-|::|
+| YubiKey | &#x2705; |
+
## Next steps

- [Overview of Azure AD CBA](concept-certificate-based-authentication.md)
active-directory Concept Certificate Based Authentication Mobile Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-mobile-ios.md
Previously updated : 10/05/2022 Last updated : 10/27/2022
Azure AD CBA is supported for certificates on-device on native browsers and on M
On-device certificates are provisioned on the device. Customers can use Mobile Device Management (MDM) to provision the certificates on the device. Since iOS doesn't support hardware protected keys out of the box, customers can use external storage devices for certificates.
-## Advantages of external storage for certificates
-
-Customers can use external security keys to store their certificates. Security keys with certificates:
-- Enable the usage on any device and doesn't require the provision on every device the user has
-- Are hardware secured with a PIN, which makes them phishing resistant
-- Provide multifactor authentication with a PIN as second factor to access the private key of the certificate in the key
-- Satisfy the industry requirement to have MFA on separate device
-- Future proofing where multiple credentials can be stored including FIDO2 keys
-
## Supported platforms

- Only native browsers are supported
Customers can use external security keys to store their certificates. Security k
|--|--|--|--|
|&#10060; | &#10060; | &#x2705; |&#10060; |
-### Vendors for External storage
-
-Azure AD CBA will support certificates on YubiKeys. Users can install YubiKey authenticator application from YubiKey and do Azure AD CBA. Applications that don't use latest MSAL libraries need to also install Microsoft Authenticator.
-
## Microsoft mobile applications support

| Applications | Support |
On iOS 9 or later, the native iOS mail client is supported.
To determine if your email application supports Azure AD CBA, contact your application developer.
+## Support for certificates on hardware security key (preview)
+
+Certificates can be provisioned in external devices like hardware security keys along with a PIN to protect private key access.
+Microsoft's mobile certificate-based solution coupled with the hardware security keys is a simple, convenient, FIPS (Federal Information Processing Standards) certified phishing-resistant MFA method.
+
+As of iOS 16/iPadOS 16.1, Apple devices provide native driver support for USB-C or Lightning connected CCID-compliant smart cards. This means Apple devices on iOS 16/iPadOS 16.1 see a USB-C or Lightning connected CCID-compliant device as a smart card without the use of additional drivers or third-party apps. Azure AD CBA will work with these USB-A, USB-C, or Lightning connected CCID-compliant smart cards.
+
+### Advantages of certificates on hardware security key
+
+Security keys with certificates:
+
+- Can be used on any device, and don't require a certificate to be provisioned on every device the user has
+- Are hardware-secured with a PIN, which makes them phishing-resistant
+- Provide multifactor authentication with a PIN as a second factor to access the private key of the certificate
+- Satisfy the industry requirement for MFA on a separate device
+- Help future-proof deployments, because multiple credentials can be stored on the key, including Fast Identity Online 2 (FIDO2) keys
+
+### Azure AD CBA on iOS mobile with YubiKey
+
+Even though the native smart card/CCID driver is available on iOS/iPadOS for Lightning connected CCID-compliant smart cards, the YubiKey 5Ci Lightning connector isn't seen as a connected smart card on these devices without the use of PIV (Personal Identity Verification) middleware like the Yubico Authenticator.
+
+### One-time registration prerequisite
+
+- Have a PIV-enabled YubiKey with a smart card certificate provisioned on it
+- Download the [Yubico Authenticator for iOS app](https://apps.apple.com/app/yubico-authenticator/id1476679808) on your iPhone running iOS 14.2 or later
+- Open the app, insert the YubiKey or tap it over near field communication (NFC), and follow the steps to upload the certificate to the iOS keychain
+
+### Steps to test YubiKey on Microsoft apps on iOS mobile
+
+1. Install the latest Microsoft Authenticator app.
+1. Open Outlook and plug in your YubiKey.
+1. Select **Add account** and enter your user principal name (UPN).
+1. Click **Continue** and the iOS certificate picker will appear.
+1. Select the public certificate copied from YubiKey that is associated with the user's account.
+1. Click **YubiKey required** to open the YubiKey authenticator app.
+1. Enter the PIN to access YubiKey and select the back button at the top left corner.
+
+The user should be successfully logged in and redirected to the Outlook homepage.
+
+### Troubleshoot certificates on hardware security key
+
+#### What will happen if the user has certificates both on the iOS device and YubiKey?
+
+The iOS certificate picker shows all the certificates on the iOS device, including the ones copied from the YubiKey. Depending on the certificate the user picks, they're either taken to the Yubico Authenticator to enter a PIN or authenticated directly.
+
+#### My YubiKey is locked after the PIN was typed incorrectly three times. How do I fix it?
+
+- Users see a dialog informing them that too many PIN attempts have been made. This dialog also pops up during subsequent attempts to select **Use Certificate or smart card**.
+- [YubiKey Manager](https://www.yubico.com/support/download/yubikey-manager/) can reset a YubiKey's PIN.
+
+#### After CBA fails, selecting the CBA option again in the 'Other ways to sign in' link on the error page fails
+
+This issue happens because of certificate caching. We're working on a fix to clear the cache. As a workaround, the user can select **Cancel**, restart the sign-in flow, choose a new certificate, and sign in successfully.
+
+#### Azure AD CBA with YubiKey is failing. What information would help debug the issue?
+
+1. Open Microsoft Authenticator app, click the three dots icon in the top right corner and select **Send Feedback**.
+1. Click **Having Trouble?**.
+1. For **Select an option**, select **Add or sign into an account**.
+1. Describe any details you want to add.
+1. Click the send arrow in the top right corner. Note the code provided in the dialog that appears.
+
+#### How can I enforce phishing-resistant MFA using a hardware security key on browser-based applications on mobile?
+
+Certificate-based authentication combined with the Conditional Access authentication strength capability gives customers a powerful way to enforce authentication requirements. Edge with a profile (add an account) works with a hardware security key like YubiKey, and a Conditional Access policy with authentication strength can enforce phishing-resistant authentication with CBA.
+
+CBA support for YubiKey is available in the latest Microsoft Authentication Library (MSAL). Any third-party application that integrates the latest MSAL, and all Microsoft first-party applications, can take advantage of CBA and Conditional Access authentication strength.
+
+### Supported operating systems
+
+|Operating system | Certificate on-device/Derived PIV | Smart cards |
+|:-|::|::|
+| iOS | &#x2705; | Supported vendors only|
+
+### Supported browsers
+
+|Operating system | Chrome certificate on-device | Chrome smart card | Safari certificate on-device | Safari smart card | Edge certificate on-device | Edge smart card |
+|:-|::|::|::|::|::|::|
+| iOS | &#10060; | &#10060;|&#x2705; | &#x2705; | &#10060; | &#10060;|
+
+### Security key providers
+
+|Provider | iOS |
+|:-|::|
+| YubiKey | &#x2705; |
+
## Known issue

On iOS, users will see a "double prompt", where they must click the option to use certificate-based authentication twice. We're working to create a seamless user experience.
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 10/27/2022 Last updated : 11/03/2022
This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.

>[!NOTE]
->Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator that will begin to be enabled by default for all users starting February 28, 2023.<br>
+>Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator that will begin to be enabled by default for all users starting February 27, 2023.<br>
>We highly recommend enabling number matching in the near-term for improved sign-in security.

## Prerequisites
To enable number matching in the Azure AD portal, complete the following steps:
### When will my tenant see number matching if I don't use the Azure portal or Graph API to roll out the change?
-Number match will be enabled for all users of Microsoft Authenticator app after February 28, 2023. Relevant services will begin deploying these changes after February 28, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all your users, we highly recommend you use the Azure portal or Graph API to roll out number match for all Microsoft Authenticator users.
+Number match will be enabled for all users of Microsoft Authenticator app after February 27, 2023. Relevant services will begin deploying these changes after February 27, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all your users, we highly recommend you use the Azure portal or Graph API to roll out number match for all Microsoft Authenticator users.
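As a sketch of the Graph API rollout mentioned above, assuming the beta Authentication Methods Policy endpoint (`PATCH https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/microsoftAuthenticator`), the request body would look similar to the following; replace the `all_users` target with a group ID to stage the rollout:

```json
{
  "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
  "featureSettings": {
    "numberMatchingRequiredState": {
      "state": "enabled",
      "includeTarget": {
        "targetType": "group",
        "id": "all_users"
      }
    }
  }
}
```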
### Can I opt out of number matching?
-Yes, currently you can disable number matching. We highly recommend that you enable number matching for all users in your tenant to protect yourself from MFA fatigue attacks. Microsoft will enable number matching for all tenants by Feb 28, 2023. After protection is enabled by default, users can't opt out of number matching in Microsoft Authenticator push notifications.
+Yes, currently you can disable number matching. We highly recommend that you enable number matching for all users in your tenant to protect yourself from MFA fatigue attacks. Microsoft will enable number matching for all tenants by Feb 27, 2023. After protection is enabled by default, users can't opt out of number matching in Microsoft Authenticator push notifications.
### What about my Apple Watch?
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
Previously updated : 10/13/2022 Last updated : 10/18/2022
The following limitations apply to using SSPR from the Windows sign-in screen:
- Explorer.exe is replaced with a custom shell
- Interactive logon: Require smart card is set to enabled or 1
- The combination of the following specific three settings can cause this feature to not work.
- - Interactive logon: Do not require CTRL+ALT+DEL = Disabled
+ - Interactive logon: Do not require CTRL+ALT+DEL = Disabled (only for Windows 10 version 1710 and earlier)
  - *DisableLockScreenAppNotifications* = 1 or Enabled
- Windows SKU is Home edition
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
When creating Conditional Access policies, administrators have asked for the abi
There are multiple scenarios that organizations can now enable using filter for devices condition. Below are some core scenarios with examples of how to use this new condition.

-- **Restrict access to privileged resources**. For this example, lets say you want to allow access to Microsoft Azure Management from a user who is assigned a privilged role Global Admin, has satisfied multifactor authentication and accessing from a device that is [privileged or secure admin workstations](/security/compass/privileged-access-devices) and attested as compliant. For this scenario, organizations would create two Conditional Access policies:
+- **Restrict access to privileged resources**. For this example, lets say you want to allow access to Microsoft Azure Management from a user who is assigned a privileged role Global Admin, has satisfied multifactor authentication and accessing from a device that is [privileged or secure admin workstations](/security/compass/privileged-access-devices) and attested as compliant. For this scenario, organizations would create two Conditional Access policies:
  - Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
  - Policy 2: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block. Learn how to [update extensionAttributes on an Azure AD device object](/graph/api/device-update?view=graph-rest-1.0&tabs=http&preserve-view=true).
- **Block access to organization resources from devices running an unsupported Operating System**. For this example, let's say you want to block access to resources from a Windows OS version older than Windows 10. For this scenario, organizations would create the following Conditional Access policy:
The following steps will help create two Conditional Access policies to support
Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users or workload identities**..
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **Directory roles** and choose **Global Administrator**.

   > [!WARNING]
Policy 2: All users with the directory role of Global Administrator, accessing t
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users or workload identities**..
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **Directory roles** and choose **Global Administrator**.

   > [!WARNING]
Setting extension attributes is made possible through the Graph API. For more in
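For example, a device can be tagged as a secure admin workstation with a `PATCH` request to `https://graph.microsoft.com/v1.0/devices/{device-id}`; the `{device-id}` placeholder and the `SAW` value below are illustrative:

```json
{
  "extensionAttributes": {
    "extensionAttribute1": "SAW"
  }
}
```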
### Filter for devices Graph API
-The filter for devices API is available in Microsoft Graph v1.0 endpoint and can be accessed using https://graph.microsoft.com/v1.0/identity/conditionalaccess/policies/. You can configure a filter for devices when creating a new Conditional Access policy or you can update an existing policy to configure the filter for devices condition. To update an existing policy, you can do a patch call on the Microsoft Graph v1.0 endpoint mentioned above by appending the policy ID of an existing policy and executing the following request body. The example here shows configuring a filter for devices condition excluding device that are not marked as SAW devices. The rule syntax can consist of more than one single expression. To learn more about the syntax, see [dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md).
+The filter for devices API is available in Microsoft Graph v1.0 endpoint and can be accessed using https://graph.microsoft.com/v1.0/identity/conditionalaccess/policies/. You can configure a filter for devices when creating a new Conditional Access policy or you can update an existing policy to configure the filter for devices condition. To update an existing policy, you can do a patch call on the Microsoft Graph v1.0 endpoint mentioned above by appending the policy ID of an existing policy and executing the following request body. The example here shows configuring a filter for devices condition excluding devices that aren't marked as SAW devices. The rule syntax can consist of more than one single expression. To learn more about the syntax, see [dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md).
```json
{
    "conditions": {
        "devices": {
            "deviceFilter": {
                "mode": "exclude",
                "rule": "device.extensionAttribute1 -ne \"SAW\""
            }
        }
    }
}
```
The following device attributes can be used with the filter for devices conditio
## Policy behavior with filter for devices
-The filter for devices condition in Conditional Access evaluates policy based on device attributes of a registered device in Azure AD and hence it is important to understand under what circumstances the policy is applied or not applied. The table below illustrates the behavior when a filter for devices condition are configured.
+The filter for devices condition in Conditional Access evaluates policy based on device attributes of a registered device in Azure AD and hence it's important to understand under what circumstances the policy is applied or not applied. The table below illustrates the behavior when a filter for devices condition is configured.
| Filter for devices condition | Device registration state | Device filter Applied |
| --- | --- | --- |
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The following client apps are confirmed to support this setting:
- Microsoft Teams
- Microsoft To Do
- Microsoft Word
-- Microsoft Power Apps
- Microsoft Field Service (Dynamics 365)
- MultiLine for Intune
- Nine Mail - Email and Calendar
active-directory Concept Conditional Access Policy Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policy-common.md
Last updated 08/22/2022
Conditional Access templates are designed to provide a convenient method to depl
The 14 policy templates are split into policies that would be assigned to user identities or devices. Find the templates in the **Azure portal** > **Azure Active Directory** > **Security** > **Conditional Access** > **Create new policy from template**.
+Organizations not comfortable allowing Microsoft to create these policies can create them manually by copying the settings from **View policy summary** or use the linked articles to create policies themselves.
:::image type="content" source="media/concept-conditional-access-policy-common/create-policy-from-template-identity.png" alt-text="Create a Conditional Access policy from a preconfigured template in the Azure portal." lightbox="media/concept-conditional-access-policy-common/create-policy-from-template-identity.png":::

> [!IMPORTANT]
The 14 policy templates are split into policies that would be assigned to user i
  - [Securing security info registration](howto-conditional-access-policy-registration.md)
  - [Block legacy authentication](howto-conditional-access-policy-block-legacy.md)\*
  - [Require multi-factor authentication for all users](howto-conditional-access-policy-all-users-mfa.md)\*
- - Require multi-factor authentication for guest access
+ - [Require multi-factor authentication for guest access](howto-policy-guest-mfa.md)
  - [Require multi-factor authentication for Azure management](howto-conditional-access-policy-azure-management.md)\*
  - [Require multi-factor authentication for risky sign-in](howto-conditional-access-policy-risk.md) **Requires Azure AD Premium P2**
  - [Require password change for high-risk users](howto-conditional-access-policy-risk-user.md) **Requires Azure AD Premium P2**
- Devices
- - [Require compliant or Hybrid Azure AD joined device for admins](howto-conditional-access-policy-compliant-device.md)
- - Block access for unknown or unsupported device platform
- - No persistent browser session
+ - [Require compliant or hybrid Azure AD joined device or multifactor authentication for all users](howto-conditional-access-policy-compliant-device.md)
+ - [Block access for unknown or unsupported device platform](howto-policy-unknown-unsupported-device.md)
+ - [No persistent browser session](howto-policy-persistent-browser-session.md)
- [Require approved client apps or app protection](howto-policy-approved-app-or-app-protection.md)
- - Require compliant or Hybrid Azure AD joined device or multi-factor authentication for all users
- - Use application enforced restrictions for unmanaged devices
+ - [Require compliant or Hybrid Azure AD joined device for administrators](howto-conditional-access-policy-compliant-device-admin.md)
+ - [Use application enforced restrictions for unmanaged devices](howto-policy-app-enforced-restriction.md)
> \* These four policies, when configured together, provide similar functionality to that enabled by [security defaults](../fundamentals/concept-fundamentals-security-defaults.md).
-Organizations not comfortable allowing Microsoft to create these policies can create them manually by copying the settings from **View policy summary** or use the linked articles to create policies themselves.
-
### Other policies

* [Block access by location](howto-conditional-access-policy-location.md)
active-directory Howto Conditional Access Policy Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md
Title: Conditional Access - Require MFA for administrators - Azure Active Directory
+ Title: Require MFA for administrators with Conditional Access - Azure Active Directory
description: Create a custom Conditional Access policy to require administrators to perform multifactor authentication
Last updated 08/22/2022
-# Conditional Access: Require MFA for administrators
+# Common Conditional Access policy: Require MFA for administrators
Accounts that are assigned administrative rights are targeted by attackers. Requiring multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.
Microsoft recommends you require MFA on the following roles at a minimum, based
Organizations can choose to include or exclude roles as they see fit.
-## User exclusions
-Conditional Access policies are powerful tools, we recommend excluding the following accounts from your policy:
--- **Emergency access** or **break-glass** accounts to prevent tenant-wide account lockout. In the unlikely scenario all administrators are locked out of your tenant, your emergency-access administrative account can be used to log into the tenant to take steps to recover access.
- - More information can be found in the article, [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
-- **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that aren't tied to any particular user. They're normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals aren't blocked by Conditional Access.
- - If your organization has these accounts in use in scripts or code, consider replacing them with [managed identities](../managed-identities-azure-resources/overview.md). As a temporary workaround, you can exclude these specific accounts from the baseline policy.
-
-## Template deployment
-
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
## Create a Conditional Access policy

The following steps will help create a Conditional Access policy to require those assigned administrative roles to perform multifactor authentication.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy All Users Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa.md
Title: Conditional Access - Require MFA for all users - Azure Active Directory
+ Title: Require MFA for all users with Conditional Access - Azure Active Directory
description: Create a custom Conditional Access policy to require all users do multifactor authentication
Last updated 08/22/2022
-# Conditional Access: Require MFA for all users
+# Common Conditional Access policy: Require MFA for all users
As Alex Weinert, the Director of Identity Security at Microsoft, mentions in his blog post [Your Pa$$word doesn't matter](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Your-Pa-word-doesn-t-matter/ba-p/731984):
As Alex Weinert, the Director of Identity Security at Microsoft, mentions in hi
The guidance in this article will help your organization create an MFA policy for your environment.
-## User exclusions
-
-Conditional Access policies are powerful tools, we recommend excluding the following accounts from your policy:
-
-* **Emergency access** or **break-glass** accounts to prevent tenant-wide account lockout. In the unlikely scenario all administrators are locked out of your tenant, your emergency-access administrative account can be used to log into the tenant take steps to recover access.
- * More information can be found in the article, [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
-* **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that aren't tied to any particular user. They're normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals aren't blocked by Conditional Access.
- * If your organization has these accounts in use in scripts or code, consider replacing them with [managed identities](../managed-identities-azure-resources/overview.md). As a temporary workaround, you can exclude these specific accounts from the baseline policy.
## Application exclusions
Organizations may have many cloud applications in use. Not all of those applicat
Organizations that use [Subscription Activation](/windows/deployment/windows-10-subscription-activation) to enable users to "step-up" from one version of Windows to another, may want to exclude the Universal Store Service APIs and Web Application, AppID 45a330b1-b1ec-4cc1-9161-9f03992aa49f from their all users all cloud apps MFA policy.
-## Template deployment
-
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
## Create a Conditional Access policy

The following steps will help create a Conditional Access policy to require all users do multifactor authentication.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
The following steps will help create a Conditional Access policy to require all
1. Select **Create** to create and enable your policy. After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.

### Named locations

Organizations may choose to incorporate known network locations, known as **Named locations**, into their Conditional Access policies. These named locations may include trusted IPv4 networks like those for a main office location. For more information about configuring named locations, see the article [What is the location condition in Azure Active Directory Conditional Access?](location-condition.md)
active-directory Howto Conditional Access Policy Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-azure-management.md
Title: Conditional Access - Require MFA for Azure management - Azure Active Directory
+ Title: Require MFA for Azure management with Conditional Access - Azure Active Directory
description: Create a custom Conditional Access policy to require multifactor authentication for Azure management tasks
Last updated 08/22/2022
-# Conditional Access: Require MFA for Azure management
+# Common Conditional Access policy: Require MFA for Azure management
Organizations use many Azure services and manage them from Azure Resource Manager based tools like:
These tools can provide highly privileged access to resources that can make the
To protect these privileged resources, Microsoft recommends requiring multifactor authentication for any user accessing these resources. In Azure AD, these tools are grouped together in a suite called [Microsoft Azure Management](concept-conditional-access-cloud-apps.md#microsoft-azure-management). For Azure Government, this suite should be the Azure Government Cloud Management API app.
-## User exclusions
-Conditional Access policies are powerful tools, we recommend excluding the following accounts from your policy:
-
-* **Emergency access** or **break-glass** accounts to prevent tenant-wide account lockout. In the unlikely scenario all administrators are locked out of your tenant, your emergency-access administrative account can be used to log into the tenant take steps to recover access.
- * More information can be found in the article, [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
-* **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that aren't tied to any particular user. They're normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals aren't blocked by Conditional Access.
- * If your organization has these accounts in use in scripts or code, consider replacing them with [managed identities](../managed-identities-azure-resources/overview.md). As a temporary workaround, you can exclude these specific accounts from the baseline policy.
-
-## Template deployment
-
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
## Create a Conditional Access policy
The following steps will help create a Conditional Access policy to require user
> [!CAUTION]
> Make sure you understand how Conditional Access works before setting up a policy to manage access to Microsoft Azure Management. Make sure you don't create conditions that could block your own access to the portal.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Block Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-access.md
Last updated 08/22/2022
For organizations with a conservative cloud migration approach, the block all po
Policies like these can have unintended side effects. Proper testing and validation are vital before enabling. Administrators should utilize tools such as [Conditional Access report-only mode](concept-conditional-access-report-only.md) and [the What If tool in Conditional Access](what-if-tool.md) when making changes.
-## User exclusions
-
-Conditional Access policies are powerful tools, we recommend excluding the following accounts from your policy:
-
-* **Emergency access** or **break-glass** accounts to prevent tenant-wide account lockout. In the unlikely scenario all administrators are locked out of your tenant, your emergency-access administrative account can be used to log into the tenant take steps to recover access.
- * More information can be found in the article, [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
-* **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that aren't tied to any particular user. They're normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals aren't blocked by Conditional Access.
- * If your organization has these accounts in use in scripts or code, consider replacing them with [managed identities](../managed-identities-azure-resources/overview.md). As a temporary workaround, you can exclude these specific accounts from the baseline policy.
## Create a Conditional Access policy
The following steps will help create Conditional Access policies to block access
The first policy blocks access to all apps except for Microsoft 365 applications if not on a trusted location.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Block Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md
Title: Conditional Access - Block legacy authentication - Azure Active Directory
+ Title: Block legacy authentication with Conditional Access - Azure Active Directory
description: Create a custom Conditional Access policy to block legacy authentication protocols
Last updated 08/22/2022
-# Conditional Access: Block legacy authentication
+# Common Conditional Access policy: Block legacy authentication
Due to the increased risk associated with legacy authentication protocols, Microsoft recommends that organizations block authentication requests using these protocols and require modern authentication. For more information about why blocking legacy authentication is important, see the article [How to: Block legacy authentication to Azure AD with Conditional Access](block-legacy-authentication.md).
Organizations can choose to deploy this policy using the steps outlined below or
The following steps will help create a Conditional Access policy to block legacy authentication requests. This policy is put into [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact they'll have on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Compliant Device Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device-admin.md
+
+ Title: Require administrators use compliant or hybrid joined devices - Azure Active Directory
+description: Create a custom Conditional Access policy to require compliant or hybrid joined devices for admins
+Last updated : 09/30/2022
+# Common Conditional Access policy: Require compliant or hybrid Azure AD joined device for administrators
+
+Accounts that are assigned administrative rights are targeted by attackers. Requiring users with these highly privileged rights to perform actions from devices marked as compliant or hybrid Azure AD joined can help limit possible exposure.
+
+More information about device compliance policies can be found in the article, [Set rules on devices to allow access to resources in your organization using Intune](/intune/protect/device-compliance-get-started).
+
+Requiring a hybrid Azure AD joined device is dependent on your devices already being hybrid Azure AD joined. For more information, see the article [Configure hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md).
+
+Microsoft recommends you enable this policy for the following roles at a minimum, based on [identity score recommendations](../fundamentals/identity-secure-score.md):
+
+- Global Administrator
+- Application Administrator
+- Authentication Administrator
+- Billing Administrator
+- Cloud Application Administrator
+- Conditional Access Administrator
+- Exchange Administrator
+- Helpdesk Administrator
+- Password Administrator
+- Privileged Authentication Administrator
+- Privileged Role Administrator
+- Security Administrator
+- SharePoint Administrator
+- User Administrator
+
+Organizations can choose to include or exclude roles as they see fit.
+
+## Create a Conditional Access policy
+
+The following steps will help create a Conditional Access policy to require devices accessing resources be marked as compliant with your organization's Intune compliance policies, or be hybrid Azure AD joined.
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **Directory roles** and choose built-in roles like:
+ - Global Administrator
+ - Application administrator
+ - Authentication Administrator
+ - Billing Administrator
+ - Cloud application Administrator
+ - Conditional Access Administrator
+ - Exchange Administrator
+ - Helpdesk Administrator
+ - Password Administrator
+ - Privileged authentication Administrator
+ - Privileged Role Administrator
+ - Security Administrator
+ - SharePoint Administrator
+ - User Administrator
+
+ > [!WARNING]
+ > Conditional Access policies support built-in roles. Conditional Access policies are not enforced for other role types including [administrative unit-scoped](../roles/admin-units-assign-roles.md) or [custom roles](../roles/custom-create.md).
+
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Access controls** > **Grant**.
+ 1. Select **Require device to be marked as compliant**, and **Require hybrid Azure AD joined device**
+ 1. **For multiple controls** select **Require one of the selected controls**.
+ 1. Select **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+
+> [!NOTE]
+> You can enroll your new devices to Intune even if you select **Require device to be marked as compliant** for **All users** and **All cloud apps** using the steps above. The **Require device to be marked as compliant** control doesn't block Intune enrollment.
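+
+The same policy can also be created programmatically. The following is a minimal sketch of a request body for `POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies`; the display name is illustrative, only the Global Administrator role template ID is shown (IDs for the other roles would be added to `includeRoles`), and the policy is created in report-only state:
+
+```json
+{
+  "displayName": "Require compliant or hybrid joined device for administrators",
+  "state": "enabledForReportingButNotEnforced",
+  "conditions": {
+    "users": {
+      "includeRoles": [ "62e90394-69f5-4237-9190-012177145e10" ]
+    },
+    "applications": {
+      "includeApplications": [ "All" ]
+    }
+  },
+  "grantControls": {
+    "operator": "OR",
+    "builtInControls": [ "compliantDevice", "domainJoinedDevice" ]
+  }
+}
+```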
+
+### Known behavior
+
+On Windows 7, iOS, Android, macOS, and some third-party web browsers, Azure AD identifies the device using a client certificate that is provisioned when the device is registered with Azure AD. When a user first signs in through the browser, the user is prompted to select the certificate. The end user must select this certificate before they can continue to use the browser.
+
+#### Subscription activation
+
+Organizations that use the [Subscription Activation](/windows/deployment/windows-10-subscription-activation) feature to enable users to "step-up" from one version of Windows to another, may want to exclude the Universal Store Service APIs and Web Application, AppID 45a330b1-b1ec-4cc1-9161-9f03992aa49f from their device compliance policy.
+
+## Next steps
+
+[Conditional Access common policies](concept-conditional-access-policy-common.md)
+
+[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+
+[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+
+[Device compliance policies work with Azure AD](/intune/device-compliance-get-started#device-compliance-policies-work-with-azure-ad)
active-directory Howto Conditional Access Policy Compliant Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md
Title: Conditional Access - Require compliant or hybrid joined devices - Azure Active Directory
-description: Create a custom Conditional Access policy to require compliant or hybrid joined devices
+ Title: Require compliant, hybrid joined devices, or MFA - Azure Active Directory
+description: Create a custom Conditional Access policy to require compliant, hybrid joined devices, or multifactor authentication
Previously updated : 08/22/2022 Last updated : 09/30/2022
-# Conditional Access: Require compliant or hybrid Azure AD joined device
+# Common Conditional Access policy: Require a compliant device, hybrid Azure AD joined device, or multifactor authentication for all users
Organizations who have deployed Microsoft Intune can use the information returned from their devices to identify devices that meet compliance requirements such as:
Policy compliance information is sent to Azure AD where Conditional Access decid
Requiring a hybrid Azure AD joined device is dependent on your devices already being hybrid Azure AD joined. For more information, see the article [Configure hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md).
-## Template deployment
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
## Create a Conditional Access policy
-The following steps will help create a Conditional Access policy to require devices accessing resources be marked as compliant with your organization's Intune compliance policies.
+The following steps will help create a Conditional Access policy that requires multifactor authentication, a device marked as compliant with your organization's Intune compliance policies, or a hybrid Azure AD joined device.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
The following steps will help create a Conditional Access policy to require devi
1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. 1. If you must exclude specific applications from your policy, you can choose them from the **Exclude** tab under **Select excluded cloud apps** and choose **Select**. 1. Under **Access controls** > **Grant**.
- 1. Select **Require device to be marked as compliant** and **Require Hybrid Azure AD joined device**
+ 1. Select **Require multifactor authentication**, **Require device to be marked as compliant**, and **Require hybrid Azure AD joined device**.
1. **For multiple controls** select **Require one of the selected controls**. 1. Select **Select**. 1. Confirm your settings and set **Enable policy** to **Report-only**.
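For scripted deployments, a minimal sketch of the grant controls this procedure selects is shown below. It uses the same Graph `conditionalAccessPolicies` payload shape as the earlier compliant-device sketch, with the OR operator corresponding to **Require one of the selected controls**.

```python
# Sketch: all three controls with "Require one of the selected controls".
grant_controls = {
    "operator": "OR",
    "builtInControls": ["mfa", "compliantDevice", "domainJoinedDevice"],
}
```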
active-directory Howto Conditional Access Policy Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-location.md
Last updated 08/22/2022
-+
With the location condition in Conditional Access, you can control access to you
## Define locations
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. 1. Choose **New location**. 1. Give your location a name.
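For reference, named locations can also be defined through Microsoft Graph. The sketch below creates an IPv4 named location; the token, display name, and CIDR range are placeholders.

```python
# Sketch: define an IPv4 named location via Microsoft Graph.
import requests

ACCESS_TOKEN = "<token with Policy.ReadWrite.ConditionalAccess>"  # placeholder

location = {
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    "displayName": "Corporate egress ranges",  # placeholder name
    "isTrusted": True,
    "ipRanges": [
        {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24"}
    ],
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=location,
)
resp.raise_for_status()
```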
More information about the location condition in Conditional Access can be found
## Create a Conditional Access policy
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
Title: Conditional Access - Combined security information - Azure Active Directory
+ Title: Control security information registration with Conditional Access - Azure Active Directory
description: Create a custom Conditional Access policy for security info registration
Last updated 08/22/2022
-+
-# Conditional Access: Securing security info registration
+# Common Conditional Access policy: Securing security info registration
Securing when and how users register for Azure AD multifactor authentication and self-service password reset is possible with user actions in a Conditional Access policy. This feature is available to organizations that have enabled [combined registration](../authentication/concept-registration-mfa-sspr-combined.md). This functionality allows organizations to treat the registration process like any application in a Conditional Access policy and use the full power of Conditional Access to secure the experience. Users signing in to the Microsoft Authenticator app or enabling passwordless phone sign-in are subject to this policy.
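In Graph terms, this policy is scoped to a user action instead of cloud apps. A minimal sketch of that condition set follows; which grant controls to pair with it is an assumption left to your rollout.

```python
# Sketch: target the combined security info registration user action.
conditions = {
    "users": {"includeUsers": ["All"]},
    "applications": {"includeUserActions": ["urn:user:registersecurityinfo"]},
}
```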
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
Title: User risk-based Conditional Access - Azure Active Directory
+ Title: User risk-based password change - Azure Active Directory
description: Create Conditional Access policies using Identity Protection user risk
Last updated 08/22/2022
-+
-# Conditional Access: User risk-based Conditional Access
+# Common Conditional Access policy: User risk-based password change
Microsoft works with researchers, law enforcement, various security teams at Microsoft, and other trusted sources to find leaked username and password pairs. Organizations with Azure AD Premium P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection user risk detections](../identity-protection/concept-identity-protection-risks.md).
Organizations can choose to deploy this policy using the steps outlined below or
## Enable with Conditional Access policy
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
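A minimal sketch of the condition and grant controls this policy configures follows; it assumes the common pattern of requiring both MFA and a password change (the AND operator) for high-risk users.

```python
# Sketch: user risk condition plus the "MFA and password change" grant.
policy_fragment = {
    "conditions": {"userRiskLevels": ["high"]},
    "grantControls": {
        "operator": "AND",
        "builtInControls": ["mfa", "passwordChange"],
    },
}
```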
active-directory Howto Conditional Access Policy Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md
Title: Sign-in risk-based Conditional Access - Azure Active Directory
+ Title: Sign-in risk-based multifactor authentication - Azure Active Directory
description: Create Conditional Access policies using Identity Protection sign-in risk
Last updated 08/22/2022
-+
-# Conditional Access: Sign-in risk-based Conditional Access
+# Common Conditional Access policy: Sign-in risk-based multifactor authentication
Most users have a normal behavior that can be tracked; when they fall outside of this norm, it could be risky to allow them to just sign in. You may want to block that user, or maybe just ask them to perform multifactor authentication to prove that they're really who they say they are.
Organizations can choose to deploy this policy using the steps outlined below or
## Enable with Conditional Access policy
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
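A minimal sketch of the equivalent condition and grant follows, assuming the common choice of covering both medium and high sign-in risk.

```python
# Sketch: require MFA for medium- and high-risk sign-ins.
policy_fragment = {
    "conditions": {"signInRiskLevels": ["high", "medium"]},
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```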
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
To make sure that your policy works as expected, the recommended best practice i
### Policy 1: Sign-in frequency control
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
To make sure that your policy works as expected, the recommended best practice i
### Policy 2: Persistent browser session
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
To make sure that your policy works as expected, the recommended best practice i
### Policy 3: Sign-in frequency control every time risky user
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
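For reference, the session controls these policies configure map to the `sessionControls` block of a Graph Conditional Access policy. A minimal sketch, with the one-hour frequency as an example value:

```python
# Sketch: a periodic sign-in frequency plus a never-persistent browser session.
session_controls = {
    "signInFrequency": {"isEnabled": True, "type": "hours", "value": 1},
    "persistentBrowser": {"isEnabled": True, "mode": "never"},
}
```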
active-directory Howto Policy App Enforced Restriction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-app-enforced-restriction.md
+
+ Title: Conditional Access - Use application enforced restrictions for unmanaged devices - Azure Active Directory
+description: Create a custom Conditional Access policy for unmanaged devices
+++++ Last updated : 09/27/2022++++++++
+# Common Conditional Access policy: Use application enforced restrictions for unmanaged devices
+
+Block or limit access to SharePoint, OneDrive, and Exchange content from unmanaged devices.
+++
+## Create a Conditional Access policy
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+1. Under **Cloud apps or actions**, select the following options:
+ 1. Under **Include**, choose **Select apps**.
+ 1. Choose **Office 365**, then select **Select**.
+1. Under **Access controls** > **Session**, select **Use app enforced restrictions**, then select **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
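+
+A minimal Graph sketch of the app scope and session control this procedure selects follows; `Office365` is the service-specific value that represents the Office 365 app group.
+
+```python
+# Sketch: Office 365 scope plus the app-enforced-restrictions session control.
+policy_fragment = {
+    "conditions": {"applications": {"includeApplications": ["Office365"]}},
+    "sessionControls": {"applicationEnforcedRestrictions": {"isEnabled": True}},
+}
+```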
+
+## Next steps
+
+[Conditional Access common policies](concept-conditional-access-policy-common.md)
+
+[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
Last updated 08/22/2022
-+
-# Conditional Access: Require approved client apps or app protection policy
+# Common Conditional Access policy: Require approved client apps or app protection policy
People regularly use their mobile devices for both personal and work tasks. While making sure staff can be productive, organizations also want to prevent data loss from applications on devices they may not manage fully.
The following steps will help create a Conditional Access policy requiring an ap
Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
After confirming your settings using [report-only mode](howto-conditional-access
This policy will block all Exchange ActiveSync clients using basic authentication from connecting to Exchange Online.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Policy Guest Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-guest-mfa.md
+
+ Title: Require MFA for guest users with Conditional Access - Azure Active Directory
+description: Create a custom Conditional Access policy requiring guest users to perform multifactor authentication
+++++ Last updated : 09/27/2022++++++++
+# Common Conditional Access policy: Require multifactor authentication for guest access
+
+Require guest users to perform multifactor authentication when accessing your organization's resources.
+++
+## Create a Conditional Access policy
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All guest and external users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+ 1. Under **Exclude**, select any applications that don't require multifactor authentication.
+1. Under **Access controls** > **Grant**, select **Grant access** and **Require multifactor authentication**, then select **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
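+
+A minimal Graph sketch of the guest scope and grant control follows; app exclusions are omitted here and would be added under the applications condition.
+
+```python
+# Sketch: require MFA for guest and external users on all cloud apps.
+policy_fragment = {
+    "conditions": {
+        "users": {"includeUsers": ["GuestsOrExternalUsers"]},
+        "applications": {"includeApplications": ["All"]},
+    },
+    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
+}
+```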
+
+## Next steps
+
+[Conditional Access common policies](concept-conditional-access-policy-common.md)
+
+[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
active-directory Howto Policy Persistent Browser Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-persistent-browser-session.md
+
+ Title: Require reauthentication with Conditional Access - Azure Active Directory
+description: Create a custom Conditional Access policy requiring reauthentication
+++++ Last updated : 09/27/2022++++++++
+# Common Conditional Access policy: Require reauthentication and disable browser persistence
+
+Protect user access on unmanaged devices by preventing browser sessions from remaining signed in after the browser is closed, and by setting a sign-in frequency of 1 hour.
+++
+## Create a Conditional Access policy
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Conditions** > **Filter for devices**, set **Configure** to **Yes**.
+ 1. Under **Devices matching the rule:**, set to **Include filtered devices in policy**.
+ 1. Under **Rule syntax**, select the **Edit** pencil, paste the following expression in the box, and then select **Apply**:
+ 1. `device.trustType -ne "ServerAD" -or device.isCompliant -ne True`
+ 1. Select **Done**.
+1. Under **Access controls** > **Session**:
+ 1. Select **Sign-in frequency**, specify **Periodic reauthentication**, and set the duration to **1** and the period to **Hours**.
+ 1. Select **Persistent browser session**, and set **Persistent browser session** to **Never persistent**.
+ 1. Select **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
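+
+A minimal Graph sketch of the device filter and session controls from these steps follows; the rule string is the same expression shown in the rule syntax step above.
+
+```python
+# Sketch: device filter plus reauthentication session controls.
+policy_fragment = {
+    "conditions": {
+        "devices": {
+            "deviceFilter": {
+                "mode": "include",
+                "rule": 'device.trustType -ne "ServerAD" -or device.isCompliant -ne True',
+            }
+        }
+    },
+    "sessionControls": {
+        "signInFrequency": {"isEnabled": True, "type": "hours", "value": 1},
+        "persistentBrowser": {"isEnabled": True, "mode": "never"},
+    },
+}
+```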
+
+## Next steps
+
+[Conditional Access common policies](concept-conditional-access-policy-common.md)
+
+[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
active-directory Howto Policy Unknown Unsupported Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-unknown-unsupported-device.md
+
+ Title: Block unsupported platforms with Conditional Access - Azure Active Directory
+description: Create a custom Conditional Access policy to block unsupported platforms
+++++ Last updated : 09/27/2022++++++++
+# Common Conditional Access policy: Block access for unknown or unsupported device platform
+
+Users will be blocked from accessing company resources when the device type is unknown or unsupported.
+++
+## Create a Conditional Access policy
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Conditions**, select **Device platforms**.
+ 1. Set **Configure** to **Yes**.
+ 1. Under **Include**, select **Any device**.
+ 1. Under **Exclude**, select **Android**, **iOS**, **Windows**, and **macOS**.
+ 1. Select **Done**.
+1. Under **Access controls** > **Grant**, select **Block access**, then select **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
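+
+A minimal Graph sketch of the platform condition and block control follows; the platform values mirror the include and exclude choices above.
+
+```python
+# Sketch: block any platform other than Android, iOS, Windows, and macOS.
+policy_fragment = {
+    "conditions": {
+        "platforms": {
+            "includePlatforms": ["all"],
+            "excludePlatforms": ["android", "iOS", "windows", "macOS"],
+        }
+    },
+    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
+}
+```
+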
+## Next steps
+
+[Conditional Access common policies](concept-conditional-access-policy-common.md)
+
+[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Azure AD terms of use policies use the PDF format to present content. The PDF fi
Once you've completed your terms of use policy document, use the following procedure to add it.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select, **New terms**.
Once you've completed your terms of use policy document, use the following proce
| Expire starting on | Frequency | Result | | | | | | Today's date | Monthly | Starting today, users must accept the terms of use policy and then reaccept every month. |
- | Date in the future | Monthly | Starting today, users must accept the terms of use policy. When the future date occurs, consents will expire and then users must reaccept every month. |
+ | Date in the future | Monthly | Starting today, users must accept the terms of use policy. When the future date occurs, consents will expire, and then users must reaccept every month. |
For example, if you set the expire starting on date to **Jan 1** and frequency to **Monthly**, this is how expirations might occur for two users:
If you want to view more activity, Azure AD terms of use policies include audit
To get started with Azure AD audit logs, use the following procedure:
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select a terms of use policy. 1. Select **View audit logs**.
Users can review and see the terms of use policies that they've accepted by usin
You can edit some details of terms of use policies, but you can't modify an existing document. The following procedure describes how to edit the details.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit terms**.
You can edit some details of terms of use policies, but you can't modify an exis
## Update the version or pdf of an existing terms of use
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit terms**.
You can edit some details of terms of use policies, but you can't modify an exis
## View previous versions of a ToU
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy for which you want to view a version history. 1. Select **Languages and version history**
You can edit some details of terms of use policies, but you can't modify an exis
## See who has accepted each version
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. To see who has currently accepted the ToU, select the number under the **Accepted** column for the ToU you want. 1. By default, the next page will show you the current state of each user's acceptance to the ToU
-1. If you would like to see the previous consent events, you can select **All** from the **Current State** drop-down. Now you can see each users events in details about each version and what happened.
+1. If you would like to see the previous consent events, you can select **All** from the **Current State** drop-down. Now you can see each user's events, with details about each version and what happened.
1. Alternatively, you can select a specific version from the **Version** drop-down to see who has accepted that specific version. - ## Add a ToU language The following procedure describes how to add a ToU language.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit Terms**
If a user is using browser that isn't supported, they'll be asked to use a diffe
You can delete old terms of use policies using the following procedure.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to remove. 1. Select **Delete terms**.
active-directory Troubleshoot Policy Changes Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-policy-changes-audit-log.md
Find these options in the **Azure portal** > **Azure Active Directory**, **Diagn
## Use the audit log
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Audit logs**. 1. Select the **Date** range you want to query in. 1. Select **Activity** and choose one of the following
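The same events can be pulled programmatically. A minimal sketch using the Graph `directoryAudits` endpoint follows; the token is a placeholder and the activity name is an assumed example of the activities listed above.

```python
# Sketch: query Conditional Access policy updates from the audit log.
import requests

ACCESS_TOKEN = "<token with AuditLog.Read.All>"  # placeholder

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$filter": "activityDisplayName eq 'Update conditional access policy'"},
)
resp.raise_for_status()
for event in resp.json()["value"]:
    print(event["activityDateTime"], event["activityDisplayName"])
```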
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
This preview enables blocking service principals from outside of trusted public
Create a location-based Conditional Access policy that applies to service principals.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
Create a location based Conditional Access policy that applies to service princi
:::image type="content" source="media/workload-identity/conditional-access-workload-identity-risk-policy.png" alt-text="Creating a Conditional Access policy with a workload identity and risk as a condition." lightbox="media/workload-identity/conditional-access-workload-identity-risk-policy.png":::
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
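A minimal Graph sketch of a location-based block for service principals follows; the named location ID is a placeholder for the trusted location you exclude.

```python
# Sketch: block single-tenant service principals outside trusted locations.
policy_fragment = {
    "conditions": {
        "clientApplications": {
            "includeServicePrincipals": ["ServicePrincipalsInMyTenant"]
        },
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": ["<named-location-id>"],  # placeholder
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```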
active-directory Direct Federation Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation-adfs.md
Previously updated : 05/13/2022 Last updated : 10/17/2022
This article describes how to set up [SAML/WS-Fed IdP federation](direct-federat
Azure AD B2B can be configured to federate with IdPs that use the SAML protocol with specific requirements listed below. To illustrate the SAML configuration steps, this section shows how to set up AD FS for SAML 2.0.
-To set up federation, the following attributes must be received in the SAML 2.0 response from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually. Step 12 in [Create a test AD FS instance](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed) describes how to find the AD FS endpoints or how to generate your metadata URL, for example `https://fs.iga.azure-test.net/federationmetadata/2007-06/federationmetadata.xml`.
+To set up federation, the following attributes must be received in the SAML 2.0 response from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually. Step 12 in [Create a test AD FS instance](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed) describes how to find the AD FS endpoints or how to generate your metadata URL, for example `https://fs.iga.azure-test.net/federationmetadata/2007-06/federationmetadata.xml`.
|Attribute |Value | |||
An AD FS server must already be set up and functioning before you begin this pro
1. In the **Add a Claim Description** window, specify the following values: - **Display Name**: Persistent Identifier
- - **Claim identifier**: `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`
+ - **Claim identifier**: `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`
- Select the check box for **Publish this claim description in federation metadata as a claim type that this federation service can accept**. - Select the check box for **Publish this claim description in federation metadata as a claim type that this federation service can send**.
An AD FS server must already be set up and functioning before you begin this pro
9. In the **Identifiers** tab, enter ``https://login.microsoftonline.com/<tenant ID>/`` in the **Relying party identifier** text box using the tenant ID of the service partnerΓÇÖs Azure AD tenant. Select **Add**. > [!NOTE]
-> Be sure to include a slash (/) after the tenant ID. For example, https://login.microsoftonline.com/094a6247-27d4-489f-a23b-b9672900084d/.
+> Be sure to include a slash (/) after the tenant ID, for example: `https://login.microsoftonline.com/00000000-27d4-489f-a23b-00000000084d/`.
10. Select **OK**.
An AD FS server must already be set up and functioning before you begin this pro
- `https://login.microsoftonline.com/<tenant ID>/` > [!NOTE]
- > Be sure to include a slash (/) after the tenant ID, for example: https://login.microsoftonline.com/094a6247-27d4-489f-a23b-b9672900084d/.
+ > Be sure to include a slash (/) after the tenant ID, for example: `https://login.microsoftonline.com/00000000-27d4-489f-a23b-00000000084d/`.
11. Select **Next**. 12. In the **Choose Access Control Policy** page, select a policy, and then select **Next**.
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
# Leave an organization as an external user
-As an Azure Active Directory (Azure AD) [B2B collaboration](/articles/active-directory/external-identities/what-is-b2b.md) or [B2B direct connect](/articles/active-directory/external-identities/b2b-direct-connect-overview.md) user, you can leave an organization at any time if you no longer need to use apps from that organization, or maintain any association.
+As an Azure Active Directory (Azure AD) [B2B collaboration](what-is-b2b.md) or [B2B direct connect](b2b-direct-connect-overview.md) user, you can leave an organization at any time if you no longer need to use apps from that organization, or maintain any association.
You can usually leave an organization on your own without having to contact an administrator. However, in some cases this option won't be available, and you'll need to contact your tenant admin, who can delete your account in the external organization. This article is intended for administrators. If you're a user looking for information about how to manage and leave an organization, see the [Manage organizations](https://support.microsoft.com/account-billing/manage-organizations-for-a-work-or-school-account-in-the-my-account-portal-a9b65a70-fec5-4a1a-8e00-09f99ebdea17) article.
active-directory 10 Secure Local Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/10-secure-local-guest.md
See the following articles on securing external access to resources. We recommen
1. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) 1. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md) 1. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
-1. [Secure local guest accounts](10-secure-local-guest.md) (YouΓÇÖre here)
+1. [Convert local guest accounts to B2B](10-secure-local-guest.md) (You're here)
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
You can block external users from accessing specific sets of resources with Cond
To create a policy that blocks access for external users to a set of applications:
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example ExternalAccess_Block_FinanceApps.
-1. Under **Assignments**, select **Users or workload identities**..
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All guests and external users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md). 1. Select **Done**.
After confirming your settings using [report-only mode](../conditional-access/ho
There may be times you want to block external users except a specific group. For example, you may want to block all external users except those working for the finance team from the finance applications. To do this, [create a security group](active-directory-groups-create-azure-portal.md) to contain the external users who should access the finance applications:
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example ExternalAccess_Block_AllButFinance.
-1. Under **Assignments**, select **Users or workload identities**..
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All guests and external users**. 1. Under **Exclude**, select **Users and groups**, 1. Choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md).
active-directory Concept Secure Remote Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-secure-remote-workers.md
Last updated 04/27/2020
-+
The following table is intended to highlight the key actions for the following l
- Azure Active Directory Premium P1 (Azure AD P1) - Enterprise Mobility + Security (EMS E3)-- Microsoft 365 (M365 E3, A3, F1, F3)
+- Microsoft 365 (E3, A3, F1, F3)
| Recommended action | Detail | | | |
The following table is intended to highlight the key actions for the following l
- Azure Active Directory Premium P2 (Azure AD P2) - Enterprise Mobility + Security (EMS E5)-- Microsoft 365 (M365 E5, A5)
+- Microsoft 365 (E5, A5)
| Recommended action | Detail | | | |
active-directory Secure External Access Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md
See the following articles on securing external access to resources. We recommen
9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
-10. [Secure local guest accounts](10-secure-local-guest.md)
+10. [Convert local guest accounts to B2B](10-secure-local-guest.md)
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
If you are using the Azure portal to create a workflow, you can customize existi
1. On the **configure scope** page select the **Trigger type** and execution conditions to be used for this workflow. For more information on what can be configured, see: [Configure scope](understanding-lifecycle-workflows.md#configure-scope).
-1. Under rules, select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department.
+1. Under rules, select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters?toc=/azure/active-directory/governance/toc.json&bc=/azure/active-directory/governance/breadcrumb/toc.json).
:::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of Lifecycle Workflows template scope configuration options.":::
active-directory Entitlement Management Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-organization.md
To add an external Azure AD directory or domain as a connected organization, fol
1. Select the **Directory + domain** tab, and then select **Add directory + domain**.
- The **Select directories + domains** pane opens.
+ The **Select directories + domains** pane opens.
-1. In the search box, enter a domain name to search for the Azure AD directory or domain. Be sure to enter the entire domain name.
+1. In the search box, enter a domain name to search for the Azure AD directory or domain. You can also add domains that are not in Azure AD. Be sure to enter the entire domain name.
-1. Confirm that the organization name and authentication type are correct. User sign in, prior to being able to access the MyAccess portal, depends on the authentication type for their organization. If the authentication type for a connected organization is Azure AD, all users with an account in any verified domain of that Azure AD directory will sign into their directory, and then can request access to access packages that allow that connected organization. If the authentication type is One-time passcode, this allows users with email addresses from just that domain to visit the MyAccess portal. After they authenticate with the passcode, the user can make a request.
+1. Confirm that the organization names and authentication types are correct. How users sign in, before they can access the My Access portal, depends on the authentication type for their organization. If the authentication type for a connected organization is Azure AD, all users with an account in any verified domain of that Azure AD directory will sign in to their directory, and then can request access to access packages that allow that connected organization. If the authentication type is One-time passcode, users with email addresses from just that domain can visit the My Access portal. After they authenticate with the passcode, the user can make a request.
![The "Select directories + domains" pane](./media/entitlement-management-organization/organization-select-directories-domains.png) > [!NOTE] > Access from some domains could be blocked by the Azure AD business to business (B2B) allow or deny list. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
-1. Select **Add** to add the Azure AD directory or domain. Currently, you can add only one Azure AD directory or domain per connected organization.
+1. Select **Add** to add the Azure AD directory or domain. **You can add multiple Azure AD directories and domains**.
-1. After you've added the Azure AD directory or domain, select **Select**.
+1. After you've added the Azure AD directories or domains, select **Select**.
- The organization appears in the list.
+ The organizations appear in the list.
![The "Directory + domain" pane](./media/entitlement-management-organization/organization-directory-domain.png)
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
You can add extra expressions using **And/Or** to create complex conditionals, a
[![Extra expressions.](media/understanding-lifecycle-workflows/workflow-8.png)](media/understanding-lifecycle-workflows/workflow-8.png#lightbox)
+> [!NOTE]
+> For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta#supported-user-properties-and-query-parameters?toc=/azure/active-directory/governance/toc.json&bc=/azure/active-directory/governance/breadcrumb/toc.json).
+ For more information, see [Create a lifecycle workflow.](create-lifecycle-workflow.md)
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
If you want all the latest features and updates, check this page and install wha
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
+## 2.1.19.0
+
+### Release status:
+11/2/2022: Released for download
+
+### Functional changes
+
+ - We added a new attribute 'employeeLeaveDateTime' for syncing to Azure AD. To learn more about how to use this attribute to manage your users' lifecycles, see [this article](https://learn.microsoft.com/azure/active-directory/governance/how-to-lifecycle-workflow-sync-attributes).
+
+### Bug fixes
+
+ - We fixed a bug where Azure AD Connect password writeback stopped with error code "SSPR_0029 ERROR_ACCESS_DENIED".
+ ## 2.1.18.0 ### Release status:
active-directory Confluence App Proxy Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluence-app-proxy-tutorial.md
+
+ Title: 'Tutorial: App Proxy configuration for Azure AD SAML SSO for Confluence'
+description: Learn App Proxy configuration for Azure AD SAML SSO for Confluence.
++++++++ Last updated : 11/03/2022+++
+# Tutorial: App Proxy configuration for Azure AD SAML SSO for Confluence
+
+This article helps you configure Azure AD SAML SSO for your on-premises Confluence application by using Application Proxy.
+
+## Prerequisites
+
+To configure Azure AD integration with Confluence SAML SSO by Microsoft, you need the following items:
+
+- An Azure AD subscription.
+- Confluence server application installed on a Windows 64-bit server (on-premises or on cloud IaaS infrastructure).
+- Confluence server is HTTPS enabled.
+- The supported versions of the Confluence plugin are listed in the following section.
+- The Confluence server is reachable on the internet, particularly to the Azure AD sign-in page for authentication, and is able to receive the token from Azure AD.
+- Admin credentials are set up in Confluence.
+- WebSudo is disabled in Confluence.
+- Test user created in the Confluence server application.
+
+To get started, you need the following items:
+
+* Do not use your production environment, unless it is necessary.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Confluence SAML SSO by Microsoft single sign-on (SSO) enabled subscription.
+
+## Supported versions of Confluence
+
+As of now, following versions of Confluence are supported:
+
+- Confluence: 5.0 to 5.10
+- Confluence: 6.0.1 to 6.15.9
+- Confluence: 7.0.1 to 7.17.0
+
+> [!NOTE]
+> The Confluence plugin also works on Ubuntu version 16.04.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO for an on-premises Confluence setup using Application Proxy.
+1. Download and install the Azure AD Application Proxy connector.
+1. Add an Application Proxy app in Azure AD.
+1. Add a Confluence SAML SSO app in Azure AD.
+1. Configure SSO for the Confluence SAML SSO application in Azure AD.
+1. Create an Azure AD test user.
+1. Assign the test user to the Confluence Azure AD app.
+1. Configure SSO for the Confluence SAML SSO by Microsoft plugin in your Confluence server.
+1. Assign the test user for the Confluence SAML SSO by Microsoft plugin in your Confluence server.
+1. Test the SSO.
+
+## Download and Install the App Proxy Connector Service
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) as an application administrator of the directory that uses Application Proxy.
+2. Select **App proxy** from the Azure services section.
+3. Select **Download connector service**.
+
+ ![Screenshot for Download connector service.](./media/confluence-app-proxy-tutorial/download-connector-service.png)
+
+4. Accept the terms & conditions to download the connector. Once downloaded, install it on the system that hosts the Confluence application.
+
+## Add an On-premises Application in Azure AD
+
+To add an Application Proxy app, you need to create an enterprise application.
+
+1. Sign in as an administrator in the Azure portal.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. Choose **Add an on-premises application**.
+
+ ![Screenshot for Add an on-premises application.](./media/confluence-app-proxy-tutorial/add-on-premises-application.png)
+
+1. Type the name of the application and select the **Create** button at the bottom left.
+
+ ![Screenshot for on-premises application.](./media/confluence-app-proxy-tutorial/on-premises-application.png)
+
+ 1. **Internal URL** will be your Confluence application URL.
+ 2. **External URL** will be auto-generated based on the Name you choose.
+ 3. **Pre Authentication** can be left as **Azure Active Directory** by default.
+ 4. Choose the **Connector Group** that lists your connector agent as active.
+ 5. Leave the **Additional Settings** as default.
+
+1. Select **Save** from the top options to configure the application proxy.
++
+## Add a Confluence SAML SSO app in Azure AD
+
+Now that you've prepared your environment and installed a connector, you're ready to add the Confluence application to Azure AD.
+
+1. Sign in as an administrator in the Azure portal.
+2. In the left navigation panel, select Azure Active Directory.
+3. Select **Enterprise applications**, and then select **New application**.
+4. Select **Confluence SAML SSO by Microsoft** widget from the Azure AD Gallery.
++
+## Configure SSO for Confluence SAML SSO Application in Azure AD
+
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. Open the **Confluence SAML SSO by Microsoft** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot for Edit Basic SAML Configuration.](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, enter the **External URL** value for the following fields: **Identifier**, **Reply URL**, and **Sign on URL**.
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assigning the test user for the Confluence Azure AD App
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Confluence Azure AD App.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Confluence SAML SSO by Microsoft**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+1. Verify the App Proxy setup by checking whether the configured test user can sign in with SSO by using the external URL of the on-premises application.
+
+> [!NOTE]
+> Complete the setup of the Confluence SAML SSO by Microsoft application by following [this tutorial](./confluencemicrosoft-tutorial.md).
active-directory Confluencemicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Confluence SAML SSO by Microsoft single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> For information on Application Proxy configuration for Confluence, see [this tutorial](confluence-app-proxy-tutorial.md).
+ ## Supported versions of Confluence As of now, following versions of Confluence are supported:
active-directory Keepabl Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keepabl-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Keepabl for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Keepabl.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: 80b48f18-fbdd-4c35-8aa9-b5f7a8331044
+++
+ms.devlang: na
+ Last updated : 10/25/2022+++
+# Tutorial: Configure Keepabl for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Keepabl and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Keepabl](https://keepabl.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Keepabl.
+> * Remove users in Keepabl when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Keepabl.
+> * [Single sign-on](keepabl-tutorial.md) to Keepabl (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Keepabl with Admin permissions.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Keepabl](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Keepabl to support provisioning with Azure AD
+
+1. Sign in to the [Keepabl Admin Portal](https://app.keepabl.com) and then navigate to **Account Settings > Your Organization**, where you'll see the **Single Sign-On (SSO)** section.
+1. Click the **Edit Identity Provider** button. You'll be taken to the SSO Setup page where, once you select Microsoft Azure as your provider and scroll down, you'll see your **Tenant URL** and **Secret Token**. These values will be entered in the Provisioning tab of your Keepabl application in the Azure portal.
+
+ ![Screenshot of extraction of tenant url and token.](media/keepabl-provisioning-tutorial/token.png)
+
+>[!NOTE]
+>To set up the identity provider or SSO, see the [Keepabl admin guide to SSO](https://keepabl.com/admin-guide-to-sso-keepabl).
+
+## Step 3. Add Keepabl from the Azure AD application gallery
+
+Add Keepabl from the Azure AD application gallery to start managing provisioning to Keepabl. If you have previously set up Keepabl for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When the scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When the scope is set to all users, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
+## Step 5. Configure automatic user provisioning to Keepabl
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Keepabl based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for Keepabl in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Keepabl**.
+
+ ![Screenshot of the Keepabl link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Keepabl Tenant URL and corresponding Secret Token. Click **Test Connection** to ensure Azure AD can connect to Keepabl.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Keepabl**.
+
+1. Review the user attributes that are synchronized from Azure AD to Keepabl in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Keepabl for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Keepabl API supports filtering users based on that attribute (see the sketch after this procedure). Select the **Save** button to commit any changes.
+
   |Attribute|Type|Supported for filtering|Required by Keepabl|
   |---|---|---|---|
   |userName|String|&check;|&check;|
   |emails[type eq "work"].value|String|&check;|&check;|
   |active|Boolean|||
   |name.givenName|String|||
   |name.familyName|String|||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Keepabl, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to Keepabl by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
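To sanity-check that the Keepabl SCIM endpoint supports filtering on your chosen matching attribute, you can issue the same kind of filter query the provisioning service uses. The following C# sketch is illustrative only and isn't part of the official setup: the `/scim/v2` base path and the sample user name are assumptions, and the **Tenant URL** and **Secret Token** values come from Step 2.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

// Assumed values; substitute the Tenant URL and Secret Token from Step 2.
// The /scim/v2 base path is a hypothetical example, not a documented endpoint.
var tenantUrl = "https://app.keepabl.com/scim/v2";
var secretToken = "<secret-token>";

using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", secretToken);

// The provisioning service matches existing users with a SCIM filter
// on the matching attribute (userName by default).
var filter = Uri.EscapeDataString("userName eq \"alice@contoso.com\"");
var response = await client.GetAsync($"{tenantUrl}/Users?filter={filter}");

Console.WriteLine($"{(int)response.StatusCode}: " +
    await response.Content.ReadAsStringAsync());
```

A `200` response whose body contains the matched user indicates the attribute is filterable.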
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-migration.md
Several open-source tools can help with your migration, depending on your scenario:
* [Velero](https://velero.io/) (Requires Kubernetes 1.7+)
* [Azure Kube CLI extension](https://github.com/yaron2/azure-kube-cli)
-* [ReShifter](https://github.com/mhausenblas/reshifter)
+ In this article, we'll summarize migration details for:
In this article, we summarized migration details for:
> * Deployment of your cluster configuration
-[region-availability]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service
+[region-availability]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
Title: Connect to Azure Kubernetes Service (AKS) cluster nodes
description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks. Previously updated : 11/1/2022 Last updated : 11/3/2022
Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you might need to access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations. You can securely authenticate against AKS Linux and Windows nodes using SSH, and you can also [connect to Windows Server nodes using remote desktop protocol (RDP)][aks-windows-rdp]. For security reasons, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
-This article shows you how to create a connection to an AKS node and update the SSH key on an existing AKS cluster.
+This article shows you how to create a connection to an AKS node.
## Before you begin
When done, `exit` the SSH session, stop any port forwarding, and then `exit` the
kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx ```
-## Update SSH key on an existing AKS cluster (preview)
-
-### Prerequisites
-* Before you start, ensure the Azure CLI is installed and configured. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-* The aks-preview extension version 0.5.111 or later. To learn how to install an Azure extension, see [How to install extensions][how-to-install-azure-extensions].
-
-> [!NOTE]
-> Updating of the SSH key is supported on Azure virtual machine scale sets with AKS clusters.
-
-Use the [az aks update][az-aks-update] command to update the SSH key on the cluster. This operation will update the key on all node pools. You can either specify the key or a key file using the `--ssh-key-value` argument.
-
-```azurecli
-az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value <new SSH key value or SSH key file>
-```
-
-Examples:
-In the following example, you can specify the new SSH key value for the `--ssh-key-value` argument.
-
-```azurecli
-az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value 'ssh-rsa AAAAB3Nza-xxx'
-```
-
-In the following example, you specify a SSH key file.
-
-```azurecli
-az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value .ssh/id_rsa.pub
-```
-
-> [!IMPORTANT]
-> During this operation, all virtual machine scale set instances are upgraded and re-imaged to use the new SSH key.
- ## Next steps If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
aks Open Service Mesh Uninstall Add On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-uninstall-add-on.md
After the OSM add-on is disabled, use `osm uninstall cluster-wide-resources` to
osm uninstall cluster-wide-resources ```
+> [!NOTE]
+> For version 1.1, the command is `osm uninstall mesh --delete-cluster-wide-resources`.
+ > [!IMPORTANT] > You must remove these additional resources after you disable the OSM add-on. Leaving these resources on your cluster may cause issues if you enable the OSM add-on again in the future.
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
To get started, create an AKS cluster with a single node pool. The following exa
# Create a resource group in East US az group create --name myResourceGroup --location eastus
-# Create a basic single-node AKS cluster
+# Create a basic single-node pool AKS cluster
az aks create \ --resource-group myResourceGroup \ --name myAKSCluster \
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
Last updated 03/25/2021
# Preview - Secure your cluster using pod security policies in Azure Kubernetes Service (AKS)
-[!Important]
-The feature described in this document, pod security policy (preview), will begin deprecation with Kubernetes version 1.21, with its removal in version 1.25. AKS will mark Pod Security Policy as "Deprecated" in the AKS API on 04-01-2023. You can now Migrate Pod Security Policy to Pod Security Admission Controller ahead of the deprecation.
+> [!Important]
+> The feature described in this article, pod security policy (preview), will be deprecated starting with Kubernetes version 1.21, and it will be removed in version 1.25. AKS will mark the pod security policy as Deprecated with the AKS API on 04-01-2023. You can migrate pod security policy to pod security admission controller before the deprecation deadline.
After pod security policy (preview) is deprecated, you must migrate to the Pod Security Admission controller or disable the feature on any existing clusters that use it in order to perform future cluster upgrades and stay within Azure support.
app-service Configure Ssl Certificate In Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate-in-code.md
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
To make all your certificates accessible, set the value to `*`.
+> [!NOTE]
+> If you're using `*` for this app setting, you'll need to restart your web app after adding a new certificate to ensure that the new certificate becomes accessible to your app.
+ ## Load certificate in Windows apps The `WEBSITE_LOAD_CERTIFICATES` app setting makes the specified certificates accessible to your Windows hosted app in the Windows certificate store, in [Current User\My](/windows-hardware/drivers/install/local-machine-and-current-user-certificate-stores).
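As an illustration, here's a minimal C# sketch of reading one of these certificates from the Current User\My store; the thumbprint placeholder is an assumption to replace with your certificate's thumbprint.

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

using var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);

// Find the certificate that WEBSITE_LOAD_CERTIFICATES made available.
// "<certificate-thumbprint>" is a placeholder for your certificate's thumbprint.
var certs = store.Certificates.Find(
    X509FindType.FindByThumbprint, "<certificate-thumbprint>", validOnly: false);

if (certs.Count > 0)
{
    Console.WriteLine($"Loaded certificate: {certs[0].Subject}");
}
```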
To see how to load a TLS/SSL certificate from a file in Node.js, PHP, Python, Ja
* [Enforce HTTPS](configure-ssl-bindings.md#enforce-https) * [Enforce TLS 1.1/1.2](configure-ssl-bindings.md#enforce-tls-versions) * [FAQ : App Service Certificates](./faq-configuration-and-management.yml)
-* [Environment variables and app settings reference](reference-app-settings.md)
+* [Environment variables and app settings reference](reference-app-settings.md)
azure-app-configuration Quickstart Feature Flag Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md
Title: Quickstart for adding feature flags to ASP.NET Core description: Add feature flags to ASP.NET Core apps and manage them using Azure App Configuration-+ ms.devlang: csharp Previously updated : 09/28/2020- Last updated : 10/28/2022+ #Customer intent: As an ASP.NET Core developer, I want to use feature flags to control feature availability quickly and confidently. # Quickstart: Add feature flags to an ASP.NET Core app
-In this quickstart, you create an end-to-end implementation of feature management in an ASP.NET Core app using Azure App Configuration. You'll use the App Configuration service to centrally store all your feature flags and control their states.
+In this quickstart, you'll create a feature flag in Azure App Configuration and use it to dynamically control the availability of a new web page in an ASP.NET Core app without restarting or redeploying it.
-The .NET Core Feature Management libraries extend the framework with comprehensive feature flag support. These libraries are built on top of the .NET Core configuration system. They seamlessly integrate with App Configuration through its .NET Core configuration provider.
+The feature management support extends the dynamic configuration feature in App Configuration. The example in this quickstart builds on the ASP.NET Core app introduced in the dynamic configuration tutorial. Before you continue, finish the [quickstart](./quickstart-aspnet-core-app.md) and the [tutorial](./enable-dynamic-configuration-aspnet-core.md) to create an ASP.NET Core app with dynamic configuration.
## Prerequisites
-* Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
-* [.NET Core SDK](https://dotnet.microsoft.com/download)
+Follow these documents to create an ASP.NET Core app with dynamic configuration:
+* [Quickstart: Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md)
+* [Tutorial: Use dynamic configuration in an ASP.NET Core app](./enable-dynamic-configuration-aspnet-core.md)
-## Create an App Configuration store
+## Create a feature flag
+Navigate to the Azure App Configuration store that you created previously in the Azure portal. Under the **Operations** section, select **Feature manager** > **Create** to add a feature flag called *Beta*.
-8. Select **Operations** > **Feature manager** > **Add** to add a feature flag called *Beta*.
+> [!div class="mx-imgBorder"]
+> ![Enable feature flag named Beta](./media/add-beta-feature-flag.png)
- > [!div class="mx-imgBorder"]
- > ![Enable feature flag named Beta](media/add-beta-feature-flag.png)
+Leave the rest of the fields empty for now. Select **Apply** to save the new feature flag. To learn more, check out [Manage feature flags in Azure App Configuration](./manage-feature-flags.md).
- Leave **Label** empty for now. Select **Apply** to save the new feature flag.
+## Use a feature flag
-## Create an ASP.NET Core web app
-
-Use the [.NET Core command-line interface (CLI)](/dotnet/core/tools) to create a new ASP.NET Core MVC project. The advantage of using the .NET Core CLI instead of Visual Studio is that the .NET Core CLI is available across the Windows, macOS, and Linux platforms.
-
-Run the following command to create an ASP.NET Core MVC project in a new *TestFeatureFlags* folder:
-
-```dotnetcli
-dotnet new mvc --no-https --output TestFeatureFlags
-```
--
-## Connect to an App Configuration store
-
-1. Install the [Microsoft.Azure.AppConfiguration.AspNetCore](https://www.nuget.org/packages/Microsoft.Azure.AppConfiguration.AspNetCore) and [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) NuGet packages by running the following commands:
-
- ```dotnetcli
- dotnet add package Microsoft.Azure.AppConfiguration.AspNetCore
- ```
+1. Navigate into the project's directory, and run the following command to add a reference to the [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) NuGet package.
```dotnetcli dotnet add package Microsoft.FeatureManagement.AspNetCore ```
-1. Run the following command in the same directory as the *.csproj* file. The command uses Secret Manager to store a secret named `ConnectionStrings:AppConfig`, which stores the connection string for your App Configuration store. Replace the `<your_connection_string>` placeholder with your App Configuration store's connection string. You can find the connection string under **Access Keys** in the Azure portal.
-
- ```dotnetcli
- dotnet user-secrets set ConnectionStrings:AppConfig "<your_connection_string>"
- ```
-
- Secret Manager is used only to test the web app locally. When the app is deployed to [Azure App Service](https://azure.microsoft.com/services/app-service/web), use the **Connection Strings** application setting in App Service instead of Secret Manager to store the connection string.
-
- Access this secret using the .NET Core Configuration API. A colon (`:`) works in the configuration name with the Configuration API on all supported platforms. For more information, see [Configuration keys and values](/aspnet/core/fundamentals/configuration#configuration-keys-and-values).
-
-1. In *Program.cs*, update the `CreateWebHostBuilder` method to use App Configuration by calling the `AddAzureAppConfiguration` method.
-
- > [!IMPORTANT]
- > `CreateHostBuilder` replaces `CreateWebHostBuilder` in .NET Core 3.x. Select the correct syntax based on your environment.
-
- #### [.NET 5.x](#tab/core5x)
+1. Open *Program.cs*, and add a call to the `UseFeatureFlags` method inside the `AddAzureAppConfiguration` call.
+ #### [.NET 6.x](#tab/core6x)
```csharp
- public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .ConfigureWebHostDefaults(webBuilder =>
- webBuilder.ConfigureAppConfiguration(config =>
- {
- var settings = config.Build();
- var connection = settings.GetConnectionString("AppConfig");
- config.AddAzureAppConfiguration(options =>
- options.Connect(connection).UseFeatureFlags());
- }).UseStartup<Startup>());
+ // Load configuration from Azure App Configuration
+ builder.Configuration.AddAzureAppConfiguration(options =>
+ {
+ options.Connect(connectionString)
+ // Load all keys that start with `TestApp:` and have no label
+ .Select("TestApp:*", LabelFilter.Null)
+ // Configure to reload configuration if the registered sentinel key is modified
+ .ConfigureRefresh(refreshOptions =>
+ refreshOptions.Register("TestApp:Settings:Sentinel", refreshAll: true));
+
+ // Load all feature flags with no label
+ options.UseFeatureFlags();
+ });
``` #### [.NET Core 3.x](#tab/core3x)- ```csharp public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .ConfigureWebHostDefaults(webBuilder =>
+ {
webBuilder.ConfigureAppConfiguration(config => {
- var settings = config.Build();
- var connection = settings.GetConnectionString("AppConfig");
+ //Retrieve the Connection String from the secrets manager
+ IConfiguration settings = config.Build();
+ string connectionString = settings.GetConnectionString("AppConfig");
+
+ // Load configuration from Azure App Configuration
config.AddAzureAppConfiguration(options =>
- options.Connect(connection).UseFeatureFlags());
- }).UseStartup<Startup>());
+ {
+ options.Connect(connectionString)
+ // Load all keys that start with `TestApp:` and have no label
+ .Select("TestApp:*", LabelFilter.Null)
+ // Configure to reload configuration if the registered sentinel key is modified
+ .ConfigureRefresh(refreshOptions =>
+ refreshOptions.Register("TestApp:Settings:Sentinel", refreshAll: true));
+
+ // Load all feature flags with no label
+ options.UseFeatureFlags();
+ });
+ });
+
+ webBuilder.UseStartup<Startup>();
+ });
```
+
- #### [.NET Core 2.x](#tab/core2x)
+ > [!TIP]
+ > When no parameter is passed to the `UseFeatureFlags` method, it loads *all* feature flags with *no label* in your App Configuration store. The default refresh expiration of feature flags is 30 seconds. You can customize this behavior via the `FeatureFlagOptions` parameter. For example, the following code snippet loads only feature flags that start with *TestApp:* in their *key name* and have the label *dev*. The code also changes the refresh expiration time to 5 minutes. Note that this refresh expiration time is separate from that for regular key-values.
+ >
+ > ```csharp
+ > options.UseFeatureFlags(featureFlagOptions =>
+ > {
+ > featureFlagOptions.Select("TestApp:*", "dev");
+ > featureFlagOptions.CacheExpirationInterval = TimeSpan.FromMinutes(5);
+ > });
+ > ```
- ```csharp
- public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
- WebHost.CreateDefaultBuilder(args)
- .ConfigureAppConfiguration(config =>
- {
- var settings = config.Build();
- var connection = settings.GetConnectionString("AppConfig");
- config.AddAzureAppConfiguration(options =>
- options.Connect(connection).UseFeatureFlags());
- }).UseStartup<Startup>();
- ```
+1. Add feature management to the service collection of your app by calling `AddFeatureManagement`.
-
+ #### [.NET 6.x](#tab/core6x)
+ Update *Program.cs* with the following code.
- With the preceding change, the [configuration provider for App Configuration](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration) has been registered with the .NET Core Configuration API.
+ ```csharp
+ // Existing code in Program.cs
+ // ... ...
-1. In *Startup.cs*, add a reference to the .NET Core feature
+ builder.Services.AddRazorPages();
- ```csharp
- using Microsoft.FeatureManagement;
- ```
+ // Add Azure App Configuration middleware to the container of services.
+ builder.Services.AddAzureAppConfiguration();
-1. Update the `Startup.ConfigureServices` method to add feature flag support by calling the `AddFeatureManagement` method. Optionally, you can include any filter to be used with feature flags by calling `AddFeatureFilter<FilterType>()`:
+ // Add feature management to the container of services.
+ builder.Services.AddFeatureManagement();
- #### [.NET 5.x](#tab/core5x)
+ // Bind configuration "TestApp:Settings" section to the Settings object
+ builder.Services.Configure<Settings>(builder.Configuration.GetSection("TestApp:Settings"));
- ```csharp
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddControllersWithViews();
- services.AddFeatureManagement();
- }
+ var app = builder.Build();
+
+ // The rest of existing code in program.cs
+ // ... ...
```+ #### [.NET Core 3.x](#tab/core3x)
-
- Add the following code:
- ```csharp
+ Open *Startup.cs*, and update the `ConfigureServices` method.
+
+ ```csharp
public void ConfigureServices(IServiceCollection services) {
- services.AddControllersWithViews();
+ services.AddRazorPages();
+
+ // Add Azure App Configuration middleware to the container of services.
services.AddAzureAppConfiguration();
- services.AddFeatureManagement();
- }
- ```
- And then add below:
+ // Add feature management to the container of services.
+ services.AddFeatureManagement();
- ```csharp
- public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
- {
- // ...
- app.UseAzureAppConfiguration();
- }
+ // Bind configuration "TestApp:Settings" section to the Settings object
+ services.Configure<Settings>(Configuration.GetSection("TestApp:Settings"));
+ }
```
-
- #### [.NET Core 2.x](#tab/core2x)
+
- ```csharp
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddMvc()
- .SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
- services.AddFeatureManagement();
- }
- ```
+ Add `using Microsoft.FeatureManagement;` at the top of the file if it's not present.
-
+1. Add a new empty Razor page named **Beta** under the *Pages* directory. It includes two files: *Beta.cshtml* and *Beta.cshtml.cs*.
-1. Add a *MyFeatureFlags.cs* file to the root project directory with the following code:
+ Open *Beta.cshtml*, and update it with the following markup:
- ```csharp
- namespace TestFeatureFlags
- {
- public enum MyFeatureFlags
- {
- Beta
- }
+ ```cshtml
+ @page
+ @model TestAppConfig.Pages.BetaModel
+ @{
+ ViewData["Title"] = "Beta Page";
}+
+ <h1>This is the beta website.</h1>
```
-1. Add a *BetaController.cs* file to the *Controllers* directory with the following code:
+ Open *Beta.cshtml.cs*, and add the `FeatureGate` attribute to the `BetaModel` class. The `FeatureGate` attribute ensures the *Beta* page is accessible only when the *Beta* feature flag is enabled. If the *Beta* feature flag isn't enabled, the page returns 404 Not Found.
```csharp
- using Microsoft.AspNetCore.Mvc;
- using Microsoft.FeatureManagement;
+ using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.FeatureManagement.Mvc;
- namespace TestFeatureFlags.Controllers
+ namespace TestAppConfig.Pages
{
- public class BetaController: Controller
+ [FeatureGate("Beta")]
+ public class BetaModel : PageModel
{
- private readonly IFeatureManager _featureManager;
-
- public BetaController(IFeatureManagerSnapshot featureManager) =>
- _featureManager = featureManager;
-
- [FeatureGate(MyFeatureFlags.Beta)]
- public IActionResult Index() => View();
+ public void OnGet()
+ {
+ }
}
- }
+ }
```
-1. In *Views/_ViewImports.cshtml*, register the feature manager Tag Helper using an `@addTagHelper` directive:
+1. Open *Pages/_ViewImports.cshtml*, and register the feature manager Tag Helper using an `@addTagHelper` directive:
```cshtml @addTagHelper *, Microsoft.FeatureManagement.AspNetCore
dotnet new mvc --no-https --output TestFeatureFlags
The preceding code allows the `<feature>` Tag Helper to be used in the project's *.cshtml* files.
-1. Open *_Layout.cshtml* in the *Views*\\*Shared* directory. Locate the `<nav>` bar code under `<body>` > `<header>`. Insert a new `<feature>` tag in between the *Home* and *Privacy* navbar items, as shown in the highlighted lines below.
+1. Open *_Layout.cshtml* in the *Pages*\\*Shared* directory. Insert a new `<feature>` tag in between the *Home* and *Privacy* navbar items, as shown in the highlighted lines below.
- :::code language="html" source="../../includes/azure-app-configuration-navbar.md" range="15-38" highlight="14-18":::
+ :::code language="html" source="../../includes/azure-app-configuration-navbar.md" range="15-38" highlight="13-17":::
-1. Create a *Views/Beta* directory and an *Index.cshtml* file containing the following markup:
-
- ```cshtml
- @{
- ViewData["Title"] = "Beta Home Page";
- }
-
- <h1>This is the beta website.</h1>
- ```
+ The `<feature>` tag ensures the *Beta* menu item is shown only when the *Beta* feature flag is enabled.
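Beyond the `FeatureGate` attribute and the `<feature>` Tag Helper, you can also evaluate a flag imperatively by injecting `IFeatureManager`. A minimal sketch, where the `GreetingService` class is purely illustrative:

```csharp
using System.Threading.Tasks;
using Microsoft.FeatureManagement;

// Illustrative service; inject IFeatureManager wherever DI is available.
public class GreetingService
{
    private readonly IFeatureManager _featureManager;

    public GreetingService(IFeatureManager featureManager) =>
        _featureManager = featureManager;

    // IsEnabledAsync evaluates the flag state loaded from App Configuration.
    public async Task<string> GetGreetingAsync() =>
        await _featureManager.IsEnabledAsync("Beta")
            ? "Welcome to the beta!"
            : "Welcome!";
}
```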
## Build and run the app locally
dotnet new mvc --no-https --output TestFeatureFlags
```dotnetcli dotnet run ```
+1. Open a browser window, and go to the URL shown in the `dotnet run` output. Your browser should display a page similar to the image below.
-1. Open a browser window, and go to `http://localhost:5000`, which is the default URL for the web app hosted locally. If you're working in the Azure Cloud Shell, select the **Web Preview** button followed by **Configure**. When prompted, select port 5000.
-
- ![Locate the Web Preview button](./media/quickstarts/cloud-shell-web-preview.png)
+ ![Feature flag before enabled](./media/quickstarts/aspnet-core-feature-flag-local-before.png)
- Your browser should display a page similar to the image below.
- :::image type="content" source="media/quickstarts/aspnet-core-feature-flag-local-before.png" alt-text="Local quickstart app before change" border="true":::
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store that you created previously.
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store instance that you created in the quickstart.
+1. Select **Feature manager** and locate the **Beta** feature flag. Enable the flag by selecting the checkbox under **Enabled**.
-1. Select **Feature manager**.
+1. Refresh the browser a few times. When the cache expires after 30 seconds, the page shows the updated content.
-1. Enable the *Beta* flag by selecting the checkbox under **Enabled**.
+ ![Feature flag after enabled](./media/quickstarts/aspnet-core-feature-flag-local-after.png)
-1. Return to the command shell. Cancel the running `dotnet` process by pressing <kbd>Ctrl+C</kbd>. Restart your app using `dotnet run`.
+1. Select the *Beta* menu. It will bring you to the beta website that you enabled dynamically.
-1. Refresh the browser page to see the new configuration settings.
-
- :::image type="content" source="media/quickstarts/aspnet-core-feature-flag-local-after.png" alt-text="Local quickstart app after change" border="true":::
+ ![Feature flag beta page](./media/quickstarts/aspnet-core-feature-flag-local-beta.png)
## Clean up resources
dotnet new mvc --no-https --output TestFeatureFlags
## Next steps
-In this quickstart, you created a new App Configuration store and used it to manage features in an ASP.NET Core web app via the [Feature Management libraries](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration).
+In this quickstart, you added feature management capability to an ASP.NET Core app on top of dynamic configuration. The [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) library offers rich integration for ASP.NET Core apps, including feature management in MVC controller actions, razor pages, views, routes, and middleware. For more information, continue to the following tutorial.
+
+> [!div class="nextstepaction"]
+> [Use feature flags in ASP.NET Core apps](./use-feature-flags-dotnet-core.md)
+
+While a feature flag allows you to activate or deactivate functionality in your app, you may want to customize a feature flag based on your app's logic. Feature filters allow you to enable a feature flag conditionally. For more information, continue to the following tutorial.
+
+> [!div class="nextstepaction"]
+> [Use feature filters for conditional feature flags](./howto-feature-filters-aspnet-core.md)
+
+Azure App Configuration offers built-in feature filters that enable you to activate a feature flag only during a specific period or to a particular targeted audience of your app. For more information, continue to the following tutorial.
+
+> [!div class="nextstepaction"]
+> [Enable features for targeted audiences](./howto-targetingfilter-aspnet-core.md)
+
+To enable feature management capability for other types of apps, continue to the following tutorials.
+
+> [!div class="nextstepaction"]
+> [Use feature flags in .NET apps](./quickstart-feature-flag-dotnet.md)
+
+> [!div class="nextstepaction"]
+> [Use feature flags in Azure Functions](./quickstart-feature-flag-azure-functions-csharp.md)
+
+To learn more about managing feature flags in Azure App Configuration, continue to the following tutorial.
-* Learn more about [feature management](./concept-feature-management.md).
-* [Manage feature flags](./manage-feature-flags.md).
-* [Use feature flags in an ASP.NET Core app](./use-feature-flags-dotnet-core.md).
-* [Use dynamic configuration in an ASP.NET Core app](./enable-dynamic-configuration-aspnet-core.md)
+> [!div class="nextstepaction"]
+> [Manage feature flags in Azure App Configuration](./manage-feature-flags.md)
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/privacy-data-collection-and-reporting.md
Last updated 07/30/2021
-# Azure Arc data services data collection and reporting
+# Azure Arc-enabled data services data collection and reporting
This article describes the data that Azure Arc-enabled data services transmits to Microsoft.
+Azure Arc-enabled data services doesn't store any customer data.
## Related products
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge (preview) overview description: Learn how to use Azure Arc resource bridge (preview) to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 10/27/2022 Last updated : 11/03/2022
Azure Arc resource bridge currently supports the following Azure regions:
* East US * West Europe
+* UK South
+* Canada Central
+* Australia East
+* Southeast Asia
### Regional resiliency
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger
# [Isolated process](#tab/isolated-process) # [C# Script](#tab/csharp-script)
azure-functions Functions Identity Based Connections Tutorial 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-based-connections-tutorial-2.md
You've granted your function app access to the service bus namespace using manag
| Name | Value | Description |
| --- | --- | --- |
- | **ServiceBusConnection__fullyQualifiedNamespace** | <SERVICE_BUS_NAMESPACE>.servicebus.windows.net | This setting connections your function app to the Service Bus use identity-based connections instead of secrets. |
+ | **ServiceBusConnection__fullyQualifiedNamespace** | <SERVICE_BUS_NAMESPACE>.servicebus.windows.net | This setting connects your function app to the Service Bus using an identity-based connection instead of secrets. |
1. After you create the two settings, select **Save** > **Confirm**.
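With the setting saved, any trigger that references the `ServiceBusConnection` prefix picks up the identity-based connection automatically. The following is a minimal C# sketch; the queue and function names are hypothetical:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderProcessor
{
    // "ServiceBusConnection" matches the prefix of the
    // ServiceBusConnection__fullyQualifiedNamespace setting, so the trigger
    // authenticates with the app's managed identity instead of a secret.
    [FunctionName("OrderProcessor")]
    public static void Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Received message: {message}", message);
    }
}
```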
azure-functions Python Scale Performance Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-scale-performance-reference.md
You can set the value of maximum workers allowed for running sync functions usin
For CPU-bound apps, you should keep the setting to a low number, starting from 1 and increasing as you experiment with your workload. This suggestion is to reduce the time spent on context switches and allow CPU-bound tasks to finish.
-For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. the recommendation is to start with the Python default - the number of cores + 4 and then tweak based on the throughput values you're seeing.
+For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. The recommendation is to start with the Python default (the number of cores + 4) and then tweak based on the throughput values you're seeing.
For mixed-workload apps, you should balance both the `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize throughput. To understand what your function apps spend the most time on, we recommend profiling them and setting the values according to the behavior they present. Also refer to this [section](#use-multiple-language-worker-processes) to learn about the FUNCTIONS_WORKER_PROCESS_COUNT application setting.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
Title: Azure and other Microsoft cloud services compliance scope description: This article tracks FedRAMP and DoD compliance scope for Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services across Azure, Azure Government, and Azure Government Secret cloud environments.+ + recommendations: false Previously updated : 09/29/2022 Last updated : 11/03/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last updated: September 2022*
+*Last updated: November 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; | | [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | | [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; |
-| [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; |
+| [Azure Automanage Guest Configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | | [Azure Red Hat OpenShift](../../openshift/index.yml) | &#x2705; | &#x2705; | | [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last updated: September 2022*
+*Last updated: November 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Application Gateway](../../application-gateway/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Automation](../../automation/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Active Directory (Free and Basic)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Active Directory (Premium P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Active Directory (Premium P1 + P2, specifically Privileged Identity Management and Access Reviews)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Active Directory Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md) and [Log Analytics](../../azure-monitor/logs/data-platform-logs.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Automanage Machine Configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Stream](/stream/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Migrate](../../migrate/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Network Watcher](../../network-watcher/index.yml) (incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Notification Hubs](../../notification-hubs/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Notification Hubs](../../notification-hubs/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Peering Service](../../peering-service/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Planned Maintenance for VMs](../../virtual-machines/maintenance-and-updates.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Power Apps](/powerapps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Power Apps](/powerapps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
-| [Power Automate](/power-automate/) (formerly Microsoft Flow) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Power Automate](/power-automate/) (formerly Microsoft Flow) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Power BI](/power-bi/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 10/17/2022 Last updated : 11/3/2022
Azure Monitor Agent (AMA) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor for use by features, insights, and other services, such as [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). Azure Monitor Agent replaces all of Azure Monitor's legacy monitoring agents. This article provides an overview of Azure Monitor Agent's capabilities and supported use cases.
-Here's a short **introduction to Azure Monitor video**, which includes a quick demo of how to set up the agent from the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
+Here's a short **introduction to Azure Monitor Agent** video, which includes a quick demo of how to set up the agent from the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
## Consolidating legacy agents
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
|:|:|:| | Virtual machines, scale sets | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent by using Azure extension framework. | | On-premises servers (Azure Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing the [Azure Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent by using Azure extension framework, provided for on-premises by first installing [Azure Arc agent](../../azure-arc/servers/deployment-options.md). |
- | Windows 10, 11 desktops, workstations | [Client installer (Public preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. |
- | Windows 10, 11 laptops | [Client installer (Public preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. The installer works on laptops, but the agent *isn't optimized yet* for battery or network consumption. |
+ | Windows 10, 11 desktops, workstations | [Client installer](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. |
+ | Windows 10, 11 laptops | [Client installer](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. The installer works on laptops, but the agent *isn't optimized yet* for battery or network consumption. |
1. Define a data collection rule and associate the resource to the rule.
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
| Performance | Azure Monitor Metrics (Public preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads | | Windows event logs (including sysmon events) | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system | | Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
- | Text logs | Log Analytics workspace - custom table | Events sent to log file on agent machine |
<sup>1</sup> On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.<br> <sup>2</sup> Azure Monitor Linux Agent versions 1.15.2 and higher support syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).
In addition to the generally available data collection listed above, Azure Monit
## Supported regions
-Azure Monitor Agent is available in all public regions and Azure Government clouds. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
+Azure Monitor Agent is available in all public regions and Azure Government clouds for generally available features. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
## Costs There's no cost for the Azure Monitor Agent, but you might incur charges for the data ingested. For information on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
description: Define network settings and enable network isolation for Azure Moni
Previously updated : 10/14/2022 Last updated : 11/01/2022
The Azure Monitor Agent extensions for Windows and Linux can communicate either
# [Windows VM](#tab/PowerShellWindows) ```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = "true"}}
-$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
-
-Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
+Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <type-handler-version> -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
``` # [Linux VM](#tab/PowerShellLinux) ```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = "true"}}
-$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
-
-Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
+Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <type-handler-version> -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
``` # [Windows Arc-enabled server](#tab/PowerShellWindowsArc) ```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = "true"}}
-$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
+$settings = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = "true"}}
+$protectedSettings = @{"proxy" = @{username = "[username]"; password = "[password]"}}
-New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
+New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settings -ProtectedSetting $protectedSettings
``` # [Linux Arc-enabled server](#tab/PowerShellLinuxArc) ```powershell
-$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = "true"}}
-$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
+$settings = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = "true"}}
+$protectedSettings = @{"proxy" = @{username = "[username]"; password = "[password]"}}
-New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
+New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settings -ProtectedSetting $protectedSettings
```
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
This article describes the kinds of Azure Monitor alerts you can create, and helps you understand when to use each type of alert.
-There are four types of alerts:
+There are five types of alerts:
- [Metric alerts](#metric-alerts)
- [Prometheus alerts](#prometheus-alerts-preview)
- [Log alerts](#log-alerts)
azure-monitor Prometheus Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/prometheus-alerts.md
View fired and resolved Prometheus alerts in the Azure portal with other alert t
## Next steps -- [Create a Prometheus rule groups](../essentials/prometheus-rule-groups.md).
+- [Create a Prometheus rule group](../essentials/prometheus-rule-groups.md).
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
You may want to collect metrics beyond what is collected by [instrumentation lib
The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you'll need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
-The following table shows the recommended [aggregation types](/essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
+The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
| OpenTelemetry Instrument | Azure Monitor Aggregation Type |
|---|---|
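As an illustration, here's a minimal C# sketch that emits two of these instruments with the `System.Diagnostics.Metrics` API; the meter and instrument names are made up. A counter maps to the Sum aggregation in Metrics Explorer, while a histogram maps to Average, Min, or Max.

```csharp
using System.Diagnostics.Metrics;

// Illustrative meter; register its name with your OpenTelemetry MeterProvider.
var meter = new Meter("MyApp.Checkout");

// Counter instrument: pick the Sum aggregation in Metrics Explorer.
var ordersPlaced = meter.CreateCounter<long>("orders_placed");

// Histogram instrument: pick Average, Min, or Max in Metrics Explorer.
var orderValue = meter.CreateHistogram<double>("order_value_usd");

ordersPlaced.Add(1);
orderValue.Record(125.50);
```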
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
ms.reviewer: heya
# Statsbeat in Azure Application Insights
-Statsbeat collects essential and non-essential [custom metric](../essentials/metrics-custom-overview.md) about Application Insights SDKs and auto-instrumentation. Statsbeat serves three benefits for Azure Monitor Application insights customers:
+Statsbeat collects essential and non-essential [custom metrics](../essentials/metrics-custom-overview.md) about Application Insights SDKs and auto-instrumentation. Statsbeat serves three benefits for Azure Monitor Application Insights customers:
- Service Health and Reliability (outside-in monitoring of connectivity to ingestion endpoint) - Support Diagnostics (self-help insights and CSS insights) - Product Improvement (insights for design optimizations)
azure-monitor Usage Funnels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-funnels.md
To create a funnel:
1. To apply filters to the step, select **Add filters**, which will appear after you choose an item for the top step.
1. Then choose your *Second step* and so on.
+
+> [!NOTE]
+> Funnels are limited to a maximum of six steps.
+ 1. Select the **View** tab to see your funnel results :::image type="content" source="./media/usage-funnels/funnel-2.png" alt-text="Screenshot of the funnel tab on view tab showing results from the top and second step." lightbox="./media/usage-funnels/funnel-2.png":::
azure-monitor Autoscale Understanding Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-understanding-settings.md
Title: Understanding autoscale settings in Azure Monitor description: "A detailed breakdown of autoscale settings and how they work. Applies to Virtual Machines, Cloud Services, Web Apps"++ Previously updated : 12/18/2017 Last updated : 11/02/2022 -+++
-# Understand Autoscale settings
-Autoscale settings help ensure that you have the right amount of resources running to handle the fluctuating load of your application. You can configure Autoscale settings to be triggered based on metrics that indicate load or performance, or triggered at a scheduled date and time. This article takes a detailed look at the anatomy of an Autoscale setting. The article begins with the schema and properties of a setting, and then walks through the different profile types that can be configured. Finally, the article discusses how the Autoscale feature in Azure evaluates which profile to execute at any given time.
+# Understand autoscale settings
+
+Autoscale settings help ensure that you have the right amount of resources running to handle the fluctuating load of your application. You can configure autoscale settings to be triggered based on metrics that indicate load or performance, or triggered at a scheduled date and time.
+
+This article gives a detailed explanation of the autoscale settings.
## Autoscale setting schema
-To illustrate the Autoscale setting schema, the following Autoscale setting is used. It is important to note that this Autoscale setting has:
-- One profile. -- Two metric rules in this profile: one for scale out, and one for scale in.
- - The scale-out rule is triggered when the virtual machine scale set's average percentage CPU metric is greater than 85 percent for the past 10 minutes.
- - The scale-in rule is triggered when the virtual machine scale set's average is less than 60 percent for the past minute.
+
+The following example shows an autoscale setting. This autoscale setting has the following attributes:
+- A single default profile.
+- Two metric rules in this profile: one for scale-out, and one for scale-in.
+ - The scale-out rule is triggered when the Virtual Machine Scale Set's average percentage CPU metric is greater than 85 percent for the past 10 minutes.
+ - The scale-in rule is triggered when the Virtual Machine Scale Set's average percentage CPU metric is less than 60 percent for the past minute.
> [!NOTE]
> A setting can have multiple profiles. To learn more, see the [profiles](#autoscale-profiles) section. A profile can also have multiple scale-out rules and scale-in rules defined. To see how they are evaluated, see the [evaluation](#autoscale-evaluation) section.

```JSON
{
- "id": "/subscriptions/s1/resourceGroups/rg1/providers/microsoft.insights/autoscalesettings/setting1",
- "name": "setting1",
- "type": "Microsoft.Insights/autoscaleSettings",
- "location": "East US",
- "properties": {
- "enabled": true,
- "targetResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Insights/autoscaleSettings",
+ "apiVersion": "2015-04-01",
+ "name": "VMSS1-Autoscale-607",
+ "location": "eastus",
+ "properties": {
+
+ "name": "VMSS1-Autoscale-607",
+ "enabled": true,
+ "targetResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
"profiles": [ {
- "name": "mainProfile",
+ "name": "Auto created default scale condition",
"capacity": { "minimum": "1", "maximum": "4",
To illustrate the Autoscale setting schema, the following Autoscale setting is u
{ "metricTrigger": { "metricName": "Percentage CPU",
- "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
"timeGrain": "PT1M", "statistic": "Average", "timeWindow": "PT10M",
To illustrate the Autoscale setting schema, the following Autoscale setting is u
{ "metricTrigger": { "metricName": "Percentage CPU",
- "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
"timeGrain": "PT1M", "statistic": "Average", "timeWindow": "PT10M",
To illustrate the Autoscale setting schema, the following Autoscale setting is u
} ```
-| Section | Element name | Description |
-| | | |
-| Setting | ID | The Autoscale setting's resource ID. Autoscale settings are an Azure Resource Manager resource. |
-| Setting | name | The Autoscale setting name. |
-| Setting | location | The location of the Autoscale setting. This location can be different from the location of the resource being scaled. |
-| properties | targetResourceUri | The resource ID of the resource being scaled. You can only have one Autoscale setting per resource. |
-| properties | profiles | An Autoscale setting is composed of one or more profiles. Each time the Autoscale engine runs, it executes one profile. |
-| profile | name | The name of the profile. You can choose any name that helps you identify the profile. |
-| profile | Capacity.maximum | The maximum capacity allowed. It ensures that Autoscale, when executing this profile, does not scale your resource above this number. |
-| profile | Capacity.minimum | The minimum capacity allowed. It ensures that Autoscale, when executing this profile, does not scale your resource below this number. |
-| profile | Capacity.default | If there is a problem reading the resource metric (in this case, the CPU of "vmss1"), and the current capacity is below the default, Autoscale scales out to the default. This is to ensure the availability of the resource. If the current capacity is already higher than the default capacity, Autoscale does not scale in. |
-| profile | rules | Autoscale automatically scales between the maximum and minimum capacities, by using the rules in the profile. You can have multiple rules in a profile. Typically there are two rules: one to determine when to scale out, and the other to determine when to scale in. |
-| rule | metricTrigger | Defines the metric condition of the rule. |
-| metricTrigger | metricName | The name of the metric. |
-| metricTrigger | metricResourceUri | The resource ID of the resource that emits the metric. In most cases, it is the same as the resource being scaled. In some cases, it can be different. For example, you can scale a virtual machine scale set based on the number of messages in a storage queue. |
-| metricTrigger | timeGrain | The metric sampling duration. For example, **TimeGrain = "PT1M"** means that the metrics should be aggregated every 1 minute, by using the aggregation method specified in the statistic element. |
-| metricTrigger | statistic | The aggregation method within the timeGrain period. For example, **statistic = "Average"** and **timeGrain = "PT1M"** means that the metrics should be aggregated every 1 minute, by taking the average. This property dictates how the metric is sampled. |
-| metricTrigger | timeWindow | The amount of time to look back for metrics. For example, **timeWindow = "PT10M"** means that every time Autoscale runs, it queries metrics for the past 10 minutes. The time window allows your metrics to be normalized, and avoids reacting to transient spikes. |
-| metricTrigger | timeAggregation | The aggregation method used to aggregate the sampled metrics. For example, **TimeAggregation = "Average"** should aggregate the sampled metrics by taking the average. In the preceding case, take the ten 1-minute samples, and average them. |
-| rule | scaleAction | The action to take when the metricTrigger of the rule is triggered. |
-| scaleAction | direction | "Increase" to scale out, or "Decrease" to scale in.|
-| scaleAction | value | How much to increase or decrease the capacity of the resource. |
-| scaleAction | cooldown | The amount of time to wait after a scale operation before scaling again. For example, if **cooldown = "PT10M"**, Autoscale does not attempt to scale again for another 10 minutes. The cooldown is to allow the metrics to stabilize after the addition or removal of instances. |
+The following table describes the elements in the preceding autoscale setting's JSON.
-## Autoscale profiles
+| Section | Element name |Portal name| Description |
+| | | | |
+| Setting | ID | |The autoscale setting's resource ID. Autoscale settings are an Azure Resource Manager resource. |
+| Setting | name | |The autoscale setting name. |
+| Setting | location | |The location of the autoscale setting. This location can be different from the location of the resource being scaled. |
+| properties | targetResourceUri | |The resource ID of the resource being scaled. You can only have one autoscale setting per resource. |
+| properties | profiles | Scale condition |An autoscale setting is composed of one or more profiles. Each time the autoscale engine runs, it executes one profile. |
+| profiles | name | |The name of the profile. You can choose any name that helps you identify the profile. |
+| profiles | capacity.maximum | Instance limits - Maximum |The maximum capacity allowed. It ensures that autoscale doesn't scale your resource above this number when executing the profile. |
+| profiles | capacity.minimum | Instance limits - Minimum |The minimum capacity allowed. It ensures that autoscale doesn't scale your resource below this number when executing the profile. |
+| profiles | capacity.default | Instance limits - Default |If there's a problem reading the resource metric, and the current capacity is below the default, autoscale scales out to the default. This ensures the availability of the resource. If the current capacity is already higher than the default capacity, autoscale doesn't scale in. |
+| profiles | rules | Rules |Autoscale automatically scales between the maximum and minimum capacities, by using the rules in the profile. You can have multiple rules in a profile. Typically there are two rules: one to determine when to scale out, and the other to determine when to scale in. |
+| rule | metricTrigger | Scale rule |Defines the metric condition of the rule. |
+| metricTrigger | metricName | Metric name |The name of the metric. |
+| metricTrigger | metricResourceUri | |The resource ID of the resource that emits the metric. In most cases, it is the same as the resource being scaled. In some cases, it can be different. For example, you can scale a Virtual Machine Scale Set based on the number of messages in a storage queue. |
+| metricTrigger | timeGrain | Time grain (minutes) |The metric sampling duration. For example, **TimeGrain = "PT1M"** means that the metrics should be aggregated every 1 minute, by using the aggregation method specified in the statistic element. |
+| metricTrigger | statistic | Time grain statistic |The aggregation method within the timeGrain period. For example, **statistic = "Average"** and **timeGrain = "PT1M"** means that the metrics should be aggregated every 1 minute, by taking the average. This property dictates how the metric is sampled. |
+| metricTrigger | timeWindow | Duration |The amount of time to look back for metrics. For example, **timeWindow = "PT10M"** means that every time autoscale runs, it queries metrics for the past 10 minutes. The time window allows your metrics to be normalized, and avoids reacting to transient spikes. |
+| metricTrigger | timeAggregation |Time aggregation |The aggregation method used to aggregate the sampled metrics. For example, **TimeAggregation = "Average"** should aggregate the sampled metrics by taking the average. In the preceding case, take the ten 1-minute samples, and average them. |
+| rule | scaleAction | Action |The action to take when the metricTrigger of the rule is triggered. |
+| scaleAction | direction | Operation |"Increase" to scale out, or "Decrease" to scale in.|
+| scaleAction | value |Instance count |How much to increase or decrease the capacity of the resource. |
+| scaleAction | cooldown | Cool down (minutes)|The amount of time to wait after a scale operation before scaling again. For example, if **cooldown = "PT10M"**, autoscale doesn't attempt to scale again for another 10 minutes. The cooldown is to allow the metrics to stabilize after the addition or removal of instances. |
-There are three types of Autoscale profiles:
-- **Regular profile:** The most common profile. If you don't need to scale your resource based on the day of the week, or on a particular day, you can use a regular profile. This profile can then be configured with metric rules that dictate when to scale out and when to scale in. You should only have one regular profile defined.
+## Autoscale profiles
- The example profile used earlier in this article is an example of a regular profile. Note that it is also possible to set a profile to scale to a static instance count for your resource.
+There are three types of autoscale profiles:
-- **Fixed date profile:** This profile is for special cases. For example, let's say you have an important event coming up on December 26, 2017 (PST). You want the minimum and maximum capacities of your resource to be different on that day, but still scale on the same metrics. In this case, you should add a fixed date profile to your setting's list of profiles. The profile is configured to run only on the event's day. For any other day, Autoscale uses the regular profile.
+- **Default profile:** Use the default profile if you don't need to scale your resource based on a particular date and time, or day of the week. The default profile runs when there are no other applicable profiles for the current date and time. You can only have one default profile.
+- **Fixed date profile:** The fixed date profile is relevant for a single date and time. Use the fixed date profile to set scaling rules for a specific event. The profile runs only once, on the event's date and time. For all other times, autoscale uses the default profile.
- ```json
+```json
+ ...
"profiles": [ { "name": " regularProfile",
There are three types of Autoscale profiles:
... }, "rules": [
- {
...
- },
- {
- ...
- }
] }, {
There are three types of Autoscale profiles:
... }, "rules": [
- {
...
- },
- {
- ...
- }
], "fixedDate": { "timeZone": "Pacific Standard Time",
There are three types of Autoscale profiles:
} } ]
- ```
-
-- **Recurrence profile:** This type of profile enables you to ensure that this profile is always used on a particular day of the week. Recurrence profiles only have a start time. They run until the next recurrence profile or fixed date profile is set to start. An Autoscale setting with only one recurrence profile runs that profile, even if there is a regular profile defined in the same setting. The following two examples illustrate how this profile is used:-
- **Example 1: Weekdays vs. weekends**
-
- Let's say that on weekends, you want your maximum capacity to be 4. On weekdays, because you expect more load, you want your maximum capacity to be 10. In this case, your setting would contain two recurrence profiles, one to run on weekends and the other on weekdays.
+```
- The setting looks like this:
+- **Recurrence profile:** A recurrence profile is used for a day or set of days of the week. The schema for a recurring profile doesn't include an end date. The end of date and time for a recurring profile is set by the start time of the following profile. When using the portal to configure recurring profiles, the default profile is automatically updated to start at the end time that you specify for the recurring profile. For more information on configuring multiple profiles, see [Autoscale with multiple profiles](./autoscale-multiprofile.md).
- ```json
- "profiles": [
- {
- "name": "weekdayProfile",
- "capacity": {
- ...
- },
- "rules": [
- {
- ...
- }
- ],
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "Pacific Standard Time",
- "days": [
- "Monday"
- ],
- "hours": [
- 0
+ The partial schema example below shows a recurring profile, starting at 06:00 and ending at 19:00 on Saturdays and Sundays. The default profile has been modified to start at 19:00 on Saturdays and Sundays.
+
+```JSON
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Insights/ autoscaleSettings",
+ "apiVersion": "2015-04-01",
+ "name": "VMSS1-Autoscale-607",
+ "location": "eastus",
+ "properties": {
+
+ "name": "VMSS1-Autoscale-607",
+ "enabled": true,
+ "targetResourceUri": "/subscriptions/ abc123456-987-f6e5-d43c-9a8d8e7f6541/ resourceGroups/rg-vmss1/providers/ Microsoft.Compute/ virtualMachineScaleSets/VMSS1",
+ "profiles": [
+ {
+ "name": "Weekend profile",
+ "capacity": {
+ ...
+ },
+ "rules": [
+ ...
+ ],
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Saturday",
+ "Sunday"
+ ],
+ "hours": [
+ 6
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ }
+ },
+ {
+ "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Weekend profile\"}",
+ "capacity": {
+ ...
+ },
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Saturday",
+ "Sunday"
+ ],
+ "hours": [
+ 19
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ },
+ "rules": [
+ ...
+ ]
+ }
],
- "minutes": [
- 0
- ]
+ "notifications": [],
+ "targetResourceLocation": "eastus"
}
+
}
- },
- {
- "name": "weekendProfile",
- "capacity": {
- ...
- },
- "rules": [
- {
- ...
- }
- ],
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "Pacific Standard Time",
- "days": [
- "Saturday"
- ],
- "hours": [
- 0
- ],
- "minutes": [
- 0
- ]
- }
+ ]
}
- }
- ]
- ```
-
- The preceding setting shows that each recurrence profile has a schedule. This schedule determines when the profile starts running. The profile stops when it's time to run another profile.
-
- For example, in the preceding setting, "weekdayProfile" is set to start on Monday at 12:00 AM. That means this profile starts running on Monday at 12:00 AM. It continues until Saturday at 12:00 AM, when "weekendProfile" is scheduled to start running.
-
- **Example 2: Business hours**
-
- Let's say you want to have one metric threshold during business hours (9:00 AM to 5:00 PM), and a different one for all other times. The setting would look like this:
-
- ```json
- "profiles": [
- {
- "name": "businessHoursProfile",
- "capacity": {
- ...
- },
- "rules": [{
- ...
- }],
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "Pacific Standard Time",
- "days": [
- "Monday", ΓÇ£TuesdayΓÇ¥, ΓÇ£WednesdayΓÇ¥, ΓÇ£ThursdayΓÇ¥, ΓÇ£FridayΓÇ¥
- ],
- "hours": [
- 9
- ],
- "minutes": [
- 0
- ]
- }
- }
- },
- {
- "name": "nonBusinessHoursProfile",
- "capacity": {
- ...
- },
- "rules": [{
- ...
- }]
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "Pacific Standard Time",
- "days": [
- "Monday", ΓÇ£TuesdayΓÇ¥, ΓÇ£WednesdayΓÇ¥, ΓÇ£ThursdayΓÇ¥, ΓÇ£FridayΓÇ¥
- ],
- "hours": [
- 17
- ],
- "minutes": [
- 0
- ]
- }
- }
- }]
- ```
-
- The preceding setting shows that "businessHoursProfile" begins running on Monday at 9:00 AM, and continues to 5:00 PM. That's when "nonBusinessHoursProfile" starts running. The "nonBusinessHoursProfile" runs until 9:00 AM Tuesday, and then the "businessHoursProfile" takes over again. This repeats until Friday at 5:00 PM. At that point, "nonBusinessHoursProfile" runs all the way to Monday at 9:00 AM.
-
-> [!Note]
-> The Autoscale user interface in the Azure portal enforces end times for recurrence profiles, and begins running the Autoscale setting's default profile in between recurrence profiles.
-## Autoscale evaluation
-Given that Autoscale settings can have multiple profiles, and each profile can have multiple metric rules, it is important to understand how an Autoscale setting is evaluated. The Autoscale job runs every 30 to 60 seconds, depending on the resource type. Each time the Autoscale job runs, it begins by choosing the profile that is applicable. Then Autoscale evaluates the minimum and maximum values, and any metric rules in the profile, and decides if a scale action is necessary.
+```
+## Autoscale evaluation
+
+Autoscale settings can have multiple profiles. Each profile can have multiple rules. Each time the autoscale job runs, it begins by choosing the applicable profile for that time. Autoscale then evaluates the minimum and maximum values, any metric rules in the profile, and decides if a scale action is necessary. The autoscale job runs every 30 to 60 seconds, depending on the resource type.
+### Which profile will autoscale use?
-### Which profile will Autoscale pick?
+Each time the autoscale service runs, the profiles are evaluated in the following order:
-Autoscale uses the following sequence to pick the profile:
-1. It first looks for any fixed date profile that is configured to run now. If there is, Autoscale runs it. If there are multiple fixed date profiles that are supposed to run, Autoscale selects the first one.
-2. If there are no fixed date profiles, Autoscale looks at recurrence profiles. If a recurrence profile is found, it runs it.
-3. If there are no fixed date or recurrence profiles, Autoscale runs the regular profile.
+1. Fixed date profiles
+1. Recurring profiles
+1. Default profile
-### How does Autoscale evaluate multiple rules?
+The first suitable profile found will be used.
-After Autoscale determines which profile to run, it evaluates all the scale-out rules in the profile (these are rules with **direction = "Increase"**).
+### How does autoscale evaluate multiple rules?
-If one or more scale-out rules are triggered, Autoscale calculates the new capacity determined by the **scaleAction** of each of those rules. Then it scales out to the maximum of those capacities, to ensure service availability.
+After autoscale determines which profile to run, it evaluates the scale-out rules in the profile, that is, where **direction = "Increase"**.
+If one or more scale-out rules are triggered, autoscale calculates the new capacity determined by the **scaleAction** specified for each of the rules. If more than one scale-out rule is triggered, autoscale scales to the highest specified capacity to ensure service availability.
-For example, let's say there is a virtual machine scale set with a current capacity of 10. There are two scale-out rules: one that increases capacity by 10 percent, and one that increases capacity by 3 counts. The first rule would result in a new capacity of 11, and the second rule would result in a capacity of 13. To ensure service availability, Autoscale chooses the action that results in the maximum capacity, so the second rule is chosen.
+For example, assume that there are two rules: rule 1 specifies a scale-out of 3 instances, and rule 2 specifies a scale-out of 5 instances. If both rules are triggered, autoscale scales out by 5 instances. Similarly, if one rule specifies a scale-out of 3 instances and another rule specifies a scale-out of 15 percent, the higher of the two resulting instance counts is used.
-If no scale-out rules are triggered, Autoscale evaluates all the scale-in rules (rules with **direction = "Decrease"**). Autoscale only takes a scale-in action if all of the scale-in rules are triggered.
+If no scale-out rules are triggered, autoscale evaluates the scale-in rules, that is, rules with **direction = "Decrease"**. Autoscale only scales in if all of the scale-in rules are triggered.
-Autoscale calculates the new capacity determined by the **scaleAction** of each of those rules. Then it chooses the scale action that results in the maximum of those capacities to ensure service availability.
+Autoscale calculates the new capacity determined by the **scaleAction** of each of those rules. To ensure service availability, autoscale scales in by as little as possible to achieve the maximum capacity specified. For example, assume two scale-in rules, one that decreases capacity by 50 percent, and one that decreases capacity by 3 instances. If the first rule results in 5 instances and the second rule results in 7, autoscale scales in to 7 instances.
-For example, let's say there is a virtual machine scale set with a current capacity of 10. There are two scale-in rules: one that decreases capacity by 50 percent, and one that decreases capacity by 3 counts. The first rule would result in a new capacity of 5, and the second rule would result in a capacity of 7. To ensure service availability, Autoscale chooses the action that results in the maximum capacity, so the second rule is chosen.
+Each time autoscale calculates the result of a scale-in action, it evaluates whether that action would trigger a scale-out action. The scenario where a scale action triggers the opposite scale action is known as flapping. Autoscale may defer a scale-in action to avoid flapping, or may scale by a number less than what was specified in the rule. For more information on flapping, see [Flapping in Autoscale](./autoscale-custom-metric.md).
## Next steps
-Learn more about Autoscale by referring to the following:
+
+Learn more about autoscale by referring to the following articles:
* [Overview of autoscale](./autoscale-overview.md) * [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)
-* [Best practices for Azure Monitor autoscale](./autoscale-best-practices.md)
+* [Autoscale with multiple profiles](./autoscale-multiprofile.md)
+* [Flapping in Autoscale](./autoscale-custom-metric.md)
* [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md) * [Autoscale REST API](/rest/api/monitor/autoscalesettings)
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
az resource create --resource-group divyaj-test --namespace microsoft.monitor --
``` ### [Resource Manager](#tab/resource-manager)
-Use the following Resource Manager template with any of the [standard deployment options](../resource-manager-samples.md#deploy-the-sample-templates) to create an Azure Monitor workspace.
+Use one of the following Resource Manager templates with any of the [standard deployment options](../resource-manager-samples.md#deploy-the-sample-templates) to create an Azure Monitor workspace.
```json {
Use the following Resource Manager template with any of the [standard deployment
} ```
+```bicep
+@description('Specify the name of the workspace.')
+param workspaceName string
+
+@description('Specify the location for the workspace.')
+param location string = resourceGroup().location
+
+resource workspace 'microsoft.monitor/accounts@2021-06-03-preview' = {
+ name: workspaceName
+ location: location
+}
+
+```
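Either template can then be deployed with any standard method; for example, a minimal Azure CLI deployment of the Bicep file might look like the following sketch (the resource group, file, and workspace names are placeholders):

```azurecli
az deployment group create \
  --resource-group <resource-group> \
  --template-file azure-monitor-workspace.bicep \
  --parameters workspaceName=<workspace-name>
```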
+
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
The only requirement to enable Azure Monitor managed service for Prometheus is t
## Grafana integration

The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace). Link your Azure Monitor workspace to a Grafana workspace so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
-## Alerts
-Azure Monitor managed service for Prometheus adds a new Prometheus alert type for creating alerts using PromQL queries. You can view fired and resolved Prometheus alerts in the Azure portal along with other alert types. Prometheus alerts are configured with the same [alert rules](https://aka.ms/azureprometheus-promio-alertrules) used by Prometheus. For your AKS cluster, you can use a [set of predefined Prometheus alert rules]
+## Rules and alerts
+Azure Monitor managed service for Prometheus adds a new Prometheus alert type for creating alert rules and recording rules using PromQL queries. You can view fired and resolved Prometheus alerts in the Azure portal along with other alert types. Prometheus alerts are configured with the same [alert rules](https://aka.ms/azureprometheus-promio-alertrules) used by Prometheus. For your AKS cluster, you can use a [set of predefined Prometheus alert rules](../containers/container-insights-metric-alerts.md).
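As an illustration, Prometheus rule files use the standard format shown in the following sketch; the rule names, expressions, and thresholds here are hypothetical:

```yml
groups:
  - name: example-rules
    rules:
      # Recording rule: precompute a cluster-wide CPU usage rate (hypothetical rule name)
      - record: cluster:container_cpu_usage:rate5m
        expr: sum(rate(container_cpu_usage_seconds_total[5m]))
      # Alert rule: fire when a scrape target has been down for 10 minutes
      - alert: InstanceDown
        expr: up == 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.instance }} has been unreachable for 10 minutes"
```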
## Limitations
azure-monitor Prometheus Remote Write Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-active-directory.md
+
+ Title: Remote-write in Azure Monitor Managed Service for Prometheus using Azure Active Directory (preview)
+description: Describes how to configure remote-write to send data from self-managed Prometheus running in your Kubernetes cluster running on-premises or in another cloud using Azure Active Directory authentication.
++ Last updated : 11/01/2022++
+# Configure remote write for Azure Monitor managed service for Prometheus using Azure Active Directory authentication (preview)
+This article describes how to configure [remote-write](prometheus-remote-write.md) to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using Azure Active Directory authentication.
+
+## Cluster configurations
+This article applies to the following cluster configurations:
+
+- Azure Kubernetes service (AKS)
+- Azure Arc-enabled Kubernetes cluster
+- Kubernetes cluster running in another cloud or on-premises
+
+> [!NOTE]
+> For Azure Kubernetes service (AKS) or Azure Arc-enabled Kubernetes cluster, managed identity authentication is recommended. See [Azure Monitor managed service for Prometheus remote write - managed identity (preview)](prometheus-remote-write-managed-identity.md).
+
+## Prerequisites
+See prerequisites at [Azure Monitor managed service for Prometheus remote write (preview)](prometheus-remote-write.md#prerequisites).
+
+## Create Azure Active Directory application
+Follow the procedure at [Register an application with Azure AD and create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) to register an application for Prometheus remote-write and create a service principal.
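If you prefer the CLI, the registration can be sketched as follows; the display name is a placeholder, and the `appId` returned by the first command is the client ID used in later steps:

```azurecli
# Register an application (hypothetical display name)
az ad app create --display-name prometheus-remote-write

# Create the service principal for the application
az ad sp create --id <app-client-id>
```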
++
+## Get the client ID of the Azure Active Directory application
+
+1. From the **Azure Active Directory** menu in the Azure portal, select **App registrations**.
+2. Locate your application and note the client ID.
+
+ :::image type="content" source="media/prometheus-remote-write-active-directory/application-client-id.png" alt-text="Screenshot showing client ID of Azure Active Directory application." lightbox="media/prometheus-remote-write-active-directory/application-client-id.png":::
+
+## Assign Monitoring Metrics Publisher role on the data collection rule to the application
+The application requires the *Monitoring Metrics Publisher* role on the data collection rule associated with your Azure Monitor workspace.
+
+1. From the menu of your Azure Monitor Workspace account, click the **Data collection rule** to open the **Overview** page for the data collection rule.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png" alt-text="Screenshot showing data collection rule used by Azure Monitor workspace." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png":::
+
+2. Click on **Access control (IAM)** in the **Overview** page for the data collection rule.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png" alt-text="Screenshot showing Access control (IAM) menu item on the data collection rule Overview page." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png":::
+
+3. Click **Add** and then **Add role assignment**.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png" alt-text="Screenshot showing adding a role assignment on Access control pages." lightbox="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png":::
+
+4. Select **Monitoring Metrics Publisher** role and click **Next**.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/add-role-assignment.png" alt-text="Screenshot showing list of role assignments." lightbox="media/prometheus-remote-write-managed-identity/add-role-assignment.png":::
+
+5. Select **User, group, or service principal** and then click **Select members**. Select the application that you created and click **Select**.
+
+ :::image type="content" source="media/prometheus-remote-write-active-directory/select-application.png" alt-text="Screenshot showing selection of application." lightbox="media/prometheus-remote-write-active-directory/select-application.png":::
+
+6. Click **Review + assign** to complete the role assignment.
++
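As an alternative to the portal steps above, the same role assignment can be sketched with the Azure CLI; all IDs below are placeholders:

```azurecli
az role assignment create \
  --assignee <app-client-id> \
  --role "Monitoring Metrics Publisher" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>"
```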
+## Create an Azure key vault and generate certificate
+
+1. If you don't already have an Azure key vault, then create a new one using the guidance at [Create a vault](../../key-vault/general/quick-create-portal.md#create-a-vault).
+2. Create a certificate using the guidance at [Add a certificate to Key Vault](../../key-vault/certificates/quick-create-portal.md#add-a-certificate-to-key-vault).
+3. Download the newly generated certificate in CER format using the guidance at [Export certificate from Key Vault](../../key-vault/certificates/quick-create-portal.md#export-certificate-from-key-vault).
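The equivalent CLI steps look roughly like the following sketch; the vault and certificate names are placeholders, and the default policy creates a self-signed certificate:

```azurecli
# Create the vault, create a certificate with the default policy, and download it in CER (DER) format
az keyvault create --name <keyvault-name> --resource-group <resource-group>
az keyvault certificate create --vault-name <keyvault-name> --name <cert-name> \
  --policy "$(az keyvault certificate get-default-policy)"
az keyvault certificate download --vault-name <keyvault-name> --name <cert-name> \
  --file <cert-name>.cer --encoding DER
```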
+
+## Add certificate to the Azure Active Directory application
+
+1. From the menu for your Azure Active Directory application, select **Certificates & secrets**.
+2. Click **Upload certificate** and select the certificate that you downloaded.
+
+ :::image type="content" source="media/prometheus-remote-write-active-directory/upload-certificate.png" alt-text="Screenshot showing upload of certificate for Azure Active Directory application." lightbox="media/prometheus-remote-write-active-directory/upload-certificate.png":::
+
+> [!WARNING]
+> Certificates have an expiration date, and it's the responsibility of the user to keep these certificates valid.
+
+## Add CSI driver and storage for cluster
+
+> [!NOTE]
+> Azure Key Vault CSI driver configuration is just one of the ways to get a certificate mounted on the pod. The remote write container only needs a local path to a certificate in the pod for the `AZURE_CLIENT_CERTIFICATE_PATH` setting in the [Deploy Side car and configure remote write on the Prometheus server](#deploy-side-car-and-configure-remote-write-on-the-prometheus-server) step below.
+
+This step is only required if you didn't enable Azure Key Vault Provider for Secrets Store CSI Driver when you created your cluster.
+
+1. Run the following Azure CLI command to enable Azure Key Vault Provider for Secrets Store CSI Driver for your cluster.
+
+ ```azurecli
+ az aks enable-addons --addons azure-keyvault-secrets-provider --name <aks-cluster-name> --resource-group <resource-group-name>
+ ```
+
+2. Run the following commands to give the identity access to the key vault.
+
+ ```azurecli
+ # show client id of the managed identity of the cluster
+ az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
+
+ # set policy to access keys in your key vault
+ az keyvault set-policy -n <keyvault-name> --key-permissions get --spn <identity-client-id>
+
+ # set policy to access secrets in your key vault
+ az keyvault set-policy -n <keyvault-name> --secret-permissions get --spn <identity-client-id>
+
+ # set policy to access certs in your key vault
+ az keyvault set-policy -n <keyvault-name> --certificate-permissions get --spn <identity-client-id>
+ ```
+
+3. Create a *SecretProviderClass* by saving the following YAML to a file named *secretproviderclass.yml*. Replace the values for `userAssignedIdentityID`, `keyvaultName`, `tenantId` and the objects to retrieve from your key vault. See [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver](../../aks/csi-secrets-store-identity-access.md) for details on values to use.
+
+ ```yml
+ # This is a SecretProviderClass example using user-assigned identity to access your key vault
+ apiVersion: secrets-store.csi.x-k8s.io/v1
+ kind: SecretProviderClass
+ metadata:
+ name: azure-kvname-user-msi
+ spec:
+ provider: azure
+ parameters:
+ usePodIdentity: "false"
+ useVMManagedIdentity: "true" # Set to true for using managed identity
+ userAssignedIdentityID: <client-id> # Set the clientID of the user-assigned managed identity to use
+ keyvaultName: <key-vault-name> # Set to the name of your key vault
+ cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
+ objects: |
+ array:
+ - |
+ objectName: <name-of-cert>
+ objectType: secret # object types: secret, key, or cert
+ objectFormat: pfx
+ objectEncoding: base64
+ objectVersion: ""
+ tenantId: <tenant-id> # The tenant ID of the key vault
+ ```
+
+4. Apply the *SecretProviderClass* by running the following command on your cluster.
+
+ ```
+ kubectl apply -f secretproviderclass.yml
+ ```
+
+## Deploy Side car and configure remote write on the Prometheus server
+
+1. Copy the YAML below and save it to a file. This YAML assumes you're using 8081 as your listening port. Modify that value if you use a different port.
++
+ ```yml
+ prometheus:
+ prometheusSpec:
+ cluster: <CLUSTER-NAME>
+
+ ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write
+ remoteWrite:
+ - url: 'http://localhost:8081/api/v1/write'
+
+ # Additional volumes on the output StatefulSet definition.
+ # Required only for AAD based auth
+ volumes:
+ - name: secrets-store-inline
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: azure-kvname-user-msi
+ containers:
+ - name: prom-remotewrite
+ image: <CONTAINER-IMAGE-VERSION>
+ imagePullPolicy: Always
+
+ # Required only for AAD based auth
+ volumeMounts:
+ - name: secrets-store-inline
+ mountPath: /mnt/secrets-store
+ readOnly: true
+ ports:
+ - name: rw-port
+ containerPort: 8081
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ readinessProbe:
+ httpGet:
+ path: /ready
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ env:
+ - name: INGESTION_URL
+ value: '<INGESTION_URL>'
+ - name: LISTENING_PORT
+ value: '8081'
+ - name: IDENTITY_TYPE
+ value: aadApplication
+ - name: AZURE_CLIENT_ID
+ value: '<APP-REGISTRATION-CLIENT-ID>'
+ - name: AZURE_TENANT_ID
+ value: '<TENANT-ID>'
+ - name: AZURE_CLIENT_CERTIFICATE_PATH
+ value: /mnt/secrets-store/<CERT-NAME>
+ - name: CLUSTER
+ value: '<CLUSTER-NAME>'
+ ```
++
+2. Replace the following values in the YAML.
+
+ | Value | Description |
+ |:|:|
 | `<CLUSTER-NAME>` | Name of the cluster that Prometheus is running on |
+ | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221102.1`<br>This is the remote write container image version. |
+ | `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace |
+    | `<APP-REGISTRATION-CLIENT-ID>` | Client ID of your application |
+    | `<TENANT-ID>` | Tenant ID of the Azure Active Directory application |
+ | `<CERT-NAME>` | Name of the certificate |
+
+
+++
+3. Open Azure Cloud Shell and upload the YAML file.
+4. Use helm to apply the YAML file to update your Prometheus configuration with the following CLI commands.
+
+ ```azurecli
+ # set context to your cluster
+ az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
+
+ # use helm to update your remote write config
+    helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides>
+ ```
+
+## Verification and troubleshooting
+See [Azure Monitor managed service for Prometheus remote write (preview)](prometheus-remote-write.md#verify-remote-write-is-working-correctly).
+
+## Next steps
+
+- [Set up Grafana to use Managed Prometheus as a data source](prometheus-grafana.md).
+- [Learn more about Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md).
azure-monitor Prometheus Remote Write Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-managed-identity.md
Title: Remote-write in Azure Monitor Managed Service for Prometheus (preview)
+ Title: Remote-write in Azure Monitor Managed Service for Prometheus using managed identity (preview)
description: Describes how to configure remote-write to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using managed identity authentication. Previously updated : 10/20/2022 Last updated : 11/01/2022
-# Azure Monitor managed service for Prometheus remote write - managed identity (preview)
-Azure Monitor managed service for Prometheus is intended to be a replacement for self managed Prometheus so you don't need to manage a Prometheus server in your Kubernetes clusters. You may also choose to use the managed service to centralize data from self-managed Prometheus clusters for long term data retention and to create a centralized view across your clusters. In this case, you can use [remote_write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from your self-managed Prometheus into our managed service.
-
-This article describes how to configure remote-write to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using managed identity authentication. You either use an existing identity created by AKS or [create one of your own](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Both options are described here.
-
-## Architecture
-Azure Monitor provides a reverse proxy container (Azure Monitor side car container) that provides an abstraction for ingesting Prometheus remote write metrics and helps in authenticating packets. The Azure Monitor side car container currently supports User Assigned Identity and Azure Active Directory (Azure AD) based authentication to ingest Prometheus remote write metrics to Azure Monitor workspace.
-
+# Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication (preview)
+This article describes how to configure [remote-write](prometheus-remote-write.md) to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using managed identity authentication. You either use an existing identity created by AKS or [create one of your own](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Both options are described here.
## Cluster configurations This article applies to the following cluster configurations:
This article applies to the following cluster configurations:
- Azure Kubernetes service (AKS) - Azure Arc-enabled Kubernetes cluster
-## Prerequisites
--- You must have self-managed Prometheus running on your AKS cluster. For example, see [Using Azure Kubernetes Service with Grafana and Prometheus](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/using-azure-kubernetes-service-with-grafana-and-prometheus/ba-p/3020459).-- You used [Kube-Prometheus Stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) when you set up Prometheus on your AKS cluster.--
-## Create Azure Monitor workspace
-Data for Azure Monitor managed service for Prometheus is stored in an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). You must [create a new workspace](../essentials/azure-monitor-workspace-overview.md#create-an-azure-monitor-workspace) if you don't already have one.
+> [!NOTE]
+> For a Kubernetes cluster running in another cloud or on-premises, see [Azure Monitor managed service for Prometheus remote write - Azure Active Directory (preview)](prometheus-remote-write-active-directory.md).
+## Prerequisites
+See prerequisites at [Azure Monitor managed service for Prometheus remote write (preview)](prometheus-remote-write.md#prerequisites).
## Locate AKS node resource group

The node resource group of the AKS cluster contains resources that you'll need in later steps. This resource group has the name `MC_<AKS-RESOURCE-GROUP>_<AKS-CLUSTER-NAME>_<REGION>`. You can locate it from the **Resource groups** menu in the Azure portal. Start by making sure that you can locate this resource group, since later steps refer to it.
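One quick way to confirm the node resource group name is with the Azure CLI, as in this sketch:

```azurecli
az aks show --resource-group <AKS-RESOURCE-GROUP> --name <AKS-CLUSTER-NAME> \
  --query nodeResourceGroup --output tsv
```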
Instead of creating your own ID, you can use one of the identities created by AK
-## Assign managed identity the Monitoring Metrics Publisher role on the data collection rule
+## Assign Monitoring Metrics Publisher role on the data collection rule to the managed identity
The managed identity requires the *Monitoring Metrics Publisher* role on the data collection rule associated with your Azure Monitor workspace.

1. From the menu of your Azure Monitor Workspace account, click the **Data collection rule** to open the **Overview** page for the data collection rule.
This step isn't required if you're using an AKS identity since it will already h
    ```yml
    prometheus:
- prometheusSpec:
- externalLabels:
+ prometheusSpec:
cluster: <AKS-CLUSTER-NAME>
- ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write
- ##
+ ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write
remoteWrite:
- - url: "http://localhost:8081/api/v1/write"
-
+ - url: 'http://localhost:8081/api/v1/write'
containers:
- - name: prom-remotewrite
- image: <CONTAINER-IMAGE-VERSION>
- imagePullPolicy: Always
- ports:
- - name: rw-port
- containerPort: 8081
- livenessProbe:
- httpGet:
+ - name: prom-remotewrite
+ image: <CONTAINER-IMAGE-VERSION>
+ imagePullPolicy: Always
+ ports:
+ - name: rw-port
+ containerPort: 8081
+ livenessProbe:
+ httpGet:
path: /health port: rw-port
- readinessProbe:
- httpGet:
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ readinessProbe:
+ httpGet:
path: /ready port: rw-port
- env:
- - name: INGESTION_URL
- value: "<INGESTION_URL>"
- - name: LISTENING_PORT
- value: "8081"
- - name: IDENTITY_TYPE
- value: "userAssigned"
- - name: AZURE_CLIENT_ID
- value: "<MANAGED-IDENTITY-CLIENT-ID>"
- # Optional parameters
- - name: CLUSTER
- value: "<CLUSTER-NAME>"
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ env:
+ - name: INGESTION_URL
+ value: <INGESTION_URL>
+ - name: LISTENING_PORT
+ value: '8081'
+ - name: IDENTITY_TYPE
+ value: userAssigned
+ - name: AZURE_CLIENT_ID
+ value: <MANAGED-IDENTITY-CLIENT-ID>
+ # Optional parameter
+ - name: CLUSTER
+ value: <CLUSTER-NAME>
```
This step isn't required if you're using an AKS identity since it will already h
| Value | Description | |:|:| | `<AKS-CLUSTER-NAME>` | Name of your AKS cluster |
- | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221012.2`<br>This is the remote write container image version. |
+ | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221102.1`<br>This is the remote write container image version. |
| `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace | | `<MANAGED-IDENTITY-CLIENT-ID>` | **Client ID** from the **Overview** page for the managed identity | | `<CLUSTER-NAME>` | Name of the cluster Prometheus is running on |
This step isn't required if you're using an AKS identity since it will already h
    helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides> ``` -
-## Verify remote write is working correctly
-
-You can verify that Prometheus data is being sent into your Azure Monitor workspace in a couple of ways.
-
-1. By viewing your container log using kubectl commands:
-
- ```azurecli
- kubectl logs <Prometheus-Pod-Name> <Azure-Monitor-Side-Car-Container-Name>
- # example: kubectl logs prometheus-prometheus-kube-prometheus-prometheus-0 prom-remotewrite
- ```
- Expected output: time="2022-10-19T22:11:58Z" level=info msg="Metric packets published in last 1 minute" avgBytesPerRequest=19809 avgRequestDuration=0.17153638698214294 failedPublishingToAll=0 successfullyPublishedToAll=112 successfullyPublishedToSome=0
-
-    You can confirm that the data is flowing via remote write if the above output has non-zero value for "avgBytesPerRequest" and "avgRequestDuration".
-
-2. By performing PromQL queries on the data and verifying results
- This can be done via Grafana. Refer to our documentation for [getting Grafana setup with Managed Prometheus](prometheus-grafana.md).
-
-## Troubleshooting remote write setup
-
-1. If the data is not flowing
-You can run the following commands to view errors from the container that cause the data not flowing.
-
- ```azurecli
- kubectl --namespace <Namespace> describe pod <Prometheus-Pod-Name>
- ```
-These logs should indicate the errors if any in the remote write container.
-
-2. If the container is restarting constantly
-This is likely due to misconfiguration of the container. In order to view the configuration values set for the container, run the following command:
- ```azurecli
- kubectl get po <Prometheus-Pod-Name> -o json | jq -c '.spec.containers[] | select( .name | contains(" <Azure-Monitor-Side-Car-Container-Name> "))'
- ```
-Output:
-{"env":[{"name":"INGESTION_URL","value":"https://my-azure-monitor-workspace.eastus2-1.metrics.ingest.monitor.azure.com/dataCollectionRules/dcr-00000000000000000/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview"},{"name":"LISTENING_PORT","value":"8081"},{"name":"IDENTITY_TYPE","value":"userAssigned"},{"name":"AZURE_CLIENT_ID","value":"00000000-0000-0000-0000-00000000000"}],"image":"mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221012.2","imagePullPolicy":"Always","name":"prom-remotewrite","ports":[{"containerPort":8081,"name":"rw-port","protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-vbr9d","readOnly":true}]}
-
-Verify the configuration values especially "AZURE_CLIENT_ID" and "IDENTITY_TYPE"
+## Verification and troubleshooting
+See [Azure Monitor managed service for Prometheus remote write (preview)](prometheus-remote-write.md#verify-remote-write-is-working-correctly).
## Next steps
azure-monitor Prometheus Remote Write https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write.md
+
+ Title: Remote-write in Azure Monitor Managed Service for Prometheus (preview)
+description: Describes how to configure remote-write to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster
++ Last updated : 11/01/2022++
+# Azure Monitor managed service for Prometheus remote write (preview)
+Azure Monitor managed service for Prometheus is intended to be a replacement for self-managed Prometheus so you don't need to manage a Prometheus server in your Kubernetes clusters. You may also choose to use the managed service to centralize data from self-managed Prometheus clusters for long-term data retention and to create a centralized view across your clusters. In this case, you can use [remote_write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from your self-managed Prometheus into our managed service.
+
+## Architecture
+Azure Monitor provides a reverse proxy container (Azure Monitor side car container) that provides an abstraction for ingesting Prometheus remote write metrics and helps in authenticating packets. The Azure Monitor side car container currently supports User Assigned Identity and Azure Active Directory (Azure AD) based authentication to ingest Prometheus remote write metrics to Azure Monitor workspace.
++
+## Prerequisites
+
+- You must have self-managed Prometheus running on your AKS cluster. For example, see [Using Azure Kubernetes Service with Grafana and Prometheus](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/using-azure-kubernetes-service-with-grafana-and-prometheus/ba-p/3020459).
+- You used [Kube-Prometheus Stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) when you set up Prometheus on your AKS cluster.
+- Data for Azure Monitor managed service for Prometheus is stored in an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). You must [create a new workspace](../essentials/azure-monitor-workspace-overview.md#create-an-azure-monitor-workspace) if you don't already have one.
+
+## Configure remote write
+The process for configuring remote write depends on your cluster configuration and the type of authentication that you use.
+
+- **Managed identity** is recommended for Azure Kubernetes service (AKS) and Azure Arc-enabled Kubernetes cluster. See [Azure Monitor managed service for Prometheus remote write - managed identity (preview)](prometheus-remote-write-managed-identity.md).
+- **Azure Active Directory** can be used for Azure Kubernetes service (AKS) and Azure Arc-enabled Kubernetes cluster, and is required for a Kubernetes cluster running in another cloud or on-premises. See [Azure Monitor managed service for Prometheus remote write - Azure Active Directory (preview)](prometheus-remote-write-active-directory.md).
+++
+## Verify remote write is working correctly
+
+Use the following methods to verify that Prometheus data is being sent into your Azure Monitor workspace.
+
+### kubectl commands
+
+Use the following command to view your container log. Remote write data is flowing if the output has a non-zero value for `avgBytesPerRequest` and `avgRequestDurationInSec`.
+
+```azurecli
+kubectl logs <Prometheus-Pod-Name> <Azure-Monitor-Side-Car-Container-Name>
+# example: kubectl logs prometheus-prometheus-kube-prometheus-prometheus-0 prom-remotewrite
+```
+
+The output from this command should look similar to the following:
+
+```
+time="2022-11-02T21:32:59Z" level=info msg="Metric packets published in last 1 minute" avgBytesPerRequest=19713 avgRequestDurationInSec=0.023 failedPublishing=0 successfullyPublished=122
+```
++
+### PromQL queries
+Use PromQL queries in Grafana and verify that the results return expected data. See [getting Grafana setup with Managed Prometheus](prometheus-grafana.md) to configure Grafana.
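For example, the following minimal query works against any Prometheus data source and confirms that samples are arriving; any metric you know your server scrapes will do:

```promql
# 'up' reports 1 for every target the Prometheus server scraped successfully
up
```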
+
+## Troubleshoot remote write
+
+### No data is flowing
+If remote write data isn't flowing, run the following command, which reports any errors from the remote write container.
+
+```azurecli
+kubectl --namespace <Namespace> describe pod <Prometheus-Pod-Name>
+```
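If the describe output isn't conclusive, listing recent events can also surface scheduling or volume-mount failures; this is a sketch, so adjust the namespace and pod name:

```azurecli
kubectl --namespace <Namespace> get events --sort-by=.metadata.creationTimestamp | grep <Prometheus-Pod-Name>
```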
++
+### Container keeps restarting
+A container that restarts regularly is likely misconfigured. Run the following command to view the configuration values set for the container. Verify the configuration values, especially `AZURE_CLIENT_ID` and `IDENTITY_TYPE`.
+
+```azurecli
+kubectl get pod <Prometheus-Pod-Name> -o json | jq -c '.spec.containers[] | select( .name | contains("<Azure-Monitor-Side-Car-Container-Name>"))'
+```
+
+The output from this command should look similar to the following:
+
+```
+{"env":[{"name":"INGESTION_URL","value":"https://my-azure-monitor-workspace.eastus2-1.metrics.ingest.monitor.azure.com/dataCollectionRules/dcr-00000000000000000/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview"},{"name":"LISTENING_PORT","value":"8081"},{"name":"IDENTITY_TYPE","value":"userAssigned"},{"name":"AZURE_CLIENT_ID","value":"00000000-0000-0000-0000-00000000000"}],"image":"mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221012.2","imagePullPolicy":"Always","name":"prom-remotewrite","ports":[{"containerPort":8081,"name":"rw-port","protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-vbr9d","readOnly":true}]}
+```
+++
+## Next steps
+
+- [Set up Grafana to use Managed Prometheus as a data source](prometheus-grafana.md).
+- [Learn more about Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md).
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
Title: Analyze usage in Log Analytics workspace in Azure Monitor
+ Title: Analyze usage in a Log Analytics workspace in Azure Monitor
description: Methods and queries to analyze the data in your Log Analytics workspace to help you understand usage and potential cause for high usage. Last updated 08/25/2022
-# Analyze usage in Log Analytics workspace
-Azure Monitor costs can vary significantly based on the volume of data being collected in your Log Analytics workspace. This volume is affected by the set of solutions using the workspace and the amount of data collected by each. This article provides guidance on analyzing your collected data to assist in controlling your data ingestion costs. It helps you determine the cause of higher than expected usage and also to predict your costs as you monitor additional resources and configure different Azure Monitor features.
+# Analyze usage in a Log Analytics workspace
+Azure Monitor costs can vary significantly based on the volume of data being collected in your Log Analytics workspace. This volume is affected by the set of solutions using the workspace and the amount of data that each solution collects. This article provides guidance on analyzing your collected data to assist in controlling your data ingestion costs. It helps you determine the cause of higher-than-expected usage. It also helps you to predict your costs as you monitor more resources and configure different Azure Monitor features.
-## Causes for higher than expected usage
-Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the following factors:
+## Causes for higher-than-expected usage
+Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the:
- - Set of insights and services enabled and their configuration
- - Number and type of monitored resources
- - Volume of data collected from each monitored resource
+ - Set of insights and services enabled and their configuration.
+ - Number and type of monitored resources.
+ - Volume of data collected from each monitored resource.
An unexpected increase in any of these factors can result in increased charges for data retention. The rest of this article provides methods for detecting such a situation and then analyzing collected data to identify and mitigate the source of the increased usage. ## Usage analysis in Azure Monitor
-You should start your analysis with existing tools in Azure Monitor. These require no configuration and can often provide the information you require with minimal effort. If you need deeper analysis into your collected data than existing Azure Monitor features, you use any of the following [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md).
-### Log Analytics Workspace Insights
-[Log Analytics Workspace Insights](log-analytics-workspace-insights-overview.md#usage-tab) provides you with a quick understanding of the data in your workspace including the following:
+Start your analysis with existing tools in Azure Monitor. These tools require no configuration and can often provide the information you need with minimal effort. If you need deeper analysis into your collected data than existing Azure Monitor features, use any of the following [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md).
-- Data tables ingesting the most data volume in the main table-- Top resources contributing data-- Trend of data ingestion
+### Log Analytics Workspace Insights
+[Log Analytics Workspace Insights](log-analytics-workspace-insights-overview.md#usage-tab) provides you with a quick understanding of the data in your workspace. For example, you can determine the:
-See the **Usage** tab for a breakdown of ingestion by solution and table. This can help you quickly identify the tables that contribute to the bulk of your data volume. It also shows trending of data collection over time to determine if data collection steadily increases over time or suddenly increased in response to a particular configuration change.
+- Data tables that are ingesting the most data volume in the main table.
+- Top resources contributing data.
+- Trend of data ingestion.
-Select **Additional Queries** for pre-built queries that help you further understand your data patterns.
+See the **Usage** tab for a breakdown of ingestion by solution and table. This information can help you quickly identify the tables that contribute to the bulk of your data volume. The tab also shows trending of data collection over time. You can determine if data collection steadily increased over time or suddenly increased in response to a configuration change.
-### Usage and Estimated Costs
-The *Data ingestion per solution* chart on the [Usage and Estimated Costs](../usage-estimated-costs.md#usage-and-estimated-costs) page for each workspace shows the total volume of data sent and how much is being sent by each solution over the previous 31 days. This helps you determine trends such as whether any increase is from overall data usage or usage by a particular solution.
+Select **Additional Queries** for prebuilt queries that help you further understand your data patterns.
+### Usage and estimated costs
+The **Data ingestion per solution** chart on the [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) page for each workspace shows the total volume of data sent and how much is being sent by each solution over the previous 31 days. This information helps you determine trends such as whether any increase is from overall data usage or usage by a particular solution.
## Log queries
-You can use [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md) if you need deeper analysis into your collected data. Each table in a Log Analytics workspace has the following standard columns that can assist you in analyzing billable data.
+You can use [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md) if you need deeper analysis into your collected data. Each table in a Log Analytics workspace has the following standard columns that can assist you in analyzing billable data:
-- [_IsBillable](log-standard-columns.md#_isbillable) identifies records for which there is an ingestion charge. Use this column to filter out non-billable data.
+- [_IsBillable](log-standard-columns.md#_isbillable) identifies records for which there's an ingestion charge. Use this column to filter out non-billable data.
- [_BilledSize](log-standard-columns.md#_billedsize) provides the size in bytes of the record. ## Data volume by solution
-Analyze the amount of billable data collected by a particular service or solution. These queries use the [Usage](/azure/azure-monitor/reference/tables/usage) table that collects usage data for each table in the workspace.
-
+Analyze the amount of billable data collected by a particular service or solution. These queries use the [Usage](/azure/azure-monitor/reference/tables/usage) table that collects usage data for each table in the workspace.
-> [!NOTE]
-> The clause with `TimeGenerated` is only to ensure that the query experience in the Azure portal looks back beyond the default 24 hours. When using the **Usage** data type, `StartTime` and `EndTime` represent the time buckets for which results are presented.
+> [!NOTE]
+> The clause with `TimeGenerated` is only to ensure that the query experience in the Azure portal looks back beyond the default 24 hours. When you use the **Usage** data type, `StartTime` and `EndTime` represent the time buckets for which results are presented.
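+For reference, a sketch of such a query follows. It's illustrative rather than prescriptive: it sums the `Quantity` column (reported in megabytes) per solution per day, and the chart type is an arbitrary choice.
+
+```kusto
+Usage
+// Look back far enough for the portal to return all 31 days of buckets
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution
+| render columnchart
+```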
**Billable data volume by solution over the past month**
Usage
``` **Billable data volume for specific events**
-If you find that a particular data type is collecting excessive data, you may want to analyze the data in that table to determine particular records that are increasing. This example filters particular event IDs in the `Event` table and then provides a count for each ID. You can modify this queries using the columns from other tables.
+
+If you find that a particular data type is collecting excessive data, you might want to analyze the data in that table to determine particular records that are increasing. This example filters specific event IDs in the `Event` table and then provides a count for each ID. You can modify this query by using the columns from other tables.
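+A minimal sketch of that pattern follows; the event IDs here are placeholders, so substitute the IDs that dominate your own `Event` data.
+
+```kusto
+Event
+| where TimeGenerated > startofday(ago(31d)) and TimeGenerated < startofday(now())
+// Placeholder event IDs: replace with the IDs you're investigating
+| where EventID == 5145 or EventID == 5156
+| where _IsBillable == true
+| summarize eventCount = count() by EventID, bin(TimeGenerated, 1d)
+```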
```kusto Event
Event
``` ## Data volume by computer
-Analyze the amount of billable data collect from a virtual machine or set of virtual machines. The **Usage** table doesn't include information about data collected from virtual machines, so these queries use the [find operator](/azure/data-explorer/kusto/query/findoperator) to search all tables that include a computer name. The **Usage** type is omitted because this is only for analytics of data trends.
+You can analyze the amount of billable data collected from a virtual machine or a set of virtual machines. The **Usage** table doesn't include information about data collected from virtual machines, so these queries use the [find operator](/azure/data-explorer/kusto/query/findoperator) to search all tables that include a computer name. The **Usage** type is omitted because this query is only for analytics of data trends.
> [!WARNING]
-> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the preceding queries.
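+With that caveat, a per-computer sketch looks like the following. It relies on the standard `_BilledSize` and `_IsBillable` columns described earlier and trims the computer name to the leftmost label of the fully qualified domain name.
+
+```kusto
+find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _BilledSize, _IsBillable, Computer, Type
+| where _IsBillable == true and Type != "Usage"
+// Keep only the leftmost label of the FQDN so the same host isn't counted twice
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| summarize BillableDataBytes = sum(_BilledSize) by computerName
+| sort by BillableDataBytes nulls last
+```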
**Billable data volume by computer for the last full day**
-
+ ```kusto find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _BilledSize, _IsBillable, Computer, Type | where _IsBillable == true and Type != "Usage"
find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project
``` ## Data volume by Azure resource, resource group, or subscription
-Analyze the amount of billable data collected from a particular resource or set of resources. These queries use the [_ResourceId](./log-standard-columns.md#_resourceid) and [_SubscriptionId](./log-standard-columns.md#_subscriptionid) columns for data from resources hosted in Azure.
+You can analyze the amount of billable data collected from a particular resource or set of resources. These queries use the [_ResourceId](./log-standard-columns.md#_resourceid) and [_SubscriptionId](./log-standard-columns.md#_subscriptionid) columns for data from resources hosted in Azure.
> [!WARNING]
-> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the preceding queries.
**Billable data volume by resource ID for the last full day**
find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project
| sort by BillableDataBytes nulls last ```
-It may be helpful to parse the **_ResourceId** :
+It might be helpful to parse `_ResourceId`:
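+For instance, a self-contained sketch of that parsing follows; the parse pattern assumes the standard Azure resource ID layout and is best effort for nested resource types.
+
+```kusto
+find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _ResourceId, _BilledSize, _IsBillable
+| where _IsBillable == true
+// Split the resource ID into its subscription, resource group, and resource name segments
+| parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/" resourceGroup "/providers/" provider "/" resourceType "/" resourceName
+| summarize BillableDataBytes = sum(_BilledSize) by subscriptionId, resourceGroup, resourceName
+| sort by BillableDataBytes nulls last
+```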
```Kusto | parse tolower(_ResourceId) with "/subscriptions/" subscriptionId "/resourcegroups/"
find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project
``` > [!TIP]
-> For workspaces with large data volumes, doing queries such as shown in this section -- which query large volumes of raw data -- might need to be restricted to a single day. To track trends over time, consider settting up a [Power BI report](./log-powerbi.md) and using [incremental refresh](./log-powerbi.md#collect-data-with-power-bi-dataflows) to collect data volumes per resource once a day.
+> For workspaces with large data volumes, doing queries such as the ones shown in this section, which query large volumes of raw data, might need to be restricted to a single day. To track trends over time, consider setting up a [Power BI report](./log-powerbi.md) and using [incremental refresh](./log-powerbi.md#collect-data-with-power-bi-dataflows) to collect data volumes per resource once a day.
## Querying for common data types
-If you find that you have excessive billable data for a particular data type, then you may need to perform a query to analyze data in that table. The following queries provide samples for some common data types:
+If you find that you have excessive billable data for a particular data type, you might need to perform a query to analyze data in that table. The following queries provide samples for some common data types:
**Security** solution
AzureDiagnostics
| summarize AggregatedValue = count() by ResourceProvider, ResourceId ```
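+For tables without a built-in sample, a generic sketch substitutes the table name and whatever grouping columns are of interest, for example:
+
+```kusto
+AzureDiagnostics
+| where TimeGenerated > startofday(ago(31d))
+| where _IsBillable == true
+| summarize BillableDataBytes = sum(_BilledSize) by ResourceProvider, Category
+| sort by BillableDataBytes nulls last
+```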
-## Application insights data
-There are two approaches to investigating the amount of data collected for Application Insights, depending on whether you have a classic or workspace-based application. Use the `_BilledSize` property that is available on each ingested event for both workspace-based and classic resources. You can also use aggregated information in the [systemEvents](/azure/azure-monitor/reference/tables/appsystemevents) table for classic resources.
-
+## Application Insights data
+There are two approaches to investigating the amount of data collected for Application Insights, depending on whether you have a classic or workspace-based application. Use the `_BilledSize` property that's available on each ingested event for both workspace-based and classic resources. You can also use aggregated information in the [systemEvents](/azure/azure-monitor/reference/tables/appsystemevents) table for classic resources.
> [!NOTE]
-> Queries against Application Insights table except `SystemEvents` will work for both a workspace-based and classic Application Insights resource, since [backwards compatibility](../app/convert-classic-resource.md#understand-log-queries) allows you to continue to use [legacy table names](../app/apm-tables.md). For a workspace-based resource, open **Logs** from the **Log Analytics workspace** menu. For a classic resource, open **Logs** from the **Application Insights** menu.
+> Queries against Application Insights tables, except `SystemEvents`, will work for both a workspace-based and classic Application Insights resource. [Backward compatibility](../app/convert-classic-resource.md#understand-log-queries) allows you to continue to use [legacy table names](../app/apm-tables.md). For a workspace-based resource, open **Logs** on the **Log Analytics workspace** menu. For a classic resource, open **Logs** on the **Application Insights** menu.
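+For instance, a sketch of the first of those queries, using the classic table name that works for both resource types per the preceding note:
+
+```kusto
+dependencies
+| where timestamp > ago(30d)
+| summarize BillableDataBytes = sum(_BilledSize) by operation_Name
+| render barchart
+```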
**Dependency operations generate the most data volume in the last 30 days (workspace-based or classic)**
dependencies
| render barchart ```
-**Daily data volume by type for this Application Insights resource the last 7 days (classic only)**
+**Daily data volume by type for this Application Insights resource for the last 7 days (classic only)**
```kusto systemEvents
systemEvents
``` ### Data volume trends for workspace-based resources
-To look at the data volume trends for [workspace-based Application Insights resources](../app/create-workspace-resource.md), use a query that includes all of the Application insights tables. The following queries use the [tables names specific to workspace-based resources](../app/apm-tables.md#table-schemas).
-
+To look at the data volume trends for [workspace-based Application Insights resources](../app/create-workspace-resource.md), use a query that includes all the Application Insights tables. The following queries use the [table names specific to workspace-based resources](../app/apm-tables.md#table-schemas).
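+A sketch of such a union follows; extend or trim the table list to match the Application Insights tables present in your workspace.
+
+```kusto
+union AppAvailabilityResults, AppDependencies, AppEvents, AppExceptions, AppMetrics, AppPageViews, AppPerformanceCounters, AppRequests, AppSystemEvents, AppTraces
+| where _IsBillable == true
+| summarize sum(_BilledSize) by Type, bin(TimeGenerated, 1d)
+```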
-**Daily data volume by type for all Application Insights resources in a workspace for the 7 days**
+**Daily data volume by type for all Application Insights resources in a workspace for 7 days**
```kusto union AppAvailabilityResults,
union AppAvailabilityResults,
| summarize sum(_BilledSize) by _ResourceId, bin(TimeGenerated, 1d) ```
-To look at the data volume trends for only a single Application Insights resource, add the following line before the `summarize` in the above query:
+To look at the data volume trends for only a single Application Insights resource, add the following line before `summarize` in the preceding query:
```kusto | where _ResourceId contains "<myAppInsightsResourceName>" ``` > [!TIP]
-> For workspaces with large data volumes, doing queries such as this one above which query large volumes of raw data might need to be restricted to a single day. To track trends over time, consider settting up a [Power BI report](./log-powerbi.md) and using [incremental refresh](./log-powerbi.md#collect-data-with-power-bi-dataflows) to collect data volumes per resource once a day.
+> For workspaces with large data volumes, doing queries such as the preceding one, which query large volumes of raw data, might need to be restricted to a single day. To track trends over time, consider setting up a [Power BI report](./log-powerbi.md) and using [incremental refresh](./log-powerbi.md#collect-data-with-power-bi-dataflows) to collect data volumes per resource once a day.
-## Understanding nodes sending data
-If you don't have excessive data from any particular source, you may have an excessive number of agents that are sending data.
+## Understand nodes sending data
+If you don't have excessive data from any particular source, you might have an excessive number of agents that are sending data.
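+As a starting point, a heartbeat-based count of daily nodes is a sketch like the following; it assumes your agents report to the `Heartbeat` table.
+
+```kusto
+Heartbeat
+| where TimeGenerated > startofday(ago(31d))
+| summarize nodes = dcount(Computer) by bin(TimeGenerated, 1d)
+| render timechart
+```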
**Count of agent nodes that are sending a heartbeat each day in the last month**
Heartbeat
``` > [!WARNING]
-> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the preceding queries.
+ **Count of nodes sending any data in the last 24 hours** ```kusto
find where TimeGenerated > ago(24h) project _BilledSize, Computer
``` ## Nodes billed by the legacy Per Node pricing tier
-The [legacy Per Node pricing tier](cost-logs.md#legacy-pricing-tiers) bills for nodes with hourly granularity and also doesn't count nodes that are only sending a set of security data types. To get a list of computers that will be billed as nodes if the workspace is in the legacy Per Node pricing tier, look for nodes that are sending billed data types since some data types are free. In this case, use the leftmost field of the fully qualified domain name.
+The [legacy Per Node pricing tier](cost-logs.md#legacy-pricing-tiers) bills for nodes with hourly granularity. It also doesn't count nodes that are only sending a set of security data types. To get a list of computers that will be billed as nodes if the workspace is in the legacy Per Node pricing tier, look for nodes that are sending billed data types because some data types are free. In this case, use the leftmost field of the fully qualified domain name.
-The following queries return the count of computers with billed data per hour. The number of units on your bill is in units of node months, which is represented by `billableNodeMonthsPerDay` in the query. If the workspace has the Update Management solution installed, add the **Update** and **UpdateSummary** data types to the list in the `where` clause.
+The following queries return the count of computers with billed data per hour. The number of units on your bill is in units of node months, which is represented by `billableNodeMonthsPerDay` in the query. If the workspace has the Update Management solution installed, add the **Update** and **UpdateSummary** data types to the list in the `where` clause.
```kusto find where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now()) project Computer, _IsBillable, Type, TimeGenerated
find where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(n
| summarize billableNodesPerDay = sum(billableNodesPerHour)/24., billableNodeMonthsPerDay = sum(billableNodesPerHour)/24./31. by day=bin(TimeGenerated, 1d) | sort by day asc ```+ > [!NOTE]
-> There's some additional complexity in the actual billing algorithm when solution targeting is used that's not represented in the above query.
+> Some complexity in the actual billing algorithm when solution targeting is used isn't represented in the preceding query.
-## Security and Automation node counts
+## Security and automation node counts
**Count of distinct security nodes**
union
| count ```
-**Number of distinct Automation nodes**
+**Number of distinct automation nodes**
```kusto ConfigurationData
union
``` ## Late-arriving data
-If you observe high data ingestion reported using `Usage` records, but you don't observe the same results summing `_BilledSize` directly on the data type, it's possible that you have late-arriving data. This is when data is ingested with old timestamps.
+If you observe high data ingestion reported by using `Usage` records, but you don't observe the same results summing `_BilledSize` directly on the data type, it's possible that you have late-arriving data. This situation occurs when data is ingested with old timestamps.
-For example, an agent may have a connectivity issue and send accumulated data once it reconnects. Or a host may have an incorrect time. This can result in an apparent discrepancy between the ingested data reported by the [Usage](/azure/azure-monitor/reference/tables/usage) data type and a query summing [_BilledSize](./log-standard-columns.md#_billedsize) over the raw data for a particular day specified by **TimeGenerated**, the timestamp when the event was generated.
+For example, an agent might have a connectivity issue and send accumulated data when it reconnects. Or a host might have an incorrect time. Either example can result in an apparent discrepancy between the ingested data reported by the [Usage](/azure/azure-monitor/reference/tables/usage) data type and a query summing [_BilledSize](./log-standard-columns.md#_billedsize) over the raw data for a particular day specified by **TimeGenerated**, the timestamp when the event was generated.
-To diagnose late-arriving data issues, use the [_TimeReceived](./log-standard-columns.md#_timereceived) column in addition to the [TimeGenerated](./log-standard-columns.md#timegenerated) column. `_TimeReceived` is the time when the record was received by the Azure Monitor ingestion point in the Azure cloud.
+To diagnose late-arriving data issues, use the [_TimeReceived](./log-standard-columns.md#_timereceived) column and the [TimeGenerated](./log-standard-columns.md#timegenerated) column. The `_TimeReceived` property is the time when the record was received by the Azure Monitor ingestion point in the Azure cloud.
-The following example is in response to high ingested data volumes of [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) data on May 2, 2021 to identify the timestamps on this ingested data. The `where TimeGenerated > datetime(1970-01-01)` statement is included to provide the clue to the Log Analytics user interface to look over all data.
+The following example is in response to high ingested data volumes of [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) data on May 2, 2021, to identify the timestamps on this ingested data. The `where TimeGenerated > datetime(1970-01-01)` statement is included to provide the clue to the Log Analytics user interface to look over all data.
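+A sketch of that diagnosis follows; the dates are the example's, so substitute the day you're investigating.
+
+```kusto
+W3CIISLog
+// Hint to the portal to look across all data, not just the default 24 hours
+| where TimeGenerated > datetime(1970-01-01)
+// Restrict to records that arrived at the ingestion point on the day in question
+| where _TimeReceived >= datetime(2021-05-02) and _TimeReceived < datetime(2021-05-03)
+| summarize BillableDataMB = sum(_BilledSize) / 1e6 by bin(TimeGenerated, 1d)
+| sort by TimeGenerated asc
+```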
```Kusto W3CIISLog
W3CIISLog
## Next steps -- See [Azure Monitor Logs pricing details](cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Azure Monitor Logs pricing details](cost-logs.md) for information on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill. - See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.-- See [Data collection transformations in Azure Monitor (preview)](../essentials/data-collection-transformations.md) for details on using transformations to reduce the amount of data you collected in a Log Analytics workspace by filtering unwanted records and columns.
+- See [Data collection transformations in Azure Monitor (preview)](../essentials/data-collection-transformations.md) for information on using transformations to reduce the amount of data you collect in a Log Analytics workspace by filtering unwanted records and columns.
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-p
By default, all tables in your Log Analytics workspace are Analytics tables, and they're available for query and alerts. You can currently configure the following tables for Basic Logs: -- Custom tables: All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md)-- [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2): Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records.-- [AppTraces](/azure/azure-monitor/reference/tables/apptraces): Freeform Application Insights traces.-- [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/ContainerAppConsoleLogs): Logs generated by Azure Container Apps, within a Container Apps environment.
+| Table | Details|
+|:|:|
+| Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API](logs-ingestion-api-overview.md). |
+| [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. |
+| [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | Freeform Application Insights traces. |
+| [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsolelogs) | Logs generated by Azure Container Apps within a Container Apps environment. |
+| [ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary) | Communication Services recording summary logs. |
+| [ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | Communication Services rooms operations incoming requests logs. |
> [!NOTE] > Tables created with the [Data Collector API](data-collector-api.md) don't support Basic Logs.
azure-monitor Custom Logs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-migrate.md
If the table that you're targeting with DCR-based log collection fits the criter
1. Configure your data collection rule (DCR) following procedures at [Send custom logs to Azure Monitor Logs using Resource Manager templates (preview)](tutorial-logs-ingestion-api.md) or [Add transformation in workspace data collection rule to Azure Monitor using resource manager templates (preview)](tutorial-workspace-transformations-api.md).
-1. If using the Logs ingestion API, also [configure the data collection endpoint (DCE)](tutorial-logs-ingestion-api.md#create-data-collection-endpoint) and the agent or component that will be sending data to the API.
+1. If using the Logs ingestion API, also [configure the data collection endpoint (DCE)](tutorial-logs-ingestion-api.md#create-a-data-collection-endpoint) and the agent or component that will be sending data to the API.
1. Issue the following API call against your table. This call is idempotent, so there will be no effect if the table has already been migrated.
azure-monitor Powershell Workspace Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/powershell-workspace-configuration.md
Title: Configure a Log Analytics workspace in Azure Monitor using PowerShell
-description: PowerShell samples showing how to configure a Log Analytics workspace in Azure Monitor to collect data from various data sources.
+description: PowerShell samples show how to configure a Log Analytics workspace in Azure Monitor to collect data from various data sources.
# Configure a Log Analytics workspace in Azure Monitor using PowerShell
-The following sample script configures the workspace to collect multiple types of logs from virtual machines using the [Log Analytics agent](../agents/log-analytics-agent.md).
+The following sample script configures the workspace to collect multiple types of logs from virtual machines by using the [Log Analytics agent](../agents/log-analytics-agent.md).
This script performs the following functions:
-1. Create a workspace
-1. Enable collection of IIS logs from computers with the Windows agent installed
-1. Collect Logical Disk perf counters from Linux computers (% Used Inodes; Free Megabytes; % Used Space; Disk Transfers/sec; Disk Reads/sec; Disk Writes/sec)
-1. Collect syslog events from Linux computers
-1. Collect Error and Warning events from the Application Event Log from Windows computers
-1. Collect Memory Available Mbytes performance counter from Windows computers
-1. Collect a custom log
+1. Create a workspace.
+1. Enable collection of IIS logs from computers with the Windows agent installed.
+1. Collect Logical Disk perf counters from Linux computers (% Used Inodes; Free Megabytes; % Used Space; Disk Transfers/sec; Disk Reads/sec; Disk Writes/sec).
+1. Collect Syslog events from Linux computers.
+1. Collect Error and Warning events from the Application Event Log from Windows computers.
+1. Collect Memory Available Mbytes performance counter from Windows computers.
+1. Collect a custom log.
```powershell $ResourceGroup = "my-resource-group"
New-AzOperationalInsightsCustomLogDataSource -ResourceGroupName $ResourceGroup -
``` > [!NOTE]
-> The format for the **CustomLogRawJson** parameter which defines the configuration for a custom log can be complex. Use [Get-AzOperationalInsightsDataSource](/powershell/module/az.operationalinsights/get-azoperationalinsightsdatasource) to retrieve the configuration for an existing Custom Log. The **Properties** property is the configuration required for the **CustomLogRawJson** parameter.
+> The format for the `CustomLogRawJson` parameter that defines the configuration for a custom log can be complex. Use [Get-AzOperationalInsightsDataSource](/powershell/module/az.operationalinsights/get-azoperationalinsightsdatasource) to retrieve the configuration for an existing custom log. The `Properties` property is the configuration required for the `CustomLogRawJson` parameter.
-In the above example regexDelimiter was defined as "\\n" for newline. The log delimiter may also be a timestamp. These are the supported formats:
+In the preceding example, `regexDelimiter` was defined as `\\n` for newline. The log delimiter might also be a timestamp. The following table lists the formats that are supported.
-| Format | Json RegEx format uses two `\\` for every `\` in a standard RegEx so if testing in a RegEx app reduce `\\` to `\` |
+| Format | RegEx (the JSON RegEx format uses two `\\` for every `\` in a standard RegEx, so if you test in a RegEx app, reduce `\\` to `\`) |
| | | | `YYYY-MM-DD HH:MM:SS` | `((\\d{2})|(\\d{4}))-([0-1]\\d)-(([0-3]\\d)|(\\d))\\s((\\d)|([0-1]\\d)|(2[0-4])):[0-5][0-9]:[0-5][0-9]` | | `M/D/YYYY HH:MM:SS AM/PM` | `(([0-1]\\d)|[0-9])/(([0-3]\\d)|(\\d))/((\\d{2})|(\\d{4}))\\s((\\d)|([0-1]\\d)|(2[0-4])):[0-5][0-9]:[0-5][0-9]\\s(AM|PM|am|pm)` |
In the above example regexDelimiter was defined as "\\n" for newline. The log de
| `yyyy-MM-ddTHH:mm:ss` <br> The T is a literal letter T | `((\\d{2})|(\\d{4}))-([0-1]\\d)-(([0-3]\\d)|(\\d))T((\\d)|([0-1]\\d)|(2[0-4])):[0-5][0-9]:[0-5][0-9]` | ## Troubleshooting
-When you create a workspace that was deleted in the last 14 days and in [soft-delete state](../logs/delete-workspace.md#soft-delete-behavior), the operation could have different outcome depending on your workspace configuration:
-1. If you provide the same workspace name, resource group, subscription and region as in the deleted workspace, your workspace will be recovered including its data, configuration and connected agents.
-2. Workspace name must be unique per resource group. If you use a workspace name that is already exists, also in soft-delete in your resource group, you will get an error *The workspace name 'workspace-name' is not unique*, or *conflict*. To override the soft-delete and permanently delete your workspace and create a new workspace with the same name, follow these steps to recover the workspace first and perform permanent delete:
- * [Recover](../logs/delete-workspace.md#recover-workspace) your workspace
- * [Permanently delete](../logs/delete-workspace.md#permanent-workspace-delete) your workspace
- * Create a new workspace using the same workspace name
+When you create a workspace that was deleted in the last 14 days and is in a [soft-delete state](../logs/delete-workspace.md#soft-delete-behavior), the operation could have a different outcome depending on your workspace configuration. For example:
+- If you provide the same workspace name, resource group, subscription, and region as in the deleted workspace, your workspace will be recovered. The recovered workspace includes data, configuration, and connected agents.
+- A workspace name must be unique per resource group. If you use a workspace name that already exists and is also in soft delete in your resource group, you'll get an error. The error will state "The workspace name 'workspace-name' is not unique" or "conflict." To override the soft delete, permanently delete your workspace, and create a new workspace with the same name, follow these steps to recover the workspace first and then perform a permanent delete:
+
+ * [Recover](../logs/delete-workspace.md#recover-workspace) your workspace.
+ * [Permanently delete](../logs/delete-workspace.md#permanent-workspace-delete) your workspace.
+ * Create a new workspace by using the same workspace name.
## Next steps
-* [Review Log Analytics PowerShell cmdlets](/powershell/module/az.operationalinsights/) for additional information on using PowerShell for configuration of Log Analytics.
+[Review Log Analytics PowerShell cmdlets](/powershell/module/az.operationalinsights/) for more information on using PowerShell for configuration of Log Analytics.
azure-monitor Save Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/save-query.md
Title: Save a query in Azure Monitor Log Analytics (preview)
-description: Describes how to save a query in Log Analytics.
+description: This article describes how to save a query in Log Analytics.
Last updated 06/22/2022
# Save a query in Azure Monitor Log Analytics (preview)
-[Log queries](log-query-overview.md) are requests in Azure Monitor that allow you to process and retrieve data in a Log Analytics workspace. Saving a log queries allows you to:
+[Log queries](log-query-overview.md) are requests in Azure Monitor that you can use to process and retrieve data in a Log Analytics workspace. Saving a log query allows you to:
-- use the query in all Log Analytics context, including workspace and resource centric.-- allow other users to run same query.-- create a library of common queries for your organization.
+- Use the query in all Log Analytics contexts, including workspace and resource centric.
+- Allow other users to run the same query.
+- Create a library of common queries for your organization.
## Save options
-When you save a query, it's stored in a query pack which has the following benefits over the previous method of storing the query in a workspace. Saving to a query pack is the preferred method providing the following benefits:
+When you save a query, it's stored in a query pack, which has benefits over the previous method of storing the query in a workspace. Saving to a query pack is the preferred method, and it provides the following benefits:
- Easier discoverability with the ability to filter and group queries by different properties.-- Queries available when using a resource scope in Log Analytics.-- Make queries available across subscriptions.-- More data available to describe and categorize the query.-
+- Queries are available when you use a resource scope in Log Analytics.
+- Queries are made available across subscriptions.
+- More data is available to describe and categorize the query.
## Save a query To save a query to a query pack, select **Save as Log Analytics Query** from the **Save** dropdown in Log Analytics.
-[![Save query menu](media/save-query/save-query.png)](media/save-query/save-query.png#lightbox)
+[![Screenshot that shows the Save query menu.](media/save-query/save-query.png)](media/save-query/save-query.png#lightbox)
-When you save a query to a query pack, the following dialog box is displayed where you can provide values for the query properties. The query properties are used for filtering and grouping of similar queries to help you find the query you're looking for. See [Query properties](queries.md#query-properties) for a detailed description of each property.
+When you save a query to a query pack, the following dialog box appears where you can provide values for the query properties. The query properties are used for filtering and grouping of similar queries to help you find the query you're looking for. For a detailed description of each property, see [Query properties](queries.md#query-properties).
-Most users should leave the option to **Save to the default query pack** which will save the query in the [default query pack](query-packs.md#default-query-pack). Uncheck this box if there are other query packs in your subscription. See [Query packs in Azure Monitor Logs](query-packs.md) for details on creating a new query pack.
+Most users should leave the option to **Save to the default query pack**, which will save the query in the [default query pack](query-packs.md#default-query-pack). Clear this checkbox if there are other query packs in your subscription. For information on how to create a new query pack, see [Query packs in Azure Monitor Logs](query-packs.md).
-[![Save query dialog](media/save-query/save-query-dialog.png)](media/save-query/save-query-dialog.png#lightbox)
+[![Screenshot that shows the Save as query dialog.](media/save-query/save-query-dialog.png)](media/save-query/save-query-dialog.png#lightbox)
## Edit a query
-You may want to edit a query that you already saved. This may be to change the query itself or modify any of its properties. After you open an existing query in Log Analytics, you can edit it by selecting **Edit query details** from the **Save** dropdown. This will allow you to save the edited query with the same properties or modify any properties before saving.
-
-If you want to save the query with a different name, then select **Save as Log Analytics Query** the same as if you were creating a new query.
+You might want to edit a query that you've already saved. You might want to change the query itself or modify any of its properties. After you open an existing query in Log Analytics, you can edit it by selecting **Edit query details** from the **Save** dropdown. Now you can save the edited query with the same properties or modify any properties before saving.
+If you want to save the query with a different name, select **Save as Log Analytics Query** as if you were creating a new query.
## Save as a legacy query
-Saving as a legacy query is not recommended because of the advantages of query packs listed above. You can save a query to the workspace though to combine it with other queries that were save to the workspace before the release of query packs.
-
-To save a legacy query, select **Save as Log Analytics Query** from the **Save** dropdown in Log Analytics. Choose the **Save as Legacy query** option. The only option available will be the legacy category.
+We don't recommend saving as a legacy query because of the advantages of query packs. You can save a query to the workspace to combine it with other queries that were saved to the workspace before the release of query packs.
+To save a legacy query, select **Save as Log Analytics Query** from the **Save** dropdown in Log Analytics. Choose the **Save as Legacy query** option. The only option available will be the legacy category.
## Next steps
-[Get started with KQL Queries](get-started-queries.md)
+[Get started with KQL queries](get-started-queries.md)
azure-monitor Tutorial Logs Ingestion Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md
Title: Tutorial - Send data to Azure Monitor Logs using REST API (Resource Manager templates)
-description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor using REST API. Resource Manager template version.
+ Title: 'Tutorial: Send data to Azure Monitor Logs using REST API (Resource Manager templates)'
+description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor by using the REST API Azure Resource Manager template version.
Last updated 07/15/2022 # Tutorial: Send data to Azure Monitor Logs using REST API (Resource Manager templates)
-[Logs ingestion API (preview)](logs-ingestion-api-overview.md) in Azure Monitor allow you to send external data to a Log Analytics workspace with a REST API. This tutorial uses Resource Manager templates to walk through configuration of a new table and a sample application to send log data to Azure Monitor.
+The [Logs Ingestion API (preview)](logs-ingestion-api-overview.md) in Azure Monitor allows you to send external data to a Log Analytics workspace with a REST API. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of a new table and a sample application to send log data to Azure Monitor.
> [!NOTE]
-> This tutorial uses Resource Manager templates and REST API to configure custom logs. See [Tutorial: Send data to Azure Monitor Logs using REST API (Azure portal)](tutorial-logs-ingestion-portal.md) for a similar tutorial using the Azure portal.
+> This tutorial uses ARM templates and a REST API to configure custom logs. For a similar tutorial using the Azure portal, see [Tutorial: Send data to Azure Monitor Logs using REST API (Azure portal)](tutorial-logs-ingestion-portal.md).
In this tutorial, you learn to: > [!div class="checklist"]
-> * Create a custom table in a Log Analytics workspace
-> * Create a data collection endpoint to receive data over HTTP
-> * Create a data collection rule that transforms incoming data to match the schema of the target table
-> * Create a sample application to send custom data to Azure Monitor
-
+> * Create a custom table in a Log Analytics workspace.
+> * Create a data collection endpoint (DCE) to receive data over HTTP.
+> * Create a data collection rule (DCR) that transforms incoming data to match the schema of the target table.
+> * Create a sample application to send custom data to Azure Monitor.
> [!NOTE]
-> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls using the Azure Monitor **Tables** API and the Azure portal to install Resource Manager templates. You can use any other method to make these calls.
+> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls against the Azure Monitor **Tables** API, and it uses the Azure portal to install ARM templates. You can use any other method to make these calls.
## Prerequisites
-To complete this tutorial, you need the following:
+To complete this tutorial, you need:
-- Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac) .-- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
+- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
## Collect workspace details Start by gathering information that you'll need from your workspace.
-1. Navigate to your workspace in the **Log Analytics workspaces** menu in the Azure portal. From the **Properties** page, copy the **Resource ID** and save it for later use.
+Go to your workspace in the **Log Analytics workspaces** menu in the Azure portal. On the **Properties** page, copy the **Resource ID** and save it for later use.
- :::image type="content" source="media/tutorial-logs-ingestion-api/workspace-resource-id.png" lightbox="media/tutorial-logs-ingestion-api/workspace-resource-id.png" alt-text="Screenshot showing workspace resource ID.":::
-## Configure application
-Start by registering an Azure Active Directory application to authenticate against the API. Any ARM authentication scheme is supported, but this will follow the [Client Credential Grant Flow scheme](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) for this tutorial.
+## Configure an application
+Start by registering an Azure Active Directory application to authenticate against the API. Any Resource Manager authentication scheme is supported, but this tutorial follows the [Client Credential Grant Flow scheme](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
-1. From the **Azure Active Directory** menu in the Azure portal, select **App registrations** and then **New registration**.
+1. On the **Azure Active Directory** menu in the Azure portal, select **App registrations** > **New registration**.
- :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-registration.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-registration.png" alt-text="Screenshot showing app registration screen.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-registration.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-registration.png" alt-text="Screenshot that shows the app registration screen.":::
-2. Give the application a name and change the tenancy scope if the default is not appropriate for your environment. A **Redirect URI** isn't required.
+1. Give the application a name and change the tenancy scope if the default isn't appropriate for your environment. A **Redirect URI** isn't required.
- :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-name.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-name.png" alt-text="Screenshot showing app details.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-name.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-name.png" alt-text="Screenshot that shows app details.":::
-3. Once registered, you can view the details of the application. Note the **Application (client) ID** and the **Directory (tenant) ID**. You'll need these values later in the process.
+1. After registration, you can view the details of the application. Note the **Application (client) ID** and the **Directory (tenant) ID**. You'll need these values later in the process.
- :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-id.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-id.png" alt-text="Screenshot showing app ID.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-id.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-id.png" alt-text="Screenshot that shows the app ID.":::
-4. You now need to generate an application client secret, which is similar to creating a password to use with a username. Select **Certificates & secrets** and then **New client secret**. Give the secret a name to identify its purpose and select an **Expires** duration. *1 year* is selected here although for a production implementation, you would follow best practices for a secret rotation procedure or use a more secure authentication mode such a certificate.
+1. Generate an application client secret, which is similar to creating a password to use with a username. Select **Certificates & secrets** > **New client secret**. Give the secret a name to identify its purpose and select an **Expires** duration. The option **12 months** is selected here. For a production implementation, you would follow best practices for a secret rotation procedure or use a more secure authentication mode, such as a certificate.
- :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-secret.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-secret.png" alt-text="Screenshot showing secret for new app.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-secret.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-secret.png" alt-text="Screenshot that shows the secret for the new app.":::
-5. Click **Add** to save the secret and then note the **Value**. Ensure that you record this value since You can't recover it once you navigate away from this page. Use the same security measures as you would for safekeeping a password as it's the functional equivalent.
+1. Select **Add** to save the secret and then note the **Value**. Ensure that you record this value because you can't recover it after you leave this page. Use the same security measures as you would for safekeeping a password because it's the functional equivalent.
- :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" alt-text="Screenshot show secret value for new app.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" alt-text="Screenshot that shows the secret value for the new app.":::
-## Create new table in Log Analytics workspace
-The custom table must be created before you can send data to it. The table for this tutorial will include three columns, as described in the schema below. The `name`, `type`, and `description` properties are mandatory for each column. The properties `isHidden` and `isDefaultDisplay` both default to `false` if not explicitly specified. Possible data types are `string`, `int`, `long`, `real`, `boolean`, `dateTime`, `guid`, and `dynamic`.
+## Create a new table in a Log Analytics workspace
+The custom table must be created before you can send data to it. The table for this tutorial will include three columns, as described in the following schema. The `name`, `type`, and `description` properties are mandatory for each column. The properties `isHidden` and `isDefaultDisplay` both default to `false` if not explicitly specified. Possible data types are `string`, `int`, `long`, `real`, `boolean`, `dateTime`, `guid`, and `dynamic`.
-Use the **Tables - Update** API to create the table with the PowerShell code below.
+Use the **Tables - Update** API to create the table with the following PowerShell code.
> [!IMPORTANT]
-> Custom tables must use a suffix of *_CL*.
+> Custom tables must use a suffix of `_CL`.
-1. Click the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
+1. Select the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
- :::image type="content" source="media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening Cloud Shell":::
+ :::image type="content" source="media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot that shows opening Cloud Shell.":::
-2. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it.
+1. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it.
```PowerShell $tableParams = @'
Use the **Tables - Update** API to create the table with the PowerShell code bel
Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams ```
+## Create a data collection endpoint
+A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accept the data being sent to Azure Monitor. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE must be located in the same region as the Log Analytics workspace where the data will be sent.
-## Create data collection endpoint
-A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md) is required to accept the data being sent to Azure Monitor. Once you configure the DCE and link it to a data collection rule, you can send data over HTTP from your application. The DCE must be located in the same region as the Log Analytics Workspace where the data will be sent.
-
-1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+1. In the Azure portal's search box, enter **template** and then select **Deploy a custom template**.
- :::image type="content" source="media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot to deploy custom template.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows how to deploy a custom template.":::
-2. Click **Build your own template in the editor**.
+1. Select **Build your own template in the editor**.
- :::image type="content" source="media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot to build template in the editor.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows how to build a template in the editor.":::
-3. Paste the Resource Manager template below into the editor and then click **Save**. You don't need to modify this template since you will provide values for its parameters.
+1. Paste the following ARM template into the editor and then select **Save**. You don't need to modify this template because you'll provide values for its parameters.
- :::image type="content" source="media/tutorial-workspace-transformations-api/edit-template.png" lightbox="media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot to edit Resource Manager template.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-api/edit-template.png" lightbox="media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows how to edit an ARM template.":::
```json
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
} ```
-4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide values a **Name** for the data collection endpoint. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection endpoint.
+1. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the DCR and then provide values like a **Name** for the DCE. The **Location** should be the same location as the workspace. The **Region** will already be populated and will be used for the location of the DCE.
- :::image type="content" source="media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot to edit custom deployment values.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot that shows how to edit custom deployment values.":::
-5. Click **Review + create** and then **Create** when you review the details.
+1. Select **Review + create** and then select **Create** after you review the details.
-6. Once the DCE is created, select it so you can view its properties. Note the **Logs ingestion URI** since you'll need this in a later step.
+1. After the DCE is created, select it so that you can view its properties. Note the **Logs ingestion URI** because you'll need it in a later step.
- :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" alt-text="Screenshot for data collection endpoint uri.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" alt-text="Screenshot that shows the DCE URI.":::
-7. Click **JSON View** to view other details for the DCE. Copy the **Resource ID** since you'll need this in a later step.
+1. Select **JSON View** to view other details for the DCE. Copy the **Resource ID** because you'll need it in a later step.
- :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" alt-text="Screenshot for data collection endpoint resource ID.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" alt-text="Screenshot that shows the DCE resource ID.":::
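If you prefer to script this step, the DCE can also be created over REST in the same style as the table-creation call earlier. This is a sketch, not the tutorial's own template: the payload shape follows the `Microsoft.Insights/dataCollectionEndpoints` resource type, and the API version, region, and resource names are placeholder assumptions.

```powershell
# Sketch only: create the DCE over REST. The API version, location, and all
# names below are placeholder assumptions, not values from this tutorial.
$dceParams = @'
{
    "location": "westus2",
    "properties": {
        "networkAcls": {
            "publicNetworkAccess": "Enabled"
        }
    }
}
'@

Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionEndpoints/my-dce?api-version=2021-04-01" -Method PUT -Payload $dceParams
```

The `properties.logsIngestion.endpoint` value in the response should correspond to the **Logs ingestion URI** shown in the portal.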
+## Create a data collection rule
+The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of data that's being sent to the HTTP endpoint. It also defines the transformation that will be applied to it. The DCR also defines the destination workspace and table the transformed data will be sent to.
-## Create data collection rule
-The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) defines the schema of data that being sent to the HTTP endpoint, the transformation that will be applied to it, and the destination workspace and table the transformed data will be sent to.
+1. In the Azure portal's search box, enter **template** and then select **Deploy a custom template**.
-1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+ :::image type="content" source="media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows how to deploy a custom template.":::
- :::image type="content" source="media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot to deploy custom template.":::
+1. Select **Build your own template in the editor**.
-2. Click **Build your own template in the editor**.
+ :::image type="content" source="media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows how to build a template in the editor.":::
- :::image type="content" source="media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot to build template in the editor.":::
+1. Paste the following ARM template into the editor and then select **Save**.
-3. Paste the Resource Manager template below into the editor and then click **Save**.
-
- :::image type="content" source="media/tutorial-workspace-transformations-api/edit-template.png" lightbox="media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot to edit Resource Manager template.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-api/edit-template.png" lightbox="media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows how to edit an ARM template.":::
Notice the following details in the DCR defined in this template:
- - `dataCollectionEndpointId`: Resource ID of the data collection endpoint.
+ - `dataCollectionEndpointId`: Identifies the Resource ID of the data collection endpoint.
    - `streamDeclarations`: Defines the columns of the incoming data.
    - `destinations`: Specifies the destination workspace.
    - `dataFlows`: Matches the stream with the destination workspace and specifies the transformation query and the destination table.
```json
}
```
-4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide values defined in the template. This includes a **Name** for the data collection rule and the **Workspace Resource ID** that you collected in a previous step. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection rule.
+1. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the DCR. Then provide values defined in the template. The values include a **Name** for the DCR and the **Workspace Resource ID** that you collected in a previous step. The **Location** should be the same location as the workspace. The **Region** will already be populated and will be used for the location of the DCR.
- :::image type="content" source="media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot to edit custom deployment values.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot that shows how to edit custom deployment values.":::
-5. Click **Review + create** and then **Create** when you review the details.
+1. Select **Review + create** and then select **Create** after you review the details.
-6. When the deployment is complete, expand the **Deployment details** box and click on your data collection rule to view its details. Click **JSON View**.
+1. When the deployment is complete, expand the **Deployment details** box and select your DCR to view its details. Select **JSON View**.
- :::image type="content" source="media/tutorial-workspace-transformations-api/data-collection-rule-details.png" lightbox="media/tutorial-workspace-transformations-api/data-collection-rule-details.png" alt-text="Screenshot for data collection rule details.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-api/data-collection-rule-details.png" lightbox="media/tutorial-workspace-transformations-api/data-collection-rule-details.png" alt-text="Screenshot that shows DCR details.":::
-7. Copy the **Resource ID** for the data collection rule. You'll use this in the next step.
+1. Copy the **Resource ID** for the DCR. You'll use it in the next step.
- :::image type="content" source="media/tutorial-workspace-transformations-api/data-collection-rule-json-view.png" lightbox="media/tutorial-workspace-transformations-api/data-collection-rule-json-view.png" alt-text="Screenshot for data collection rule JSON view.":::
+ :::image type="content" source="media/tutorial-workspace-transformations-api/data-collection-rule-json-view.png" lightbox="media/tutorial-workspace-transformations-api/data-collection-rule-json-view.png" alt-text="Screenshot that shows DCR JSON view.":::
> [!NOTE]
- > All of the properties of the DCR, such as the transformation, may not be displayed in the Azure portal even though the DCR was successfully created with those properties.
-
+ > All the properties of the DCR, such as the transformation, might not be displayed in the Azure portal even though the DCR was successfully created with those properties.
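To make the relationship between those elements concrete, the DCR can also be created over REST in the style of the earlier table-creation call. This is a sketch, not the tutorial's template: the stream name, column list, transformation, API version, and all IDs are placeholder assumptions.

```powershell
# Sketch only: a minimal DCR payload wiring one custom stream to one
# workspace table. Every name, ID, and the API version below is a
# placeholder assumption, not a value from this tutorial.
$dcrParams = @'
{
    "location": "westus2",
    "properties": {
        "dataCollectionEndpointId": "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionEndpoints/my-dce",
        "streamDeclarations": {
            "Custom-MyTable_CL": {
                "columns": [
                    { "name": "TimeGenerated", "type": "datetime" },
                    { "name": "RawData", "type": "string" }
                ]
            }
        },
        "destinations": {
            "logAnalytics": [
                {
                    "workspaceResourceId": "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}",
                    "name": "myWorkspace"
                }
            ]
        },
        "dataFlows": [
            {
                "streams": [ "Custom-MyTable_CL" ],
                "destinations": [ "myWorkspace" ],
                "transformKql": "source | extend TimeGenerated = now()",
                "outputStream": "Custom-MyTable_CL"
            }
        ]
    }
}
'@

Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionRules/my-dcr?api-version=2021-09-01-preview" -Method PUT -Payload $dcrParams

# Reading the DCR back exposes properties.immutableId, the value that
# ingestion calls target (also visible in the JSON view mentioned above).
$dcr = Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionRules/my-dcr?api-version=2021-09-01-preview" -Method GET
($dcr.Content | ConvertFrom-Json).properties.immutableId
```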
-## Assign permissions to DCR
-Once the data collection rule has been created, the application needs to be given permission to it. This will allow any application using the correct application ID and application key to send data to the new DCE and DCR.
+## Assign permissions to a DCR
+After the DCR has been created, the application needs to be given permission to it. Permission will allow any application using the correct application ID and application key to send data to the new DCE and DCR.
-1. From the DCR in the Azure portal, select **Access Control (IAM)** and then **Add role assignment**.
+1. From the DCR in the Azure portal, select **Access Control (IAM)** > **Add role assignment**.
- :::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-create.png" alt-text="Screenshot for adding custom role assignment to DCR.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-create.png" alt-text="Screenshot that shows adding a custom role assignment to DCR.":::
-2. Select **Monitoring Metrics Publisher** and click **Next**. You could instead create a custom action with the `Microsoft.Insights/Telemetry/Write` data action.
+1. Select **Monitoring Metrics Publisher** and select **Next**. You could instead create a custom action with the `Microsoft.Insights/Telemetry/Write` data action.
- :::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment-select-role.png" lightbox="media/tutorial-logs-ingestion-portal/add-role-assignment-select-role.png" alt-text="Screenshot for selecting role for DCR role assignment.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment-select-role.png" lightbox="media/tutorial-logs-ingestion-portal/add-role-assignment-select-role.png" alt-text="Screenshot that shows selecting a role for DCR role assignment.":::
-3. Select **User, group, or service principal** for **Assign access to** and click **Select members**. Select the application that you created and click **Select**.
+1. Select **User, group, or service principal** for **Assign access to** and choose **Select members**. Select the application that you created and choose **Select**.
- :::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment-select-member.png" lightbox="media/tutorial-logs-ingestion-portal/add-role-assignment-select-member.png" alt-text="Screenshot for selecting members for DCR role assignment.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment-select-member.png" lightbox="media/tutorial-logs-ingestion-portal/add-role-assignment-select-member.png" alt-text="Screenshot that shows selecting members for the DCR role assignment.":::
+1. Select **Review + assign** and verify the details before you save your role assignment.
-4. Click **Review + assign** and verify the details before saving your role assignment.
-
- :::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" lightbox="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" alt-text="Screenshot for saving DCR role assignment.":::
-
+ :::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" lightbox="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" alt-text="Screenshot that shows saving the DCR role assignment.":::
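The same assignment can be scripted. A minimal sketch, assuming the Az PowerShell module; the application ID and DCR resource ID are placeholders:

```powershell
# Sketch: grant the application's service principal the Monitoring Metrics
# Publisher role on the DCR. Both IDs below are placeholders.
New-AzRoleAssignment `
    -ApplicationId "00000000-0000-0000-0000-000000000000" `
    -RoleDefinitionName "Monitoring Metrics Publisher" `
    -Scope "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionRules/my-dcr"
```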
## Send sample data
-The following PowerShell code sends data to the endpoint using HTTP REST fundamentals.
+The following PowerShell code sends data to the endpoint by using HTTP REST fundamentals.
> [!NOTE]
-> This tutorial uses commands that require PowerShell v7.0 or later. Please make sure your local installation of PowerShell is up to date, or execute this script using the Azure CloudShell.
+> This tutorial uses commands that require PowerShell v7.0 or later. Make sure your local installation of PowerShell is up to date or execute this script by using Azure Cloud Shell.
-1. Run the following PowerShell command which adds a required assembly for the script.
+1. Run the following PowerShell command, which adds a required assembly for the script.
```powershell
Add-Type -AssemblyName System.Web
```
-1. Replace the parameters in the *step 0* section with values from the resources that you just created. You may also want to replace the sample data in the *step 2* section with your own.
+1. Replace the parameters in the **Step 0** section with values from the resources that you created. You might also want to replace the sample data in the **Step 2** section with your own.
```powershell
##################
- ### Step 0: set parameters required for the rest of the script
+ ### Step 0: Set parameters required for the rest of the script.
##################
#information needed to authenticate to AAD and obtain a bearer token
$tenantId = "00000000-0000-0000-0000-000000000000"; #Tenant ID the data collection endpoint resides in
$dceEndpoint = "https://my-dcr-name.westus2-1.ingest.monitor.azure.com"; #the endpoint property of the Data Collection Endpoint object

##################
- ### Step 1: obtain a bearer token used later to authenticate against the DCE
+ ### Step 1: Obtain a bearer token used later to authenticate against the DCE.
##################
$scope= [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default")
$body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials";
"@; ##################
- ### Step 3: send the data to Log Analytics via the DCE.
+ ### Step 3: Send the data to Log Analytics via the DCE.
##################
$body = $staticData;
$headers = @{"Authorization"="Bearer $bearerToken";"Content-Type"="application/json"};
```

> [!NOTE]
- > If you receive an `Unable to find type [System.Web.HttpUtility].` error, run the last line in section 1 of the script for a fix and executely. Executing it uncommented as part of the script will not resolve the issue - the command must be executed separately.
+ > If you receive an `Unable to find type [System.Web.HttpUtility].` error, run the last line in section 1 of the script for a fix and execute it. Executing it uncommented as part of the script won't resolve the issue. The command must be executed separately.
-2. After executing this script, you should see a `HTTP - 204` response, and in just a few minutes, the data arrive to your Log Analytics workspace.
+1. After you execute this script, you should see an `HTTP - 204` response. In a few minutes, the data arrives in your Log Analytics workspace.
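One way to confirm that rows landed is to query the destination table. A minimal sketch, assuming the Az.OperationalInsights module; the workspace GUID is a placeholder, and `MyTable_CL` stands in for your table name:

```powershell
# Sketch: query the destination table to confirm ingestion. The workspace ID
# is the workspace's customer ID (a GUID); MyTable_CL is a placeholder.
$workspaceId = "00000000-0000-0000-0000-000000000000"
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query "MyTable_CL | take 10").Results
```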
## Troubleshooting
-This section describes different error conditions you may receive and how to correct them.
+This section describes different error conditions you might receive and how to correct them.
### Script returns error code 403
-Ensure that you have the correct permissions for your application to the DCR. You may also need to wait up to 30 minutes for permissions to propagate.
+Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate.
-### Script returns error code 413 or warning of `TimeoutExpired` with the message `ReadyBody_ClientConnectionAbort` in the response
-The message is too large. The maximum message size is currently 1MB per call.
+### Script returns error code 413 or warning of TimeoutExpired with the message ReadyBody_ClientConnectionAbort in the response
+The message is too large. The maximum message size is currently 1 MB per call.
### Script returns error code 429
-API limits have been exceeded. Refer to [service limits for Logs ingestion API](../service-limits.md#logs-ingestion-api) for details on the current limits.
+API limits have been exceeded. For information on the current limits, see [Service limits for the Logs Ingestion API](../service-limits.md#logs-ingestion-api).
### Script returns error code 503
-Ensure that you have the correct permissions for your application to the DCR. You may also need to wait up to 30 minutes for permissions to propagate.
+Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate.
### You don't receive an error, but data doesn't appear in the workspace
-The data may take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes.
+The data might take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes.
+
+### IntelliSense in Log Analytics doesn't recognize the new table
+The cache that drives IntelliSense might take up to 24 hours to update.
-### IntelliSense in Log Analytics not recognizing new table
-The cache that drives IntelliSense may take up to 24 hours to update.
## Next steps

-- [Complete a similar tutorial using the Azure portal.](tutorial-logs-ingestion-portal.md)
-- [Read more about custom logs.](logs-ingestion-api-overview.md)
+- [Complete a similar tutorial using the Azure portal](tutorial-logs-ingestion-portal.md)
+- [Read more about custom logs](logs-ingestion-api-overview.md)
- [Learn more about writing transformation queries](../essentials//data-collection-transformations.md)
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
Title: Design a Log Analytics workspace architecture
-description: Describes the considerations and recommendations for customers preparing to deploy a workspace in Azure Monitor.
+description: The article describes the considerations and recommendations for customers preparing to deploy a workspace in Azure Monitor.
Last updated 05/25/2022

# Design a Log Analytics workspace architecture
-While a single [Log Analytics workspace](log-analytics-workspace-overview.md) may be sufficient for many environments using Azure Monitor and Microsoft Sentinel, many organizations will create multiple workspaces to optimize costs and better meet different business requirements. This article presents a set of criteria for determining whether to use a single workspace or multiple workspaces and the configuration and placement of those workspaces to meet your particular requirements while optimizing your costs.
+
+A single [Log Analytics workspace](log-analytics-workspace-overview.md) might be sufficient for many environments that use Azure Monitor and Microsoft Sentinel. But many organizations will create multiple workspaces to optimize costs and better meet different business requirements. This article presents a set of criteria for determining whether to use a single workspace or multiple workspaces. It also discusses the configuration and placement of those workspaces to meet your requirements while optimizing your costs.
> [!NOTE]
-> This article includes both Azure Monitor and Microsoft Sentinel since many customers need to consider both in their design, and most of the decision criteria applies to both. If you only use one of these services, then you can simply ignore the other in your evaluation.
+> This article discusses Azure Monitor and Microsoft Sentinel because many customers need to consider both in their design. Most of the decision criteria apply to both services. If you use only one of these services, you can ignore the other in your evaluation.
## Design strategy
-Your design should always start with a single workspace since this reduces the complexity of managing multiple workspaces and in querying data from them. There are no performance limitations from the amount of data in your workspace, and multiple services and data sources can send data to the same workspace. As you identify criteria to create additional workspaces, your design should use the fewest number that will match your particular requirements.
-
-Designing a workspace configuration includes evaluation of multiple criteria, some of which may be in conflict. For example, you may be able to reduce egress charges by creating a separate workspace in each Azure region, but consolidating into a single workspace might allow you to reduce charges even more with a commitment tier. Evaluate each of the criteria below independently and consider your particular requirements and priorities in determining which design will be most effective for your particular environment.
+Your design should always start with a single workspace to reduce the complexity of managing multiple workspaces and querying data from them. There are no performance limitations from the amount of data in your workspace. Multiple services and data sources can send data to the same workspace. As you identify criteria to create more workspaces, your design should use the fewest number that will match your requirements.
+Designing a workspace configuration includes evaluation of multiple criteria. But some of the criteria might be in conflict. For example, you might be able to reduce egress charges by creating a separate workspace in each Azure region. Consolidating into a single workspace might allow you to reduce charges even more with a commitment tier. Evaluate each of the criteria independently. Consider your requirements and priorities to determine which design will be most effective for your environment.
## Design criteria
-The following table briefly presents the criteria that you should consider in designing your workspace architecture. The sections below describe each of these criteria in full detail.
+The following table presents criteria to consider when you design your workspace architecture. The sections that follow describe the criteria.
| Criteria | Description |
|:|:|
-| [Segregate operational and security data](#segregate-operational-and-security-data) | Many customers will create separate workspaces for their operational and security data for data ownership and the additional cost from Microsoft Sentinel. In some cases though, you may be able to save cost by consolidating into a single workspace to qualify for a commitment tier. |
-| [Azure tenants](#azure-tenants) | If you have multiple Azure tenants, you'll usually create a workspace in each because several data sources can only send monitoring data to a workspace in the same Azure tenant. |
-| [Azure regions](#azure-regions) | Each workspace resides in a particular Azure region, and you may have regulatory or compliance requirements to store data in particular locations. |
-| [Data ownership](#data-ownership) | You may choose to create separate workspaces to define data ownership, for example by subsidiaries or affiliated companies. |
+| [Segregate operational and security data](#segregate-operational-and-security-data) | Many customers will create separate workspaces for their operational and security data for data ownership and the extra cost from Microsoft Sentinel. In some cases, you might be able to save costs by consolidating into a single workspace to qualify for a commitment tier. |
+| [Azure tenants](#azure-tenants) | If you have multiple Azure tenants, you'll usually create a workspace in each one. Several data sources can only send monitoring data to a workspace in the same Azure tenant. |
+| [Azure regions](#azure-regions) | Each workspace resides in a particular Azure region. You might have regulatory or compliance requirements to store data in specific locations. |
+| [Data ownership](#data-ownership) | You might choose to create separate workspaces to define data ownership. For example, you might create workspaces by subsidiaries or affiliated companies. |
| [Split billing](#split-billing) | By placing workspaces in separate subscriptions, they can be billed to different parties. |
-| [Data retention and archive](#data-retention-and-archive) | You can set different retention settings for each table in a workspace, but you need a separate workspace if you require different retention settings for different resources that send data to the same tables. |
+| [Data retention and archive](#data-retention-and-archive) | You can set different retention settings for each table in a workspace. You need a separate workspace if you require different retention settings for different resources that send data to the same tables. |
| [Commitment tiers](#commitment-tiers) | Commitment tiers allow you to reduce your ingestion cost by committing to a minimum amount of daily data in a single workspace. |
| [Legacy agent limitations](#legacy-agent-limitations) | Legacy virtual machine agents have limitations on the number of workspaces they can connect to. |
| [Data access control](#data-access-control) | Configure access to the workspace and to different tables and data from different resources. |

### Segregate operational and security data
-Most customers who use both Azure Monitor and Microsoft Sentinel will create a dedicated workspace for each to segregate ownership of data between your operational and security teams and also to optimize costs. If Microsoft Sentinel is enabled in a workspace, then all data in that workspace is subject to Sentinel pricing, even if it's operational data collected by Azure Monitor. While a workspace with Sentinel gets 3 months of free data retention instead of 31 days, this will typically result in higher cost for operational data in a workspace without Sentinel. See [Azure Monitor Logs pricing details](cost-logs.md#workspaces-with-microsoft-sentinel).
-
-The exception is if combining data in the same workspace helps you reach a [commitment tier](#commitment-tiers), which provides a discount to your ingestion charges. For example, consider an organization that has operational data and security data each ingesting about 50 GB per day. Combining the data in the same workspace would allow a commitment tier at 100 GB per day that would provide a 15% discount for Azure Monitor and 50% discount for Sentinel.
+Most customers who use both Azure Monitor and Microsoft Sentinel will create a dedicated workspace for each to segregate ownership of data between operational and security teams. This approach also helps to optimize costs. If Microsoft Sentinel is enabled in a workspace, all data in that workspace is subject to Microsoft Sentinel pricing, even if it's operational data collected by Azure Monitor.
-If you create separate workspaces for other criteria then you'll usually create additional workspace pairs. For example, if you have two Azure tenants, you may create four workspaces - an operational and security workspace in each tenant.
+A workspace with Microsoft Sentinel gets three months of free data retention instead of 31 days. This scenario typically results in higher costs for operational data in a workspace without Microsoft Sentinel. See [Azure Monitor Logs pricing details](cost-logs.md#workspaces-with-microsoft-sentinel).
+The exception is if combining data in the same workspace helps you reach a [commitment tier](#commitment-tiers), which provides a discount to your ingestion charges. For example, consider an organization that has operational data and security data each ingesting about 50 GB per day. Combining the data in the same workspace would allow a commitment tier at 100 GB per day. That scenario would provide a 15% discount for Azure Monitor and a 50% discount for Microsoft Sentinel.
-- **If you use both Azure Monitor and Microsoft Sentinel**, create a separate workspace for each. Consider combining the two if it helps you reach a commitment tier.
-- **If you use both Microsoft Sentinel and Microsoft Defender for Cloud**, consider using the same workspace for both solutions to keep security data in one place.
+If you create separate workspaces for other criteria, you'll usually create more workspace pairs. For example, if you have two Azure tenants, you might create four workspaces with an operational and security workspace in each tenant.
+- **If you use both Azure Monitor and Microsoft Sentinel:** Create a separate workspace for each. Consider combining the two if it helps you reach a commitment tier.
+- **If you use both Microsoft Sentinel and Microsoft Defender for Cloud:** Consider using the same workspace for both solutions to keep security data in one place.
### Azure tenants
-Most resources can only send monitoring data to a workspace in the same Azure tenant. Virtual machines using the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) or the [Log Analytics agents](../agents/log-analytics-agent.md) can send data to workspaces in separate Azure tenants, which may be a scenario that you consider as a [service provider](#multiple-tenant-strategies).
+Most resources can only send monitoring data to a workspace in the same Azure tenant. Virtual machines that use [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) or the [Log Analytics agents](../agents/log-analytics-agent.md) can send data to workspaces in separate Azure tenants. You might consider this scenario as a [service provider](#multiple-tenant-strategies).
-- **If you have a single Azure tenant**, then create a single workspace for that tenant.
-- **If you have multiple Azure tenants**, then create a workspace for each tenant. See [Multiple tenant strategies](#multiple-tenant-strategies) for other options including strategies for service providers.
-
-### Azure regions
-Log Analytics workspaces each reside in a [particular Azure region](https://azure.microsoft.com/global-infrastructure/geographies/), and you may have regulatory or compliance purposes for keeping data in a particular region. For example, an international company might locate a workspace in each major geographical region, such as United States and Europe.
-
-- **If you have requirements for keeping data in a particular geography**, create a separate workspace for each region with such requirements.
-- **If you do not have requirements for keeping data in a particular geography**, use a single workspace for all regions.
+- **If you have a single Azure tenant:** Create a single workspace for that tenant.
+- **If you have multiple Azure tenants:** Create a workspace for each tenant. For other options including strategies for service providers, see [Multiple tenant strategies](#multiple-tenant-strategies).
-You should also consider potential [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) that may apply when sending data to a workspace from a resource in another region, although these charges are usually minor relative to data ingestion costs for most customers. These charges will typically result from sending data to the workspace from a virtual machine. Monitoring data from other Azure resources using [diagnostic settings](../essentials/diagnostic-settings.md) does not [incur egress charges](../usage-estimated-costs.md#data-transfer-charges).
+### Azure regions
+Each Log Analytics workspace resides in a [particular Azure region](https://azure.microsoft.com/global-infrastructure/geographies/). You might have regulatory or compliance purposes for keeping data in a particular region. For example, an international company might locate a workspace in each major geographical region, such as the United States and Europe.
-Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator) to estimate the cost and determine which regions you actually need. Consider workspaces in multiple regions if bandwidth charges are significant.
+- **If you have requirements for keeping data in a particular geography:** Create a separate workspace for each region with such requirements.
+- **If you don't have requirements for keeping data in a particular geography:** Use a single workspace for all regions.
+Also consider potential [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) that might apply when you're sending data to a workspace from a resource in another region. These charges are usually minor relative to data ingestion costs for most customers. These charges typically result from sending data to the workspace from a virtual machine. Monitoring data from other Azure resources by using [diagnostic settings](../essentials/diagnostic-settings.md) doesn't [incur egress charges](../usage-estimated-costs.md#data-transfer-charges).
-- **If bandwidth charges are significant enough to justify the additional complexity**, create a separate workspace for each region with virtual machines.
-- **If bandwidth charges are not significant enough to justify the additional complexity**, use a single workspace for all regions.
+Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator) to estimate the cost and determine which regions you need. Consider workspaces in multiple regions if bandwidth charges are significant.
+- **If bandwidth charges are significant enough to justify the extra complexity:** Create a separate workspace for each region with virtual machines.
+- **If bandwidth charges aren't significant enough to justify the extra complexity:** Use a single workspace for all regions.
### Data ownership
-You may have a requirement to segregate data or define boundaries based on ownership. For example, you may have different subsidiaries or affiliated companies that require delineation of their monitoring data.
+You might have a requirement to segregate data or define boundaries based on ownership. For example, you might have different subsidiaries or affiliated companies that require delineation of their monitoring data.
-- **If you require data segregation**, use a separate workspace for each data owner.
-- **If you do not require data segregation**, use a single workspace for all data owners.
+- **If you require data segregation:** Use a separate workspace for each data owner.
+- **If you don't require data segregation:** Use a single workspace for all data owners.
### Split billing
-You may need to split billing between different parties or perform charge back to a customer or internal business unit. [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) allows you to view charges by workspace. You can also use a log query to view [billable data volume by Azure resource, resource group, or subscription](analyze-usage.md#data-volume-by-azure-resource-resource-group-or-subscription), which may be sufficient for your billing requirements.
+You might need to split billing between different parties or perform charge back to a customer or internal business unit. You can use [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) to view charges by workspace. You can also use a log query to view [billable data volume by Azure resource, resource group, or subscription](analyze-usage.md#data-volume-by-azure-resource-resource-group-or-subscription). This approach might be sufficient for your billing requirements.
-- **If you do not need to split billing or perform charge back**, use a single workspace for all cost owners.
-- **If you need to split billing or perform charge back**, consider whether [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) or a log query provides granular enough cost reporting for your requirements. If not, use a separate workspace for each cost owner.
+- **If you don't need to split billing or perform charge back:** Use a single workspace for all cost owners.
+- **If you need to split billing or perform charge back:** Consider whether [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) or a log query provides cost reporting that's granular enough for your requirements. If not, use a separate workspace for each cost owner.
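As a sketch of the log-query approach, the `_BilledSize` and `_ResourceId` system columns can approximate a per-resource breakdown. This example assumes the Az.OperationalInsights module and a placeholder workspace GUID:

```powershell
# Sketch: approximate billable volume per resource over the last day.
$workspaceId = "00000000-0000-0000-0000-000000000000"
$query = @"
find where TimeGenerated > ago(1d) project _BilledSize, _ResourceId
| summarize BillableGB = sum(_BilledSize) / 1e9 by _ResourceId
| sort by BillableGB desc
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```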
### Data retention and archive
-You can configure default [data retention and archive settings](data-retention-archive.md) for a workspace or [configure different settings for each table](data-retention-archive.md#set-retention-and-archive-policy-by-table). You may require different settings for different sets of data in a particular table. If this is the case, then you would need to separate that data into different workspaces, each with unique retention settings.
-- **If you can use the same retention and archive settings for all data in each table**, use a single workspace for all resources.
-- **If you can require different retention and archive settings for different resources in the same table**, use a separate workspace for different resources.
-
+You can configure default [data retention and archive settings](data-retention-archive.md) for a workspace or [configure different settings for each table](data-retention-archive.md#set-retention-and-archive-policy-by-table). You might require different settings for different sets of data in a particular table. If so, you need to separate that data into different workspaces, each with unique retention settings.
+- **If you can use the same retention and archive settings for all data in each table:** Use a single workspace for all resources.
+- **If you require different retention and archive settings for different resources in the same table:** Use a separate workspace for different resources.
### Commitment tiers
-[Commitment tiers](../logs/cost-logs.md#commitment-tiers) provide a discount to your workspace ingestion costs when you commit to a particular amount of daily data. You may choose to consolidate data in a single workspace in order to reach the level of a particular tier. This same volume of data spread across multiple workspaces would not be eligible for the same tier, unless you have a dedicated cluster.
-
-If you can commit to daily ingestion of at least 500 GB/day, then you should implement a [dedicated cluster](../logs/cost-logs.md#dedicated-clusters) which provides additional functionality and performance. Dedicated clusters also allow you to combine the data from multiple workspaces in the cluster to reach the level of a commitment tier.
-
-- **If you will ingest at least 500 GB/day across all resources**, create a dedicated cluster and set the appropriate commitment tier.
-- **If you will ingest at least 100 GB/day across resources**, consider combining them into a single workspace to take advantage of a commitment tier.
+[Commitment tiers](../logs/cost-logs.md#commitment-tiers) provide a discount to your workspace ingestion costs when you commit to a specific amount of daily data. You might choose to consolidate data in a single workspace to reach the level of a particular tier. This same volume of data spread across multiple workspaces wouldn't be eligible for the same tier, unless you have a dedicated cluster.
+If you can commit to daily ingestion of at least 500 GB per day, you should implement a [dedicated cluster](../logs/cost-logs.md#dedicated-clusters) that provides extra functionality and performance. Dedicated clusters also allow you to combine the data from multiple workspaces in the cluster to reach the level of a commitment tier.
+- **If you'll ingest at least 500 GB per day across all resources:** Create a dedicated cluster and set the appropriate commitment tier.
+- **If you'll ingest at least 100 GB per day across resources:** Consider combining them into a single workspace to take advantage of a commitment tier.
### Legacy agent limitations
-While you should avoid sending duplicate data to multiple workspaces because of the additional charges, you may have virtual machines connected to multiple workspaces. The most common scenario is an agent connected to separate workspaces for Azure Monitor and Microsoft Sentinel.
-
- The [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and [Log Analytics agent for Windows](../agents/log-analytics-agent.md) can connect to multiple workspaces. The [Log Analytics agent for Linux](../agents/log-analytics-agent.md) though can only connect to a single workspace.
+You should avoid sending duplicate data to multiple workspaces because of the extra charges, but you might have virtual machines connected to multiple workspaces. The most common scenario is an agent connected to separate workspaces for Azure Monitor and Microsoft Sentinel.
-- **If you use the Log Analytics agent for Linux**, migrate to the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) or ensure that your Linux machines only require access to a single workspace.
+ [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) and the [Log Analytics agent for Windows](../agents/log-analytics-agent.md) can connect to multiple workspaces. The [Log Analytics agent for Linux](../agents/log-analytics-agent.md) can only connect to a single workspace.
+- **If you use the Log Analytics agent for Linux:** Migrate to [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) or ensure that your Linux machines only require access to a single workspace.
### Data access control
-When you grant a user [access to a workspace](manage-access.md#azure-rbac), they have access to all data in that workspace. This is appropriate for a member of a central administration or security team who must access data for all resources. Access to the workspace is also determined by resource-context RBAC and table-level RBAC.
+When you grant a user [access to a workspace](manage-access.md#azure-rbac), the user has access to all data in that workspace. This access is appropriate for a member of a central administration or security team who must access data for all resources. Access to the workspace is also determined by resource-context role-based access control (RBAC) and table-level RBAC.
-[Resource-context RBAC](manage-access.md#access-mode)
-By default, if a user has read access to an Azure resource, they inherit permissions to any of that resource's monitoring data sent to the workspace. This allows users to access information about resources they manage without being granted explicit access to the workspace. If you need to block this access, you can change the [access control mode](manage-access.md#access-control-mode) to require explicit workspace permissions.
+[Resource-context RBAC](manage-access.md#access-mode): By default, if a user has read access to an Azure resource, they inherit permissions to any of that resource's monitoring data sent to the workspace. This level of access allows users to access information about resources they manage without being granted explicit access to the workspace. If you need to block this access, you can change the [access control mode](manage-access.md#access-control-mode) to require explicit workspace permissions.
-- **If you want users to be able to access data for their resources**, keep the default access control mode of *Use resource or workspace permissions*.
-- **If you want to explicitly assign permissions for all users**, change the access control mode to *Require workspace permissions*.
+- **If you want users to be able to access data for their resources:** Keep the default access control mode of **Use resource or workspace permissions**.
+- **If you want to explicitly assign permissions for all users:** Change the access control mode to **Require workspace permissions**.
+[Table-level RBAC](manage-access.md#set-table-level-read-access): With table-level RBAC, you can grant or deny access to specific tables in the workspace. In this way, you can implement granular permissions required for specific situations in your environment.
-[Table-level RBAC](manage-access.md#set-table-level-read-access)
-With table-level RBAC, you can grant or deny access to specific tables in the workspace. This allows you to implement granular permissions required for specific situations in your environment.
+For example, you might grant access to only specific tables collected by Microsoft Sentinel to an internal auditing team. Or you might deny access to security-related tables to resource owners who need operational data related to their resources.
-For example, you might grant access to only specific tables collected by Sentinel to an internal auditing team. Or you might deny access to security related tables to resource owners who need operational data related to their resources.
+- **If you don't require granular access control by table:** Grant the operations and security team access to their resources and allow resource owners to use resource-context RBAC for their resources.
+- **If you require granular access control by table:** Grant or deny access to specific tables by using table-level RBAC.
-- **If you don't require granular access control by table**, grant the operations and security team access to their resources and allow resource owners to use resource-context RBAC for their resources.
-- **If you require granular access control by table**, grant or deny access to specific tables using table-level RBAC.
-
-
-## Working with multiple workspaces
-Since many designs will include multiple workspaces, Azure Monitor and Microsoft Sentinel include features to assist you in analyzing this data across workspaces. For details, see the following:
+## Work with multiple workspaces
+Many designs will include multiple workspaces, so Azure Monitor and Microsoft Sentinel include features to assist you in analyzing this data across workspaces. For more information, see:
- [Create a log query across multiple workspaces and apps in Azure Monitor](cross-workspace-query.md)
-- [Extend Microsoft Sentinel across workspaces and tenants](../../sentinel/extend-sentinel-across-workspaces-tenants.md).
+- [Extend Microsoft Sentinel across workspaces and tenants](../../sentinel/extend-sentinel-across-workspaces-tenants.md)
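For example, a cross-workspace log query can union the same table from several workspaces with the `workspace()` expression. A minimal sketch, assuming the Az.OperationalInsights module; the workspace names and GUID are placeholders:

```powershell
# Sketch: union the Heartbeat table across two workspaces referenced by name.
$query = @"
union workspace("contoso-workspace-eu").Heartbeat, workspace("contoso-workspace-us").Heartbeat
| summarize count() by Computer
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId "00000000-0000-0000-0000-000000000000" -Query $query).Results
```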
+
## Multiple tenant strategies
-Environments with multiple Azure tenants, including service providers (MSPs), independent software vendors (ISVs), and large enterprises, often require a strategy where a central administration team has access to administer workspaces located in other tenants. Each of the tenants may represent separate customers or different business units.
+Environments with multiple Azure tenants, including managed service providers (MSPs), independent software vendors (ISVs), and large enterprises, often require a strategy where a central administration team has access to administer workspaces located in other tenants. Each of the tenants might represent separate customers or different business units.
> [!NOTE] > For partners and service providers who are part of the [Cloud Solution Provider (CSP) program](https://partner.microsoft.com/membership/cloud-solution-provider), Log Analytics in Azure Monitor is one of the Azure services available in Azure CSP subscriptions.
-There are two basic strategies for this functionality as described below.
+Two basic strategies for this functionality are described in the following sections.
### Distributed architecture
-In a distributed architecture, a Log Analytics workspace is created in each Azure tenant. This is the only option you can use if you're monitoring Azure services other than virtual machines.
-
-There are two options to allow service provider administrators to access the workspaces in the customer tenants.
-
+In a distributed architecture, a Log Analytics workspace is created in each Azure tenant. This option is the only one you can use if you're monitoring Azure services other than virtual machines.
-- Use [Azure Lighthouse](../../lighthouse/overview.md) to access each customer tenant. The service provider administrators are included in an Azure AD user group in the service provider's tenant, and this group is granted access during the onboarding process for each customer. The administrators can then access each customer's workspaces from within their own service provider tenant, rather than having to log into each customer's tenant individually. For more information, see [Monitor customer resources at scale](../../lighthouse/how-to/monitor-at-scale.md).
+There are two options to allow service provider administrators to access the workspaces in the customer tenants:
-- Add individual users from the service provider as [Azure Active Directory guest users (B2B)](../../active-directory/external-identities/what-is-b2b.md). The customer tenant administrators manage individual access for each service provider administrator, and the service provider administrators must log in to the directory for each tenant in the Azure portal to be able to access these workspaces.
+- Use [Azure Lighthouse](../../lighthouse/overview.md) to access each customer tenant. The service provider administrators are included in an Azure Active Directory (Azure AD) user group in the service provider's tenant. This group is granted access during the onboarding process for each customer. The administrators can then access each customer's workspaces from within their own service provider tenant instead of having to sign in to each customer's tenant individually. For more information, see [Monitor customer resources at scale](../../lighthouse/how-to/monitor-at-scale.md).
+- Add individual users from the service provider as [Azure AD guest users (B2B)](../../active-directory/external-identities/what-is-b2b.md). The customer tenant administrators manage individual access for each service provider administrator. The service provider administrators must sign in to the directory for each tenant in the Azure portal to access these workspaces.
-
-Advantages to this strategy are:
+Advantages to this strategy:
- Logs can be collected from all types of resources.
-- The customer can confirm specific levels of permissions with [Azure delegated resource management](../../lighthouse/concepts/architecture.md), or can manage access to the logs using their own [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-- Each customer can have different settings for their workspace such as retention and data cap.
+- The customer can confirm specific levels of permissions with [Azure delegated resource management](../../lighthouse/concepts/architecture.md). Or the customer can manage access to the logs by using their own [Azure RBAC](../../role-based-access-control/overview.md).
+- Each customer can have different settings for their workspace, such as retention and data cap.
- Isolation between customers for regulatory and compliance.
-- The charge for each workspace in included in the bill for the customer's subscription.
+- The charge for each workspace is included in the bill for the customer's subscription.
+
+Disadvantages to this strategy:
-Disadvantages to this strategy are:
+- Centrally visualizing and analyzing data across customer tenants with tools such as Azure Monitor workbooks can result in slower experiences. This is the case especially when analyzing data across more than 50 workspaces.
+- If customers aren't onboarded for Azure delegated resource management, service provider administrators must be provisioned in the customer directory. This requirement makes it more difficult for the service provider to manage many customer tenants at once.
-- Centrally visualizing and analyzing data across customer tenants with tools such as Azure Monitor Workbooks can result in slower experiences, especially when analyzing data across more than 50 workspaces.
-- If customers are not onboarded for Azure delegated resource management, service provider administrators must be provisioned in the customer directory. This makes it more difficult for the service provider to manage a large number of customer tenants at once.

### Centralized

A single workspace is created in the service provider's subscription. This option can only collect data from customer virtual machines. Agents installed on the virtual machines are configured to send their logs to this central workspace.
-Advantages to this strategy are:
-
-- Easy to manage a large number of customers.
-- Service provider has full ownership over the logs and the various artifacts such as functions and saved queries.
-- Service provider can perform analytics across all of its customers.
+Advantages to this strategy:
-Disadvantages to this strategy are:
+- It's easy to manage many customers.
+- The service provider has full ownership over the logs and the various artifacts, such as functions and saved queries.
+- A service provider can perform analytics across all of its customers.
-- Logs can only be collected from virtual machines with an agent. It will not work with PaaS, SaaS and Azure fabric data sources.
-- It may be difficult to separate data between customers, since their data shares a single workspace. Queries need to use the computer's fully qualified domain name (FQDN) or the Azure subscription ID.
-- All data from all customers will be stored in the same region with a single bill and same retention and configuration settings.
+Disadvantages to this strategy:
+- Logs can only be collected from virtual machines with an agent. It won't work with PaaS, SaaS, or Azure Service Fabric data sources.
+- It might be difficult to separate data between customers because their data shares a single workspace. Queries need to use the computer's fully qualified domain name or the Azure subscription ID.
+- All data from all customers will be stored in the same region with a single bill and the same retention and configuration settings.
### Hybrid
-In a hybrid model, each tenant has its own workspace, and some mechanism is used to pull data into a central location for reporting and analytics. This data could include a small number of data types or a summary of the activity such as daily statistics.
+In a hybrid model, each tenant has its own workspace. A mechanism is used to pull data into a central location for reporting and analytics. This data could include a small number of data types or a summary of the activity, such as daily statistics.
There are two options to implement logs in a central location:

-- Central workspace. The service provider creates a workspace in its tenant and use a script that utilizes the [Query API](api/overview.md) with the [logs ingestion API](logs-ingestion-api-overview.md) to bring the data from the tenant workspaces to this central location. Another option is to use [Azure Logic Apps](../../logic-apps/logic-apps-overview.md) to copy data to the central workspace.
-
-- Power BI. The tenant workspaces export data to Power BI using the integration between the [Log Analytics workspace and Power BI](log-powerbi.md).
-
+- **Central workspace**: The service provider creates a workspace in its tenant and uses a script that utilizes the [Query API](api/overview.md) with the [logs ingestion API](logs-ingestion-api-overview.md) to bring the data from the tenant workspaces to this central location. Another option is to use [Azure Logic Apps](../../logic-apps/logic-apps-overview.md) to copy data to the central workspace.
+- **Power BI**: The tenant workspaces export data to Power BI by using the integration between the [Log Analytics workspace and Power BI](log-powerbi.md).
## Next steps

-- [Learn more about designing and configuring data access in a workspace.](manage-access.md)
-- [Get sample workspace architectures for Microsoft Sentinel.](../../sentinel/sample-workspace-designs.md)
+- Learn more about [designing and configuring data access in a workspace](manage-access.md).
+- Get [sample workspace architectures for Microsoft Sentinel](../../sentinel/sample-workspace-designs.md).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 10/26/2022 Last updated : 11/03/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
You should understand a few considerations when you plan for Azure NetApp Files network.
-> [!IMPORTANT]
-> [!INCLUDE [Standard network features pricing](includes/standard-networking-pricing.md)]
- ### Constraints The following table describes what's supported for each network features configuration:
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 11/02/2022 Last updated : 11/03/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Standard network features](configure-network-features.md) are now generally available [in supported regions](azure-netapp-files-network-topologies.md#supported-regions). Standard network features now include Global VNet peering. You must still [register the feature](configure-network-features.md#register-the-feature) before using it.
- [!INCLUDE [Standard network features pricing](includes/standard-networking-pricing.md)]
+
+ Regular billing for Standard network features on Azure NetApp Files began November 1, 2022.
## July 2022
Azure NetApp Files is updated regularly. This article provides a summary about t
[Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for Azure VMware Solution provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
- Regional Coverage: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East US, France Central, Germany West Central, Japan West, North Central US, North Europe, South Central US, Southeast Asia, Switzerland West, UK South, UK West, West Europe, West US. Regional coverage will expand as the preview progresses.
- * [Azure Policy built-in definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions) Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. Azure NetApp Files already supports Azure Policy via custom policy definitions. Azure NetApp Files now also provides built-in policy to enable organization admins to restrict creation of unsecure NFS volumes or audit existing volumes more easily.
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file description: Describes the configuration file for your Bicep deployments Previously updated : 04/26/2022 Last updated : 11/02/2022 # Configure your Bicep environment
Bicep supports a configuration file named `bicepconfig.json`. Within this file,
To customize values, create this file in the directory where you store Bicep files. You can add `bicepconfig.json` files in multiple directories. The configuration file closest to the Bicep file in the directory hierarchy is used.
-To create a `bicepconfig.json` file in Visual Studio Code, see [Visual Studio Code](./visual-studio-code.md#create-bicep-configuration-file).
+To create a `bicepconfig.json` file in Visual Studio Code, open the Command Palette (**[CTRL/CMD]**+**[SHIFT]**+**P**), and then select **Bicep: Create Bicep Configuration File**. For more information, see [Visual Studio Code](./visual-studio-code.md#create-bicep-configuration-file).
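For reference, a minimal `bicepconfig.json` might look like the following sketch; the linter rule shown here is just one example of the available settings described below:

```json
{
  "analyzers": {
    "core": {
      "enabled": true,
      "rules": {
        "no-unused-params": {
          "level": "warning"
        }
      }
    }
  }
}
```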
+ ## Available settings
azure-resource-manager Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/install.md
Title: Set up Bicep development and deployment environments description: How to configure Bicep development and deployment environments Previously updated : 08/08/2022 Last updated : 11/03/2022
Let's make sure your environment is set up for working with Bicep files. To auth
| Tasks | Options | Bicep CLI installation |
| | - | -- |
| Author | [VS Code and Bicep extension](#vs-code-and-bicep-extension) | automatic |
+| | [Visual Studio and Bicep extension](#visual-studio-and-bicep-extension) | automatic |
| Deploy | [Azure CLI](#azure-cli) | automatic |
| | [Azure PowerShell](#azure-powershell) | [manual](#install-manually) |
| | [VS Code and Bicep extension](#vs-code-and-bicep-extension) | automatic |
If you get an error during installation, see [Troubleshoot Bicep installation](i
You can deploy your Bicep files directly from the VS Code editor. For more information, see [Deploy Bicep files from Visual Studio Code](deploy-vscode.md).
+## Visual Studio and Bicep extension
+
+To author Bicep files from Visual Studio, you need:
+
+- **Visual Studio** - If you don't already have Visual Studio, [install it](https://visualstudio.microsoft.com/).
+- **Bicep extension for Visual Studio**. Visual Studio with the Bicep extension provides language support and resource autocompletion. The extension helps you create and validate Bicep files. Install the extension from [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.visualstudiobicep).
+
+To walk through a tutorial, see [Quickstart: Create Bicep files with Visual Studio](./quickstart-create-bicep-use-visual-studio.md).
+ ## Azure CLI When you use Azure CLI with Bicep, you have everything you need to [deploy](deploy-cli.md) and [decompile](decompile.md) Bicep files. Azure CLI automatically installs the Bicep CLI when a command is executed that needs it.
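For example, a minimal flow might look like the following sketch, where the resource group and file names are placeholders:

```azurecli-interactive
# Check which Bicep CLI version Azure CLI manages (installed on first use).
az bicep version

# Deploy a Bicep file directly; no separate build step is required.
az deployment group create \
  --resource-group exampleRG \
  --template-file main.bicep
```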
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/overview.md
Title: Bicep language for deploying Azure resources description: Describes the Bicep language for deploying infrastructure to Azure. It provides an improved authoring experience over using JSON to develop templates. Previously updated : 03/14/2022 Last updated : 11/03/2022 # What is Bicep?
Bicep provides the following advantages:
- **Authoring experience**: When you use the [Bicep Extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) to create your Bicep files, you get a first-class authoring experience. The editor provides rich type-safety, intellisense, and syntax validation.
-
+ ![Bicep file authoring example](./media/overview/bicep-intellisense.gif)
+ You can also create Bicep files in Visual Studio with the [Bicep extension for Visual Studio](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.visualstudiobicep).
+ - **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Bicep files are idempotent, which means you can deploy the same file many times and get the same resource types in the same state. You can develop one file that represents the desired state, rather than developing lots of separate files to represent updates. - **Orchestration**: You don't have to worry about the complexities of ordering operations. Resource Manager orchestrates the deployment of interdependent resources so they're created in the correct order. When possible, Resource Manager deploys resources in parallel so your deployments finish faster than serial deployments. You deploy the file through one command, rather than through multiple imperative commands.
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md
Title: Create Bicep files - Visual Studio Code description: Use Visual Studio Code and the Bicep extension to Bicep files for deploy Azure resources Previously updated : 06/30/2022 Last updated : 11/03/2022 #Customer intent: As a developer new to Azure deployment, I want to learn how to use Visual Studio Code to create and edit Bicep files, so I can use them to deploy Azure resources.
This quickstart guides you through the steps to create a [Bicep file](overview.md) with Visual Studio Code. You'll create a storage account and a virtual network. You'll also learn how the Bicep extension simplifies development by providing type safety, syntax validation, and autocompletion.
+A similar authoring experience is also supported in Visual Studio. See [Quickstart: Create Bicep files with Visual Studio](./quickstart-create-bicep-use-visual-studio.md).
+ ## Prerequisites If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
azure-resource-manager Quickstart Create Bicep Use Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio.md
This quickstart guides you through the steps to create a [Bicep file](overview.md) with Visual Studio. You'll create a storage account and a virtual network. You'll also learn how the Bicep extension simplifies development by providing type safety, syntax validation, and autocompletion.
+A similar authoring experience is also supported in Visual Studio Code. See [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md).
+ ## Prerequisites - Azure Subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
azure-resource-manager Quickstart Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md
To work with module registries, you must have [Bicep CLI](./install.md) version
A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md). To create one, see [Quickstart: Create a container registry by using a Bicep file](../../container-registry/container-registry-get-started-bicep.md).
-To set up your environment for Bicep development, see [Install Bicep tools](install.md). After completing those steps, you'll have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
+To set up your environment for Bicep development, see [Install Bicep tools](install.md). After completing those steps, you'll have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep), or [Visual Studio](https://visualstudio.microsoft.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.visualstudiobicep).
## Create Bicep modules
azure-resource-manager Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md
Title: Create Bicep files by using Visual Studio Code description: Describes how to create Bicep files by using Visual Studio Code Previously updated : 09/29/2022 Last updated : 11/02/2022 # Create Bicep files by using Visual Studio Code
The `build` command converts a Bicep file to an Azure Resource Manager template
The [Bicep configuration file (bicepconfig.json)](./bicep-config.md) can be used to customize your Bicep development experience. You can add `bicepconfig.json` in multiple directories. The configuration file closest to the bicep file in the directory hierarchy is used. When you select this command, the extension opens a dialog for you to select a folder. The default folder is where you store the Bicep file. If a `bicepconfig.json` file already exists in the folder, you can overwrite the existing file.
+To create a Bicep configuration file:
+
+1. Open Visual Studio Code.
+1. From the **View** menu, select **Command Palette** (or press **[CTRL/CMD]**+**[SHIFT]**+**P**), and then select **Bicep: Create Bicep Configuration File**.
+1. Select the file directory where you want to place the file.
+1. Save the configuration file when you are done.
### Deploy Bicep file You can deploy Bicep files directly from Visual Studio Code. Select **Deploy Bicep file** from the command palette or from the context menu. The extension prompts you to sign in to Azure, select a subscription, create or select a resource group, and enter parameter values.
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters |
> | | | | |
> | deployments | resource group | 1-64 | Alphanumerics, underscores, parentheses, hyphens, and periods. |
-> | resourcegroups | subscription | 1-90 | Underscores, hyphens, periods, and letters or digits as defined by the [Char.IsLetterOrDigit](/dotnet/api/system.char.isletterordigit) function.<br><br>Valid characters are members of the following categories in [UnicodeCategory](/dotnet/api/system.globalization.unicodecategory):<br>**UppercaseLetter**,<br>**LowercaseLetter**,<br>**TitlecaseLetter**,<br>**ModifierLetter**,<br>**OtherLetter**,<br>**DecimalDigitNumber**.<br><br>Can't end with period. |
+> | resourcegroups | subscription | 1-90 | Underscores, hyphens, periods, parentheses, and letters or digits as defined by the [Char.IsLetterOrDigit](/dotnet/api/system.char.isletterordigit) function.<br><br>Valid characters are members of the following categories in [UnicodeCategory](/dotnet/api/system.globalization.unicodecategory):<br>**UppercaseLetter**,<br>**LowercaseLetter**,<br>**TitlecaseLetter**,<br>**ModifierLetter**,<br>**OtherLetter**,<br>**DecimalDigitNumber**.<br><br>Can't end with period. |
> | tagNames | resource | 1-512 | Can't use:<br>`<>%&\?/` or control characters |
> | tagNames / tagValues | tag name | 1-256 | All characters. |
> | templateSpecs | resource group | 1-90 | Alphanumerics, underscores, parentheses, hyphens, and periods. |
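For example, the following sketch (names are placeholders) creates a resource group whose name uses the parentheses now allowed by this rule:

```azurecli-interactive
# Parentheses are valid in resource group names; a trailing period is not.
az group create --name "example-rg(test)" --location eastus
```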
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
This section describes languages supported by Azure Video Indexer API.
| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (language model) |
|::|:--:|:--:|:--:|:--:|:-:|::|
-| Afrikaans | `af-ZA` | | | | | ✔️ |
-| Arabic (Israel) | `ar-IL` | ✔️ | | | | ✔️ |
-| Arabic (Jordan) | `ar-JO` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Arabic (Kuwait) | `ar-KW` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Arabic (Lebanon) | `ar-LB` | ✔️ | | | ✔️ | ✔️ |
-| Arabic (Oman) | `ar-OM` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Arabic (Palestinian Authority) | `ar-PS` | ✔️ | | | ✔️ | ✔️ |
-| Arabic (Qatar) | `ar-QA` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Arabic (Saudi Arabia) | `ar-SA` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Arabic (United Arab Emirates) | `ar-AE` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Arabic Egypt | `ar-EG` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Arabic Syrian Arab Republic | `ar-SY` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Bangla | `bn-BD` | | | | ✔️ | |
-| Bosnian | `bs-Latn` | | | | ✔️ | |
-| Bulgarian | `bg-BG` | | | | ✔️ | |
-| Catalan | `ca-ES` | | | | ✔️ | |
-| Chinese (Cantonese Traditional) | `zh-HK` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Chinese (Simplified) | `zh-Hans` | ✔️ | ✔️\*<br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| | ✔️ | ✔️ |
-| Chinese (Traditional) | `zh-Hant` | | | | ✔️ | |
-| Croatian | `hr-HR` | | | | ✔️ | |
-| Czech | `cs-CZ` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Danish | `da-DK` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Dutch | `nl-NL` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| English Australia | `en-AU` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| English United Kingdom | `en-GB` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| English United States | `en-US` | ✔️ | ✔️\*<br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | ✔️\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔️ | ✔️ |
-| Estonian | `et-EE` | | | | ✔️ | |
-| Fijian | `en-FJ` | | | | ✔️ | |
-| Filipino | `fil-PH` | | | | ✔️ | |
-| Finnish | `fi-FI` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| French | `fr-FR` | ✔️ | ✔️\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔️\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔️ | ✔️ |
-| French (Canada) | `fr-CA` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| German | `de-DE` | ✔️ | ✔️ \* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔️ \* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔️ | ✔️ |
-| Greek | `el-GR` | | | | ✔️ | |
-| Haitian | `fr-HT` | | | | ✔️ | |
-| Hebrew | `he-IL` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Hindi | `hi-IN` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Hungarian | `hu-HU` | | | | ✔️ | |
-| Indonesian | `id-ID` | | | | ✔️ | |
-| Italian | `it-IT` | ✔️ | ✔️\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | ✔️ | ✔️ | ✔️ |
-| Japanese | `ja-JP` | ✔️ | ✔️\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | ✔️ | ✔️ | ✔️ |
-| Kiswahili | `sw-KE` | | | | ✔️ | |
-| Korean | `ko-KR` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Latvian | `lv-LV` | | | | ✔️ | |
-| Lithuanian | `lt-LT` | | | | ✔️ | |
-| Malagasy | `mg-MG` | | | | ✔️ | |
-| Malay | `ms-MY` | | | | ✔️ | |
-| Maltese | `mt-MT` | | | | ✔️ | |
-| Norwegian | `nb-NO` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Persian | `fa-IR` | ✔️ | | | ✔️ | ✔️ |
-| Polish | `pl-PL` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Portuguese | `pt-BR` | ✔️ | ✔️\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | ✔️ | ✔️ | ✔️ |
-| Portuguese (Portugal) | `pt-PT` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Romanian | `ro-RO` | | | | ✔️ | |
-| Russian | `ru-RU` | ✔️ | ✔️\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | ✔️ | ✔️ | ✔️ |
-| Samoan | `en-WS` | | | | ✔️ | |
-| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | ✔️ | |
-| Serbian (Latin) | `sr-Latn-RS` | | | | ✔️ | |
-| Slovak | `sk-SK` | | | | ✔️ | |
-| Slovenian as default languages, w | `sl-SI` | | | | ✔️ |
-| Spanish | `es-ES` | ✔️ | ✔️\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔️\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔️ | ✔️ |
-| Spanish (Mexico) | `es-MX` | ✔️ | | | ✔️ | ✔️ |
-| Swedish | `sv-SE` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Tamil | `ta-IN` | | | | ✔️ | |
-| Thai | `th-TH` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Tongan | `to-TO` | | | | ✔️ | |
-| Turkish | `tr-TR` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Ukrainian | `uk-UA` | ✔️ | ✔️ | ✔️ | ✔️ | |
-| Urdu | `ur-PK` | | | | ✔️ | |
-| Vietnamese | `vi-VN` | ✔️ | ✔️ | ✔️ | ✔️ | |
+| Afrikaans | `af-ZA` | | | | ✔️ | |
+| Arabic (Israel) | `ar-IL` | ✔️ | | | ✔️ | ✔️ |
+| Arabic (Jordan) | `ar-JO` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Arabic (Kuwait) | `ar-KW` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Arabic (Lebanon) | `ar-LB` | ✔️ | | | ✔️ | ✔️ |
+| Arabic (Oman) | `ar-OM` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Arabic (Palestinian Authority) | `ar-PS` | ✔️ | | | ✔️ | ✔️ |
+| Arabic (Qatar) | `ar-QA` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Arabic (Saudi Arabia) | `ar-SA` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Arabic (United Arab Emirates) | `ar-AE` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Arabic Egypt | `ar-EG` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Arabic Syrian Arab Republic | `ar-SY` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Bangla | `bn-BD` | | | | ✔️ | |
+| Bosnian | `bs-Latn` | | | | ✔️ | |
+| Bulgarian | `bg-BG` | | | | ✔️ | |
+| Catalan | `ca-ES` | | | | ✔️ | |
+| Chinese (Cantonese Traditional) | `zh-HK` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Chinese (Simplified) | `zh-Hans` | ✔️ | ✔️ | | ✔️ | ✔️ |
+| Chinese (Simplified) | `zh-CN` | ✔️ | ✔️ | | ✔️ | ✔️ |
+| Chinese (Traditional) | `zh-Hant` | | | | ✔️ | |
+| Croatian | `hr-HR` | | | | ✔️ | |
+| Czech | `cs-CZ` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Danish | `da-DK` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Dutch | `nl-NL` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| English Australia | `en-AU` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| English United Kingdom | `en-GB` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| English United States | `en-US` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Estonian | `et-EE` | | | | ✔️ | |
+| Fijian | `en-FJ` | | | | ✔️ | |
+| Filipino | `fil-PH` | | | | ✔️ | |
+| Finnish | `fi-FI` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| French | `fr-FR` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| French (Canada) | `fr-CA` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| German | `de-DE` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Greek | `el-GR` | | | | ✔️ | |
+| Haitian | `fr-HT` | | | | ✔️ | |
+| Hebrew | `he-IL` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Hindi | `hi-IN` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Hungarian | `hu-HU` | | | | ✔️ | |
+| Indonesian | `id-ID` | | | | ✔️ | |
+| Italian | `it-IT` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Japanese | `ja-JP` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Kiswahili | `sw-KE` | | | | ✔️ | |
+| Korean | `ko-KR` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Latvian | `lv-LV` | | | | ✔️ | |
+| Lithuanian | `lt-LT` | | | | ✔️ | |
+| Malagasy | `mg-MG` | | | | ✔️ | |
+| Malay | `ms-MY` | | | | ✔️ | |
+| Maltese | `mt-MT` | | | | ✔️ | |
+| Norwegian | `nb-NO` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Persian | `fa-IR` | ✔️ | | | ✔️ | ✔️ |
+| Polish | `pl-PL` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Portuguese | `pt-BR` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Portuguese (Portugal) | `pt-PT` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Romanian | `ro-RO` | | | | ✔️ | |
+| Russian | `ru-RU` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Samoan | `en-WS` | | | | ✔️ | |
+| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | ✔️ | |
+| Serbian (Latin) | `sr-Latn-RS` | | | | ✔️ | |
+| Slovak | `sk-SK` | | | | ✔️ | |
+| Slovenian | `sl-SI` | | | | ✔️ | |
+| Spanish | `es-ES` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Spanish (Mexico) | `es-MX` | ✔️ | | | ✔️ | ✔️ |
+| Swedish | `sv-SE` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Tamil | `ta-IN` | | | | ✔️ | |
+| Thai | `th-TH` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Tongan | `to-TO` | | | | ✔️ | |
+| Turkish | `tr-TR` | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Ukrainian | `uk-UA` | ✔️ | ✔️ | ✔️ | ✔️ | |
+| Urdu | `ur-PK` | | | | ✔️ | |
+| Vietnamese | `vi-VN` | ✔️ | ✔️ | ✔️ | ✔️ | |
+
+**Default languages supported by Language identification (LID)**: German (de-DE), English United States (en-US), Spanish (es-ES), French (fr-FR), Italian (it-IT), Japanese (ja-JP), Portuguese (pt-BR), Russian (ru-RU), Chinese (Simplified) (zh-Hans).
+
+**Default languages supported by Multi-language identification (MLID)**: German (de-DE), English United States (en-US), Spanish (es-ES), French (fr-FR).
### Change default languages supported by LID and MLID
-Languages marked with * (in the table above) are used as default when auto-detecting languages by LID or/and MLID. You can specify to use other supported languages (listed in the table above) as default languages, when [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) with an API and passing the `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by LID or MLID.
+You can specify other supported languages (listed in the table above) to be used as the default languages when [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) with the API by passing the `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by LID or MLID.
> [!NOTE]
-> To change the default languages that you want for LID or MLID to use when auto-detecting, call [upload a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and set the `customLanguages` parameter.
+> Language identification (LID) and Multi-language identification (MLID) compare speech at the language level, such as English and German.
+> Do not include multiple locales of the same language in the custom languages list.
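As an illustrative sketch, an upload request that sets `customLanguages` might look like the following; every value shown is a placeholder, and the full parameter list is in the upload-video API reference linked above:

```
curl -X POST "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos?accessToken=<access-token>&name=myVideo&videoUrl=<url-encoded-video-url>&customLanguages=en-US,de-DE,fr-FR"
```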
## Language support in frontend experiences
The following table describes language support in the Azure Video Indexer fronte
| Catalan | `ca-ES` | | ✔️ |
| Chinese (Cantonese Traditional) | `zh-HK` | | ✔️ |
| Chinese (Simplified) | `zh-Hans` | ✔️ | ✔️ |
+| Chinese (Simplified) | `zh-CN` | ✔️ | ✔️ |
| Chinese (Traditional) | `zh-Hant` | | ✔️ |
| Croatian | `hr-HR` | | |
| Czech | `cs-CZ` | ✔️ | ✔️ |
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
The new set of logs, described below, enables you to better monitor your indexin
Azure Video Indexer now supports diagnostics settings for indexing events. You can now export logs that monitor the upload and re-indexing of media files through diagnostics settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
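As a sketch, such a diagnostic setting might be created with the Azure CLI as follows; the resource IDs are placeholders and the category group is an assumption, so check which log categories your account exposes:

```azurecli-interactive
# Route the account's logs to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name "vi-indexing-logs" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.VideoIndexer/accounts/<account-name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"categoryGroup":"allLogs","enabled":true}]'
```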
-### Expanded supported languages in LID and MLID through the API
+### Expanded supported languages in LID and MLID through the Azure Video Indexer API
-We expanded the languages supported in LID (language identification) and MLID (multi language Identification) using the Azure Video Indexer API.
+We expanded the languages supported in LID (language identification) and MLID (multi-language identification) through the Azure Video Indexer API.
+
+The following languages are now supported through the API: Arabic (United Arab Emirates), Arabic Modern Standard, Arabic Egypt, Arabic (Iraq), Arabic (Jordan), Arabic (Kuwait), Arabic (Oman), Arabic (Qatar), Arabic (Saudi Arabia), Arabic Syrian Arab Republic, Czech, Danish, German, English Australia, English United Kingdom, English United States, Spanish, Spanish (Mexico), Finnish, French (Canada), French, Hebrew, Hindi, Italian, Japanese, Korean, Norwegian, Dutch, Polish, Portuguese, Portuguese (Portugal), Russian, Swedish, Thai, Turkish, Ukrainian, Vietnamese, Chinese (Simplified), Chinese (Cantonese, Traditional).
+
+To specify the list of languages to be identified by LID or MLID when auto-detecting, call the [upload a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API and set the `customLanguages` parameter to include up to 10 languages from the supported languages above. Note that the languages specified in `customLanguages` are compared at the language level and thus should include only one locale per language.
For more information, see [supported languages](language-support.md).
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/reserved-instance.md
Title: Reserved instances of Azure VMware Solution
description: Learn how to buy a reserved instance for Azure VMware Solution. The reserved instance covers only the compute part of your usage and includes software licensing costs. Previously updated : 05/13/2021 Last updated : 11/02/2022+ # Save costs with Azure VMware Solution
-When you commit to a reserved instance of [Azure VMware Solution](introduction.md), you save money. The reservation discount automatically applies to the running Azure VMware Solution hosts that match the reservation scope and attributes. In addition, a reserved instance purchase covers only the compute part of your usage and includes software licensing costs.
+When you commit to a reserved instance of [Azure VMware Solution](introduction.md), you save money. The reservation discount automatically applies to the running Azure VMware Solution hosts that match the reservation scope and attributes. In addition, a reserved instance purchase covers only the compute part of your usage and includes software licensing costs.
## Purchase restriction considerations
These requirements apply to buying a reserved dedicated host instance:
- For EA subscriptions, you must enable the **Add Reserved Instances** option in the [EA portal](https://ea.azure.com/). If disabled, you must be an EA Admin for the subscription to enable it. -- For subscription under a Cloud Solution Provider (CSP) Azure Plan, the partner must purchase the customer's reserved instances in the Azure portal.
+- For a subscription under a Cloud Solution Provider (CSP) Azure Plan, the partner must purchase the customer's reserved instances in the Azure portal.
### Buy reserved instances for an EA subscription
These requirements apply to buying a reserved dedicated host instance:
2. Select **All services** > **Reservations**.
-3. Select **Purchase Now** and then select **Azure VMware Solution**.
+3. Select **Purchase Now**, then select **Azure VMware Solution**.
4. Enter the required fields. The selected attributes that match running Azure VMware Solution hosts qualify for the reservation discount. Attributes include the SKU, regions (where applicable), and scope. Reservation scope selects where the reservation savings apply.
You can also split a reservation into smaller chunks or merge reservations. None
For details about CSP-managed reservations, see [Sell Microsoft Azure reservations to customers using Partner Center, the Azure portal, or APIs](/partner-center/azure-reservations). - >[!NOTE] >Once you've purchased your reservation, you won't be able to make these types of changes directly: >
backup Backup Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks.md
Title: Back up Azure Managed Disks description: Learn how to back up Azure Managed Disks from the Azure portal. Previously updated : 07/05/2022 Last updated : 11/03/2022
A Backup vault is a storage entity in Azure that holds backup data for various n
![Retention settings](./media/backup-managed-disks/retention-settings.png)

>[!NOTE]
- >Azure Backup for Managed Disks uses incremental snapshots which are limited to 200 snapshots per disk. To allow you to take on-demand backups aside from scheduled backups, backup policy limits the total backups to 180. Learn more about [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md#restrictions) for managed disk.
+ >Azure Backup for Managed Disks uses incremental snapshots which are limited to 500 snapshots per disk. To allow you to take on-demand backups aside from scheduled backups, backup policy limits the total backups to 450. Learn more about [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md#restrictions) for managed disk.
1. Complete the backup policy creation by selecting **Review + create**.
cloud-shell Example Terraform Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/example-terraform-bash.md
- Title: Deploy with Terraform from Azure Cloud Shell | Microsoft Docs
-description: Deploy with Terraform from Azure Cloud Shell
---
-tags: azure-cloud-shell
---- Previously updated : 11/15/2017-
-ms.tool: terraform
---
-# Deploy with Terraform from Bash in Azure Cloud Shell
-This article walks you through creating a resource group with the [Terraform AzureRM provider](https://www.terraform.io/docs/providers/azurerm/index.html).
-
-[Hashicorp Terraform](https://www.terraform.io/) is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members to be edited, reviewed, and versioned. The Microsoft AzureRM provider is used to interact with resources supported by Azure Resource Manager via the AzureRM APIs.
-
-## Automatic authentication
-Terraform is installed in Bash in Cloud Shell by default. Additionally, Cloud Shell automatically authenticates your default Azure CLI subscription to deploy resources through the Terraform Azure modules.
-
-Terraform uses the default Azure CLI subscription that is set. To update default subscriptions, run:
-
-```azurecli-interactive
-az account set --subscription mySubscriptionName
-```
-
-## Walkthrough
-### Launch Bash in Cloud Shell
-1. Launch Cloud Shell from your preferred location
-2. Verify your preferred subscription is set
-
-```azurecli-interactive
-az account show
-```
-
-### Create a Terraform template
-Create a new Terraform template named main.tf with your preferred text editor.
-
-```
-vim main.tf
-```
-
-Copy/paste the following code into Cloud Shell.
-
-```
-resource "azurerm_resource_group" "myterraformgroup" {
- name = "myRgName"
- location = "West US"
-}
-```
-
-Save your file and exit your text editor.
-
-### Terraform init
-Begin by running `terraform init`.
-
-```
-justin@Azure:~$ terraform init
-
-Initializing provider plugins...
-
-The following providers do not have any version constraints in configuration,
-so the latest version was installed.
-
-To prevent automatic upgrades to new major versions that may contain breaking
-changes, it is recommended to add version = "..." constraints to the
-corresponding provider blocks in configuration, with the constraint strings
-suggested below.
-
-* provider.azurerm: version = "~> 0.2"
-
-Terraform has been successfully initialized!
-
-You may now begin working with Terraform. Try running "terraform plan" to see
-any changes that are required for your infrastructure. All Terraform commands
-should now work.
-
-If you ever set or change modules or backend configuration for Terraform,
-rerun this command to reinitialize your working directory. If you forget, other
-commands will detect it and remind you to do so if necessary.
-```
-
-The [terraform init command](https://www.terraform.io/docs/commands/init.html) is used to initialize a working directory containing Terraform configuration files. The `terraform init` command is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.
-
-### Terraform plan
-Preview the resources to be created by the Terraform template with `terraform plan`.
-
-```
-justin@Azure:~$ terraform plan
-Refreshing Terraform state in-memory prior to plan...
-The refreshed state will be used to calculate this plan, but will not be
-persisted to local or remote state storage.
----
-An execution plan has been generated and is shown below.
-Resource actions are indicated with the following symbols:
- + create
-
-Terraform will perform the following actions:
-
- + azurerm_resource_group.demo
- id: <computed>
- location: "westus"
- name: "myRGName"
- tags.%: <computed>
--
-Plan: 1 to add, 0 to change, 0 to destroy.
---
-Note: You didn't specify an "-out" parameter to save this plan, so Terraform
-can't guarantee that exactly these actions will be performed if
-"terraform apply" is subsequently run.
-```
-
-The [terraform plan command](https://www.terraform.io/docs/commands/plan.html) is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files. The plan can be saved using -out, and then provided to terraform apply to ensure only the pre-planned actions are executed.
-
-### Terraform apply
-Provision the Azure resources with `terraform apply`.
-
-```
-justin@Azure:~$ terraform apply
-azurerm_resource_group.demo: Creating...
- location: "" => "westus"
- name: "" => "myRGName"
- tags.%: "" => "<computed>"
-azurerm_resource_group.demo: Creation complete after 0s (ID: /subscriptions/mySubIDmysub/resourceGroups/myRGName)
-
-Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
-```
-
-The [terraform apply command](https://www.terraform.io/docs/commands/apply.html) is used to apply the changes required to reach the desired state of the configuration.
-
-### Verify deployment with Azure CLI
-Run `az group show -n myRgName` to verify the resource has succeeded provisioning.
-
-```azurecli-interactive
-az group show -n myRgName
-```
-
-### Clean up with terraform destroy
-Clean up the resource group created with the [Terraform destroy command](https://www.terraform.io/docs/commands/destroy.html) to clean up Terraform-created infrastructure.
-
-```
-justin@Azure:~$ terraform destroy
-azurerm_resource_group.demo: Refreshing state... (ID: /subscriptions/mySubID/resourceGroups/myRGName)
-
-An execution plan has been generated and is shown below.
-Resource actions are indicated with the following symbols:
- - destroy
-
-Terraform will perform the following actions:
-
- - azurerm_resource_group.demo
--
-Plan: 0 to add, 0 to change, 1 to destroy.
-
-Do you really want to destroy?
- Terraform will destroy all your managed infrastructure, as shown above.
- There is no undo. Only 'yes' will be accepted to confirm.
-
- Enter a value: yes
-
-azurerm_resource_group.demo: Destroying... (ID: /subscriptions/mySubID/resourceGroups/myRGName)
-azurerm_resource_group.demo: Still destroying... (ID: /subscriptions/mySubID/resourceGroups/myRGName, 10s elapsed)
-azurerm_resource_group.demo: Still destroying... (ID: /subscriptions/mySubID/resourceGroups/myRGName, 20s elapsed)
-azurerm_resource_group.demo: Still destroying... (ID: /subscriptions/mySubID/resourceGroups/myRGName, 30s elapsed)
-azurerm_resource_group.demo: Still destroying... (ID: /subscriptions/mySubID/resourceGroups/myRGName, 40s elapsed)
-azurerm_resource_group.demo: Destruction complete after 45s
-
-Destroy complete! Resources: 1 destroyed.
-```
-
-You have successfully created an Azure resource through Terraform. Visit next steps to continue learning about Cloud Shell.
-
-## Next steps
-[Learn about the Terraform Azure provider](https://www.terraform.io/docs/providers/azurerm/#)<br>
-[Bash in Cloud Shell quickstart](quickstart.md)
cognitive-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-describing-images.md
Previously updated : 09/20/2022 Last updated : 11/03/2022
cognitive-services Concept Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-object-detection.md
Previously updated : 09/20/2022 Last updated : 11/03/2022
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-read-api.md
Previously updated : 06/13/2022 Last updated : 11/03/2022
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
Previously updated : 06/13/2022 Last updated : 11/03/2022 keywords: computer vision, computer vision applications, computer vision service
Use domain models to detect and identify domain-specific content in an image, su
Analyze color usage within an image. Computer Vision can determine whether an image is black & white or color and, for color images, identify the dominant and accent colors. [Detect the color scheme](concept-detecting-color-schemes.md)
-### Generate a thumbnail
+### Get the area of interest / smart crop
-Analyze the contents of an image to generate an appropriate thumbnail for that image. Computer Vision first generates a high-quality thumbnail and then analyzes the objects within the image to determine the *area of interest*. Computer Vision then crops the image to fit the requirements of the area of interest. The generated thumbnail can be presented using an aspect ratio that is different from the aspect ratio of the original image, depending on your needs. [Generate a thumbnail](concept-generating-thumbnails.md)
+Analyze the contents of an image to return the coordinates of the *area of interest* that matches a specified aspect ratio. Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired. [Generate a thumbnail](concept-generating-thumbnails.md)
:::image type="content" source="Images/thumbnail-demo.png" alt-text="An image of a person on a mountain, with cropped versions to the right"::: -
-### Get the area of interest
-
-Analyze the contents of an image to return the coordinates of the *area of interest*. Instead of cropping the image and generating a thumbnail, Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired. [Get the area of interest](concept-generating-thumbnails.md#area-of-interest)
- ### Moderate content in images You can use Computer Vision to [detect adult content](concept-detecting-adult-content.md) in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences.
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
Previously updated : 09/23/2022 Last updated : 11/03/2022
As with all of the Cognitive Services, developers using the Computer Vision serv
## Next steps -- OCR for general (non-document) images - try the [Computer Vision 4.0 preview Image Analysis REST API quickstart](./concept-ocr.md).-- OCR for PDF, Office and HTML documents and document images, start with [Form Recognizer Read](../../applied-ai-services/form-recognizer/concept-read.md).
+- OCR for general (non-document) images: try the [Computer Vision 4.0 preview Image Analysis REST API quickstart](./concept-ocr.md).
+- OCR for PDF, Office and HTML documents and document images: start with [Form Recognizer Read](../../applied-ai-services/form-recognizer/concept-read.md).
- Looking for the previous GA version? Refer to the [Computer Vision 3.2 GA SDK or REST API quickstarts](./quickstarts-sdk/client-library.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview.md
Previously updated : 06/13/2022 Last updated : 11/03/2022 keywords: computer vision, computer vision applications, computer vision service
Follow a quickstart to implement and run a service in your preferred development
* [Quickstart: Optical character recognition (OCR)](quickstarts-sdk/client-library.md) * [Quickstart: Image Analysis](quickstarts-sdk/image-analysis-client-library.md)
+* [Quickstart: Face](quickstarts-sdk/identity-client-library.md)
* [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
cognitive-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
Previously updated : 06/13/2022 Last updated : 11/03/2022 keywords: image recognition, image recognition app, custom vision
cognitive-services Image Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/quickstarts/image-classification.md
Previously updated : 06/13/2022 Last updated : 11/03/2022 ms.devlang: csharp, golang, java, javascript, python keywords: custom vision, image recognition, image recognition app, image analysis, image recognition software
cognitive-services Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/quickstarts/object-detection.md
Previously updated : 06/13/2022 Last updated : 11/03/2022 ms.devlang: csharp, golang, java, javascript, python keywords: custom vision
cognitive-services Concept Apprentice Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-apprentice-mode.md
Last updated 07/26/2022
When deploying a new Personalizer resource, it is initialized with an untrained Reinforcement Learning (RL) model. That is, it has not yet learned from any data and therefore will not perform well in practice. This is known as the "cold start" problem and is resolved over time by training the model with real data from your production environment. **Apprentice mode** is a learning behavior that helps mitigate the "cold start" problem, and allows you to gain confidence in the model _before_ it makes decisions in production, all without requiring any code change. -
+<!--
## What is Apprentice mode?
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows for voice and PSTN channels. The SDKs, available for .NET and Java, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, etc.) to steer and control calls based on your business logic.
+> [!NOTE]
+> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user, or adding a Teams user to a call using Call Automation, aren't supported.
+ ## Common Use Cases Some of the common use cases that can be built using Call Automation include:
Azure Communication Services uses Event Grid to deliver the [IncomingCall event]
![Screenshot of flow for incoming call and actions.](./media/action-architecture.png) - ## Call Actions ### Pre-call actions
The Call Automation events are sent to the web hook callback URI specified when
| CallTransferFailed | The transfer of your application's call leg failed |
| AddParticipantSucceeded | Your application added a participant |
| AddParticipantFailed | Your application was unable to add a participant |
-| RemoveParticipantSucceeded|Your application removed a participant |
-| RemoveParticipantFailed |Your application was unable to remove a participant |
| ParticipantUpdated | The status of a participant changed while your application's call leg was connected to a call |
| PlayCompleted | Your application successfully played the audio file provided |
| PlayFailed | Your application failed to play audio |
| RecognizeCompleted | Recognition of user input was successfully completed |
-| RecognizeFailed | Recognition of user input was unsuccessful <br/><br/>*to learn more about recognize action events view our [quickstart](../../quickstarts/voice-video-calling/Recognize-Action.md)*|
+| RecognizeFailed | Recognition of user input was unsuccessful <br/>*to learn more about recognize action events view our [quickstart](../../quickstarts/voice-video-calling/Recognize-Action.md)*|
+ ## Known Issues
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following table represents the set of supported browsers which are currently
| | | | -- |
| Android | ✔️ | ❌ | ❌ |
| iOS | ❌ | ✔️ | ❌ |
-| macOS | ✔️ | ✔️ | ❌ |
+| macOS | ✔️ | ✔️ | ✔️ |
| Windows | ✔️ | ❌ | ✔️ |
| Ubuntu/Linux | ✔️ | ❌ | ❌ |
container-registry Container Registry Transfer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-troubleshooting.md
# ACR Transfer troubleshooting
-* **Template deployment failures or errors**
+## Template deployment failures or errors
* If a pipeline run fails, look at the `pipelineRunErrorMessage` property of the run resource. * For common template deployment errors, see [Troubleshoot ARM template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md)
-* **Problems accessing Key Vault**<a name="problems-accessing-key-vault"></a>
+## Problems accessing Key Vault
* If your pipelineRun deployment fails with a `403 Forbidden` error when accessing Azure Key Vault, verify that your pipeline managed identity has adequate permissions. * A pipelineRun uses the exportPipeline or importPipeline managed identity to fetch the SAS token secret from your Key Vault. ExportPipelines and importPipelines are provisioned with either a system-assigned or user-assigned managed identity. This managed identity is required to have `secret get` permissions on the Key Vault in order to read the SAS token secret. Ensure that an access policy for the managed identity was added to the Key Vault. For more information, reference [Give the ExportPipeline identity keyvault policy access](./container-registry-transfer-cli.md#give-the-exportpipeline-identity-keyvault-policy-access) and [Give the ImportPipeline identity keyvault policy access](./container-registry-transfer-cli.md#give-the-importpipeline-identity-keyvault-policy-access).
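For example, a sketch of granting that access with the Azure CLI, where the object ID placeholder stands for the pipeline's managed identity principal:

```azurecli-interactive
# Allow the pipeline identity to read the SAS token secret from Key Vault.
az keyvault set-policy \
  --name <key-vault-name> \
  --object-id <pipeline-identity-principal-id> \
  --secret-permissions get
```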
-* **Problems accessing storage**<a name="problems-accessing-storage"></a>
+## Problems accessing storage
* If you see a `403 Forbidden` error from storage, you likely have a problem with your SAS token. * The SAS token might not currently be valid. The SAS token might be expired or the storage account keys might have changed since the SAS token was created. Verify that the SAS token is valid by attempting to use the SAS token to authenticate for access to the storage account container. For example, put an existing blob endpoint followed by the SAS token in the address bar of a new Microsoft Edge InPrivate window or upload a blob to the container with the SAS token by using `az storage blob upload`. * The SAS token might not have sufficient Allowed Resource Types. Verify that the SAS token has been given permissions to Service, Container, and Object under Allowed Resource Types (`srt=sco` in the SAS token). * The SAS token might not have sufficient permissions. For export pipelines, the required SAS token permissions are Read, Write, List, and Add. For import pipelines, the required SAS token permissions are Read, Delete, and List. (The Delete permission is required only if the import pipeline has the `DeleteSourceBlobOnSuccess` option enabled.) * The SAS token might not be configured to work with HTTPS only. Verify that the SAS token is configured to work with HTTPS only (`spr=https` in the SAS token).
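For example, a quick sketch of that check with the Azure CLI; the account, container, and token values are placeholders:

```azurecli-interactive
# If this upload succeeds, the SAS token grants at least write access.
az storage blob upload \
  --account-name <storage-account> \
  --container-name <transfer-container> \
  --name sas-check.txt \
  --file sas-check.txt \
  --sas-token "<sas-token>"
```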
-* **Problems with export or import of storage blobs**
+## Problems with export or import of storage blobs
* SAS token may be invalid, or may have insufficient permissions for the specified export or import run. See [Problems accessing storage](#problems-accessing-storage). * Existing storage blob in source storage account might not be overwritten during multiple export runs. Confirm that the OverwriteBlob option is set in the export run and the SAS token has sufficient permissions. * Storage blob in target storage account might not be deleted after successful import run. Confirm that the DeleteBlobOnSuccess option is set in the import run and the SAS token has sufficient permissions. * Storage blob not created or deleted. Confirm that container specified in export or import run exists, or specified storage blob exists for manual import run.
-* **Problems with Source Trigger Imports**
+## Problems with Source Trigger Imports
* The SAS token must have the List permission for Source Trigger imports to work. * Source Trigger imports will only fire if the Storage Blob has a Last Modified time within the last 60 days. * The Storage Blob must have a valid ContentMD5 property in order to be imported by the Source Trigger feature. * The Storage Blob must have the "category":"acr-transfer-blob" blob metadata in order to be imported by the Source Trigger feature. This metadata is added automatically during an Export Pipeline Run, but may be stripped when moved from storage account to storage account depending on the method of copy.
-* **AzCopy issues**
+## AzCopy issues
* See [Troubleshoot AzCopy issues](../storage/common/storage-use-azcopy-configure.md).
-* **Artifacts transfer problems**
+## Artifacts transfer problems
* Not all artifacts, or none, are transferred. Confirm spelling of artifacts in export run, and name of blob in export and import runs. Confirm you're transferring a maximum of 50 artifacts. * Pipeline run might not have completed. An export or import run can take some time. * For other pipeline issues, provide the deployment [correlation ID](../azure-resource-manager/templates/deployment-history.md) of the export run or import run to the Azure Container Registry team. * To create ACR Transfer resources such as `exportPipelines`,` importPipelines`, and `pipelineRuns`, the user must have at least Contributor access on the ACR subscription. Otherwise, they'll see authorization to perform the transfer denied or scope is invalid errors.
-* **Problems pulling the image in a physically isolated environment**
+## Problems pulling the image in a physically isolated environment
* If you see errors regarding foreign layers or attempts to resolve mcr.microsoft.com when attempting to pull an image in a physically isolated environment, your image manifest likely has non-distributable layers. Due to the nature of a physically isolated environment, these images will often fail to pull. You can confirm that this is the case by checking the image manifest for any references to external registries. If so, you'll need to push the non-distributable layers to your public cloud ACR prior to deploying an export pipeline-run for that image. For guidance on how to do this, see [How do I push non-distributable layers to a registry?](./container-registry-faq.yml#how-do-i-push-non-distributable-layers-to-a-registry-) <!-- LINKS - External -->
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
ms.devlang: csharp Previously updated : 07/26/2022- Last updated : 11/03/2022+ # Quickstart: Azure Cosmos DB for NoSQL client library for .NET [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Java](quickstart-java.md)
-> * [Spring Data](quickstart-java-spring-data.md)
-> * [Python](quickstart-python.md)
-> * [Spark v3](quickstart-spark.md)
-> * [Go](quickstart-go.md)
->
-Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks.
+Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks.
> [!NOTE]
-> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-dotnet-quickstart) are available on GitHub as a .NET project.
+> The [example code snippets](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples) are available on GitHub as a .NET project.
[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Samples](samples-dotnet.md)
This section walks you through creating an Azure Cosmos DB account and setting u
### <a id="create-account"></a>Create an Azure Cosmos DB account
+> [!TIP]
+> Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit. If you create an account using the free trial, you can safely skip this section.
+ [!INCLUDE [Create resource tabbed conceptual - ARM, Azure CLI, PowerShell, Portal](./includes/create-resources.md)] ### Create a new .NET app
The easiest way to create a new item in a container is to first build a C# [clas
:::code language="csharp" source="~/azure-cosmos-dotnet-v3/001-quickstart/Product.cs" id="entity" highlight="3-4":::
-Create an item in the container by calling [``Container.UpsertItemAsync``](/dotnet/api/microsoft.azure.cosmos.container.upsertitemasync). In this example, we chose to *upsert* instead of *create* a new item in case you run this sample code more than once.
+Create an item in the container by calling [``Container.CreateItemAsync``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync).
:::code language="csharp" source="~/azure-cosmos-dotnet-v3/001-quickstart/Program.cs" id="new_item" highlight="3-4,12":::
Created item: 68719518391 [gear-surf-surfboards]
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account, create a database, and create a container using the .NET SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB for NoSQL resources.
+In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account, create a database, and create a container using the .NET SDK. You can now dive deeper into a tutorial where you manage your Azure Cosmos DB for NoSQL resources and data using a .NET console application.
> [!div class="nextstepaction"]
-> [Get started with Azure Cosmos DB for NoSQL and .NET](how-to-dotnet-get-started.md)
+> [Tutorial: Develop a .NET console application with Azure Cosmos DB for NoSQL](tutorial-dotnet-console-app.md)
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-dotnet.md
> * [.NET](samples-dotnet.md) >
-The [cosmos-db-sql-api-dotnet-samples](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB for NoSQL resources.
+The [cosmos-db-sql-api-dotnet-samples](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB for NoSQL resources.
## Prerequisites
The sample projects are all self-contained and are designed to be run individually
| Task | API reference | | : | : |
-| [Create a client with endpoint and key](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/101-client-endpoint-key/Program.cs#L11-L14) |[``CosmosClient(string, string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with connection string](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/102-client-connection-string/Program.cs#L11-L13) |[``CosmosClient(string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with ``DefaultAzureCredential``](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/103-client-default-credential/Program.cs#L20-L23) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
-| [Create a client with custom ``TokenCredential``](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/104-client-secret-credential/Program.cs#L25-L28) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with endpoint and key](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/101-client-endpoint-key/Program.cs#L11-L14) |[``CosmosClient(string, string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with connection string](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/102-client-connection-string/Program.cs#L11-L13) |[``CosmosClient(string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with ``DefaultAzureCredential``](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/103-client-default-credential/Program.cs#L20-L23) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with custom ``TokenCredential``](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/104-client-secret-credential/Program.cs#L25-L28) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
### Databases | Task | API reference | | : | : |
-| [Create a database](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/200-create-database/Program.cs#L19-L21) |[``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) |
+| [Create a database](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/200-create-database/Program.cs#L19-L21) |[``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) |
### Containers | Task | API reference | | : | : |
-| [Create a container](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/225-create-container/Program.cs#L26-L30) |[``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
+| [Create a container](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/225-create-container/Program.cs#L26-L30) |[``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
### Items | Task | API reference | | : | : |
-| [Create an item](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/250-create-item/Program.cs#L35-L46) |[``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) |
-| [Point read an item](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/275-read-item/Program.cs#L51-L54) |[``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) |
-| [Query multiple items](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/300-query-items/Program.cs#L64-L80) |[``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
+| [Create an item](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/250-create-item/Program.cs#L35-L46) |[``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) |
+| [Point read an item](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/275-read-item/Program.cs#L51-L54) |[``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) |
+| [Query multiple items](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples/blob/v3/300-query-items/Program.cs#L64-L80) |[``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
## Next steps
cosmos-db Tutorial Dotnet Console App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-dotnet-console-app.md
+
+ Title: |
+ Tutorial: Develop a .NET console application with Azure Cosmos DB for NoSQL
+description: |
+ .NET tutorial to create a console application that adds data to Azure Cosmos DB for NoSQL.
++++++ Last updated : 11/02/2022
+ms.devlang: csharp
+++
+# Tutorial: Develop a .NET console application with Azure Cosmos DB for NoSQL
+++
+The Azure SDK for .NET allows you to add data to an API for NoSQL container either [individually](how-to-dotnet-create-item.md#create-an-item-asynchronously) or by using a [transactional batch](transactional-batch.md?tabs=dotnet). This tutorial will walk through the process of creating a new .NET console application that adds multiple items to a container.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Create a database using API for NoSQL
+> - Create a .NET console application and add the Azure SDK for .NET
+> - Add individual items into an API for NoSQL container
+> - Retrieve items efficiently from an API for NoSQL container
+> - Create a transaction with batch changes for the API for NoSQL container
+>
+
+## Prerequisites
+
+- An existing Azure Cosmos DB for NoSQL account.
+ - If you have an Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit.
+- [Visual Studio Code](https://code.visualstudio.com)
+- [.NET 6 (LTS) or later](https://dotnet.microsoft.com/download/dotnet/6.0)
+- Experience writing C# applications.
+
+## Create API for NoSQL resources
+
+First, create an empty database in the existing API for NoSQL account. You'll create a container using the Azure SDK for .NET later.
+
+1. Navigate to your existing API for NoSQL account in the [Azure portal](https://portal.azure.com/).
+
+1. In the resource menu, select **Keys**.
+
+ :::image type="content" source="media/tutorial-dotnet-console-app/resource-menu-keys.png" lightbox="media/tutorial-dotnet-console-app/resource-menu-keys.png" alt-text="Screenshot of an API for NoSQL account page. The Keys option is highlighted in the resource menu.":::
+
+1. On the **Keys** page, observe and record the value of the **URI** and **PRIMARY KEY** fields. These values will be used throughout the tutorial.
+
+ :::image type="content" source="media/tutorial-dotnet-console-app/page-keys.png" alt-text="Screenshot of the Keys page with the URI and Primary Key fields highlighted.":::
+
+1. In the resource menu, select **Data Explorer**.
+
+ :::image type="content" source="media/tutorial-dotnet-console-app/resource-menu-data-explorer.png" alt-text="Screenshot of the Data Explorer option highlighted in the resource menu.":::
+
+1. On the **Data Explorer** page, select the **New Database** option in the command bar.
+
+ :::image type="content" source="media/tutorial-dotnet-console-app/page-data-explorer-new-database.png" alt-text="Screenshot of the New Database option in the Data Explorer command bar.":::
+
+1. In the **New Database** dialog, create a new database with the following settings:
+
+    | Setting | Value |
+ | | |
+ | **Database id** | `cosmicworks` |
+ | **Database throughput type** | **Manual** |
+ | **Database throughput amount** | `400` |
+
+ :::image type="content" source="media/tutorial-dotnet-console-app/dialog-new-database.png" alt-text="Screenshot of the New Database dialog in the Data Explorer with various values in each field.":::
+
+1. Select **OK** to create the database.
+
+## Create .NET console application
+
+Now, you'll create a new .NET console application and import the Azure SDK for .NET by using the `Microsoft.Azure.Cosmos` library from NuGet.
+
+1. Open a terminal in an empty directory.
+
+1. Create a new console application using the `console` built-in template.
+
+ ```bash
+ dotnet new console --langVersion preview
+ ```
+
+1. Add the **3.31.1-preview** version of the `Microsoft.Azure.Cosmos` package from NuGet.
+
+ ```bash
+ dotnet add package Microsoft.Azure.Cosmos --version 3.31.1-preview
+ ```
+
+1. Also, add the **pre-release** version of the `System.CommandLine` package from NuGet.
+
+ ```bash
+ dotnet add package System.CommandLine --prerelease
+ ```
+
+1. Also, add the `Humanizer` package from NuGet.
+
+ ```bash
+ dotnet add package Humanizer
+ ```
+
+1. Build the console application project.
+
+ ```bash
+ dotnet build
+ ```
+
+1. Open Visual Studio Code using the current project folder as the workspace.
+
+ > [!TIP]
+ > You can run `code .` in the terminal to open Visual Studio Code and automatically open the working directory as the current workspace.
+
+1. Navigate to and open the **Program.cs** file. Delete all of the existing code in the file.
+
+1. Add this code to the file to use the **System.CommandLine** library to parse the command line for values passed in through the `--name`, `--email`, `--state`, and `--country` options.
+
+ ```csharp
+ using System.CommandLine;
+
+ var command = new RootCommand();
+
+ var nameOption = new Option<string>("--name") { IsRequired = true };
+ var emailOption = new Option<string>("--email");
+ var stateOption = new Option<string>("--state") { IsRequired = true };
+ var countryOption = new Option<string>("--country") { IsRequired = true };
+
+ command.AddOption(nameOption);
+ command.AddOption(emailOption);
+ command.AddOption(stateOption);
+ command.AddOption(countryOption);
+
+ command.SetHandler(
+ handle: CosmosHandler.ManageCustomerAsync,
+ nameOption,
+ emailOption,
+ stateOption,
+ countryOption
+ );
+
+ await command.InvokeAsync(args);
+ ```
+
+ > [!NOTE]
+ > For this tutorial, it's not entirely important that you understand how the command-line parser works. The parser has four options that can be specified when the application is running. Three of the options are required since they will be used to construct the ID and partition key fields.
+
+1. At this point, the project won't build since you haven't defined the static `CosmosHandler.ManageCustomerAsync` method yet.
+
+1. **Save** the **Program.cs** file.
+
+## Add items to a container using the SDK
+
+Next, you'll use individual operations to add items into the API for NoSQL container. In this section, you'll define the `CosmosHandler.ManageCustomerAsync` method.
+
+1. Create a new **CosmosHandler.cs** file.
+
+1. In the **CosmosHandler.cs** file, add a new using directive for the `Humanizer` and `Microsoft.Azure.Cosmos` namespaces.
+
+ ```csharp
+ using Humanizer;
+ using Microsoft.Azure.Cosmos;
+ ```
+
+1. Create a new static class named `CosmosHandler`.
+
+ ```csharp
+ public static class CosmosHandler
+ { }
+ ```
+
+1. To validate that this app will work, create a short implementation of the static `ManageCustomerAsync` method that prints the command-line input.
+
+ ```csharp
+ public static async Task ManageCustomerAsync(string name, string email, string state, string country)
+ {
+ await Console.Out.WriteLineAsync($"Hello {name} of {state}, {country}!");
+ }
+ ```
+
+1. **Save** the **CosmosHandler.cs** file.
+
+1. Back in the terminal, run the application.
+
+ ```bash
+ dotnet run -- --name 'Mica Pereira' --state 'Washington' --country 'United States'
+ ```
+
+1. The output of the command should be a fun greeting.
+
+ ```output
+ Hello Mica Pereira of Washington, United States!
+ ```
+
+1. Return to the **CosmosHandler.cs** file.
+
+1. Within the static **CosmosHandler** class, add a new `private static readonly` member of type `CosmosClient` named `_client`.
+
+ ```csharp
+ private static readonly CosmosClient _client;
+ ```
+
+1. Create a new static constructor for the `CosmosHandler` class.
+
+ ```csharp
+ static CosmosHandler()
+ { }
+ ```
+
+1. Within the constructor, create a new instance of the `CosmosClient` class passing in two string parameters with the **URI** and **PRIMARY KEY** values you previously recorded in this tutorial. Store this new instance in the `_client` member.
+
+ ```csharp
+ static CosmosHandler()
+ {
+ _client = new CosmosClient(
+ accountEndpoint: "<uri>",
+ authKeyOrResourceToken: "<primary-key>"
+ );
+ }
+ ```
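+
+    > [!TIP]
+    > Hardcoding the endpoint and key keeps this tutorial simple, but you may prefer to read them from environment variables so credentials stay out of source control. A minimal sketch, assuming you've set hypothetical `COSMOS_ENDPOINT` and `COSMOS_KEY` variables in your shell:
+
+    ```csharp
+    // COSMOS_ENDPOINT and COSMOS_KEY are illustrative names; set them yourself.
+    _client = new CosmosClient(
+        accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
+        authKeyOrResourceToken: Environment.GetEnvironmentVariable("COSMOS_KEY")
+    );
+    ```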
+
+1. Back within the static **CosmosHandler** class, create a new asynchronous method named `GetContainerAsync` that returns a `Container`.
+
+    ```csharp
+    private static async Task<Container> GetContainerAsync()
+ { }
+ ```
+
+1. For the next steps, add this code within the `GetContainerAsync` method.
+
+ 1. Get the `cosmicworks` database and store it in a variable named `database`.
+
+ ```csharp
+ Database database = _client.GetDatabase("cosmicworks");
+ ```
+
+    1. Create a new generic `List<>` of `string` values containing the hierarchical partition key paths, and store it in a variable named `keyPaths`.
+
+ ```csharp
+ List<string> keyPaths = new()
+ {
+ "/address/country",
+ "/address/state"
+ };
+ ```
+
+ 1. Create a new `ContainerProperties` variable with the name of the container (`customers`) and the list of partition key paths.
+
+ ```csharp
+ ContainerProperties properties = new(
+ id: "customers",
+ partitionKeyPaths: keyPaths
+ );
+ ```
+
+ 1. Use the `CreateContainerIfNotExistsAsync` method to supply the container properties and retrieve the container. This method will, per the name, asynchronously create the container if it doesn't already exist within the database. Return the result as the output of the `GetContainerAsync` method.
+
+ ```csharp
+ return await database.CreateContainerIfNotExistsAsync(
+ containerProperties: properties
+ );
+ ```
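+
+    For reference, the assembled `GetContainerAsync` method should look like this:
+
+    ```csharp
+    private static async Task<Container> GetContainerAsync()
+    {
+        Database database = _client.GetDatabase("cosmicworks");
+
+        List<string> keyPaths = new()
+        {
+            "/address/country",
+            "/address/state"
+        };
+
+        ContainerProperties properties = new(
+            id: "customers",
+            partitionKeyPaths: keyPaths
+        );
+
+        return await database.CreateContainerIfNotExistsAsync(
+            containerProperties: properties
+        );
+    }
+    ```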
+
+1. Delete all of the code within the `ManageCustomerAsync` method.
+
+1. For the next steps, add this code within the `ManageCustomerAsync` method.
+
+ 1. Asynchronously call the `GetContainerAsync` method and store the result in a variable named `container`.
+
+ ```csharp
+ Container container = await GetContainerAsync();
+ ```
+
+ 1. Create a new variable named `id` that uses the `Kebaberize` method from **Humanizer** to transform the `name` method parameter.
+
+ ```csharp
+ string id = name.Kebaberize();
+ ```
+
+ > [!NOTE]
+    > The `Kebaberize` method will replace all spaces with hyphens and convert the text to lowercase.
+
+ 1. Create a new anonymous typed item using the `name`, `state`, and `country` method parameters and the `id` variable. Store the item as a variable named `customer`.
+
+ ```csharp
+ var customer = new {
+ id = id,
+ name = name,
+ address = new {
+ state = state,
+ country = country
+ }
+ };
+ ```
+
+ 1. Use the container's asynchronous `CreateItemAsync` method to create a new item in the container and assign the HTTP response metadata to a variable named `response`.
+
+ ```csharp
+ var response = await container.CreateItemAsync(customer);
+ ```
+
+ 1. Write the values of the `response` variable's `StatusCode` and `RequestCharge` properties to the console. Also write the value of the `id` variable.
+
+ ```csharp
+ Console.WriteLine($"[{response.StatusCode}]\t{id}\t{response.RequestCharge} RUs");
+ ```
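+
+    Assembled from the previous steps, the `ManageCustomerAsync` method should now look like this:
+
+    ```csharp
+    public static async Task ManageCustomerAsync(string name, string email, string state, string country)
+    {
+        Container container = await GetContainerAsync();
+
+        string id = name.Kebaberize();
+
+        var customer = new {
+            id = id,
+            name = name,
+            address = new {
+                state = state,
+                country = country
+            }
+        };
+
+        var response = await container.CreateItemAsync(customer);
+
+        Console.WriteLine($"[{response.StatusCode}]\t{id}\t{response.RequestCharge} RUs");
+    }
+    ```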
+
+1. **Save** the **CosmosHandler.cs** file.
+
+1. Back in the terminal, run the application again.
+
+ ```bash
+ dotnet run -- --name 'Mica Pereira' --state 'Washington' --country 'United States'
+ ```
+
+1. The output of the command should include a status and request charge for the operation.
+
+ ```output
+ [Created] mica-pereira 7.05 RUs
+ ```
+
+ > [!NOTE]
+ > Your request charge may vary.
+
+1. Run the application one more time.
+
+ ```bash
+ dotnet run -- --name 'Mica Pereira' --state 'Washington' --country 'United States'
+ ```
+
+1. This time, the program should crash. If you scroll through the error message, you'll see the crash occurred because of a conflict in the unique identifier for the items.
+
+ ```output
+ Unhandled exception: Microsoft.Azure.Cosmos.CosmosException : Response status code does not indicate success: Conflict (409);Reason: (
+ Errors : [
+ "Resource with specified id or name already exists."
+ ]
+ );
+ ```
+
+## Retrieve an item using the SDK
+
+Now that you've created your first item in the container, you can use the same SDK to retrieve the item. Here, you'll query and point read the item to compare the difference in request unit (RU) consumption.
+
+1. Return to or open the **CosmosHandler.cs** file.
+
+1. Delete all lines of code from the `ManageCustomerAsync` method except for the first two lines.
+
+ ```csharp
+ public static async Task ManageCustomerAsync(string name, string email, string state, string country)
+ {
+ Container container = await GetContainerAsync();
+
+ string id = name.Kebaberize();
+ }
+ ```
+
+1. For the next steps, add this code within the `ManageCustomerAsync` method.
+
+ 1. Create a new string named `sql` with a SQL query to retrieve items where a filter (`@id`) matches.
+
+ ```csharp
+ string sql = """
+ SELECT
+ *
+ FROM customers c
+ WHERE c.id = @id
+ """;
+ ```
+
+    1. Create a new `QueryDefinition` variable named `query` passing in the `sql` string as the only query parameter. Also, use the `WithParameter` fluent method to apply the value of the variable `id` to the `@id` parameter.
+
+ ```csharp
+ var query = new QueryDefinition(
+ query: sql
+ )
+ .WithParameter("@id", id);
+ ```
+
+ 1. Use the `GetItemQueryIterator<>` generic method and the `query` variable to create an iterator that gets data from Azure Cosmos DB. Store the iterator in a variable named `feed`. Wrap this entire expression in a using statement to dispose the iterator later.
+
+ ```csharp
+ using var feed = container.GetItemQueryIterator<dynamic>(
+ queryDefinition: query
+ );
+ ```
+
+ 1. Asynchronously call the `ReadNextAsync` method of the `feed` variable and store the result in a variable named `response`.
+
+ ```csharp
+ var response = await feed.ReadNextAsync();
+ ```
+
+ 1. Write the values of the `response` variable's `StatusCode` and `RequestCharge` properties to the console. Also write the value of the `id` variable.
+
+ ```csharp
+ Console.WriteLine($"[{response.StatusCode}]\t{id}\t{response.RequestCharge} RUs");
+ ```
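+
+    This tutorial reads only the first page of results, which is enough for a query that matches a single item. If a query could return more items than fit in one page, you could drain the iterator with a loop like this sketch:
+
+    ```csharp
+    // Keep requesting pages until the iterator reports no more results.
+    List<dynamic> items = new();
+    while (feed.HasMoreResults)
+    {
+        var page = await feed.ReadNextAsync();
+        foreach (var item in page)
+        {
+            items.Add(item);
+        }
+    }
+    ```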
+
+1. **Save** the **CosmosHandler.cs** file.
+
+1. Back in the terminal, run the application to read the single item using a SQL query.
+
+ ```bash
+    dotnet run -- --name 'Mica Pereira' --state 'Washington' --country 'United States'
+ ```
+
+1. The output of the command should indicate that the query required multiple RUs.
+
+ ```output
+ [OK] mica-pereira 2.82 RUs
+ ```
+
+1. Back in the **CosmosHandler.cs** file, delete all lines of code from the `ManageCustomerAsync` method again except for the first two lines.
+
+ ```csharp
+ public static async Task ManageCustomerAsync(string name, string email, string state, string country)
+ {
+ Container container = await GetContainerAsync();
+
+ string id = name.Kebaberize();
+ }
+ ```
+
+1. For the next steps, add this code within the `ManageCustomerAsync` method.
+
+    1. Create a new instance of `PartitionKeyBuilder`, adding the `country` and `state` parameters to build a multi-part partition key value. The values must be added in the same order as the container's partition key paths: `/address/country`, then `/address/state`.
+
+ ```csharp
+ var partitionKey = new PartitionKeyBuilder()
+ .Add(country)
+ .Add(state)
+ .Build();
+ ```
+
+ 1. Use the container's `ReadItemAsync<>` method to point read the item from the container using the `id` and `partitionKey` variables. Save the result in a variable named `response`.
+
+ ```csharp
+ var response = await container.ReadItemAsync<dynamic>(
+ id: id,
+ partitionKey: partitionKey
+ );
+ ```
+
+ 1. Write the values of the `response` variable's `StatusCode` and `RequestCharge` properties to the console. Also write the value of the `id` variable.
+
+ ```csharp
+ Console.WriteLine($"[{response.StatusCode}]\t{id}\t{response.RequestCharge} RU");
+ ```
+
+1. **Save** the **CosmosHandler.cs** file again.
+
+1. Back in the terminal, run the application one more time to point read the single item.
+
+ ```bash
+ dotnet run -- --name 'Mica Pereira' --state 'Washington' --country 'United States'
+ ```
+
+1. The output of the command should indicate that the point read required a single RU.
+
+ ```output
+    [OK] mica-pereira 1 RU
+ ```
+
+## Create a transaction using the SDK
+
+Finally, you'll take the item you created, read that item, and create a different related item as part of a single transaction using the Azure SDK for .NET.
+
+1. Return to or open the **CosmosHandler.cs** file.
+
+1. Delete these lines of code from the `ManageCustomerAsync` method.
+
+ ```csharp
+ var response = await container.ReadItemAsync<dynamic>(
+ id: id,
+ partitionKey: partitionKey
+ );
+
+ Console.WriteLine($"[{response.StatusCode}]\t{id}\t{response.RequestCharge} RUs");
+ ```
+
+1. For the next steps, add this new code within the `ManageCustomerAsync` method.
+
+    1. Create a new anonymous typed item using the `state` and `country` method parameters and the `id` variable. Store the item as a variable named `customerCart`. This item will represent a real-time shopping cart for the customer that is currently empty.
+
+ ```csharp
+ var customerCart = new {
+ id = $"{Guid.NewGuid()}",
+ customerId = id,
+ items = new string[] {},
+ address = new {
+ state = state,
+ country = country
+ }
+ };
+ ```
+
+    1. Create another new anonymous typed item using the `email`, `state`, and `country` method parameters and the `id` variable. Store the item as a variable named `customerContactInfo`. This item will represent shipping and contact information for the customer.
+
+ ```csharp
+ var customerContactInfo = new {
+ id = $"{id}-contact",
+ customerId = id,
+ email = email,
+ location = $"{state}, {country}",
+ address = new {
+ state = state,
+ country = country
+ }
+ };
+ ```
+
+ 1. Create a new batch using the container's `CreateTransactionalBatch` method passing in the `partitionKey` variable. Store the batch in a variable named `batch`. Use fluent methods to perform the following actions:
+
+ | Method | Parameter |
+ | | |
+ | `ReadItem` | `id` string variable |
+ | `CreateItem` | `customerCart` anonymous type variable |
+ | `CreateItem` | `customerContactInfo` anonymous type variable |
+
+ ```csharp
+ var batch = container.CreateTransactionalBatch(partitionKey)
+ .ReadItem(id)
+ .CreateItem(customerCart)
+ .CreateItem(customerContactInfo);
+ ```
+
+ 1. Use the batch's `ExecuteAsync` method to start the transaction. Save the result in a variable named `response`.
+
+ ```csharp
+ using var response = await batch.ExecuteAsync();
+ ```
+
+    1. Write the values of the `response` variable's `StatusCode` and `RequestCharge` properties to the console.
+
+ ```csharp
+ Console.WriteLine($"[{response.StatusCode}]\t{response.RequestCharge} RUs");
+ ```
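+
+    Because the batch commits or fails as a unit, you may also want to verify the outcome before relying on its results. A minimal sketch:
+
+    ```csharp
+    // If any operation fails, the entire transaction is rolled back.
+    if (!response.IsSuccessStatusCode)
+    {
+        Console.WriteLine($"Transaction failed: {response.ErrorMessage}");
+    }
+    ```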
+
+1. **Save** the **CosmosHandler.cs** file again.
+
+1. Back in the terminal, run the application one more time to execute the transaction.
+
+ ```bash
+ dotnet run -- --name 'Mica Pereira' --state 'Washington' --country 'United States'
+ ```
+
+1. The output of the command should show the request units used for the entire transaction.
+
+ ```output
+ [OK] 16.05 RUs
+ ```
+
+ > [!NOTE]
+ > Your request charge may vary.
+
+## Validate the final data in the Data Explorer
+
+To wrap things up, you'll use the Data Explorer in the Azure portal to view the data and container you created in this tutorial.
+
+1. Navigate to your existing API for NoSQL account in the [Azure portal](https://portal.azure.com/).
+
+1. In the resource menu, select **Data Explorer**.
+
+ :::image type="content" source="media/tutorial-dotnet-console-app/resource-menu-data-explorer.png" alt-text="Screenshot of the Data Explorer option highlighted in the resource menu.":::
+
+1. On the **Data Explorer** page, expand the `cosmicworks` database, and then select the `customers` container.
+
+ :::image type="content" source="media/tutorial-dotnet-web-app/section-data-container.png" alt-text="Screenshot of the selected container node within the database node.":::
+
+1. In the command bar, select **New SQL query**.
+
+ :::image type="content" source="media/tutorial-dotnet-console-app/page-data-explorer-new-sql-query.png" alt-text="Screenshot of the New SQL Query option in the Data Explorer command bar.":::
+
+1. In the query editor, observe this SQL query string.
+
+ ```sql
+ SELECT * FROM c
+ ```
+
+1. Select **Execute Query** to run the query and observe the results.
+
+ :::image type="content" source="media/tutorial-dotnet-console-app/page-data-explorer-execute-query.png" alt-text="Screenshot of the Execute Query option in the Data Explorer command bar.":::
+
+1. The results should include a JSON array with three items created in this tutorial. Observe that all of the items have the same hierarchical partition key value, but unique ID fields. The example output included is truncated for brevity.
+
+ ```output
+ [
+ {
+ "id": "mica-pereira",
+ "name": "Mica Pereira",
+ "address": {
+ "state": "Washington",
+ "country": "United States"
+ },
+ ...
+ },
+ {
+ "id": "33d03318-6302-4559-b5c0-f3cc643b2f38",
+ "customerId": "mica-pereira",
+ "items": [],
+ "address": {
+ "state": "Washington",
+ "country": "United States"
+ },
+ ...
+ },
+ {
+ "id": "mica-pereira-contact",
+ "customerId": "mica-pereira",
+ "email": null,
+ "location": "Washington, United States",
+ "address": {
+ "state": "Washington",
+ "country": "United States"
+ },
+ ...
+ }
+ ]
+ ```
+
+## Clean up resources
+
+When no longer needed, delete the database used in this tutorial. To do so, navigate to the account page, select **Data Explorer**, select the `cosmicworks` database, and then select **Delete**.
+
+## Next steps
+
+Now that you've created your first .NET console application using Azure Cosmos DB, try the next tutorial where you'll update an existing web application to use Azure Cosmos DB data.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Develop an ASP.NET web application with Azure Cosmos DB for NoSQL](tutorial-dotnet-web-app.md)
cosmos-db Tutorial Dotnet Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-dotnet-web-app.md
Title: ASP.NET Core MVC web app tutorial using Azure Cosmos DB
-description: ASP.NET Core MVC tutorial to create an MVC web application using Azure Cosmos DB. You'll store JSON and access data from a todo app hosted on Azure App Service - ASP NET Core MVC tutorial step by step.
--
+ Title: |
+ Tutorial: Develop an ASP.NET web application with Azure Cosmos DB for NoSQL
+description: |
+ ASP.NET tutorial to create a web application that queries data from Azure Cosmos DB for NoSQL.
+++ Previously updated : 05/02/2020- Last updated : 11/02/2022
+ms.devlang: csharp
+
-# Tutorial: Develop an ASP.NET Core MVC web application with Azure Cosmos DB by using .NET SDK
+# Tutorial: Develop an ASP.NET web application with Azure Cosmos DB for NoSQL
-> [!div class="op_single_selector"]
->
-> * [.NET](tutorial-dotnet-web-app.md)
-> * [Java](tutorial-java-web-app.md)
-> * [Node.js](tutorial-nodejs-web-app.md)
->
-
-This tutorial shows you how to use Azure Cosmos DB to store and access data from an ASP.NET MVC application that is hosted on Azure. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). In this tutorial, you use the .NET SDK V3. The following image shows the web page that you'll build by using the sample in this article:
-If you don't have time to complete the tutorial, you can download the complete sample project from [GitHub][GitHub].
+The Azure SDK for .NET allows you to query data in an API for NoSQL container using [LINQ in C#](how-to-dotnet-query-items.md#query-items-using-linq-asynchronously) or a [SQL query string](how-to-dotnet-query-items.md#query-items-using-a-sql-query-asynchronously). This tutorial will walk through the process of updating an existing ASP.NET web application that uses placeholder data to instead query from the API.
-This tutorial covers:
+In this tutorial, you learn how to:
> [!div class="checklist"] >
-> * Creating an Azure Cosmos DB account
-> * Creating an ASP.NET Core MVC app
-> * Connecting the app to Azure Cosmos DB
-> * Performing create, read, update, and delete (CRUD) operations on the data
-
-> [!TIP]
-> This tutorial assumes that you have prior experience using ASP.NET Core MVC and Azure App Service. If you are new to ASP.NET Core or the [prerequisite tools](#prerequisites), we recommend you to download the complete sample project from [GitHub][GitHub], add the required NuGet packages, and run it. Once you build the project, you can review this article to gain insight on the code in the context of the project.
+> - Create and populate a database and container using API for NoSQL
+> - Create an ASP.NET web application from a template
+> - Query data from the API for NoSQL container using the Azure SDK for .NET
+>
## Prerequisites
-Before following the instructions in this article, make sure that you have the following resources:
+- An existing Azure Cosmos DB for NoSQL account.
+ - If you have an Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ - Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit.
+- [Visual Studio Code](https://code.visualstudio.com)
+- [.NET 6 (LTS) or later](https://dotnet.microsoft.com/download/dotnet/6.0)
+- Experience writing C# applications.
-* An active Azure account. If you don't have an Azure subscription, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb)
+## Create API for NoSQL resources
- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
+First, you'll create a database and container in the existing API for NoSQL account. You'll then populate this account with data using the `cosmicworks` dotnet tool.
-* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
+1. Navigate to your existing API for NoSQL account in the [Azure portal](https://portal.azure.com/).
-All the screenshots in this article are from Microsoft Visual Studio Community 2019. If you use a different version, your screens and options may not match entirely. The solution should work if you meet the prerequisites.
+1. In the resource menu, select **Keys**.
-## Step 1: Create an Azure Cosmos DB account
+ :::image type="content" source="media/tutorial-dotnet-web-app/resource-menu-keys.png" lightbox="media/tutorial-dotnet-web-app/resource-menu-keys.png" alt-text="Screenshot of an API for NoSQL account page. The Keys option is highlighted in the resource menu.":::
-Let's start by creating an Azure Cosmos DB account. If you already have an Azure Cosmos DB for NoSQL account or if you're using the Azure Cosmos DB Emulator, skip to [Step 2: Create a new ASP.NET MVC application](#step-2-create-a-new-aspnet-core-mvc-application).
+1. On the **Keys** page, observe and record the value of the **URI**, **PRIMARY KEY**, and **PRIMARY CONNECTION STRING** fields. These values will be used throughout the tutorial.
+ :::image type="content" source="media/tutorial-dotnet-web-app/page-keys.png" alt-text="Screenshot of the Keys page with the URI, Primary Key, and Primary Connection String fields highlighted.":::
+1. In the resource menu, select **Data Explorer**.
-In the next section, you create a new ASP.NET Core MVC application.
+ :::image type="content" source="media/tutorial-dotnet-web-app/resource-menu-data-explorer.png" alt-text="Screenshot of the Data Explorer option highlighted in the resource menu.":::
-## Step 2: Create a new ASP.NET Core MVC application
+1. On the **Data Explorer** page, select the **New Container** option in the command bar.
-1. Open Visual Studio and select **Create a new project**.
+ :::image type="content" source="media/tutorial-dotnet-web-app/page-data-explorer-new-container.png" alt-text="Screenshot of the New Container option in the Data Explorer command bar.":::
-1. In **Create a new project**, find and select **ASP.NET Core Web Application** for C#. Select **Next** to continue.
+1. In the **New Container** dialog, create a new container with the following settings:
- :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-new-project-dialog.png" alt-text="Create new ASP.NET Core web application project":::
+ | Setting | Value |
+ | | |
+ | **Database id** | `cosmicworks` |
+ | **Database throughput type** | **Manual** |
+ | **Database throughput amount** | `4000` |
+ | **Container id** | `products` |
+ | **Partition key** | `/categoryId` |
-1. In **Configure your new project**, name the project *todo* and select **Create**.
+ :::image type="content" source="media/tutorial-dotnet-web-app/dialog-new-container.png" alt-text="Screenshot of the New Container dialog in the Data Explorer with various values in each field.":::
-1. In **Create a new ASP.NET Core Web Application**, choose **Web Application (Model-View-Controller)**. Select **Create** to continue.
+ > [!IMPORTANT]
+ > In this tutorial, we will first scale the database up to 4,000 RU/s in shared throughput to maximize performance for the data migration. Once the data migration is complete, we will scale down to 400 RU/s of provisioned throughput.
- Visual Studio creates an empty MVC application.
+1. Select **OK** to create the database and container.
-1. Select **Debug** > **Start Debugging** or F5 to run your ASP.NET application locally.
+1. Open a terminal to run commands to populate the container with data.
-## Step 3: Add Azure Cosmos DB NuGet package to the project
+ > [!TIP]
+ > You can optionally use the Azure Cloud Shell here.
-Now that we have most of the ASP.NET Core MVC framework code that we need for this solution, let's add the NuGet packages required to connect to Azure Cosmos DB.
+1. Install a **pre-release** version of the `cosmicworks` dotnet tool from NuGet.
-1. In **Solution Explorer**, right-click your project and select **Manage NuGet Packages**.
+ ```bash
+ dotnet tool install --global cosmicworks --prerelease
+ ```
-1. In the **NuGet Package Manager**, search for and select **Microsoft.Azure.Cosmos**. Select **Install**.
+1. Use the `cosmicworks` tool to populate your API for NoSQL account with sample product data using the **URI** and **PRIMARY KEY** values you recorded earlier in this tutorial. Those recorded values will be used for the `endpoint` and `key` parameters respectively.
- :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-nuget.png" alt-text="Install NuGet package":::
+ ```bash
+ cosmicworks \
+ --datasets product \
+ --endpoint <uri> \
+ --key <primary-key>
+ ```
- Visual Studio downloads and installs the Azure Cosmos DB package and its dependencies.
+1. Observe the output from the command line tool. It should add more than 200 items to the container. The example output included is truncated for brevity.
- You can also use **Package Manager Console** to install the NuGet package. To do so, select **Tools** > **NuGet Package Manager** > **Package Manager Console**. At the prompt, type the following command:
+ ```output
+ ...
+ Revision: v4
+ Datasets:
+ product
+
+ Database: [cosmicworks] Status: Created
+ Container: [products] Status: Ready
+
+ product Items Count: 295
+ Entity: [9363838B-2D13-48E8-986D-C9625BE5AB26] Container:products Status: RanToCompletion
+ ...
+ Container: [product] Status: Populated
+ ```
- ```ps
- Install-Package Microsoft.Azure.Cosmos
- ```
-
-## Step 4: Set up the ASP.NET Core MVC application
+1. Return to the **Data Explorer** page for your account.
-Now let's add the models, the views, and the controllers to this MVC application.
+1. In the **Data** section, expand the `cosmicworks` database node and then select **Scale**.
-### Add a model
+ :::image type="content" source="media/tutorial-dotnet-web-app/section-data-database-scale.png" alt-text="Screenshot of the Scale option within the database node.":::
-1. In **Solution Explorer**, right-click the **Models** folder, select **Add** > **Class**.
+1. Reduce the throughput from **4,000** down to **400**.
-1. In **Add New Item**, name your new class *Item.cs* and select **Add**.
+ :::image type="content" source="media/tutorial-dotnet-web-app/section-scale-throughput.png" alt-text="Screenshot of the throughput settings for the database reduced down to 400 RU/s.":::
-1. Replace the contents of *Item.cs* class with the following code:
+1. In the command bar, select **Save**.
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Models/Item.cs":::
+ :::image type="content" source="media/tutorial-dotnet-web-app/page-data-explorer-save.png" alt-text="Screenshot of the Save option in the Data Explorer command bar.":::
-Azure Cosmos DB uses JSON to move and store data. You can use the `JsonProperty` attribute to control how JSON serializes and deserializes objects. The `Item` class demonstrates the `JsonProperty` attribute. This code controls the format of the property name that goes into JSON. It also renames the .NET property `Completed`.
+1. In the **Data** section, expand and select the **products** container node.
-### Add views
+ :::image type="content" source="media/tutorial-dotnet-web-app/section-data-container.png" alt-text="Screenshot of the expanded container node within the database node.":::
-Next, let's add the following views.
+1. In the command bar, select **New SQL query**.
-* A create item view
-* A delete item view
-* A view to get an item detail
-* An edit item view
-* A view to list all the items
+ :::image type="content" source="media/tutorial-dotnet-web-app/page-data-explorer-new-sql-query.png" alt-text="Screenshot of the New SQL Query option in the Data Explorer command bar.":::
-#### Create item view
+1. In the query editor, add this SQL query string.
-1. In **Solution Explorer**, right-click the **Views** folder and select **Add** > **New Folder**. Name the folder *Item*.
+ ```sql
+ SELECT
+ p.sku,
+ p.price
+ FROM products p
+ WHERE p.price < 2000
+ ORDER BY p.price DESC
+ ```
-1. Right-click the empty **Item** folder, then select **Add** > **View**.
+1. Select **Execute Query** to run the query and observe the results.
-1. In **Add MVC View**, make the following changes:
+ :::image type="content" source="media/tutorial-dotnet-web-app/page-data-explorer-execute-query.png" alt-text="Screenshot of the Execute Query option in the Data Explorer command bar.":::
- * In **View name**, enter *Create*.
- * In **Template**, select **Create**.
- * In **Model class**, select **Item (todo.Models)**.
- * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
- * Select **Add**.
+1. The results should be a paginated array of all items in the container with a `price` value that is less than **2,000** sorted from highest price to lowest. For brevity, a subset of the output is included here.
- :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-add-mvc-view.png" alt-text="Screenshot showing the Add MVC View dialog box":::
+ ```output
+ [
+ {
+ "sku": "BK-R79Y-48",
+ "price": 1700.99
+ },
+ ...
+ {
+ "sku": "FR-M94B-46",
+ "price": 1349.6
+ },
+ ...
+ ```
-1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
+1. Replace the content of the query editor with this query and then select **Execute Query** again to observe the results.
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Create.cshtml":::
+ ```sql
+ SELECT
+ p.name,
+ p.categoryName,
+ p.tags
+ FROM products p
+ JOIN t IN p.tags
+ WHERE t.name = "Tag-32"
+ ```
-#### Delete item view
+1. The results should be a smaller array of items filtered to only contain items that include at least one tag with a **name** value of `Tag-32`. Again, a subset of the output is included here for brevity.
-1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
+ ```output
+ ...
+ {
+ "name": "ML Mountain Frame - Black, 44",
+ "categoryName": "Components, Mountain Frames",
+ "tags": [
+ {
+ "id": "18AC309F-F81C-4234-A752-5DDD2BEAEE83",
+ "name": "Tag-32"
+ }
+ ]
+ },
+ ...
+ ```
-1. In **Add MVC View**, make the following changes:
+## Create ASP.NET web application
- * In the **View name** box, type *Delete*.
- * In the **Template** box, select **Delete**.
- * In the **Model class** box, select **Item (todo.Models)**.
- * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
- * Select **Add**.
+Now, you'll create a new ASP.NET web application using a sample project template. You'll then explore the source code and run the sample to get acquainted with the application before adding Azure Cosmos DB connectivity using the Azure SDK for .NET.
-1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
+1. Open a terminal in an empty directory.
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Delete.cshtml":::
+1. Install the `cosmicworks.template.web` project template package from NuGet.
-#### Add a view to get item details
+ ```bash
+ dotnet new install cosmicworks.template.web
+ ```
-1. In **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
+1. Create a new web application project using the newly installed `dotnet new cosmosdbnosql-webapp` template.
-1. In **Add MVC View**, provide the following values:
+ ```bash
+ dotnet new cosmosdbnosql-webapp
+ ```
- * In **View name**, enter *Details*.
- * In **Template**, select **Details**.
- * In **Model class**, select **Item (todo.Models)**.
- * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
+1. Build and run the web application project.
-1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
+ ```bash
+ dotnet run
+ ```
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Details.cshtml":::
+1. Observe the output of the run command. The output should include a list of ports and URLs where the application is running.
-#### Add an edit item view
+ ```output
+ ...
+ info: Microsoft.Hosting.Lifetime[14]
+ Now listening on: http://localhost:5000
+ info: Microsoft.Hosting.Lifetime[14]
+ Now listening on: https://localhost:5001
+ info: Microsoft.Hosting.Lifetime[0]
+ Application started. Press Ctrl+C to shut down.
+ info: Microsoft.Hosting.Lifetime[0]
+ Hosting environment: Production
+ ...
+ ```
-1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
+1. Open a new browser and navigate to the running web application. Observe all three pages of the running application.
-1. In **Add MVC View**, make the following changes:
+ :::image type="content" source="media/tutorial-dotnet-web-app/sample-application-placeholder-data.png" lightbox="media/tutorial-dotnet-web-app/sample-application-placeholder-data.png" alt-text="Screenshot of the sample web application running with placeholder data.":::
+
+1. Stop the running application by terminating the running process.
+
+ > [!TIP]
+ > Use the <kbd>Ctrl</kbd>+<kbd>C</kbd> command to stop a running process.Alternatively, you can close and re-open the terminal.
- * In the **View name** box, type *Edit*.
- * In the **Template** box, select **Edit**.
- * In the **Model class** box, select **Item (todo.Models)**.
- * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
- * Select **Add**.
+1. Open Visual Studio Code using the current project folder as the workspace.
-1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
+ > [!TIP]
+ > You can run `code .` in the terminal to open Visual Studio Code and automatically open the working directory as the current workspace.
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Edit.cshtml":::
+1. Navigate to and open the **Services/ICosmosService.cs** file. Observe the ``RetrieveActiveProductsAsync`` and ``RetrieveAllProductsAsync`` default method implementations. These methods create a static list of products to use when running the project for the first time. A truncated example of one of the methods is provided here.
-#### Add a view to list all the items
+ ```csharp
+ public async Task<IEnumerable<Product>> RetrieveActiveProductsAsync()
+ {
+ await Task.Delay(1);
-And finally, add a view to get all the items with the following steps:
+ return new List<Product>()
+ {
+ new Product(id: "baaa4d2d-5ebe-45fb-9a5c-d06876f408e0", categoryId: "3E4CEACD-D007-46EB-82D7-31F6141752B2", categoryName: "Components, Road Frames", sku: "FR-R72R-60", name: """ML Road Frame - Red, 60""", description: """The product called "ML Road Frame - Red, 60".""", price: 594.83000000000004m),
+ ...
+ new Product(id: "d5928182-0307-4bf9-8624-316b9720c58c", categoryId: "AA5A82D4-914C-4132-8C08-E7B75DCE3428", categoryName: "Components, Cranksets", sku: "CS-6583", name: """ML Crankset""", description: """The product called "ML Crankset".""", price: 256.49000000000001m)
+ };
+ }
+ ```
-1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
+1. Navigate to and open the **Services/CosmosService.cs** file. Observe the current implementation of the **CosmosService** class. This class implements the **ICosmosService** interface but doesn't override any methods. In this context, the class will use the default interface implementation until an override of the implementation is provided in the interface.
-1. In **Add MVC View**, make the following changes:
+ ```csharp
+ public class CosmosService : ICosmosService
+ { }
+ ```
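+
+    This pattern relies on C# default interface implementations (C# 8 and later). As a minimal standalone sketch of the idea, with illustrative names that aren't part of the project:
+
+    ```csharp
+    public interface IGreeter
+    {
+        // Default implementation; used until an implementing class overrides it.
+        string Greet(string name) => $"Hello, {name}!";
+    }
+
+    public class Greeter : IGreeter
+    { }
+
+    // IGreeter greeter = new Greeter();
+    // greeter.Greet("Mica"); // "Hello, Mica!" from the default method
+    ```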
- * In the **View name** box, type *Index*.
- * In the **Template** box, select **List**.
- * In the **Model class** box, select **Item (todo.Models)**.
- * Select **Use a layout page** and enter *~/Views/Shared/_Layout.cshtml*.
- * Select **Add**.
+1. Finally, navigate to and open the **Models/Product.cs** file. Observe the record type defined in this file. This type will be used in queries throughout this tutorial.
-1. Next select **Add** and let Visual Studio create a new template view. Replace the code in the generated file with the following contents:
+ ```csharp
+ public record Product(
+ string id,
+ string categoryId,
+ string categoryName,
+ string sku,
+ string name,
+ string description,
+ decimal price
+ );
+ ```
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Index.cshtml":::
+## Query data using the .NET SDK
-Once you complete these steps, close all the *cshtml* documents in Visual Studio.
+Next, you'll add the Azure SDK for .NET to this sample project and use the library to query data from the API for NoSQL container.
-### Declare and initialize services
+1. Back in the terminal, add the `Microsoft.Azure.Cosmos` package from NuGet.
-First, we'll add a class that contains the logic to connect to and use Azure Cosmos DB. For this tutorial, we'll encapsulate this logic into a class called `CosmosDbService` and an interface called `ICosmosDbService`. This service does the CRUD operations. It also does read feed operations such as listing incomplete items, creating, editing, and deleting the items.
+ ```bash
+ dotnet add package Microsoft.Azure.Cosmos
+ ```
-1. In **Solution Explorer**, right-click your project and select **Add** > **New Folder**. Name the folder *Services*.
+1. Build the project.
-1. Right-click the **Services** folder, select **Add** > **Class**. Name the new class *CosmosDbService* and select **Add**.
+ ```bash
+ dotnet build
+ ```
-1. Replace the contents of *CosmosDbService.cs* with the following code:
+1. Back in Visual Studio Code, navigate again to the **Services/CosmosService.cs** file.
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Services/CosmosDbService.cs":::
+1. Add a new using directive for the `Microsoft.Azure.Cosmos` and `Microsoft.Azure.Cosmos.Linq` namespaces.
-1. Right-click the **Services** folder, select **Add** > **Class**. Name the new class *ICosmosDbService* and select **Add**.
+ ```csharp
+ using Microsoft.Azure.Cosmos;
+ using Microsoft.Azure.Cosmos.Linq;
+ ```
-1. Add the following code to *ICosmosDbService* class:
+1. Within the **CosmosService** class, add a new `private readonly` member of type `CosmosClient` named `_client`.
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Services/ICosmosDbService.cs":::
+ ```csharp
+ private readonly CosmosClient _client;
+ ```
-1. Open the *Startup.cs* file in your solution and add the following method **InitializeCosmosClientInstanceAsync**, which reads the configuration and initializes the client.
+1. Create a new empty constructor for the `CosmosService` class.
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Startup.cs" id="InitializeCosmosClientInstanceAsync" :::
+ ```csharp
+    public CosmosService()
+ { }
+ ```
-1. On that same file, replace the `ConfigureServices` method with:
+1. Within the constructor, create a new instance of the `CosmosClient` class passing in a string parameter with the **PRIMARY CONNECTION STRING** value you previously recorded in this tutorial. Store this new instance in the `_client` member.
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Startup.cs" id="ConfigureServices":::
+ ```csharp
+    public CosmosService()
+ {
+ _client = new CosmosClient(
+ connectionString: "<primary-connection-string>"
+ );
+ }
+ ```
- The code in this step initializes the client based on the configuration as a singleton instance to be injected through [Dependency injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection).
+1. Back within the **CosmosService** class, create a new `private` property of type `Container` named `container`. Set the **get accessor** to return the `cosmicworks` database and `products` container.
- And make sure to change the default MVC Controller to `Item` by editing the routes in the `Configure` method of the same file:
+ ```csharp
+ private Container container
+ {
+ get => _client.GetDatabase("cosmicworks").GetContainer("products");
+ }
+ ```
- ```csharp
- app.UseEndpoints(endpoints =>
- {
- endpoints.MapControllerRoute(
- name: "default",
- pattern: "{controller=Item}/{action=Index}/{id?}");
- });
- ```
+1. Create a new asynchronous method named `RetrieveAllProductsAsync` that returns an `IEnumerable<Product>`.
+ ```csharp
+ public async Task<IEnumerable<Product>> RetrieveAllProductsAsync()
+ { }
+ ```
+
+1. For the next steps, add this code within the `RetrieveAllProductsAsync` method.
+
+ 1. Use the `GetItemLinqQueryable<>` generic method to get an object of type `IQueryable<>` that you can use to construct a Language-integrated query (LINQ). Store that object in a variable named `queryable`.
+
+ ```csharp
+ var queryable = container.GetItemLinqQueryable<Product>();
+ ```
+
+ 1. Construct a LINQ query using the `Where` and `OrderByDescending` extension methods. Use the `ToFeedIterator` extension method to create an iterator to get data from Azure Cosmos DB and store the iterator in a variable named `feed`. Wrap this entire expression in a using statement to dispose the iterator later.
+
+ ```csharp
+ using FeedIterator<Product> feed = queryable
+ .Where(p => p.price < 2000m)
+ .OrderByDescending(p => p.price)
+ .ToFeedIterator();
+ ```
+
+ 1. Create a new variable named `results` using the generic `List<>` type.
+
+ ```csharp
+ List<Product> results = new();
+ ```
+
+ 1. Create a **while** loop that will iterate until the `HasMoreResults` property of the `feed` variable returns false. This loop will ensure that you loop through all pages of server-side results.
+
+ ```csharp
+ while (feed.HasMoreResults)
+ { }
+ ```
-1. Define the configuration in the project's *appsettings.json* file as shown in the following snippet:
+ 1. Within the **while** loop, asynchronously call the `ReadNextAsync` method of the `feed` variable and store the result in a variable named `response`.
- :::code language="json" source="~/samples-cosmosdb-dotnet-core-web-app/src/appsettings.json":::
+ ```csharp
+ while (feed.HasMoreResults)
+ {
+ var response = await feed.ReadNextAsync();
+ }
+ ```
-### Add a controller
+ 1. Still within the **while** loop, use a **foreach** loop to go through each item in the response and add them to the `results` list.
-1. In **Solution Explorer**, right-click the **Controllers** folder, select **Add** > **Controller**.
+ ```csharp
+ while (feed.HasMoreResults)
+ {
+ var response = await feed.ReadNextAsync();
+ foreach (Product item in response)
+ {
+ results.Add(item);
+ }
+ }
+ ```
-1. In **Add Scaffold**, select **MVC Controller - Empty** and select **Add**.
+ 1. Return the `results` list as the output of the `RetrieveAllProductsAsync` method.
- :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-controller-add-scaffold.png" alt-text="Select MVC Controller - Empty in Add Scaffold":::
+ ```csharp
+ return results;
+ ```
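+
+    Putting the preceding snippets together, the assembled method should look roughly like this (a recap of the code above, not a new step):
+
+    ```csharp
+    public async Task<IEnumerable<Product>> RetrieveAllProductsAsync()
+    {
+        // Build a LINQ query over the container.
+        var queryable = container.GetItemLinqQueryable<Product>();
+
+        // Filter and sort server-side, then iterate the results as a feed.
+        using FeedIterator<Product> feed = queryable
+            .Where(p => p.price < 2000m)
+            .OrderByDescending(p => p.price)
+            .ToFeedIterator();
+
+        // Page through every server-side page of results.
+        List<Product> results = new();
+        while (feed.HasMoreResults)
+        {
+            var response = await feed.ReadNextAsync();
+            foreach (Product item in response)
+            {
+                results.Add(item);
+            }
+        }
+
+        return results;
+    }
+    ```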
-1. Name your new controller *ItemController*.
+1. Create a new asynchronous method named `RetrieveActiveProductsAsync` that returns an `IEnumerable<Product>`.
-1. Replace the contents of *ItemController.cs* with the following code:
+ ```csharp
+ public async Task<IEnumerable<Product>> RetrieveActiveProductsAsync()
+ { }
+ ```
- :::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Controllers/ItemController.cs":::
+1. For the next steps, add this code within the `RetrieveActiveProductsAsync` method.
-The **ValidateAntiForgeryToken** attribute is used here to help protect this application against cross-site request forgery attacks. Your views should work with this anti-forgery token as well. For more information and examples, see [Preventing Cross-Site Request Forgery (CSRF) Attacks in ASP.NET MVC Application][Preventing Cross-Site Request Forgery]. The source code provided on [GitHub][GitHub] has the full implementation in place.
+ 1. Create a new string named `sql` with a SQL query that retrieves multiple fields where a filter (`@tagFilter`) is applied to the **tags** array of each item.
-We also use the **Bind** attribute on the method parameter to help protect against over-posting attacks. For more information, see [Tutorial: Implement CRUD Functionality with the Entity Framework in ASP.NET MVC][Basic CRUD Operations in ASP.NET MVC].
+ ```csharp
+ string sql = """
+ SELECT
+ p.id,
+ p.categoryId,
+ p.categoryName,
+ p.sku,
+ p.name,
+ p.description,
+ p.price,
+ p.tags
+ FROM products p
+ JOIN t IN p.tags
+ WHERE t.name = @tagFilter
+ """;
+ ```
-## Step 5: Run the application locally
+ 1. Create a new `QueryDefinition` variable named `query` passing in the `sql` string as the only query parameter. Also, use the `WithParameter` fluent method to apply the value `Tag-75` to the `@tagFilter` parameter.
-To test the application on your local computer, use the following steps:
+ ```csharp
+ var query = new QueryDefinition(
+ query: sql
+ )
+ .WithParameter("@tagFilter", "Tag-75");
+ ```
-1. Press F5 in Visual Studio to build the application in debug mode. It should build the application and launch a browser with the empty grid page we saw before:
+ 1. Use the `GetItemQueryIterator<>` generic method and the `query` variable to create an iterator that gets data from Azure Cosmos DB. Store the iterator in a variable named `feed`. Wrap this entire expression in a using statement to dispose the iterator later.
- :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-create-an-item-a.png" alt-text="Screenshot of the todo list web application created by this tutorial":::
-
- If the application instead opens to the home page, append `/Item` to the url.
+ ```csharp
+ using FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(
+ queryDefinition: query
+ );
+ ```
-1. Select the **Create New** link and add values to the **Name** and **Description** fields. Leave the **Completed** check box unselected. If you select it, the app adds the new item in a completed state. The item no longer appears on the initial list.
+ 1. Use a **while** loop to iterate through multiple pages of results and store the value in a generic `List<>` named **results**. Return the **results** as the output of the `RetrieveActiveProductsAsync` method.
-1. Select **Create**. The app sends you back to the **Index** view, and your item appears in the list. You can add a few more items to your **To-Do** list.
+ ```csharp
+ List<Product> results = new();
+
+ while (feed.HasMoreResults)
+ {
+ FeedResponse<Product> response = await feed.ReadNextAsync();
+ foreach (Product item in response)
+ {
+ results.Add(item);
+ }
+ }
- :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-create-an-item.png" alt-text="Screenshot of the Index view":::
-
-1. Select **Edit** next to an **Item** on the list. The app opens the **Edit** view where you can update any property of your object, including the **Completed** flag. If you select **Completed** and select **Save**, the app displays the **Item** as completed in the list.
+ return results;
+ ```
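+
+    Assembled from the preceding snippets, the complete method should look roughly like this:
+
+    ```csharp
+    public async Task<IEnumerable<Product>> RetrieveActiveProductsAsync()
+    {
+        // Parameterized SQL query that filters on the tags array of each item.
+        string sql = """
+            SELECT
+                p.id, p.categoryId, p.categoryName, p.sku,
+                p.name, p.description, p.price, p.tags
+            FROM products p
+            JOIN t IN p.tags
+            WHERE t.name = @tagFilter
+            """;
+
+        var query = new QueryDefinition(query: sql)
+            .WithParameter("@tagFilter", "Tag-75");
+
+        // Iterate the query results as a feed.
+        using FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(
+            queryDefinition: query
+        );
+
+        // Page through every server-side page of results.
+        List<Product> results = new();
+        while (feed.HasMoreResults)
+        {
+            FeedResponse<Product> response = await feed.ReadNextAsync();
+            foreach (Product item in response)
+            {
+                results.Add(item);
+            }
+        }
+
+        return results;
+    }
+    ```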
- :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-completed-item.png" alt-text="Screenshot of the Index view with the Completed box checked":::
+1. **Save** the **Services/CosmosService.cs** file.
-1. Verify the state of the data in the Azure Cosmos DB service using [Azure Cosmos DB Explorer](https://cosmos.azure.com) or the Azure Cosmos DB Emulator's Data Explorer.
+ > [!TIP]
+ > If you are unsure that your code is correct, you can check your source code against the [sample code](https://github.com/Azure-Samples/cosmos-db-nosql-dotnet-sample-web/blob/sample/Services/CosmosService.cs) on GitHub.
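+
+How the web application consumes this service isn't shown in this excerpt. A typical pattern, sketched here as an assumption rather than the sample's actual wiring, is to register `CosmosService` with dependency injection in *Program.cs* so pages can request it:
+
+```csharp
+// Hypothetical registration sketch; check the sample project for the actual wiring.
+builder.Services.AddSingleton<CosmosService>();
+```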
-1. Once you've tested the app, select Ctrl+F5 to stop debugging the app. You're ready to deploy!
+## Validate the final application
-## Step 6: Deploy the application
+Finally, you'll run the application with **hot reload** enabled. Running the application will validate that your code can access data from the API for NoSQL.
-Now that you have the complete application working correctly with Azure Cosmos DB we're going to deploy this web app to Azure App Service.
+1. Back in the terminal, run the application.
-1. To publish this application, right-click the project in **Solution Explorer** and select **Publish**.
+ ```bash
+ dotnet watch
+ ```
-1. In **Pick a publish target**, select **App Service**.
+ > [!NOTE]
+ > `dotnet watch` is enabled here so you can quickly change the code if you find a mistake.
-1. To use an existing App Service profile, choose **Select Existing**, then select **Publish**.
+1. The output of the run command should include a list of ports and URLs where the application is running. Open a new browser and navigate to the running web application. Observe all three pages of the running application. Each page should now include live data from Azure Cosmos DB.
-1. In **App Service**, select a **Subscription**. Use the **View** filter to sort by resource group or resource type.
+## Clean up resources
-1. Find your profile, and then select **OK**. Next search the required Azure App Service and select **OK**.
-
- :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-app-service-2019.png" alt-text="App Service dialog box in Visual Studio":::
-
-Another option is to create a new profile:
-
-1. As in the previous procedure, right-click the project in **Solution Explorer** and select **Publish**.
-
-1. In **Pick a publish target**, select **App Service**.
-
-1. In **Pick a publish target**, select **Create New** and select **Publish**.
-
-1. In **App Service**, enter your Web App name and the appropriate subscription, resource group, and hosting plan, then select **Create**.
-
- :::image type="content" source="./media/tutorial-dotnet-web-app/asp-net-mvc-tutorial-create-app-service-2019.png" alt-text="Create App Service dialog box in Visual Studio":::
-
-In a few seconds, Visual Studio publishes your web application and launches a browser where you can see your project running in Azure!
+When no longer needed, delete the database used in this tutorial. To do so, navigate to the account page, select **Data Explorer**, select the `cosmicworks` database, and then select **Delete**.
## Next steps
-In this tutorial, you've learned how to build an ASP.NET Core MVC web application. Your application can access data stored in Azure Cosmos DB. You can now continue with these resources:
-
-* [Partitioning in Azure Cosmos DB](../partitioning-overview.md)
-* [Getting started with SQL queries](query/getting-started.md)
-* [How to model and partition data on Azure Cosmos DB using a real-world example](./how-to-model-partition-example.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+Now that you've created your first .NET web application using Azure Cosmos DB, you can dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB for NoSQL resources.
-[Visual Studio Express]: https://www.visualstudio.com/products/visual-studio-express-vs.aspx
-[Microsoft Web Platform Installer]: https://www.microsoft.com/web/downloads/platform.aspx
-[Preventing Cross-Site Request Forgery]: /aspnet/web-api/overview/security/preventing-cross-site-request-forgery-csrf-attacks
-[Basic CRUD Operations in ASP.NET MVC]: /aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/implementing-basic-crud-functionality-with-the-entity-framework-in-asp-net-mvc-application
-[GitHub]: https://github.com/Azure-Samples/cosmos-dotnet-core-todo-app
+> [!div class="nextstepaction"]
+> [Get started with Azure Cosmos DB for NoSQL and .NET](how-to-dotnet-get-started.md)
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
Previously updated : 10/05/2021 Last updated : 11/02/2022 # Try Azure Cosmos DB free
From the [Try Azure Cosmos DB home page](https://aka.ms/trycosmosdb), select an
Launch the Quickstart in Data Explorer in Azure portal to start using Azure Cosmos DB or get started with our documentation. * [API for NoSQL Quickstart](nosql/quickstart-portal.md#create-container-database)
+* [API for PostgreSQL Quickstart](postgresql/quickstart-create-portal.md)
* [API for MongoDB Quickstart](mongodb/quickstart-python.md#learn-the-object-model) * [API for Apache Cassandra](cassandr) * [API for Apache Gremlin](gremlin/quickstart-console.md#add-a-graph)
After you create a Try Azure Cosmos DB sandbox account, you can start building a
* Learn more about [understanding your Azure Cosmos DB bill](understand-your-bill.md) * Get started with Azure Cosmos DB with one of our quickstarts: * [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-portal.md#create-container-database)
+ * [Get started with Azure Cosmos DB for PostgreSQL](postgresql/quickstart-create-portal.md)
* [Get started with Azure Cosmos DB for MongoDB](mongodb/quickstart-python.md#learn-the-object-model) * [Get started with Azure Cosmos DB for Cassandra](cassandr) * [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-console.md#add-a-graph)
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
When you perform data integration and ETL processes in the cloud, your jobs can
The changed data, including inserted, updated, and deleted rows, can be automatically detected and extracted by ADF mapping data flow from the source databases. No timestamp or ID columns are required to identify the changes, because it uses the native change data capture technology in the databases. By simply chaining a source transform and a sink transform that reference a database dataset in a mapping data flow, you'll see the changes that happened on the source database automatically applied to the target database, so that you can easily synchronize data between two tables. You can also add any transformations in between for any business logic to process the delta data. When defining your sink data destination, you can set insert, update, upsert, and delete operations in your sink without the need for an Alter Row transformation, because ADF is able to automatically detect the row markers.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5bkg2]
+ **Supported connectors** - [SAP CDC](connector-sap-change-data-capture.md) - [Azure SQL Database](connector-azure-sql-database.md)
databox-online Azure Stack Edge Gpu Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-technical-specifications-compliance.md
Previously updated : 04/12/2021 Last updated : 11/02/2022
A Graphics Processing Unit (GPU) is included on every Azure Stack Edge Pro devic
| Specification | Value |
|-|-|
-| GPU | One or two nVidia T4 GPUs <br> For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).|
+| GPU | One or two nVidia T4 GPUs <br> For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).|
## Power supply unit specifications
databox-online Azure Stack Edge Pro 2 Deploy Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-activate.md
Previously updated : 03/03/2022 Last updated : 10/27/2022 # Customer intent: As an IT admin, I need to understand how to activate Azure Stack Edge Pro 2 so I can use it to transfer data to Azure.
databox-online Azure Stack Edge Pro 2 Deploy Configure Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates.md
Previously updated : 03/02/2022 Last updated : 10/27/2022 # Customer intent: As an IT admin, I need to understand how to configure certificates for Azure Stack Edge Pro 2 so I can use it to establish a trust relationship between the device and the clients accessing the device.
databox-online Azure Stack Edge Pro 2 Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md
Previously updated : 02/28/2022 Last updated : 11/03/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
To configure a client to access Kubernetes cluster, you will need the Kubernetes
1. In the local web UI of your device, go to **Devices** page. 2. Under the **Device endpoints**, copy the **Kubernetes API service** endpoint. This endpoint is a string in the following format: `https://compute.<device-name>.<DNS-domain>[Kubernetes-cluster-IP-address]`.
- ![Device page in local UI](./media/azure-stack-edge-gpu-create-kubernetes-cluster/device-kubernetes-endpoint-1.png)
+ ![Screenshot that shows the device page in local UI.](./media/azure-stack-edge-pro-2-deploy-configure-compute/device-kubernetes-endpoint-1.png)
3. Save the endpoint string. You will use this endpoint string later when configuring a client to access the Kubernetes cluster via kubectl.
To configure a client to access Kubernetes cluster, you will need the Kubernetes
- Go to Kubernetes API, select **advanced settings**, and download an advanced configuration file for Kubernetes.
- ![Device page in local UI 1](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-1.png)
+ ![Screenshot that shows the device page in local UI 1.](./media/azure-stack-edge-pro-2-deploy-configure-compute/download-advanced-config-1.png)
If you have been provided a key from Microsoft (select users may have a key), then you can use this config file.
- ![Device page in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-2.png)
+ ![Screenshot that shows the device page in local UI 2.](./media/azure-stack-edge-pro-2-deploy-configure-compute/download-advanced-config-2.png)
- You can also go to **Kubernetes dashboard** endpoint and download an `aseuser` config file.
- ![Device page in local UI 3](./media/azure-stack-edge-gpu-deploy-configure-compute/download-aseuser-config-1.png)
+ ![Screenshot that shows the device page in local UI 3.](./media/azure-stack-edge-pro-2-deploy-configure-compute/download-aseuser-config-1.png)
You can use this config file to sign into the Kubernetes dashboard or debug any issues in your Kubernetes cluster. For more information, see [Access Kubernetes dashboard](azure-stack-edge-gpu-monitor-kubernetes-dashboard.md#access-dashboard).
databox-online Azure Stack Edge Pro 2 Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md
Previously updated : 08/01/2022 Last updated : 10/31/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
For Azure Consistent Services, follow these steps to configure virtual IP.
1. If you chose IP settings as static, enter a virtual IP. This should be a free IP from within the Azure Consistent Services network that you specified. If you selected DHCP, a virtual IP is automatically picked from the Azure Consistent Services network that you selected. 1. Select **Apply**.
- ![Local web UI "Cluster" page with "Virtual IP Settings" blade configured for Azure consistent services on first node](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-azure-consistent-services-2m.png)
+ ![Local web UI "Cluster" page with "Virtual IP Settings" blade configured for Azure consistent services on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/configure-azure-consistent-services-2m.png)
### For Network File System
For clients connecting via NFS protocol to the two-node device, follow these ste
1. If you chose IP settings as static, enter a virtual IP. This should be a free IP from within the NFS network that you specified. If you selected DHCP, a virtual IP is automatically picked from the NFS network that you selected. 1. Select **Apply**.
- ![Screenshot of local web UI "Cluster" page with "Virtual IP Settings" blade configured for NFS on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-file-system-2m.png)
+ ![Screenshot of local web UI "Cluster" page with "Virtual IP Settings" blade configured for NFS on first node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/configure-network-file-system-2m.png)
> [!NOTE] > Virtual IP settings are required. If you do not configure this IP, you will be blocked when configuring the **Device settings** in the next step.
After the cluster is formed and configured, you'll now create new virtual switch
1. Select **Apply**.
- ![Screenshot of configuring compute in Advanced networking in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
+ ![Screenshot of configuring compute in Advanced networking in local UI 2](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
1. The configuration takes a couple minutes to apply and you may need to refresh the browser. You can see that the specified virtual switch is created and enabled for compute.
Follow these steps to validate your network settings.
1. Go to the **Diagnostic tests** page and select the tests as shown below. 1. Select **Run test**.
- ![Screenshot of the Diagnostic tests page in the local web UI of an Azure Stack Edge device.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/validate-network-settings-with-diagnostic-test.png)
+ ![Screenshot of the Diagnostic tests page in the local web UI of an Azure Stack Edge device.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/validate-network-settings-with-diagnostic-test.png)
1. Review test results to ensure that status shows **Healthy** for each test that was run.
databox-online Azure Stack Edge Pro 2 Deploy Set Up Device Update Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-set-up-device-update-time.md
Previously updated : 03/04/2022 Last updated : 10/26/2022 # Customer intent: As an IT admin, I need to understand how to set up device name, update server and time server via the local web UI of Azure Stack Edge Pro 2 so I can use the device to transfer data to Azure.
databox-online Azure Stack Edge Pro 2 Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-technical-specifications-compliance.md
Previously updated : 06/17/2022 Last updated : 11/03/2022
The Azure Stack Edge Pro 2 device has the following specifications for compute a
| Memory type | 2 x 32 GB DDR4-2933 RDIMM |
| Memory: raw | 64 GB RAM |
| Memory: usable | 51 GB RAM |
+| GPU | None |
# [Model 128G4T1GPU](#tab/sku-b)
+The Azure Stack Edge Pro 2 device has the following specifications for compute and memory:
| Specification | Value |
|-|--|
The Azure Stack Edge Pro 2 device has the following specifications for compute a
| Memory type | 4 x 32 GB DDR4-2933 RDIMM |
| Memory: raw | 128 GB RAM |
| Memory: usable | 102 GB RAM |
+| GPU | 1 NVIDIA A2 GPU <br> For more information, see [NVIDIA A2 GPUs](https://www.nvidia.com/en-us/data-center/products/a2/). |
# [Model 256G6T2GPU](#tab/sku-c)
+The Azure Stack Edge Pro 2 device has the following specifications for compute and memory:
| Specification | Value |
|-|--|
The Azure Stack Edge Pro 2 device has the following specifications for compute a
| Memory type | 4 x 64 GB DDR4-2933 RDIMM |
| Memory: raw | 256 GB RAM |
| Memory: usable | 204 GB RAM |
+| GPU | 2 NVIDIA A2 GPUs <br> For more information, see [NVIDIA A2 GPUs](https://www.nvidia.com/en-us/data-center/products/a2/). |
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
Title: Configure the Microsoft Security DevOps Azure DevOps extension description: Learn how to configure the Microsoft Security DevOps Azure DevOps extension. Previously updated : 09/20/2022 Last updated : 11/03/2022
If you don't have access to install the extension, you must request access from
1. Select **Shared**. > [!Note]
- > If you've already [installed the Microsoft Security DevOps extension](azure-devops-extension.md), it will be listed in the **Installed** tab.
+ > If you've already [installed the Microsoft Security DevOps extension](https://marketplace.visualstudio.com/items?itemName=ms-securitydevops.microsoft-security-devops-azdevops), it will be listed in the Installed tab.
1. Select **Microsoft Security DevOps**.
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
The triggers for an image scan are:
When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.
-Also, check out the ability scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Defender for DevOps](defender-for-devops-introduction.md).
- ## Prerequisites Before you can scan your ACR images:
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
You can review and manage your current security alerts from Microsoft Defender f
### How do I estimate charges at the account level?
-To optimize costs, you might want to exclude specific Storage accounts associated with high traffic from Defender for Storage protections. To get an estimate of Defender for Storage costs, use the [Price Estimation Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-storage-price-estimation-dashboard/ba-p/2429724).
+To optimize costs, you might want to exclude specific Storage accounts associated with high traffic from Defender for Storage protections. To get an estimate of Defender for Storage costs, use the [Price Estimation Workbook](https://portal.azure.com/#blade/AppInsightsExtension/UsageNotebookBlade/ComponentId/Azure%20Security%20Center/ConfigurationId/community-Workbooks%2FAzure%20Security%20Center%2FPrice%20Estimation/Type/workbook/WorkbookTemplateName/Price%20Estimation) in the Azure portal.
### Can I exclude a specific Azure Storage account from a protected subscription?
defender-for-cloud Episode Eighteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eighteen.md
Title: Defender for Azure Cosmos DB | Defender for Cloud in the Field
description: Learn about Defender for Cloud integration with Azure Cosmos DB. Previously updated : 10/18/2022 Last updated : 11/03/2022 # Defender for Azure Cosmos DB | Defender for Cloud in the Field
Last updated 10/18/2022
<br> <iframe src="https://aka.ms/docs/player?id=94238ff5-930e-48be-ad27-a2fff73e473f" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe> -- [00:00](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=00m00s) - Intro
+- [00:00](/shows/mdc-in-the-field/defender-cosmos-db#time=00m00s) - Intro
-- [01:37](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=01m37s) - Azure Cosmos DB main use case scenarios
+- [01:37](/shows/mdc-in-the-field/defender-cosmos-db#time=01m37s) - Azure Cosmos DB main use case scenarios
-- [02:30](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=02m30s) - Recommendations and alerts in Defender for Azure Cosmos DB
+- [02:30](/shows/mdc-in-the-field/defender-cosmos-db#time=02m30s) - Recommendations and alerts in Defender for Azure Cosmos DB
-- [04:30](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=04m30s) - SQL Injection detection for Azure Cosmos DB
+- [04:30](/shows/mdc-in-the-field/defender-cosmos-db#time=04m30s) - SQL Injection detection for Azure Cosmos DB
-- [06:15](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=06m15s) - Key extraction detection for Azure Cosmos DB
+- [06:15](/shows/mdc-in-the-field/defender-cosmos-db#time=06m15s) - Key extraction detection for Azure Cosmos DB
-- [11:00](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=11m00s) - Demonstration
+- [11:00](/shows/mdc-in-the-field/defender-cosmos-db#time=11m00s) - Demonstration
-- [14:30](https://learn.microsoft.com/shows/mdc-in-the-field/defender-cosmos-db#time=14m30s) - Final considerations
+- [14:30](/shows/mdc-in-the-field/defender-cosmos-db#time=14m30s) - Final considerations
## Recommended resources
-Learn more about [Enable Microsoft Defender for Azure Cosmos DB](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-databases-enable-cosmos-protections?tabs=azure-portal)
+Learn more about how to [enable Microsoft Defender for Azure Cosmos DB](defender-for-databases-enable-cosmos-protections.md)
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity) - Follow us on social media:
- [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
- [Twitter](https://twitter.com/msftsecurity)
+
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity) -- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Defender for DevOps | Defender for Cloud in the field](episode-nineteen.md)
defender-for-cloud Episode Nineteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nineteen.md
+
+ Title: Defender for DevOps | Defender for Cloud in the Field
+
+description: Learn about Defender for Cloud integration with Defender for DevOps.
+ Last updated : 11/03/2022++
+# Defender for DevOps | Defender for Cloud in the Field
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Sukhandeep Singh joins Yuri Diogenes to talk about Defender for DevOps. Sukhandeep explains how Defender for DevOps uses a central console to provide security teams with DevOps insights across multi-pipeline environments, such as GitHub and Azure DevOps. Sukhandeep also covers the security recommendations created by Defender for DevOps and demonstrates how to configure a GitHub connector using the Defender for Cloud dashboard.
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=f1e5ec4f-1e65-400d-915b-4db6cf550014" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [01:16](/shows/mdc-in-the-field/defender-for-devops#time=01m16s) - What is Defender for DevOps?
+
+- [02:22](/shows/mdc-in-the-field/defender-for-devops#time=02m22s) - Current integrations
+
+- [02:47](/shows/mdc-in-the-field/defender-for-devops#time=02m47s) - GitHub connector
+
+- [04:16](/shows/mdc-in-the-field/defender-for-devops#time=04m16s) - Security recommendations
+
+- [05:54](/shows/mdc-in-the-field/defender-for-devops#time=05m54s) - Protection for infrastructure as code
+
+- [07:03](/shows/mdc-in-the-field/defender-for-devops#time=07m03s) - Azure ADO connector
+
+- [08:22](/shows/mdc-in-the-field/defender-for-devops#time=08m22s) - Demonstration
+
+## Recommended resources
+ - [Learn more](defender-for-devops-introduction.md) about Defender for DevOps.
+ - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+ - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+ - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
Title: 'Quickstart: Connect your Azure DevOps repositories to Microsoft Defender for Cloud' description: Learn how to connect your Azure DevOps repositories to Defender for Cloud. Previously updated : 09/20/2022 Last updated : 11/03/2022
By connecting your Azure DevOps repositories to Defender for Cloud, you'll exten
- **Defender for Cloud's Workload Protection features** - Extends Defender for Cloud's threat detection capabilities and advanced defenses to your Azure DevOps resources.
+API calls performed by Defender for Cloud count against the [Azure DevOps Global consumption limit](/azure/devops/integrate/concepts/rate-limits?view=azure-devops). For more information, see the [FAQ section](#faq).
+ ## Prerequisites - An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
By connecting your Azure DevOps repositories to Defender for Cloud, you'll exten
| Aspect | Details | |--|--| | Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. |
-| Pricing: | For pricing please see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing). |
+| Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing). |
| Required permissions: | **- Azure account:** with permissions to sign into Azure portal <br> **- Contributor:** on the Azure subscription where the connector will be created <br> **- Security Admin Role:** in Defender for Cloud <br> **- Organization Administrator:** in Azure DevOps <br> - In Azure DevOps, configure: Third-party applications gain access via OAuth, which must be set to `On` . [Learn more about OAuth](/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops)| | Regions: | Central US | | Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial clouds <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) |
The Defender for DevOps service automatically discovers the organizations, proje
- Learn how to [create your first pipeline](https://learn.microsoft.com/azure/devops/pipelines/create-first-pipeline?view=azure-devops&tabs=java%2Ctfs-2018-2%2Cbrowser).
+## FAQ
+
+### Do API calls made by Defender for Cloud count against my consumption limit?
+
+Yes, API calls made by Defender for Cloud count against the [Azure DevOps Global consumption limit](/azure/devops/integrate/concepts/rate-limits?view=azure-devops). Defender for Cloud makes calls on behalf of the user who onboards the connector.
+
+### Why is my organization list empty in the UI?
+
+If your organization list is empty in the UI after you onboard an Azure DevOps connector, ensure that the organization in Azure DevOps is connected to the Azure tenant of the user who authenticated the connector.
+
+For information on how to correct this issue, check out the [DevOps troubleshooting guide](troubleshooting-guide.md#troubleshoot-azure-devops-organization-connector-issues).
+ ## Next steps Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
Title: 'Quickstart: Connect your GitHub repositories to Microsoft Defender for Cloud' description: Learn how to connect your GitHub repositories to Defender for Cloud. Previously updated : 11/02/2022 Last updated : 11/03/2022
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
This page provides you with information about:
- Bug fixes - Deprecated functionality
+## May 2022
+
+Updates in May include:
+
+- [Multicloud settings of Servers plan are now available in connector level](#multicloud-settings-of-servers-plan-are-now-available-in-connector-level)
+- [Changes to vulnerability assessment](#changes-to-vulnerability-assessment)
+- [JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)](#jit-just-in-time-access-for-vms-is-now-available-for-aws-ec2-instances-preview)
+- [Add and remove the Defender profile for AKS clusters using the CLI](#add-and-remove-the-defender-profile-for-aks-clusters-using-the-cli)
+
+### Multicloud settings of Servers plan are now available in connector level
+
+There are now connector-level settings for Defender for Servers in multicloud.
+
+The new connector-level settings provide granularity for pricing and auto-provisioning configuration per connector, independently of the subscription.
+
+All auto-provisioning components available at the connector level (Azure Arc, MDE, and vulnerability assessments) are enabled by default, and the new configuration supports both [Plan 1 and Plan 2 pricing tiers](defender-for-servers-introduction.md#defender-for-servers-plans).
+
+Updates in the UI include a reflection of the selected pricing tier and the required components configured.
+++
+### Changes to vulnerability assessment
+
+Defender for Containers now displays vulnerabilities that have medium and low severities that aren't patchable.
+
+As part of this update, vulnerabilities that have medium and low severities are now shown, whether or not patches are available. This update provides maximum visibility, but still allows you to filter out undesired vulnerabilities by using the provided Disable rule.
++
+Learn more about [vulnerability management](deploy-vulnerability-assessment-tvm.md)
+
+### JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)
+
+When you [connect AWS accounts](quickstart-onboard-aws.md), JIT will automatically evaluate the network configuration of your instance's security groups and recommend which instances need protection for their exposed management ports. This is similar to how JIT works with Azure. When you onboard unprotected EC2 instances, JIT will block public access to the management ports, and only open them with authorized requests for a limited time frame.
+
+Learn how [JIT protects your AWS EC2 instances](just-in-time-access-overview.md#how-jit-operates-with-network-resources-in-azure-and-aws)
+
+### Add and remove the Defender profile for AKS clusters using the CLI
+
+The Defender profile (preview) is required for Defender for Containers to provide the runtime protections and collects signals from nodes. You can now use the Azure CLI to [add and remove the Defender profile](defender-for-containers-enable.md?tabs=k8s-deploy-cli%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Ck8s-remove-cli&pivots=defender-for-container-aks#use-azure-cli-to-deploy-the-defender-extension) for an AKS cluster.
+
+> [!NOTE]
+> This option is included in [Azure CLI 3.7 and above](/cli/azure/update-azure-cli).
+
+## April 2022
+
+Updates in April include:
+
+- [New Defender for Servers plans](#new-defender-for-servers-plans)
+- [Relocation of custom recommendations](#relocation-of-custom-recommendations)
+- [PowerShell script to stream alerts to Splunk and QRadar](#powershell-script-to-stream-alerts-to-splunk-and-ibm-qradar)
+- [Deprecated the Azure Cache for Redis recommendation](#deprecated-the-azure-cache-for-redis-recommendation)
+- [New alert variant for Microsoft Defender for Storage (preview) to detect exposure of sensitive data](#new-alert-variant-for-microsoft-defender-for-storage-preview-to-detect-exposure-of-sensitive-data)
+- [Container scan alert title augmented with IP address reputation](#container-scan-alert-title-augmented-with-ip-address-reputation)
+- [See the activity logs that relate to a security alert](#see-the-activity-logs-that-relate-to-a-security-alert)
+
+### New Defender for Servers plans
+
+Microsoft Defender for Servers is now offered in two incremental plans:
+
+- Defender for Servers Plan 2, formerly Defender for Servers
+- Defender for Servers Plan 1, provides support for Microsoft Defender for Endpoint only
+
+While Defender for Servers Plan 2 continues to provide protections from threats and vulnerabilities to your cloud and on-premises workloads, Defender for Servers Plan 1 provides endpoint protection only, powered by the natively integrated Defender for Endpoint. Read more about the [Defender for Servers plans](defender-for-servers-introduction.md#defender-for-servers-plans).
+
+If you have been using Defender for Servers until now, no action is required.
+
+In addition, Defender for Cloud also begins gradual support for the [Defender for Endpoint unified agent for Windows Server 2012 R2 and 2016](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292). Defender for Servers Plan 1 deploys the new unified agent to Windows Server 2012 R2 and 2016 workloads.
+
+### Relocation of custom recommendations
+
+Custom recommendations are those created by users and have no effect on the secure score. The custom recommendations can now be found under the All recommendations tab.
+
+Use the new "recommendation type" filter, to locate custom recommendations.
+
+Learn more in [Create custom security initiatives and policies](custom-security-policies.md).
+
+### PowerShell script to stream alerts to Splunk and IBM QRadar
+
+We recommend that you use Event Hubs and a built-in connector to export security alerts to Splunk and IBM QRadar. Now you can use a PowerShell script to set up the Azure resources needed to export security alerts for your subscription or tenant.
+
+Just download and run the PowerShell script. After you provide a few details of your environment, the script configures the resources for you. The script then produces output that you use in the SIEM platform to complete the integration.
+
+To learn more, see [Stream alerts to Splunk and QRadar](export-to-siem.md#stream-alerts-to-qradar-and-splunk).
+
+### Deprecated the Azure Cache for Redis recommendation
+
+The recommendation `Azure Cache for Redis should reside within a virtual network` (Preview) has been deprecated. We've changed our guidance for securing Azure Cache for Redis instances. We recommend the use of a private endpoint to restrict access to your Azure Cache for Redis instance, instead of a virtual network.
+
+### New alert variant for Microsoft Defender for Storage (preview) to detect exposure of sensitive data
+
+Microsoft Defender for Storage's alerts notify you when threat actors attempt to scan and expose, successfully or not, misconfigured, publicly open storage containers to try to exfiltrate sensitive information.
+
+To allow for faster triaging and response time, when exfiltration of potentially sensitive data may have occurred, we've released a new variation to the existing `Publicly accessible storage containers have been exposed` alert.
+
+The new alert, `Publicly accessible storage containers with potentially sensitive data have been exposed`, is triggered with a `High` severity level, after there has been a successful discovery of a publicly open storage container(s) with names that statistically have been found to rarely be exposed publicly, suggesting they might hold sensitive information.
+
+| Alert (alert type) | Description | MITRE tactic | Severity |
+|--|--|--|--|
+|**PREVIEW - Publicly accessible storage containers with potentially sensitive data have been exposed** <br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery.Sensitive)| Someone has scanned your Azure Storage account and exposed container(s) that allow public access. One or more of the exposed containers have names that indicate that they may contain sensitive data. <br> <br> This usually indicates reconnaissance by a threat actor that is scanning for misconfigured publicly accessible storage containers that may contain sensitive data. <br> <br> After a threat actor successfully discovers a container, they may continue by exfiltrating the data. <br> ✔ Azure Blob Storage <br> ✖ Azure Files <br> ✖ Azure Data Lake Storage Gen2 | Collection | High |
+
+### Container scan alert title augmented with IP address reputation
+
+An IP address's reputation can indicate whether the scanning activity originates from a known threat actor, or from an actor that is using the Tor network to hide their identity. Both of these indicators suggest that there's malicious intent. The IP address's reputation is provided by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684).
+
+The addition of the IP address's reputation to the alert title provides a way to quickly evaluate the intent of the actor, and thus the severity of the threat.
+
+The following alerts will include this information:
+
+- `Publicly accessible storage containers have been exposed`
+
+- `Publicly accessible storage containers with potentially sensitive data have been exposed`
+
+- `Publicly accessible storage containers have been scanned. No publicly accessible data was discovered`
+
+For example, the added information to the title of the `Publicly accessible storage containers have been exposed` alert will look like this:
+
+- `Publicly accessible storage containers have been exposed`**`by a suspicious IP address`**
+
+- `Publicly accessible storage containers have been exposed`**`by a Tor exit node`**
+
+All of the alerts for Microsoft Defender for Storage will continue to include threat intelligence information in the IP entity under the alert's Related Entities section.
+
+### See the activity logs that relate to a security alert
+
+As part of the actions you can take to [evaluate a security alert](managing-and-responding-alerts.md#respond-to-security-alerts), you can find the related platform logs in **Inspect resource context** to gain context about the affected resource.
+Microsoft Defender for Cloud identifies platform logs that are within one day of the alert.
+
+The platform logs can help you evaluate the security threat and identify steps that you can take to mitigate the identified risk.
## March 2022
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
You can configure the Microsoft Security DevOps tools on Azure Pipelines and Git
| [BinSkim](https://github.com/Microsoft/binskim) | Binary – Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
| [ESlint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
| [CredScan](https://secdevtools.azurewebsites.net/helpcredscan.html) (Azure DevOps Only) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks such as those in source code and configuration files. Common types: default passwords, SQL connection strings, certificates with private keys | Not Open Source |
-| [Template Analyze](https://github.com/Azure/template-analyzer)r | ARM template, Bicep file | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
+| [Template Analyzer](https://github.com/Azure/template-analyzer) | ARM template, Bicep file | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
| [Terrascan](https://github.com/tenable/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
| [Trivy](https://github.com/aquasecurity/trivy) | Container images, file systems, git repositories | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
We are announcing the addition of the new Defender Cloud Security Posture Manage
- Attack path analysis - Agentless scanning for machines
-Larn more about the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md).
+Learn more about the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md).
### MITRE ATT&CK framework mapping is now available also for AWS and GCP security recommendations
The new release contains the following capabilities:
|Guest accounts with write permissions on Azure resources should be removed|0354476c-a12a-4fcc-a79d-f0ab7ffffdbb|
|Guest accounts with read permissions on Azure resources should be removed|fde1c0c9-0fd2-4ecc-87b5-98956cbc1095|
|Blocked accounts with owner permissions on Azure resources should be removed|050ac097-3dda-4d24-ab6d-82568e7a50cf|
- |Blocked accounts with read and write permissions on Azure resources should be removed| 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 ||
+ |Blocked accounts with read and write permissions on Azure resources should be removed| 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
The recommendations although in preview, will appear next to the recommendations that are currently in GA.
These alerts inform you of an access denied anomaly, is detected for any of your
|--|--|--|--| | **Unusual access denied - User accessing high volume of key vaults denied**<br>(KV_DeniedAccountVolumeAnomaly) | A user or service principal has attempted access to anomalously high volume of key vaults in the last 24 hours. This anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access of key vault and the secrets contained within it. We recommend further investigations. | Discovery | Low | | **Unusual access denied - Unusual user accessing key vault denied**<br>(KV_UserAccessDeniedAnomaly) | A key vault access was attempted by a user that doesn't normally access it, this anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access of key vault and the secrets contained within it. | Initial Access, Discovery | Low |-
-## May 2022
-
-Updates in May include:
--- [Multicloud settings of Servers plan are now available in connector level](#multicloud-settings-of-servers-plan-are-now-available-in-connector-level)-- [JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)](#jit-just-in-time-access-for-vms-is-now-available-for-aws-ec2-instances-preview)-- [Add and remove the Defender profile for AKS clusters using the CLI](#add-and-remove-the-defender-profile-for-aks-clusters-using-the-cli)-
-### Multicloud settings of Servers plan are now available in connector level
-
-There are now connector-level settings for Defender for Servers in multicloud.
-
-The new connector-level settings provide granularity for pricing and auto-provisioning configuration per connector, independently of the subscription.
-
-All auto-provisioning components available in the connector-level (Azure Arc, MDE, and vulnerability assessments) are enabled by default, and the new configuration supports both [Plan 1 and Plan 2 pricing tiers](defender-for-servers-introduction.md#defender-for-servers-plans).
-
-Updates in the UI include a reflection of the selected pricing tier and the required components configured.
---
-### Changes to vulnerability assessment
-
-Defender for Containers now displays vulnerabilities that have medium and low severities that aren't patchable.
-
-As part of this update, vulnerabilities that have medium and low severities are now shown, whether or not patches are available. This update provides maximum visibility, but still allows you to filter out undesired vulnerabilities by using the provided Disable rule.
--
-Learn more about [vulnerability management](deploy-vulnerability-assessment-tvm.md)
-
-### JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)
-
-When you [connect AWS accounts](quickstart-onboard-aws.md), JIT will automatically evaluate the network configuration of your instance's security groups and recommend which instances need protection for their exposed management ports. This is similar to how JIT works with Azure. When you onboard unprotected EC2 instances, JIT will block public access to the management ports, and only open them with authorized requests for a limited time frame.
-
-Learn how [JIT protects your AWS EC2 instances](just-in-time-access-overview.md#how-jit-operates-with-network-resources-in-azure-and-aws)
-
-### Add and remove the Defender profile for AKS clusters using the CLI
-
-The Defender profile (preview) is required for Defender for Containers to provide the runtime protections and collects signals from nodes. You can now use the Azure CLI to [add and remove the Defender profile](defender-for-containers-enable.md?tabs=k8s-deploy-cli%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Ck8s-remove-cli&pivots=defender-for-container-aks#use-azure-cli-to-deploy-the-defender-extension) for an AKS cluster.
-
-> [!NOTE]
-> This option is included in [Azure CLI 3.7 and above](/cli/azure/update-azure-cli).
-
-## April 2022
-
-Updates in April include:
--- [New Defender for Servers plans](#new-defender-for-servers-plans)-- [Relocation of custom recommendations](#relocation-of-custom-recommendations)-- [PowerShell script to stream alerts to Splunk and QRadar](#powershell-script-to-stream-alerts-to-splunk-and-ibm-qradar)-- [Deprecated the Azure Cache for Redis recommendation](#deprecated-the-azure-cache-for-redis-recommendation)-- [New alert variant for Microsoft Defender for Storage (preview) to detect exposure of sensitive data](#new-alert-variant-for-microsoft-defender-for-storage-preview-to-detect-exposure-of-sensitive-data)-- [Container scan alert title augmented with IP address reputation](#container-scan-alert-title-augmented-with-ip-address-reputation)-- [See the activity logs that relate to a security alert](#see-the-activity-logs-that-relate-to-a-security-alert)-
-### New Defender for Servers plans
-
-Microsoft Defender for Servers is now offered in two incremental plans:
--- Defender for Servers Plan 2, formerly Defender for Servers-- Defender for Servers Plan 1, provides support for Microsoft Defender for Endpoint only-
-While Defender for Servers Plan 2 continues to provide protections from threats and vulnerabilities to your cloud and on-premises workloads, Defender for Servers Plan 1 provides endpoint protection only, powered by the natively integrated Defender for Endpoint. Read more about the [Defender for Servers plans](defender-for-servers-introduction.md#defender-for-servers-plans).
-
-If you have been using Defender for Servers until now no action is required.
-
-In addition, Defender for Cloud also begins gradual support for the [Defender for Endpoint unified agent for Windows Server 2012 R2 and 2016](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292). Defender for Servers Plan 1 deploys the new unified agent to Windows Server 2012 R2 and 2016 workloads.
-
-### Relocation of custom recommendations
-
-Custom recommendations are those created by users and have no effect on the secure score. The custom recommendations can now be found under the All recommendations tab.
-
-Use the new "recommendation type" filter, to locate custom recommendations.
-
-Learn more in [Create custom security initiatives and policies](custom-security-policies.md).
-
-### PowerShell script to stream alerts to Splunk and IBM QRadar
-
-We recommend that you use Event Hubs and a built-in connector to export security alerts to Splunk and IBM QRadar. Now you can use a PowerShell script to set up the Azure resources needed to export security alerts for your subscription or tenant.
-
-Just download and run the PowerShell script. After you provide a few details of your environment, the script configures the resources for you. The script then produces output that you use in the SIEM platform to complete the integration.
-
-To learn more, see [Stream alerts to Splunk and QRadar](export-to-siem.md#stream-alerts-to-qradar-and-splunk).
-
-### Deprecated the Azure Cache for Redis recommendation
-
-The recommendation `Azure Cache for Redis should reside within a virtual network` (Preview) has been deprecated. We've changed our guidance for securing Azure Cache for Redis instances. We recommend the use of a private endpoint to restrict access to your Azure Cache for Redis instance, instead of a virtual network.
-
-### New alert variant for Microsoft Defender for Storage (preview) to detect exposure of sensitive data
-
-Microsoft Defender for Storage alerts notify you when threat actors attempt, successfully or not, to scan and expose misconfigured, publicly open storage containers in order to exfiltrate sensitive information.
-
-To allow for faster triage and response when exfiltration of potentially sensitive data may have occurred, we've released a new variation of the existing `Publicly accessible storage containers have been exposed` alert.
-
-The new alert, `Publicly accessible storage containers with potentially sensitive data have been exposed`, is triggered with a `High` severity level after a successful discovery of publicly open storage container(s) with names that have statistically rarely been found exposed publicly, suggesting they might hold sensitive information.
-
-| Alert (alert type) | Description | MITRE tactic | Severity |
-|--|--|--|--|
-|**PREVIEW - Publicly accessible storage containers with potentially sensitive data have been exposed** <br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery.Sensitive)| Someone has scanned your Azure Storage account and exposed container(s) that allow public access. One or more of the exposed containers have names that indicate that they may contain sensitive data. <br> <br> This usually indicates reconnaissance by a threat actor that is scanning for misconfigured publicly accessible storage containers that may contain sensitive data. <br> <br> After a threat actor successfully discovers a container, they may continue by exfiltrating the data. <br> ✔ Azure Blob Storage <br> ✖ Azure Files <br> ✖ Azure Data Lake Storage Gen2 | Collection | High |
-
-### Container scan alert title augmented with IP address reputation
-
-An IP address's reputation can indicate whether the scanning activity originates from a known threat actor, or from an actor that is using the Tor network to hide their identity. Both of these indicators suggest that there's malicious intent. The IP address's reputation is provided by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684).
-
-The addition of the IP address's reputation to the alert title provides a way to quickly evaluate the intent of the actor, and thus the severity of the threat.
-
-The following alerts will include this information:
-
-- `Publicly accessible storage containers have been exposed`
-
-- `Publicly accessible storage containers with potentially sensitive data have been exposed`
-
-- `Publicly accessible storage containers have been scanned. No publicly accessible data was discovered`
-
-For example, the added information to the title of the `Publicly accessible storage containers have been exposed` alert will look like this:
-
-- `Publicly accessible storage containers have been exposed` **`by a suspicious IP address`**
-
-- `Publicly accessible storage containers have been exposed` **`by a Tor exit node`**
-
-All of the alerts for Microsoft Defender for Storage will continue to include threat intelligence information in the IP entity under the alert's Related Entities section.
-
-### See the activity logs that relate to a security alert
-
-As part of the actions you can take to [evaluate a security alert](managing-and-responding-alerts.md#respond-to-security-alerts), you can find the related platform logs in **Inspect resource context** to gain context about the affected resource.
-Microsoft Defender for Cloud identifies platform logs that are within one day of the alert.
-
-The platform logs can help you evaluate the security threat and identify steps that you can take to mitigate the identified risk.
digital-twins Concepts 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-3d-scenes-studio.md
description: Learn about 3D Scenes Studio (preview) for Azure Digital Twins. Previously updated : 05/04/2022 Last updated : 11/02/2022
You can use the **Elements** list to explore all the elements and active alerts
[!INCLUDE [digital-twins-3d-embed.md](../../includes/digital-twins-3d-embed.md)]
-## Recommended limits
+## Limits and performance
-When working with 3D Scenes Studio, it's recommended to stay within the following limits.
+When working with 3D Scenes Studio, it's recommended to stay within the following limits. If you exceed these recommended limits, you may experience degraded performance or unintended application behavior.
| Capability | Recommended limit | | | |
-| Number of elements | 50 |
+| Number of linked twins (including all unique primary twins and secondary twins on elements) | No limit, but consider performance implications as number of twins increases. For more detail, see [Refresh rate and performance](#refresh-rate-and-performance) below. |
| Size of 3D file | 100 MB |
-If you exceed these recommended limits, you may experience degraded performance or unintended application behavior.
+These limits are recommended because 3D Scenes Studio leverages the standard [Azure Digital Twins APIs](concepts-apis-sdks.md), and therefore is subject to the published [API rate limits](reference-service-limits.md#rate-limits). As the number of digital twins linked to the scenes increases, so does the amount of data that is pulled into your scene on a regular data refresh (see the [next part of this section](#refresh-rate-and-performance) for more detail about refresh rates). This means that you will see these additional API calls reflected in billing meters and operation throughput.
-These limits are recommended because 3D Scenes Studio leverages the standard [Azure Digital Twins APIs](concepts-apis-sdks.md), and therefore is subject to the published [API rate limits](reference-service-limits.md#rate-limits). 3D Scenes Studio requests all relevant digital twins data every **10 seconds**. As the number of digital twins linked to the scenes increases, so does the amount of data that is pulled on this cadence. This means that you will see these additional API calls reflected in billing meters and operation throughput.
+### Refresh rate and performance
+
+The default refresh rate of the 3D scene viewer starts at 10 seconds for fewer than 100 twins. It increases as the number of twins increases, at a rate of about one second for every 10 twins.
+
+The **minimum refresh rate** can also be configured manually, to exercise some control over how often data is pulled and the resulting impact on performance. You can configure the minimum refresh rate for the viewer to be anywhere between 10 seconds and one hour. The viewer will never drop below the minimum refresh rate that you set. The viewer may, however, raise the **actual** refresh rate as the number of twins increases, in an effort to improve performance.
+
+For instructions on how to configure the minimum refresh rate for the viewer, see [Configure minimum refresh rate](how-to-use-3d-scenes-studio.md#configure-minimum-refresh-rate).
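+
+As a rough illustration of that scaling, the following sketch estimates the effective refresh interval for a given twin count. It's an approximation based on the numbers above (10 seconds under 100 twins, about one extra second per 10 additional twins), not the service's exact algorithm, and the function name is invented for the example.
+
+```python
+def estimated_refresh_seconds(twin_count: int, minimum_refresh_seconds: int = 10) -> int:
+    """Approximate the 3D scene viewer's refresh interval for a twin count."""
+    interval = 10  # starting interval for fewer than 100 twins
+    if twin_count >= 100:
+        interval += (twin_count - 100) // 10  # ~1 extra second per 10 twins
+    # The viewer never refreshes faster than the configured minimum (10 seconds to 1 hour).
+    return max(interval, minimum_refresh_seconds)
+
+print(estimated_refresh_seconds(50))                               # 10
+print(estimated_refresh_seconds(300))                              # 30
+print(estimated_refresh_seconds(300, minimum_refresh_seconds=60))  # 60
+```
+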
## Next steps
digital-twins How To Use 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md
description: Learn how to use all the features of 3D Scenes Studio (preview) for Azure Digital Twins. Previously updated : 05/03/2022 Last updated : 11/02/2022
When looking at your scene in the viewer, you can use the **Select layers** butt
:::image type="content" source="media/how-to-use-3d-scenes-studio/layers-select-viewer.png" alt-text="Screenshot of 3D Scenes Studio in View mode. The layer selection is highlighted." lightbox="media/how-to-use-3d-scenes-studio/layers-select-viewer.png":::
+## Configure minimum refresh rate
+
+You can manually configure the **minimum refresh rate** for the 3D scene viewer, to exercise some control over how often data is pulled and the resulting impact on performance. You can configure the minimum refresh rate to be anywhere between 10 seconds and one hour.
+
+In the builder for a scene, select the **Scene configuration** button.
++
+Use the dropdown list to select a refresh rate option.
+
+While looking at the scene in the viewer, you can hover over the **Refresh** button to see the refresh rate setting and the time of the last refresh. You can also select it to refresh the scene manually.
++
## Modify theme

In either the builder or viewer for a scene, select the **Theme** icon to change the style, object colors, and background color of the display.
event-grid Custom Event To Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-function.md
Title: 'Quickstart: Send custom events to Azure Function - Event Grid' description: 'Quickstart: Use Azure Event Grid and Azure CLI or portal to publish a topic, and subscribe to that event. An Azure Function is used for the endpoint.' Previously updated : 09/28/2021 Last updated : 11/02/2022 ms.devlang: azurecli
ms.devlang: azurecli
# Quickstart: Route custom events to an Azure Function with Event Grid
-Azure Event Grid is an eventing service for the cloud. Azure Functions is one of the supported event handlers. In this article, you use the Azure portal to create a custom topic, subscribe to the custom topic, and trigger the event to view the result. You send the events to an Azure Function.
+[Azure Event Grid](overview.md) is an eventing service for the cloud. Azure Functions is one of the [supported event handlers](event-handlers.md). In this article, you use the Azure portal to create a custom topic, subscribe to the custom topic, and trigger the event to view the result. You send the events to an Azure Function.
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)]
-## Create Azure Function
+## Create Azure function app
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. On the left navigational menu, select **All services**.
+1. Select **Compute** in the list of **Categories**.
+1. Hover the mouse over **Function App** (don't select it), and then select **Create**.
+
+ :::image type="content" source="./media/custom-event-to-function/create-function-app-link.png" lightbox="./media/custom-event-to-function/create-function-app-link.png" alt-text="Screenshot showing the select of Create link for a Function App.":::
+1. On the **Basics** page of the **Create Function App** wizard, follow these steps:
+ 1. Select your **Azure subscription** in which you want to create the function app.
+ 1. Create a new **resource group** or select an existing resource group.
+ 1. Specify a **name** for the function app.
+ 1. Select **.NET** for **Runtime stack**.
+ 1. Select the **region** closest to you.
+ 1. Select **Next: Hosting** at the bottom of the page.
+
+ :::image type="content" source="./media/custom-event-to-function/create-function-app-page.png" alt-text="Screenshot showing the Basics tab of the Create Function App page.":::
+1. On the **Hosting** page, create a new storage account or select an existing storage account to be associated with the function app, and then select **Review + create** at the bottom of the page.
+
+ :::image type="content" source="./media/custom-event-to-function/create-function-app-hosting-page.png" alt-text="Screenshot showing the Hosting tab of the Create Function App page.":::
+1. On the **Review + create** page, review settings, and select **Create** at the bottom of the page to create the function app.
+1. Once the deployment is successful, select **Go to resource** to navigate to the home page for the function app.
+
+## Create a function
Before subscribing to the custom topic, create a function to handle the events.
-1. Create a function app using instructions from [Create a function app](../azure-functions/functions-get-started.md).
1. On the **Function App** page, select **Functions** on the left menu.
-1. Select **+Create** on the toolbar to create a function.
+1. Select **+ Create** on the toolbar to create a function.
+
+ :::image type="content" source="./media/custom-event-to-function/create-function-link.png" alt-text="Screenshot showing the selection of Create function link.":::
+
1. On the **Create Function** page, follow these steps:
    1. This step is optional. For **Development environment**, select the development environment that you want to use to work with the function code.
- 1. Select **Azure Event Grid Trigger** in the **Select a template** section.
- 1. Enter a name for the function. In this example, it's **HandleEventsFunc**.
+ 1. Select **Azure Event Grid Trigger** in the **Select a template** section. Use the scroll bar to the right of the list to scroll down if needed.
+ 1. In the **Template details** section in the bottom pane, enter a name for the function. In this example, it's **HandleEventsFunc**.
:::image type="content" source="./media/custom-event-to-function/function-event-grid-trigger.png" lightbox="./media/custom-event-to-function/function-event-grid-trigger.png" alt-text="Select Event Grid trigger.":::
-4. Use the **Code + Test** page to see the existing code for the function and update it.
+4. On the **Function** page for the **HandleEventsFunc**, select **Code + Test** on the left navigational menu.
:::image type="content" source="./media/custom-event-to-function/function-code-test-menu.png" alt-text="Image showing the selection Code + Test menu for an Azure function.":::
+5. Replace the code with the following code.
+ ```csharp
+ #r "Azure.Messaging.EventGrid"
+ #r "System.Memory.Data"
+
+ using Azure.Messaging.EventGrid;
+ using System;
+ using Microsoft.Extensions.Logging;
+
+ public static void Run(EventGridEvent eventGridEvent, ILogger log)
+ {
+ log.LogInformation(eventGridEvent.Data.ToString());
+ }
+ ```
+ :::image type="content" source="./media/custom-event-to-function/function-updated-code.png" alt-text="Screenshot showing the Code + Test view of an Azure function with the updated code.":::
+6. Select **Monitor** on the left menu, and then select **Logs**.
+
+ :::image type="content" source="./media/custom-event-to-function/monitor-page.png" alt-text="Screenshot showing the Monitor view the Azure function.":::
+7. Keep this window or tab of the browser open so that you can see the received event information.
## Create a custom topic
-An event grid topic provides a user-defined endpoint that you post your events to.
+An Event Grid topic provides a user-defined endpoint that you post your events to.
-1. Sign in to [Azure portal](https://portal.azure.com/).
-2. Select **All services** on the left navigational menu, search for **Event Grid**, and select **Event Grid Topics**.
+1. On a new tab of the web browser window, sign in to [Azure portal](https://portal.azure.com/).
+2. In the search bar at the top, search for **Event Grid Topics**, and select **Event Grid Topics**.
- :::image type="content" source="./media/custom-event-to-function/select-event-grid-topics.png" alt-text="Image showing the selection of Event Grid Topics.":::
-3. On the **Event Grid Topics** page, select **+ Add** on the toolbar.
+ :::image type="content" source="./media/custom-event-to-function/select-event-grid-topics.png" alt-text="Image showing the selection of Event Grid topics.":::
+3. On the **Event Grid Topics** page, select **+ Create** on the command bar.
- :::image type="content" source="./media/custom-event-to-function/add-event-grid-topic-button.png" alt-text="Image showing the Create button to create an event grid topic.":::
+ :::image type="content" source="./media/custom-event-to-function/add-event-grid-topic-button.png" alt-text="Screenshot showing the Create button to create an Event Grid topic.":::
4. On the **Create Topic** page, follow these steps:
   1. Select your **Azure subscription**.
   2. Select the same **resource group** from the previous steps.
   3. Provide a unique **name** for the custom topic. The topic name must be unique because it's represented by a DNS entry. Don't use the name shown in the image. Instead, create your own name - it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-".
- 4. Select a **location** for the event grid topic.
+ 4. Select a **location** for the Event Grid topic.
   5. Select **Review + create**.

      :::image type="content" source="./media/custom-event-to-function/create-custom-topic.png" alt-text="Image showing the Create Topic page.":::
1. On the **Review + create** page, review settings and select **Create**.
-5. After the custom topic has been created, select **Go to resource** link to see the following Event Grid Topic page for the topic you created.
+5. After the custom topic has been created, select the **Go to resource** link to see the following Event Grid topic page for the topic you created.
- :::image type="content" source="./media/custom-event-to-function/event-grid-topic-home-page.png" alt-text="Image showing the home page for your Event Grid custom topic.":::
+ :::image type="content" source="./media/custom-event-to-function/event-grid-topic-home-page.png" lightbox="./media/custom-event-to-function/event-grid-topic-home-page.png" alt-text="Image showing the home page for your Event Grid custom topic.":::
## Subscribe to custom topic
-You subscribe to an event grid topic to tell Event Grid which events you want to track, and where to send the events.
+You subscribe to an Event Grid topic to tell Event Grid which events you want to track, and where to send the events.
1. Now, on the **Event Grid Topic** page for your custom topic, select **+ Event Subscription** on the toolbar.
The first example uses Azure CLI. It gets the URL and key for the custom topic,
1. In the Azure portal, select **Cloud Shell**. Select **Bash** in the top-left corner of the Cloud Shell window. :::image type="content" source="./media/custom-event-quickstart-portal/cloud-shell-bash.png" alt-text="Image showing Cloud Shell - Bash window":::
+1. Set the `topicname` and `resourcegroupname` variables that will be used in the commands.
+
+ Replace `TOPICNAME` with the name of your Event Grid topic.
+
+ ```azurecli
+ topicname="TOPICNAME"
+ ```
+
+ Replace `RESOURCEGROUPNAME` with the name of the Azure resource group that contains the Event Grid topic.
+
+ ```azurecli
+ resourcegroupname="RESOURCEGROUPNAME"
+ ```
1. Run the following command to get the **endpoint** for the topic:

   ```azurecli
- endpoint=$(az eventgrid topic show --name <topic name> -g <resource group name> --query "endpoint" --output tsv)
+ endpoint=$(az eventgrid topic show --name $topicname -g $resourcegroupname --query "endpoint" --output tsv)
   ```
2. Run the following command to get the **key** for the custom topic:

   ```azurecli
- key=$(az eventgrid topic key list --name <topic name> -g <resource group name> --query "key1" --output tsv)
+ key=$(az eventgrid topic key list --name $topicname -g $resourcegroupname --query "key1" --output tsv)
   ```
3. Copy the following statement with the event definition, and press **ENTER**.
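   If you'd rather publish from code than from the shell, here's a minimal Python sketch of the same call. It isn't the statement used in this quickstart; the event fields follow the Event Grid custom event schema, the payload values are illustrative, and it assumes the third-party `requests` package is installed.

   ```python
   # Publish a custom event to the topic endpoint; `endpoint` and `key` are the
   # values retrieved by the commands above.
   import json
   from datetime import datetime, timezone

   import requests

   endpoint = "<your-topic-endpoint>"
   key = "<your-topic-key>"

   event = [{
       "id": "10001",
       "eventType": "recordInserted",
       "subject": "myapp/vehicles/motorcycles",
       "eventTime": datetime.now(timezone.utc).isoformat(),
       "data": {"make": "Ducati", "model": "Monster"},
       "dataVersion": "1.0",
   }]

   response = requests.post(
       endpoint,
       headers={"aeg-sas-key": key, "Content-Type": "application/json"},
       data=json.dumps(event),
   )
   response.raise_for_status()  # Event Grid returns 200 OK when the event is accepted
   ```
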
The second example uses PowerShell to perform similar steps.
   ```powershell
   $resourceGroupName = <resource group name>
+ ```
+
+ ```powershell
   $topicName = <topic name>
   ```
3. Run the following commands to get the **endpoint** and the **keys** for the topic:
The second example uses PowerShell to perform similar steps.
### Verify that function received the event

You've triggered the event, and Event Grid sent the message to the endpoint you configured when subscribing. Navigate to your Event Grid triggered function and open the logs. You should see a copy of the data payload of the event in the logs. If you don't, make sure the logs window is open (or select reconnect), and then try sending a test event again.

## Clean up resources

If you plan to continue working with this event, don't clean up the resources created in this article. Otherwise, delete the resources you created in this article.
event-grid Subscribe To Sap Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-sap-events.md
Last updated 10/25/2022
# Subscribe to events published by SAP

This article describes steps to subscribe to events published by an SAP S/4HANA system.
-> [!NOTE]
-> See the [New SAP events on Azure Event Grid](https://techcommunity.microsoft.com/t5/messaging-on-azure-blog/new-sap-events-on-azure-event-grid/ba-p/3663372) for an announcement of this feature.
-
## High-level steps

The common steps to subscribe to events published by any partner, including SAP, are described in [subscribe to partner events](subscribe-to-partner-events.md). For your quick reference, the steps are provided again here with the addition of a step to make sure that your SAP system has the required components. This article deals with steps 1 and 3.
firewall Deploy Ps Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-ps-policy.md
description: In this article, you learn how to deploy and configure Azure Firewa
Previously updated : 05/03/2021 Last updated : 11/03/2022 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
You'll define the outbound type to use the UDR that already exists on the subnet
> For more information on outbound type UDR including limitations, see [**egress outbound type UDR**](../aks/egress-outboundtype.md#limitations).

> [!TIP]
-> Additional features can be added to the cluster deployment such as [**Private Cluster**](../aks/private-clusters.md).
+> Additional features can be added to the cluster deployment such as [**Private Cluster**](../aks/private-clusters.md) or changing the [**OS SKU**](../aks/cluster-configuration.md#mariner-os).
> > The AKS feature for [**API server authorized IP ranges**](../aks/api-server-authorized-ip-ranges.md) can be added to limit API server access to only the firewall's public endpoint. The authorized IP ranges feature is denoted in the diagram as optional. When enabling the authorized IP range feature to limit API server access, your developer tools must use a jumpbox from the firewall's virtual network or you must add all developer endpoints to the authorized IP range.
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
Cache behavior and duration can be configured in Rules Engine. Rules Engine cach
* When *caching* is **enabled**, the cache behavior is different based on the cache behavior value selected.
  * **Honor origin**: Azure Front Door will always honor the origin response header directive. If the origin directive is missing, Azure Front Door will cache contents anywhere from 1 to 3 days.
- * **Override always**: Azure Front Door will always override with the cache duration, meaning that it will cache the contents for the cache duration ignoring the values from origin response directives.
+ * **Override always**: Azure Front Door will always override with the cache duration, meaning that it will cache the contents for the cache duration, ignoring the values from origin response directives. This behavior will only be applied if the response is cacheable.
* **Override if origin missing**: If the origin doesn't return caching TTL values, Azure Front Door will use the specified cache duration. This behavior will only be applied if the response is cacheable.

> [!NOTE]
frontdoor Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/managed-identity.md
+
+ Title: Use managed identities with Azure Front Door Standard/Premium (Preview)
+description: This article will show you how to set up managed identities to use with your Azure Front Door Standard or Premium profile.
++++ Last updated : 11/02/2022+++
+# Use managed identities with Azure Front Door Standard/Premium (Preview)
+
+Azure Front Door also supports using managed identities to access Key Vault certificates. A managed identity generated by Azure Active Directory (Azure AD) allows your Azure Front Door instance to easily and securely access other Azure AD-protected resources, such as Azure Key Vault. Azure manages this identity, so you don't have to create or rotate any secrets. For more information about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!NOTE]
+> Once you enable managed identities in Azure Front Door and grant proper permissions to access Key Vault, Azure Front Door will always use managed identities to access Key Vault for the customer certificate.
+>
+> You can grant two types of identities to an Azure Front Door profile:
+> * A **system-assigned** identity is tied to your service and is deleted if your service is deleted. The service can have only **one** system-assigned identity.
+> * A **user-assigned** identity is a standalone Azure resource that can be assigned to your service. The service can have **multiple** user-assigned identities.
+>
+> Managed identities are specific to the Azure AD tenant where your Azure subscription is hosted. They don't get updated if a subscription gets moved to a different directory. If a subscription gets moved, you'll need to recreate and configure the identities.
+
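+To see the same credential flow in application code, here's a minimal sketch using the `azure-identity` and `azure-keyvault-secrets` Python packages. This isn't Front Door's internal implementation; it illustrates how a managed identity reads a Key Vault secret without any stored credentials, and the vault and secret names are placeholders.
+
+```python
+# Minimal sketch: read a Key Vault secret with a managed identity (no stored secrets).
+from azure.identity import ManagedIdentityCredential
+from azure.keyvault.secrets import SecretClient
+
+# Pass client_id for a user-assigned identity; omit it for a system-assigned identity.
+credential = ManagedIdentityCredential(client_id="<user-assigned-identity-client-id>")
+
+client = SecretClient(vault_url="https://<your-key-vault>.vault.azure.net", credential=credential)
+secret = client.get_secret("<your-secret-name>")  # requires Get permission on secrets
+print(secret.name)
+```
+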
+## Prerequisites
+
+Before you can set up managed identities for Front Door, you must have a Front Door Standard or Premium profile. To create an Azure Front Door profile, see [create an Azure Front Door](create-front-door-portal.md).
+
+## Enable managed identity
+
+1. Go to an existing Azure Front Door Standard or Premium profile. Select **Identity (preview)** under *Settings*.
+
+ :::image type="content" source="./media/managed-identity/overview.png" alt-text="Screenshot of the identity button under settings for a Front Door profile.":::
+
+1. Select either **System assigned** or **User assigned**.
+
+ * **System assigned** - a managed identity is created for the Azure Front Door profile lifecycle and is used to access a Key Vault.
+
+ * **User assigned** - a standalone managed identity resource used to authenticate to a Key Vault and has its own lifecycle.
+
+### System assigned
+
+1. Toggle the *Status* to **On** and then select **Save**.
+
+ :::image type="content" source="./media/managed-identity/system-assigned.png" alt-text="Screenshot of the system assigned managed identity configuration page.":::
+
+1. You'll be prompted with a message to confirm you would like to create a system managed identity for the Front Door profile. Select **Yes** to confirm.
+
+ :::image type="content" source="./media/managed-identity/system-assigned-confirm.png" alt-text="Screenshot of the system assigned managed identity confirmation message.":::
+
+1. Once the system assigned managed identity has been created and registered with Azure AD, you can use the **Object (principal) ID** to allow Azure Front Door access to your Key Vault.
+
+ :::image type="content" source="./media/managed-identity/system-assigned-created.png" alt-text="Screenshot of the system assigned managed identity registered with Azure Active Directory.":::
+
+### User assigned
+
+1. You must have a user managed identity already created. For more information, see [create a user assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+
+1. Select the **User assigned** tab and then select **+ Add**.
+
+ :::image type="content" source="./media/managed-identity/user-assigned.png" alt-text="Screenshot of the user assigned managed identity configuration page.":::
+
+1. Search for and select the user assigned managed identity. Then select **Add** to add the user managed identity to the Azure Front Door profile.
+
+ :::image type="content" source="./media/managed-identity/add-user-managed-identity.png" alt-text="Screenshot of the add user assigned managed identity page.":::
+
+1. You'll now see the name of the user assigned managed identity you selected shown in the Azure Front Door profile.
+
+ :::image type="content" source="./media/managed-identity/user-assigned-configured.png" alt-text="Screenshot of the add user assigned managed identity added to Front Door profile.":::
+
+## Configure Key Vault access policy
+
+1. Navigate to your Azure Key Vault.
+
+ :::image type="content" source="./media/managed-identity/key-vault-list.png" alt-text="Screenshot of the Key Vault resource list.":::
+
+1. Select **Access policies** from under *Settings* and then select **+ Create**.
+
+ :::image type="content" source="./media/managed-identity/access-policies.png" alt-text="Screenshot of the access policies page for a Key Vault.":::
+
+1. On the **Permissions** tab of the *Create an access policy* page, select **List** and **Get** under *Secret permissions*. Then select **Next** to configure the next tab.
+
+ :::image type="content" source="./media/managed-identity/permissions.png" alt-text="Screenshot of the permissions tab for the Key Vault access policy.":::
+
+1. On the *Principal* tab, paste the **object (principal) ID** if you're using a system managed identity, or enter a **name** if you're using a user assigned managed identity. Then select **Next** to configure the next tab.
+
+ :::image type="content" source="./media/managed-identity/system-principal.png" alt-text="Screenshot of the principal tab for the Key Vault access policy.":::
+
+1. On the *Application* tab, the application has already been selected for you. Select **Next** to go to the *Review + create* tab.
+
+ :::image type="content" source="./media/managed-identity/application.png" alt-text="Screenshot of the application tab for the Key Vault access policy.":::
+
+1. Review the access policy settings and then select **Create** to set up the access policy.
+
+ :::image type="content" source="./media/managed-identity/create.png" alt-text="Screenshot of the review and create tab for the Key Vault access policy.":::
+
+## Verify access
+
+1. Go to the Azure Front Door profile where you enabled managed identity and select **Secrets** under *Settings*.
+
+ :::image type="content" source="./media/managed-identity/secrets.png" alt-text="Screenshot of accessing secrets from under settings of a Front Door profile.":::
+
+1. Confirm **Managed identity** appears under the *Access role* column for the certificate used in Front Door.
+
+ :::image type="content" source="./media/managed-identity/confirm-set-up.png" alt-text="Screenshot of Azure Front Door using managed identity to access certificate in Key Vault.":::
+
+## Next steps
+
+* Learn how to [configure HTTPS on an Azure Front Door custom domain](standard-premium/how-to-configure-https-custom-domain.md).
+* Learn more about [End-to-end TLS encryption](end-to-end-tls.md).
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
Azure Policy supports the following types of effect:
- **Deny**: generates an event in the activity log and fails the request
- **DeployIfNotExists**: deploys a related resource if it doesn't already exist
- **Disabled**: doesn't evaluate resources for compliance to the policy rule
-- **Modify**: adds, updates, or removes the defined tags from a resource or subscription
+- **Modify**: adds, updates, or removes the defined set of fields in the request
- **EnforceOPAConstraint** (deprecated): configures the Open Policy Agent admissions controller with Gatekeeper v3 for self-managed Kubernetes clusters on Azure
- **EnforceRegoPolicy** (deprecated): configures the Open Policy Agent admissions controller with
hdinsight Secure Spark Kafka Streaming Integration Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/secure-spark-kafka-streaming-integration-scenario.md
+
+ Title: Secure Spark and Kafka ΓÇô Spark streaming integration scenario - Azure HDInsight
+description: Learn how to secure Spark and Kafka streaming integration.
++++ Last updated : 11/03/2022++
+# Secure Spark and Kafka ΓÇô Spark streaming integration scenario
+
+In this document, you'll learn how to execute a Spark job in a secure Spark cluster that reads from a topic in a secure Kafka cluster, provided that the clusters are in the same virtual network or in peered virtual networks.
+
+**Prerequisites**
+
+* Create a secure Kafka cluster and a secure Spark cluster with the same Microsoft Azure Active Directory Domain Services (Azure AD DS) domain and the same virtual network. If you prefer not to create both clusters in the same virtual network, you can create them in two separate virtual networks and peer the virtual networks.
+* If your clusters are in different virtual networks, see [Connect virtual networks with virtual network peering using the Azure portal](/azure/virtual-network/tutorial-connect-virtual-networks-portal)
+* Create keytabs for two users. For example, `alicetest` and `bobadmin`.
+
+## What is a keytab?
+
+A keytab is a file containing pairs of Kerberos principals and encrypted keys (which are derived from the Kerberos password). You can use a keytab file to authenticate to various remote systems using Kerberos without entering a password.
+
+For more information about this topic, see
+
+1. [KTUTIL](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/admin_commands/ktutil.html)
+
+1. [Creating a Kerberos principal and keytab file](https://www.ibm.com/docs/en/pasc/1.1?topic=file-creating-kerberos-principal-keytab)
+
+```
+ktutil
+ktutil: addent -password -p user1@TEST.COM -k 1 -e RC4-HMAC
+Password for user1@TEST.COM:
+ktutil: wkt user1.keytab
+ktutil: q
+```
+
+3. Create a Spark streaming Java application that reads from Kafka topics. This document uses a DirectKafkaWordCount example based on the Spark streaming examples from https://github.com/apache/spark/blob/branch-2.3/examples/src/main/java/org/apache/spark/examples/streaming/JavaDirectKafkaWordCount.java
+
+### High-level walkthrough of the scenarios
+
+Set up on Kafka cluster:
+1. Create topics `alicetopic2`, `bobtopic2`
+1. Produce data to topics `alicetopic2`, `bobtopic2`
+1. Set up Ranger policy to allow `alicetest` user to read from `alicetopic*`
+1. Set up Ranger policy to allow `bobadmin` user to read from `*`
++
+### Scenarios to be executed on Spark cluster
+1. Consume data from `alicetopic2` as the `alicetest` user. The Spark job should run successfully, and the count of the words in the topic should appear in the YARN UI. The Ranger audit records in the Kafka cluster show that access is allowed.
+1. Consume data from `bobtopic2` as the `alicetest` user. The Spark job fails with `org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [bobtopic2]`. The Ranger audit records in the Kafka cluster show that access is denied.
+1. Consume data from `alicetopic2` as the `bobadmin` user. The Spark job should run successfully, and the count of the words in the topic should appear in the YARN UI. The Ranger audit records in the Kafka cluster show that access is allowed.
+1. Consume data from `bobtopic2` as the `bobadmin` user. The Spark job should run successfully, and the count of the words in the topic should appear in the YARN UI. The Ranger audit records in the Kafka cluster show that access is allowed.
++
+### Steps to be performed on Kafka cluster
+
+In the Kafka cluster, set up the Ranger policies and produce data to the topics, as explained in this section:
+
+1. Go to the Ranger UI on the Kafka cluster and set up two Ranger policies
+
+1. Add a Ranger policy for `alicetest` with consume access to topics with wildcard pattern `alicetopic*`
+
+1. Add a Ranger policy for `bobadmin` with all accesses to all topics with wildcard pattern `*`
+
+1. Execute the commands below based on your parameter values
+
+ ```
+ sshuser@hn0-umasec:~$ sudo apt -y install jq
+ sshuser@hn0-umasec:~$ export clusterName='YOUR_CLUSTER_NAME'
+ sshuser@hn0-umasec:~$ export TOPICNAME='YOUR_TOPIC_NAME'
+ sshuser@hn0-umasec:~$ export password='YOUR_SSH_PASSWORD'
+ sshuser@hn0-umasec:~$ export KAFKABROKERS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2);
+ sshuser@hn0-umasec:~$ export KAFKAZKHOSTS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/ZOOKEEPER/components/ZOOKEEPER_SERVER | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2);
+ sshuser@hn0-umasec:~$ echo $KAFKABROKERS
+ wn0-umasec.securehadooprc.onmicrosoft.com:9092,
+ wn1-umasec.securehadooprc.onmicrosoft.com:9092
+ ```
+1. Create a keytab for user `bobadmin` using the `ktutil` tool. Let's call this file `bobadmin.keytab`.
+
+ ```
+ sshuser@hn0-umasec:~$ ktutil
+ ktutil: addent -password -p bobadmin@SECUREHADOOPRC.ONMICROSOFT.COM -k 1 -e RC4-HMAC
+ Password for <username>@<DOMAIN.COM>
+ ktutil: wkt bobadmin.keytab
+ ktutil: q
+ # kinit the created keytab
+ sudo kinit bobadmin@SECUREHADOOPRC.ONMICROSOFT.COM -t bobadmin.keytab
+ ```
+1. Create a `bobadmin_jaas.conf`
+
+ ```
+ KafkaClient {
+ com.sun.security.auth.module.Krb5LoginModule required
+ useKeyTab=true
+ storeKey=true
+ keyTab="./bobadmin.keytab"
+ useTicketCache=false
+ serviceName="kafka"
+ principal="bobadmin@SECUREHADOOPRC.ONMICROSOFT.COM";
+ };
+ ```
+1. Create topics `alicetopic2` and `bobtopic2` as `bobadmin`
+
+ ```
+ sshuser@hn0-umasec:~$ java -jar -Djava.security.auth.login.config=bobadmin_jaas.conf kafka-producer-consumer.jar create alicetopic2 $KAFKABROKERS
+ sshuser@hn0-umasec:~$ java -jar -Djava.security.auth.login.config=bobadmin_jaas.conf kafka-producer-consumer.jar create bobtopic2 $KAFKABROKERS
+ ```
+1. Produce data to `alicetopic2` as `bobadmin`
+
+ ```
+ sshuser@hn0-umasec:~$ java -jar -Djava.security.auth.login.config=bobadmin_jaas.conf kafka-producer-consumer.jar producer alicetopic2 $KAFKABROKERS
+ ```
+1. Produce data to `bobtopic2` as `bobadmin`
+
+ ```
+ sshuser@hn0-umasec:~$ java -jar -Djava.security.auth.login.config=bobadmin_jaas.conf kafka-producer-consumer.jar producer bobtopic2 $KAFKABROKERS
+ ```
+
+### Set up steps to be performed on Spark cluster
+
+In the Spark cluster, add entries for the Kafka worker nodes in `/etc/hosts` on the Spark nodes, create keytabs and JAAS config files, and run `spark-submit` to submit a Spark job that reads from the Kafka topic:
+
+1. SSH into the Spark cluster with the sshuser credentials
+
+1. Make entries for the Kafka worker nodes in `/etc/hosts` of the Spark cluster.
+
+ > [!Note]
+ > Make the entry for these Kafka worker nodes on every Spark node (head nodes and worker nodes). You can get these details from the `/etc/hosts` file on the Kafka cluster's head node.
+
+ ```
+ 10.3.16.118 wn0-umasec.securehadooprc.onmicrosoft.com wn0-umasec wn0-umasec.securehadooprc.onmicrosoft.com. wn0-umasec.cu1cgjaim53urggr4psrgczloa.cx.internal.cloudapp.net
+
+ 10.3.16.145 wn1-umasec.securehadooprc.onmicrosoft.com wn1-umasec wn1-umasec.securehadooprc.onmicrosoft.com. wn1-umasec.cu1cgjaim53urggr4psrgczloa.cx.internal.cloudapp.net
+
+ 10.3.16.176 wn2-umasec.securehadooprc.onmicrosoft.com wn2-umasec wn2-umasec.securehadooprc.onmicrosoft.com. wn2-umasec.cu1cgjaim53urggr4psrgczloa.cx.internal.cloudapp.net
+ ```
+1. Create a keytab for user `bobadmin` using the `ktutil` tool. Let's call this file `bobadmin.keytab`.
+
+1. Create a keytab for user `alicetest` using the `ktutil` tool. Let's call this file `alicetest.keytab`.
+
+1. Create a `bobadmin_jaas.conf` as shown in the following sample
+
+ ```
+ KafkaClient {
+ com.sun.security.auth.module.Krb5LoginModule required
+ useKeyTab=true
+ storeKey=true
+ keyTab="./bobadmin.keytab"
+ useTicketCache=false
+ serviceName="kafka"
+ principal="bobadmin@SECUREHADOOPRC.ONMICROSOFT.COM";
+ };
+ ```
+1. Create an `alicetest_jaas.conf` as shown in the following sample
+ ```
+ KafkaClient {
+ com.sun.security.auth.module.Krb5LoginModule required
+ useKeyTab=true
+ storeKey=true
+ keyTab="./alicetest.keytab"
+ useTicketCache=false
+ serviceName="kafka"
+ principal="alicetest@SECUREHADOOPRC.ONMICROSOFT.COM";
+ };
+ ```
+1. Get the spark-streaming jar ready.
+
+1. Build your own jar that reads from a Kafka topic by following the example and instructions [here](https://github.com/apache/spark/blob/branch-2.3/examples/src/main/scala/org/apache/spark/examples/streaming/DirectKafkaWordCount.scala) for `DirectKafkaWordCount`
+
+> [!Note]
+> For convenience, the sample jar used in this example was built from https://github.com/markgrover/spark-secure-kafka-app by following these steps:
+
+```
+sudo apt install maven
+git clone https://github.com/markgrover/spark-secure-kafka-app.git
+cd spark-secure-kafka-app
+mvn clean package
+cd target
+```
+
+## Scenario 1
+
+From the Spark cluster, reading from Kafka topic `alicetopic2` as user `alicetest` is allowed
+
+1. Run the `kdestroy` command to remove the Kerberos tickets from the credential cache:
+
+ ```
+ sshuser@hn0-umaspa:~$ kdestroy
+ ```
+1. Run the `kinit` command as `alicetest`:
+
+ ```
+ sshuser@hn0-umaspa:~$ kinit alicetest@SECUREHADOOPRC.ONMICROSOFT.COM -t alicetest.keytab
+ ```
+
+1. Run a `spark-submit` command to read from Kafka topic `alicetopic2` as `alicetest`:
+
+ ```
+ spark-submit --num-executors 1 --master yarn --deploy-mode cluster --packages <list of packages the jar depends on> --repositories <repository for the dependency packages> --files alicetest_jaas.conf#alicetest_jaas.conf,alicetest.keytab#alicetest.keytab --driver-java-options "-Djava.security.auth.login.config=./alicetest_jaas.conf" --class <classname to execute in jar> --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./alicetest_jaas.conf" <path to jar> <kafkabrokerlist> <topicname> false
+ ```
+ For example,
+
+ ```
+ sshuser@hn0-umaspa:~$ spark-submit --num-executors 1 --master yarn --deploy-mode cluster --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.2.3.1.0.4-1 --repositories http://repo.hortonworks.com/content/repositories/releases/ --files alicetest_jaas.conf#alicetest_jaas.conf,alicetest.keytab#alicetest.keytab --driver-java-options "-Djava.security.auth.login.config=./alicetest_jaas.conf" --class com.cloudera.spark.examples.DirectKafkaWordCount --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./alicetest_jaas.conf" /home/sshuser/spark-secure-kafka-app/target/spark-secure-kafka-app-1.0-SNAPSHOT.jar 10.3.16.118:9092 alicetopic2 false
+ ```
+
+ If you see the following error, it denotes a DNS (Domain Name System) issue. Make sure to check the Kafka worker node entries in the `/etc/hosts` file on the Spark cluster.
+
+ ```
+ Caused by: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))
+ at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:770)
+ at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
+ at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
+ at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
+ ```
+
+1. From the YARN UI, access the YARN job output. You can see that the `alicetest` user is able to read from `alicetopic2`, and the word count appears in the output.
+
+1. Below are the detailed steps on how to check the application output from the YARN UI:
+
+ 1. Go to YARN UI and open your application. Wait for the job to go to RUNNING state. You'll see the application details as below.
+
+ 1. Click on Logs. You'll see the list of logs as shown below.
+
+ 1. Click on 'stdout'. You'll see the output with the count of words from your Kafka topic.
+
+ 1. On the Kafka cluster's Ranger UI, audit logs for the same will be shown.
+
+## Scenario 2
+
+From the Spark cluster, reading Kafka topic `bobtopic2` as user `alicetest` is denied
+
+1. Run the `kdestroy` command to remove the Kerberos tickets from the credential cache:
+
+ ```
+ sshuser@hn0-umaspa:~$ kdestroy
+ ```
+1. Run the `kinit` command as `alicetest`:
+
+ ```
+ sshuser@hn0-umaspa:~$ kinit alicetest@SECUREHADOOPRC.ONMICROSOFT.COM -t alicetest.keytab
+ ```
+
+1. Run a `spark-submit` command to read from Kafka topic `bobtopic2` as `alicetest`:
+
+ ```
+ spark-submit --num-executors 1 --master yarn --deploy-mode cluster --packages <list of packages the jar depends on> --repositories <repository for the dependency packages> --files alicetest_jaas.conf#alicetest_jaas.conf,alicetest.keytab#alicetest.keytab --driver-java-options "-Djava.security.auth.login.config=./alicetest_jaas.conf" --class <classname to execute in jar> --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./alicetest_jaas.conf" <path to jar> <kafkabrokerlist> <topicname> false
+ ```
+
+ For example,
+
+ ```
+ sshuser@hn0-umaspa:~$ spark-submit --num-executors 1 --master yarn --deploy-mode cluster --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.2.3.1.0.4-1 --repositories http://repo.hortonworks.com/content/repositories/releases/ --files alicetest_jaas.conf#alicetest_jaas.conf,alicetest.keytab#alicetest.keytab --driver-java-options "-Djava.security.auth.login.config=./alicetest_jaas.conf" --class com.cloudera.spark.examples.DirectKafkaWordCount --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./alicetest_jaas.conf" /home/sshuser/spark-secure-kafka-app/target/spark-secure-kafka-app-1.0-SNAPSHOT.jar 10.3.16.118:9092 bobtopic2 false
+ ```
+
+1. From the YARN UI, access the YARN job output. You can see that the `alicetest` user is unable to read from `bobtopic2`, and the job fails.
+
+1. On the Kafka cluster's Ranger UI, audit logs for the same will be shown.
+
+## Scenario 3
+
+From the Spark cluster, reading from Kafka topic `alicetopic2` as user `bobadmin` is allowed
+
+1. Run the `kdestroy` command to remove the Kerberos tickets from the credential cache
+ ```
+ sshuser@hn0-umaspa:~$ kdestroy
+ ```
+1. Run the `kinit` command as `bobadmin`
+
+ ```
+ sshuser@hn0-umaspa:~$ kinit bobadmin@SECUREHADOOPRC.ONMICROSOFT.COM -t bobadmin.keytab
+ ```
+
+1. Run a `spark-submit` command to read from Kafka topic `alicetopic2` as `bobadmin`
+
+ ```
+ spark-submit --num-executors 1 --master yarn --deploy-mode cluster --packages <list of packages the jar depends on> --repositories <repository for the dependency packages> --files bobadmin_jaas.conf#bobadmin_jaas.conf,bobadmin.keytab#bobadmin.keytab --driver-java-options "-Djava.security.auth.login.config=./bobadmin_jaas.conf" --class <classname to execute in jar> --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./bobadmin_jaas.conf" <path to jar> <kafkabrokerlist> <topicname> false
+ ```
+
+ For example,
+ ```
+ spark-submit --num-executors 1 --master yarn --deploy-mode cluster --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.2.3.1.0.4-1 --repositories http://repo.hortonworks.com/content/repositories/releases/ --files bobadmin_jaas.conf#bobadmin_jaas.conf,bobadmin.keytab#bobadmin.keytab --driver-java-options "-Djava.security.auth.login.config=./bobadmin_jaas.conf" --class com.cloudera.spark.examples.DirectKafkaWordCount --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./bobadmin_jaas.conf" /home/sshuser/spark-secure-kafka-app/target/spark-secure-kafka-app-1.0-SNAPSHOT.jar wn0-umasec:9092, wn1-umasec:9092 alicetopic2 false
+ ```
+
+1. From the YARN UI, access the YARN job output. You can see that the `bobadmin` user is able to read from `alicetopic2`, and the count of words is seen in the output.
+
+1. On the Kafka cluster's Ranger UI, audit logs for the same will be shown.
+
+## Scenario 4
+
+From the Spark cluster, reading from Kafka topic `bobtopic2` as user `bobadmin` is allowed.
+
+1. Remove the Kerberos tickets from the credential cache by running the following command
+ ```
+ sshuser@hn0-umaspa:~$ kdestroy
+ ```
+
+1. Run `kinit` with `bobadmin`
+ ```
+ sshuser@hn0-umaspa:~$ kinit bobadmin@SECUREHADOOPRC.ONMICROSOFT.COM -t bobadmin.keytab
+ ```
+1. Run a `spark-submit` command to read from Kafka topic `bobtopic2` as `bobadmin`
+ ```
+ spark-submit --num-executors 1 --master yarn --deploy-mode cluster --packages <list of packages the jar depends on> --repositories <repository for the dependency packages> --files bobadmin_jaas.conf#bobadmin_jaas.conf,bobadmin.keytab#bobadmin.keytab --driver-java-options "-Djava.security.auth.login.config=./bobadmin_jaas.conf" --class <classname to execute in jar> --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./bobadmin_jaas.conf" <path to jar> <kafkabrokerlist> <topicname> false
+ ```
+ For example,
+ ```
+ spark-submit --num-executors 1 --master yarn --deploy-mode cluster --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.2.3.1.0.4-1 --repositories http://repo.hortonworks.com/content/repositories/releases/ --files bobadmin_jaas.conf#bobadmin_jaas.conf,bobadmin.keytab#bobadmin.keytab --driver-java-options "-Djava.security.auth.login.config=./bobadmin_jaas.conf" --class com.cloudera.spark.examples.DirectKafkaWordCount --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./bobadmin_jaas.conf" /home/sshuser/spark-secure-kafka-app/target/spark-secure-kafka-app-1.0-SNAPSHOT.jar wn0-umasec:9092, wn1-umasec:9092 bobtopic2 false
+ ```
+1. From the YARN UI, access the YARN job output. You can see that the `bobadmin` user is able to read from `bobtopic2`, and the count of words is seen in the output.
+
+1. On the Kafka cluster's Ranger UI, audit logs for the same will be shown.
+
+
+## Next steps
+
+* [Set up TLS encryption and authentication for Apache Kafka in Azure HDInsight](apache-kafka-ssl-encryption-authentication.md)
industry Generate Soil Moisture Map In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/generate-soil-moisture-map-in-azure-farmbeats.md
Ensure the following:
## Create a farm
-A farm is a geographical area of interest for which you want to create a soil moisture heatmap. You can create a farm using the [Farms API](https://aka.ms/FarmBeatsDatahubSwagger) or in the [FarmsBeats Accelerator UI](manage-farms-in-azure-farmbeats.md#create-farms)
+A farm is a geographical area of interest for which you want to create a soil moisture heatmap. You can create a farm using the Farms API or in the [FarmBeats Accelerator UI](manage-farms-in-azure-farmbeats.md#create-farms).
## Deploy sensors
industry Get Drone Imagery In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/get-drone-imagery-in-azure-farmbeats.md
Provide the following information to your device provider to enable integration
Follow these steps.
-1. Download this [script](https://aka.ms/farmbeatspartnerscript), and extract it to your local drive. Two files are inside the zip file.
+1. Download this script, and extract it to your local drive. Two files are inside the zip file.
2. Sign in to the [Azure portal](https://portal.azure.com/) and open Azure Cloud Shell. This option is available on the toolbar in the upper-right corner of the portal. ![Open Azure Cloud Shell on upper-right bar of the portal](./media/get-drone-imagery-from-drone-partner/navigation-bar-1.png)
industry Imagery Partner Integration In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/imagery-partner-integration-in-azure-farmbeats.md
You must use the following credentials in the drone partner software to link Far
## API development
-The APIs contain Swagger technical documentation. For information about the APIs and corresponding requests or responses, see [Swagger](https://aka.ms/FarmBeatsDatahubSwagger).
+The APIs contain Swagger technical documentation.
## Authentication
The POST call to the /SceneFile API returns an SAS upload URL, which can be used
## Next steps
-For more information on REST API-based integration details, see [REST API](rest-api-in-azure-farmbeats.md).
+For more information on REST API-based integration details, see [REST API](rest-api-in-azure-farmbeats.md).
industry Ingest Historical Telemetry Data In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/ingest-historical-telemetry-data-in-azure-farmbeats.md
Follow these steps:
| Description | Provide a meaningful description. |
| Properties | Additional properties from the manufacturer. |
-For more information about objects, see [Swagger](https://aka.ms/FarmBeatsDatahubSwagger).
### API request to create metadata

To make an API request, you combine the HTTP (POST) method, the URL to the API service, and the URI of the resource to query, submit data to, create, or delete. Then you add one or more HTTP request headers. The URL to the API service is the API endpoint, that is, the Datahub URL (https://\<yourdatahub>.azurewebsites.net).
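As a sketch of what such a request looks like in code (the resource path and payload fields here are invented for illustration; your Datahub's Swagger page lists the exact resources and schemas):

```python
# Illustrative sketch: combine the POST method, the Datahub URL, a resource URI,
# and the required headers to create a metadata object.
import requests

datahub_url = "https://<yourdatahub>.azurewebsites.net"
access_token = "<access token acquired for your Datahub>"

headers = {
    "Authorization": "Bearer " + access_token,
    "Content-Type": "application/json",
}

payload = {  # example fields only; see your Swagger for the real schema
    "name": "SoilMoistureSensor01",
    "description": "Soil moisture sensor in the north field",
}

response = requests.post(datahub_url + "/Sensor", headers=headers, json=payload)
print(response.status_code, response.json())
```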
Here's an example of a telemetry message:

## Next steps
## Next steps
-For more information about REST API-based integration details, see [REST API](rest-api-in-azure-farmbeats.md).
+For more information about REST API-based integration details, see [REST API](rest-api-in-azure-farmbeats.md).
industry References For Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/references-for-azure-farmbeats.md
# Reference information for FarmBeats

[FarmBeats REST API](rest-api-in-azure-farmbeats.md).
-
-[FarmBeats Data hub Swagger](https://aka.ms/FarmBeatsDatahubSwagger).
industry Rest Api In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/rest-api-in-azure-farmbeats.md
This article describes the Azure FarmBeats APIs. The Azure FarmBeats APIs provid
## Application development
-The FarmBeats APIs contain Swagger technical documentation. For information on all the APIs and their corresponding requests or responses, see [Swagger](https://aka.ms/FarmBeatsDatahubSwagger).
+The FarmBeats APIs contain Swagger technical documentation.
The following table summarizes all the objects and resources in FarmBeats Datahub:
Use the access token to send it in subsequent API requests in the header section:

```http
headers = {"Authorization": "Bearer " + **access_token**, "Content-Type" : "application/json" }
-```
+```
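For example, a short sketch of sending those headers on a request with Python's `requests` package (the `/Farm` resource path is illustrative; your FarmBeats Swagger lists the exact paths):

```python
import requests

access_token = "<access token from Azure AD>"
headers = {"Authorization": "Bearer " + access_token, "Content-Type": "application/json"}

# Illustrative call: list farms from the Datahub.
response = requests.get("https://<yourdatahub>.azurewebsites.net/Farm", headers=headers)
print(response.status_code, response.json())
```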
industry Sensor Partner Integration In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/sensor-partner-integration-in-azure-farmbeats.md
The telemetry data is mapped to a canonical message that's published on Azure Ev
**API development**
-The APIs contain Swagger technical documentation. For more information on the APIs and their corresponding requests or responses, see [Swagger](https://aka.ms/FarmBeatsSwagger).
+The APIs contain Swagger technical documentation.
**Authentication**
FarmBeats Datahub has the following APIs that enable device partners to create a
Description | Provide a meaningful description.
Properties | Additional properties from the manufacturer.
- For information on each of the objects and their properties, see [Swagger](https://aka.ms/FarmBeatsDatahubSwagger).
-
> [!NOTE]
> The APIs return unique IDs for each instance created. This ID needs to be retained by the Translator for device management and metadata sync.
The Translator should have the ability to add new devices or sensors that were i
### Add new types and units
-FarmBeats supports adding new sensor measure types and units. For more information about the /ExtendedType API, see [Swagger](https://aka.ms/FarmBeatsSwagger).
+FarmBeats supports adding new sensor measure types and units.
## Telemetry specifications
Device manufacturers or partners can use the following checklist to ensure that
## Next steps
-For more information about the REST API, see [REST API](rest-api-in-azure-farmbeats.md).
+For more information about the REST API, see [REST API](rest-api-in-azure-farmbeats.md).
industry Troubleshoot Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/troubleshoot-azure-farmbeats.md
This issue can occur if any maintenance activities are being done on the Sentine
**Issue**: The **Soil Moisture map** was generated, but the map has mostly white areas.
-**Corrective action**: This issue can occur if the satellite indices generated for the time for which the map was requested has NDVI values that is less than 0.3. For more information, visit [Technical Guide from Sentinel](https://earth.esa.int/web/sentinel/technical-guides/sentinel-2-msi/level-2a/algorithm).
+**Corrective action**: This issue can occur if the satellite indices generated for the time when the map was requested have NDVI values less than 0.3. For more information, visit [Technical Guide from Sentinel](https://sentinel.esa.int/web/sentinel/technical-guides/sentinel-2-msi).
1. Rerun the job for a different date range and check if the NDVI values in the satellite indices are more than 0.3.
This issue can occur if any maintenance activities are being done on the Sentine
:::image type="content" source="./media/troubleshoot-Azure-farmbeats/weather-log-6.png" alt-text="Project FarmBeats":::
-8. The search result will contain the folder which has the logs pertaining to the job. Download the logs and send it to farmbeatssupport@microsoft.com for assistance in debugging the issue.
+8. The search result will contain the folder that has the logs pertaining to the job. Download the logs and send them to farmbeatssupport@microsoft.com for assistance in debugging the issue.
industry Weather Partner Integration In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/weather-partner-integration-in-azure-farmbeats.md
Customers use this Docker information to register a weather partner in their Far
**REST API-based integration**
-The FarmBeats APIs contain Swagger technical documentation. For more information about the APIs and their
-corresponding requests or responses, see the [FarmBeats Swagger](https://aka.ms/farmbeatsswagger).
+The FarmBeats APIs contain Swagger technical documentation.
If you've already installed FarmBeats, access your FarmBeats Swagger at `https://yourfarmbeatswebsitename-api.azurewebsites.net/swagger`
If you customize the *bootstrap_manifest.json* file, then the reference bootstra
- /**WeatherDataModel**: The WeatherDataModel metadata represents weather data. It corresponds to data sets that the source provides. For example, a DailyForecastSimpleModel might provide average temperature, humidity, and precipitation information once a day. By contrast, a DailyForecastAdvancedModel might provide much more information at hourly granularity. You can create any number of weather data models.
- /**JobType**: FarmBeats has an extensible job management system. As a weather data provider, you'll have various datasets and APIs (for example, GetDailyForecasts). You can enable these datasets and APIs in FarmBeats by using JobType. After a job type is created, a customer can
-trigger jobs of that type to get weather data for their location or their farm of interest. For more information, see JobType and Job APIs in the [FarmBeats Swagger](https://aka.ms/farmbeatsswagger).
+trigger jobs of that type to get weather data for their location or their farm of interest.
### Jobs
Description | Description of the weather data location. |
farmId | (Optional) ID of the farm. The customer provides this ID as part of the job parameter. |
Properties | Additional properties from the manufacturer.
-For more information about the objects and their properties, see the [FarmBeats Swagger](https://aka.ms/FarmBeatsSwagger).
- > [!NOTE] > The APIs return unique IDs for each instance that's created. The translator for device management and metadata sync needs to retain this ID.
Follow these steps to add a new WeatherMeasure type, for example, PrecipitationD
2. Note the ID of the returned object.
3. Add the new type to the list in the returned object. Make a `PUT` request on the /ExtendedType{ID} with the new list, as sketched below. The input payload should be the same as the response that you received earlier. The new unit should be appended at the end of the list of values.
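Purely as an illustration, the `PUT` call might look like the following sketch. The payload shape, the `WeatherMeasureType` key, and the placeholder values are assumptions, not the documented /ExtendedType contract:

```http
PUT https://<yourfarmbeatswebsitename>-api.azurewebsites.net/ExtendedType/<ID>
Authorization: Bearer <access_token>
Content-Type: application/json

{
  "key": "WeatherMeasureType",
  "values": ["<existing value 1>", "<existing value 2>", "<new value>"]
}
```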
-For more information about the /ExtendedType API, see the [FarmBeats Swagger](https://aka.ms/FarmBeatsSwagger).
-
## Next steps

Now you have a Connector Docker component that integrates with FarmBeats. Next, find out how to [get weather data](get-weather-data-from-weather-partner.md) by using your Docker image in FarmBeats.
iot-dps Quick Setup Auto Provision Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-terraform.md
Title: Quickstart - Use Terraform to create a DPS instance
description: Learn how to deploy an Azure IoT Device Provisioning Service (DPS) resource with Terraform in this quickstart. keywords: azure, devops, terraform, device provisioning service, DPS, IoT, IoT Hub DPS Previously updated : 10/27/2022 Last updated : 11/03/2022
In this article, you learn how to:
## Verify the results
-**Azure CLI**
+#### [Bash](#tab/bash)
+ Run [az iot dps show](/cli/azure/iot/dps#az-iot-dps-show) to display the Azure DPS resource.
- ```azurecli
- az iot dps show \
- --name my_terraform_dps \
- --resource-group rg
- ```
+```azurecli
+az iot dps show \
+ --name <azurerm_iothub_dps_name> \
+ --resource-group <resource_group_name>
+```
+
+**Key points:**
+
+- The names of the resource group and the DPS instance display in the `terraform apply` output. You can also run [terraform output](https://www.terraform.io/cli/commands/output) to view these output values.
+
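For example, assuming the configuration defines outputs named `resource_group_name` and `iothub_dps_name` (the actual output names depend on the sample's Terraform files):

```bash
terraform output resource_group_name
terraform output iothub_dps_name
```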
+#### [Azure PowerShell](#tab/azure-powershell)
-**Azure PowerShell**
Run [Get-AzIoTDeviceProvisioningService](/powershell/module/az.deviceprovisioningservices/get-aziotdeviceprovisioningservice) to display the Azure DPS resource.
- ```powershell
- Get-AzIoTDeviceProvisioningService `
- -ResourceGroupName "rg" `
- -Name "my_terraform_dps"
- ```
+```powershell
+Get-AzIoTDeviceProvisioningService `
+ -ResourceGroupName <resource_group_name> `
+ -Name <azurerm_iothub_dps_name>
+```
-The names of the resource group and the DPS instance are displayed in the terraform apply output. You can also run the [terraform output](https://www.terraform.io/cli/commands/output) command to view these output values.
+**Key points:**
+
+- The names of the resource group and the DPS instance display in the `terraform apply` output. You can also run [terraform output](https://www.terraform.io/cli/commands/output) to view these output values.
++
## Clean up resources
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md
This step takes place once on the IoT Edge device during initial device setup.
2. In the config file, find the `[agent]` section, which contains all the configuration information for the edgeAgent module to use on startup. Make sure that the `[agent]` section is uncommented, or add it if it isn't included in the `config.toml`. The IoT Edge agent definition includes an `[agent.env]` subsection where you can add environment variables.
+
+<!-- 1.3 -->
+ 3. Add the **https_proxy** parameter to the environment variables section, and set your proxy URL as its value.
- ```toml
- [agent]
- name = "edgeAgent"
- type = "docker"
-
- [agent.env]
- # "RuntimeLogLevel" = "debug"
- # "UpstreamProtocol" = "AmqpWs"
- "https_proxy" = "<proxy URL>"
- ```
+ ```toml
+ [agent]
+ name = "edgeAgent"
+ type = "docker"
+
+ [agent.config]
+ image = "mcr.microsoft.com/azureiotedge-agent:1.3"
+
+ [agent.env]
+ # "RuntimeLogLevel" = "debug"
+ # "UpstreamProtocol" = "AmqpWs"
+ "https_proxy" = "<proxy URL>"
+ ```
4. The IoT Edge runtime uses AMQP by default to talk to IoT Hub. Some proxy servers block AMQP ports. If that's the case, then you also need to configure edgeAgent to use AMQP over WebSocket. Uncomment the `UpstreamProtocol` parameter.
- ```toml
- [agent.env]
- # "RuntimeLogLevel" = "debug"
- "UpstreamProtocol" = "AmqpWs"
- "https_proxy" = "<proxy URL>"
- ```
+ ```toml
+ [agent.config]
+ image = "mcr.microsoft.com/azureiotedge-agent:1.3"
+
+ [agent.env]
+ # "RuntimeLogLevel" = "debug"
+ "UpstreamProtocol" = "AmqpWs"
+ "https_proxy" = "<proxy URL>"
+ ```
++
+<!-- 1.4 -->
+
+3. Add the **https_proxy** parameter to the environment variables section, and set your proxy URL as its value.
+
+ ```toml
+ [agent]
+ name = "edgeAgent"
+ type = "docker"
+
+ [agent.config]
+ image = "mcr.microsoft.com/azureiotedge-agent:1.4"
+
+ [agent.env]
+ # "RuntimeLogLevel" = "debug"
+ # "UpstreamProtocol" = "AmqpWs"
+ "https_proxy" = "<proxy URL>"
+ ```
+
+4. The IoT Edge runtime uses AMQP by default to talk to IoT Hub. Some proxy servers block AMQP ports. If that's the case, then you also need to configure edgeAgent to use AMQP over WebSocket. Uncomment the `UpstreamProtocol` parameter.
+
+ ```toml
+ [agent.config]
+ image = "mcr.microsoft.com/azureiotedge-agent:1.4"
+
+ [agent.env]
+ # "RuntimeLogLevel" = "debug"
+ "UpstreamProtocol" = "AmqpWs"
+ "https_proxy" = "<proxy URL>"
+ ```
+
+
+<!-- >= 1.3 -->
5. Save the changes and close the editor. Apply your latest changes.
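For example, on IoT Edge version 1.2 and later, you typically apply the updated `config.toml` with:

```bash
sudo iotedge config apply
```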
iot-fundamentals Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/security-recommendations.md
Title: Security recommendations for Azure IoT | Microsoft Docs description: This article summarizes additional steps to ensure security in your Azure IoT Hub solution. -+ Last updated 08/24/2022-+
iot-hub-device-update Device Update Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-resources.md
Title: Understand Device Update for Azure IoT Hub resources | Microsoft Docs
description: Understand Device Update for Azure IoT Hub resources Previously updated : 06/14/2022 Last updated : 11/02/2022
During public preview, two Device update accounts can be created per subscriptio
## Configure the linked IoT hub
-In order for Device Update to receive change notifications from IoT Hub, Device Update integrates with the built-in Event Hubs. Clicking the "Configure IoT Hub" button within your instance configures the required message routes, consumer groups, and access policy required to communicate with IoT devices.
+In order for Device Update to receive change notifications from IoT Hub, Device Update integrates with the built-in Event Hubs. The IoT hub is configured automatically as part of the resource creation process with the message routes, consumer groups, and access policy required to communicate with IoT devices.
### Message Routing
The following Message Routes are automatically configured in your linked IoT hub
### Consumer group
-Configuring the IoT hub also creates an event hub consumer group called **adum** that is required by the Device Update management services.
+ An event hub consumer group called **adum**, which is required by the Device Update management services, is also created on the IoT hub automatically as part of the resource creation process.
:::image type="content" source="media/device-update-resources/consumer-group.png" alt-text="Screenshot of consumer groups." lightbox="media/device-update-resources/consumer-group.png":::
-### Access policy
+### Configure access for the Azure Device Update service principal in the IoT hub
-A shared access policy named **deviceupdateservice** is used by the Device Update Management services to query for update-capable devices. The **deviceupdateservice** policy is created and given the following permissions as part of configuring the IoT Hub:
+Device Update for IoT Hub communicates with the IoT hub to deploy and manage updates at scale. To enable this, you need to grant the Azure Device Update service principal IoT Hub Data Contributor access in the IoT hub's permissions.
-- Registry read-- Service connect-- Device connect
+Deployment, device and update management, and diagnostic actions won't be allowed if these permissions aren't set. Blocked operations include:
+* Create Deployment
+* Cancel Deployment
+* Retry Deployment
+* Get Device
+The permission can be set from IoT Hub Access control (IAM), as sketched below. For more information, see [Configure access for Azure Device Update service principal in linked IoT hub](configure-access-control-device-update.md#configure-access-for-azure-device-update-service-principal-in-linked-iot-hub).
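As an illustration, one way to grant that role from the command line; the service principal object ID is a placeholder you'd look up in your tenant, and the scope is the linked IoT hub's resource ID:

```azurecli
az role assignment create \
    --role "IoT Hub Data Contributor" \
    --assignee "<Azure Device Update service principal object ID>" \
    --scope "/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.Devices/IotHubs/<IoT hub name>"
```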
## Next steps
iot-hub Iot Hub Amqp Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-amqp-support.md
Title: Understand Azure IoT Hub AMQP support | Microsoft Docs
-description: Developer guide - support for devices connecting to IoT Hub device-facing and service-facing endpoints using the AMQP Protocol. Includes information about built-in AMQP support in the Azure IoT device SDKs.
+description: This article describes support for devices connecting to IoT Hub device-facing and service-facing endpoints using the AMQP Protocol. Includes information about built-in AMQP support in the Azure IoT device SDKs.
iot-hub Iot Hub Automatic Device Management Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management-cli.md
az iot hub configuration delete --config-id [configuration id] \
## Next steps
-In this article, you learned how to configure and monitor IoT devices at scale. Follow these links to learn more about managing Azure IoT Hub:
+In this article, you learned how to configure and monitor IoT devices at scale.
-* [Manage your IoT Hub device identities in bulk](iot-hub-bulk-identity-mgmt.md)
-* [Monitor your IoT hub](monitor-iot-hub.md)
-
-To further explore the capabilities of IoT Hub, see:
-
-* [IoT Hub developer guide](iot-hub-devguide.md)
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
-
-To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-
-* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
+To learn how to manage IoT Hub device identities in bulk, see [Import and export IoT Hub device identities in bulk](iot-hub-bulk-identity-mgmt.md).
iot-hub Iot Hub Automatic Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management.md
When you delete a configuration, any device twins take on their next highest pri
## Next steps
-In this article, you learned how to configure and monitor IoT devices at scale. Follow these links to learn more about managing Azure IoT Hub:
+In this article, you learned how to configure and monitor IoT devices at scale.
-* [Manage your IoT Hub device identities in bulk](iot-hub-bulk-identity-mgmt.md)
-* [Monitor your IoT hub](monitor-iot-hub.md)
-
-To further explore the capabilities of IoT Hub, see:
-
-* [IoT Hub developer guide](iot-hub-devguide.md)
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
-
-To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-
-* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
+To learn how to manage IoT Hub device identities in bulk, see [Import and export IoT Hub device identities in bulk](iot-hub-bulk-identity-mgmt.md).
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
while(true)
## Device import/export job limits
-Only 1 active device import or export job is allowed at a time for all IoT Hub tiers. IoT Hub also has limits for rate of jobs operations. To learn more, see [Reference - IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
+Only one active device import or export job is allowed at a time for all IoT Hub tiers. IoT Hub also has limits for the rate of jobs operations. To learn more, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
## Export devices
static string GetContainerSasUri(CloudBlobContainer container)
In this article, you learned how to perform bulk operations against the identity registry in an IoT hub. Many of these operations, including how to move devices from one hub to another, are used in the [Managing devices registered to the IoT hub section of How to Clone an IoT Hub](iot-hub-how-to-clone.md#managing-the-devices-registered-to-the-iot-hub). The cloning article has a working sample associated with it, which is located in the IoT C# samples on this page: [Azure IoT hub service samples for C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/how%20to%20guides), with the project being ImportExportDevicesSample. You can download the sample and try it out; there are instructions in the [How to Clone an IoT Hub](iot-hub-how-to-clone.md) article.-
-To learn more about managing Azure IoT Hub, check out the following articles:
-
-* [Monitor IoT Hub](monitor-iot-hub.md)
-
-To further explore the capabilities of IoT Hub, see:
-
-* [IoT Hub developer guide](iot-hub-devguide.md)
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
-
-To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-
-* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
iot-hub Iot Hub Compare Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-compare-event-hubs.md
The following table provides details about how the two tiers of IoT Hub compare
Even if the only use case is device-to-cloud data ingestion, we highly recommend using IoT Hub as it provides a service that is designed for IoT device connectivity.
-### Next steps
-
-To further explore the capabilities of IoT Hub, see the [IoT Hub developer guide](iot-hub-devguide.md).
-
-<!-- This one reference link is used over and over. --robinsh -->
[checkmark]: ./media/iot-hub-compare-event-hubs/ic195031.png
iot-hub Iot Hub Create Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md
If you have a custom endpoint to add, select the **Custom endpoints** tab. You s
> [!NOTE] > If you delete a route, it does not delete the endpoints assigned to that route. To delete an endpoint, select the **Custom endpoints** tab, select the endpoint you want to delete, then choose **Delete**.
-You can read more about custom endpoints in [Reference - IoT hub endpoints](iot-hub-devguide-endpoints.md).
+You can read more about custom endpoints in [IoT hub endpoints](iot-hub-devguide-endpoints.md).
You can define up to 10 custom endpoints for an IoT hub.
iot-hub Iot Hub Create Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-cli.md
This article shows you how to create an IoT hub using Azure CLI.
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]

When you create an IoT hub, you must create it in a resource group. Either use an existing resource group, or run the following [command to create a resource group](/cli/azure/resource):
-
+```azurecli-interactive
+az group create --name {your resource group name} --location westus
+```
When you create an IoT hub, you must create it in a resource group. Either use a
Use the Azure CLI to create a resource group and then add an IoT hub. Run the following [command to create an IoT hub](/cli/azure/iot/hub#az-iot-hub-create) in your resource group, using a globally unique name for your IoT hub:
-
+```azurecli-interactive
+az iot hub create --name {your iot hub name} \
+   --resource-group {your resource group name} --sku S1
+```
Run the following [command to create an IoT hub](/cli/azure/iot/hub#az-iot-hub-c
[!INCLUDE [iot-hub-pii-note-naming-hub](../../includes/iot-hub-pii-note-naming-hub.md)]
-
The previous command creates an IoT hub in the S1 pricing tier for which you're billed. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/). For more information on Azure IoT Hub commands, see the [`az iot hub`](/cli/azure/iot/hub) reference article.
az iot hub delete --name {your iot hub name} -\
## Next steps
-Learn more about using an IoT hub:
+Learn more about the commands available in the Microsoft Azure IoT extension for Azure CLI:
-* [IoT Hub developer guide](iot-hub-devguide.md)
-* [Using the Azure portal to manage IoT Hub](iot-hub-create-through-portal.md)
+* [IoT Hub-specific commands (az iot hub)](/cli/azure/iot/hub)
+* [All commands (az iot)](/cli/azure/iot)
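These commands come from the Azure IoT extension for the Azure CLI. If the extension isn't installed yet, you can add it with:

```azurecli
az extension add --name azure-iot
```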
iot-hub Iot Hub Csharp Csharp C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-c2d.md
In this section, you modify the **SendCloudToDevice** app to request feedback, a
In this article, you learned how to send and receive cloud-to-device messages.
-To learn more about developing solutions with IoT Hub, see the [IoT Hub developer guide](iot-hub-devguide.md).
+* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
+
+* To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub Iot Hub Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-customer-data-requests.md
Many of the devices managed in Azure IoT Hub are not personal devices, for examp
Tenant administrators can use either the Azure portal or the service's REST APIs to fulfill information requests by exporting or deleting data associated with a device ID.
-If you use the routing feature of the Azure IoT Hub service to forward device messages to other services, then data requests must be performed by the tenant admin for each routing endpoint in order to complete a full request for a given device. For more details, see the reference documentation for each endpoint. For more information about supported endpoints, see [Reference - IoT Hub endpoints](iot-hub-devguide-endpoints.md).
+If you use the routing feature of the Azure IoT Hub service to forward device messages to other services, then data requests must be performed by the tenant admin for each routing endpoint in order to complete a full request for a given device. For more details, see the reference documentation for each endpoint. For more information about supported endpoints, see [IoT Hub endpoints](iot-hub-devguide-endpoints.md).
If you use the Azure Event Grid integration feature of the Azure IoT Hub service, then data requests must be performed by the tenant admin for each subscriber of these events. For more information, see [React to IoT Hub events by using Event Grid](iot-hub-event-grid.md).
iot-hub Iot Hub Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-customer-managed-keys.md
Previously updated : 07/07/2021 Last updated : 11/03/2022
-# Encryption of Azure Iot Hub data at rest using customer-managed keys
+# Encryption of Azure IoT Hub data at rest using customer-managed keys (preview)
-IoT Hub supports encryption of data at rest using customer-managed keys (CMK), also known as Bring your own key (BYOK). Azure IoT Hub provides encryption of data at rest and in-transit as it's written in our datacenters; the data is encrypted when read and decrypted when written.
+IoT Hub supports encryption of data at rest using customer-managed keys (CMK), also known as Bring your own key (BYOK). Azure IoT Hub provides encryption of data at rest and in-transit as it's written in our datacenters; the data is encrypted when read and decrypted when written.
+
+>[!NOTE]
+>The customer-managed keys feature is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
By default, IoT Hub uses Microsoft-managed keys to encrypt the data. With CMK, you can get another layer of encryption on top of default encryption and can choose to encrypt data at rest with a key encryption key, managed through your [Azure Key Vault](https://azure.microsoft.com/services/key-vault/). This gives you the flexibility to create, rotate, disable, and revoke access controls. If BYOK is configured for your IoT Hub, we also provide double encryption, which offers a second layer of protection, while still allowing you to control the encryption key through your Azure Key Vault.
iot-hub Iot Hub Dev Guide Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-azure-ad-rbac.md
Title: Control access to IoT Hub by using Azure Active Directory
-description: Developer guide. How to control access to IoT Hub for back-end apps by using Azure AD and Azure RBAC.
+description: This article describes how to control access to IoT Hub for back-end apps by using Azure AD and Azure RBAC.
iot-hub Iot Hub Devguide C2d Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-c2d-guidance.md
Title: Azure IoT Hub cloud-to-device options | Microsoft Docs
-description: Developer guide - guidance on when to use direct methods, device twin's desired properties, or cloud-to-device messages for cloud-to-device communications.
+description: This article provides guidance on when to use direct methods, device twin's desired properties, or cloud-to-device messages for cloud-to-device communications.
iot-hub Iot Hub Devguide D2c Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-d2c-guidance.md
Title: Azure IoT Hub device-to-cloud options | Microsoft Docs
-description: Developer guide - guidance on when to use device-to-cloud messages, reported properties, or file upload for cloud-to-device communications.
+description: This article provides guidance on when to use device-to-cloud messages, reported properties, or file upload for device-to-cloud communications.
iot-hub Iot Hub Devguide Develop For Constrained Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-develop-for-constrained-devices.md
Title: Azure IoT Hub Develop for Constrained Devices using IoT Hub C SDK
-description: Developer guide - guidance on how to develop using Azure IoT SDKs for constrained devices.
+description: This article provides guidance on how to develop using Azure IoT SDKs for constrained devices.
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md
Title: Understand Azure IoT Hub device twins | Microsoft Docs
-description: Developer guide - use device twins to synchronize state and configuration data between IoT Hub and your devices
+description: This article describes how to use device twins to synchronize state and configuration data between IoT Hub and your devices.
iot-hub Iot Hub Devguide Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-direct-methods.md
 Title: Understand Azure IoT Hub direct methods | Microsoft Docs
-description: Developer guide - use direct methods to invoke code on your devices from a service app.
+description: This article describes how to use direct methods to invoke code on your devices from a service app.
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-endpoints.md
Title: Understand Azure IoT Hub endpoints | Microsoft Docs
-description: Developer guide - reference information about IoT Hub device-facing and service-facing endpoints.
+description: This article provides information about IoT Hub device-facing and service-facing endpoints.
Last updated 06/10/2019
-# Reference - IoT Hub endpoints
+# IoT Hub endpoints
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
iot-hub Iot Hub Devguide File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-file-upload.md
Title: Understand Azure IoT Hub file upload | Microsoft Docs
-description: Developer guide - use the file upload feature of IoT Hub to manage uploading files from a device to an Azure storage blob container.
+description: This article shows how to use the file upload feature of IoT Hub to manage uploading files from a device to an Azure storage blob container.
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Title: Understand the Azure IoT Hub identity registry
-description: Developer guide - description of the IoT Hub identity registry and how to use it to manage your devices. Includes information about the import and export of device identities in bulk.
+description: This article provides a description of the IoT Hub identity registry and how to use it to manage your devices. Includes information about the import and export of device identities in bulk.
iot-hub Iot Hub Devguide Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-jobs.md
Title: Understand Azure IoT Hub jobs | Microsoft Docs
-description: Developer guide - scheduling jobs to run on multiple devices connected to your IoT hub. Jobs can update tags and desired properties and invoke direct methods on multiple devices.
+description: This article describes scheduling jobs to run on multiple devices connected to your IoT hub. Jobs can update tags and desired properties and invoke direct methods on multiple devices.
iot-hub Iot Hub Devguide Messages Construct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-construct.md
Title: Understand Azure IoT Hub message format | Microsoft Docs
-description: Developer guide - describes the format and expected content of IoT Hub messages.
+description: This article describes the format and expected content of IoT Hub messages.
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md
Title: Understand Azure IoT Hub message routing | Microsoft Docs
-description: Developer guide - how to use message routing to send device-to-cloud messages. Includes information about sending both telemetry and non-telemetry data.
+description: This article describes how to use message routing to send device-to-cloud messages. Includes information about sending both telemetry and non-telemetry data.
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
Title: Understand the Azure IoT Hub built-in endpoint | Microsoft Docs
-description: Developer guide - describes how to use the built-in, Event Hub-compatible endpoint to read device-to-cloud messages.
+description: This article describes how to use the built-in, Event Hub-compatible endpoint to read device-to-cloud messages.
iot-hub Iot Hub Devguide Messages Read Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-custom.md
Title: Understand Azure IoT Hub custom endpoints | Microsoft Docs
-description: Developer guide - using routing queries to route device-to-cloud messages to custom endpoints.
+description: This article describes using routing queries to route device-to-cloud messages to custom endpoints.
iot-hub Iot Hub Devguide Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messaging.md
Title: Understand Azure IoT Hub messaging | Microsoft Docs
-description: Developer guide - device-to-cloud and cloud-to-device messaging with IoT Hub. Includes information about message formats and supported communications protocols.
+description: This article describes device-to-cloud and cloud-to-device messaging with IoT Hub. Includes information about message formats and supported communications protocols.
iot-hub Iot Hub Devguide Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-module-twins.md
Title: Understand Azure IoT Hub module twins | Microsoft Docs
-description: Developer guide - use module twins to synchronize state and configuration data between IoT Hub and your devices
+description: This article describes how to use module twins to synchronize state and configuration data between IoT Hub and your devices.
iot-hub Iot Hub Devguide No Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-no-sdk.md
Title: Develop without an Azure IoT SDK | Microsoft Docs
-description: Developer guide - information about and links to topics that you can use to build device apps and back-end apps without using an Azure IoT SDK.
+description: This article provides information about and links to topics that you can use to build device apps and back-end apps without using an Azure IoT SDK.
iot-hub Iot Hub Devguide Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-pricing.md
Title: Understand Azure IoT Hub pricing | Microsoft Docs
-description: Developer guide - information about how metering and pricing works with IoT Hub including worked examples.
+description: This article provides information about how metering and pricing work with IoT Hub, including worked examples.
iot-hub Iot Hub Devguide Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-protocols.md
Title: Azure IoT Hub communication protocols and ports | Microsoft Docs
-description: Developer guide - describes the supported communication protocols for device-to-cloud and cloud-to-device communications and the port numbers that must be open.
+description: This article describes the supported communication protocols for device-to-cloud and cloud-to-device communications and the port numbers that must be open.
iot-hub Iot Hub Devguide Query Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-query-language.md
Title: Understand the Azure IoT Hub query language
-description: Developer guide - description of the SQL-like IoT Hub query language used to retrieve information about device/module twins and jobs from your IoT hub.
+description: This article provides a description of the SQL-like IoT Hub query language used to retrieve information about device/module twins and jobs from your IoT hub.
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
Title: Understand Azure IoT Hub quotas and throttling
-description: Developer guide - description of the quotas that apply to IoT Hub and the expected throttling behavior.
+description: This article provides a description of the quotas that apply to IoT Hub and the expected throttling behavior.
Last updated 06/01/2022
-# Reference - IoT Hub quotas and throttling
+# IoT Hub quotas and throttling
This article explains the quotas for an IoT Hub, and provides information to help you understand how throttling works.
iot-hub Iot Hub Devguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide.md
Title: Developer guide for Azure IoT Hub | Microsoft Docs
-description: The Azure IoT Hub developer guide includes discussions of endpoints, security, the identity registry, device management, direct methods, device twins, file uploads, jobs, the IoT Hub query language, and messaging.
+ Title: Concepts overview for Azure IoT Hub | Microsoft Docs
+description: The Azure IoT Hub conceptual documentation includes discussions of endpoints, security, the identity registry, device management, direct methods, device twins, file uploads, jobs, the IoT Hub query language, messaging, and many other features. This article helps you find the right articles to learn about a particular feature.
- Previously updated : 01/29/2018 Last updated : 11/03/2022
-# Azure IoT Hub developer guide
+# Azure IoT Hub concepts overview
Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end. [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
-Azure IoT Hub provides you with:
+Azure IoT Hub provides many features, including:
* Secure communications by using per-device security credentials and access control.
Azure IoT Hub provides you with:
* Easy device connectivity with device libraries for the most popular languages and platforms.
-This IoT Hub developer guide includes the following articles:
+The following articles can help you get started exploring IoT Hub features in more depth:
* [Device-to-cloud communication guidance](iot-hub-devguide-d2c-guidance.md) helps you choose between device-to-cloud messages, device twin's reported properties, and file upload.
This IoT Hub developer guide includes the following articles:
* [Schedule jobs on multiple devices](iot-hub-devguide-jobs.md) describes how you can schedule jobs on multiple devices. The article describes how to submit jobs that perform tasks such as executing a direct method or updating a device using a device twin. It also describes how to query the status of a job.
-* [Reference - choose a communication protocol](iot-hub-devguide-protocols.md) describes the communication protocols that IoT Hub supports for device communication and lists the ports that should be open.
+* [Choose a device communication protocol](iot-hub-devguide-protocols.md) describes the communication protocols that IoT Hub supports for device communication and lists the ports that should be open.
-* [Reference - IoT Hub endpoints](iot-hub-devguide-endpoints.md) describes the various endpoints that each IoT hub exposes for runtime and management operations. The article also describes how you can create additional endpoints in your IoT hub, and how to use a field gateway to enable connectivity to your IoT Hub endpoints in non-standard scenarios.
+* [IoT Hub endpoints](iot-hub-devguide-endpoints.md) describes the various endpoints that each IoT hub exposes for runtime and management operations. The article also describes how you can create additional endpoints in your IoT hub, and how to use a field gateway to enable connectivity to your IoT Hub endpoints in non-standard scenarios.
-* [Reference - IoT Hub query language for device twins, jobs, and message routing](iot-hub-devguide-query-language.md) describes that IoT Hub query language that enables you to retrieve information from your hub about your device twins and jobs.
+* [IoT Hub query language for device twins, jobs, and message routing](iot-hub-devguide-query-language.md) describes the IoT Hub query language that enables you to retrieve information from your hub about your device twins and jobs (see the sample query after this list).
-* [Reference - quotas and throttling](iot-hub-devguide-quotas-throttling.md) summarizes the quotas set in the IoT Hub service and the throttling that occurs when you exceed a quota.
+* [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md) summarizes the quotas set in the IoT Hub service and the throttling that occurs when you exceed a quota.
-* [Reference - pricing](iot-hub-devguide-pricing.md) provides general information on different SKUs and pricing for IoT Hub and details on how the various IoT Hub functionalities are metered as messages by IoT Hub.
+* [IoT Hub pricing](iot-hub-devguide-pricing.md) provides general information on different SKUs and pricing for IoT Hub and details on how the various IoT Hub functionalities are metered as messages by IoT Hub.
-* [Reference - Device and service SDKs](iot-hub-devguide-sdks.md) lists the Azure IoT SDKs for developing device and service apps that interact with your IoT hub. The article includes links to online API documentation.
+* [Azure IoT Hub SDKs](iot-hub-devguide-sdks.md) lists the Azure IoT SDKs for developing device and service apps that interact with your IoT hub. The article includes links to online API documentation.
-* [Reference - IoT Hub MQTT support](iot-hub-mqtt-support.md) provides detailed information about how IoT Hub supports the MQTT protocol. The article describes the support for the MQTT protocol built-in to the Azure IoT SDKs and provides information about using the MQTT protocol directly.
+* [IoT Hub MQTT support](iot-hub-mqtt-support.md) provides detailed information about how IoT Hub supports the MQTT protocol. The article describes the support for the MQTT protocol built in to the Azure IoT SDKs and provides information about using the MQTT protocol directly.
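As the sample query promised above, a small sketch of that SQL-like syntax; the tag and reported-property paths are illustrative assumptions, not fields every hub will have:

```sql
SELECT * FROM devices
WHERE tags.location.region = 'US'
  AND properties.reported.connectivity = 'cellular'
```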
iot-hub Iot Hub Device Embedded C Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-device-embedded-c-sdk.md
Title: Developer guide for Azure IoT Embedded C SDK | Microsoft Docs description: Get started with the Azure IoT Embedded C SDK and learn how to create device apps that communicate with an IoT hub. -
iot-hub Iot Hub How To Android Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-android-things.md
Title: Develop for Android Things platform using Azure IoT SDKs | Microsoft Docs
-description: Developer guide - Learn about how to develop on Android Things using Azure IoT Hub SDKs.
+description: In this article, learn how to develop on Android Things using Azure IoT Hub SDKs.
iot-hub Iot Hub How To Clone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-clone.md
Don't clean up until you are certain the new hub is up and running and the devic
You have cloned an IoT hub into a new hub in a new region, complete with the devices. For more information about performing bulk operations against the identity registry in an IoT Hub, see [Import and export IoT Hub device identities in bulk](iot-hub-bulk-identity-mgmt.md).
-For more information about IoT Hub and development for the hub, see the following articles:
-
-* [IoT Hub developer's guide](iot-hub-devguide.md)
-
-* [IoT Hub routing tutorial](tutorial-routing.md)
-
-* [IoT Hub device management overview](iot-hub-device-management-overview.md)
- If you want to deploy the sample application, see [.NET Core application deployment](/dotnet/core/deploying/index).
iot-hub Iot Hub How To Develop For Mobile Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-develop-for-mobile-devices.md
Title: Develop for mobile devices using Azure IoT SDKs | Microsoft Docs
-description: Developer guide - Learn about how to develop for mobile devices using Azure IoT Hub SDKs.
+description: In this article, learn how to develop for mobile devices using Azure IoT Hub SDKs.
iot-hub Iot Hub Ios Swift C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ios-swift-c2d.md
At the end of this article, you run the following Swift iOS project:
> [!NOTE] > IoT Hub has SDK support for many device platforms and languages (including C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
-You can find more information on cloud-to-device messages in the [messaging section of the IoT Hub developer guide](iot-hub-devguide-messaging.md).
+To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
## Prerequisites
You are now ready to receive cloud-to-device messages. Use the Azure portal to s
In this article, you learned how to send and receive cloud-to-device messages.
-To learn more about developing solutions with IoT Hub, see the [IoT Hub developer guide](iot-hub-devguide.md).
+* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
+
+* To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub Iot Hub Java Java C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-c2d.md
At the end of this article, you run two Java console apps:
> [!NOTE] > IoT Hub has SDK support for many device platforms and languages (C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
-You can find more information on [cloud-to-device messages in the IoT Hub developer guide](iot-hub-devguide-messaging.md).
+To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
## Prerequisites
You are now ready to run the applications.
In this article, you learned how to send and receive cloud-to-device messages.
-To learn more about developing solutions with IoT Hub, see the [IoT Hub developer guide](iot-hub-devguide.md).
+* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
+
+* To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
To learn more about planning your IoT Hub deployment, see:
To further explore the capabilities of IoT Hub, see:
-* [IoT Hub developer guide](iot-hub-devguide.md)
* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Iot Hub Node Node C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-c2d.md
At the end of this article, you run two Node.js console apps:
> [!NOTE] > IoT Hub has SDK support for many device platforms and languages (C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
-You can find more information on cloud-to-device messages in the [IoT Hub developer guide](iot-hub-devguide-messaging.md).
+To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
## Prerequisites
You are now ready to run the applications.
In this article, you learned how to send and receive cloud-to-device messages.
-To learn more about developing solutions with IoT Hub, see the [IoT Hub developer guide](iot-hub-devguide.md).
+* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
+
+* To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub Iot Hub Python Python C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-c2d.md
At the end of this article, you run two Python console apps:
* **SendCloudToDeviceMessage.py**: sends cloud-to-device messages to the simulated device app through IoT Hub.
->You can find more information on cloud-to-device messages in the [IoT Hub developer guide](iot-hub-devguide-messaging.md).
+To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
> [!NOTE] > IoT Hub has SDK support for many device platforms and languages (C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
You are now ready to run the applications.
In this article, you learned how to send and receive cloud-to-device messages.
-To learn more about developing solutions with IoT Hub, see the [IoT Hub developer guide](iot-hub-devguide.md).
+* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
+
+* To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub Iot Hub Query Avro Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-query-avro-data.md
In this section, you query Avro data and export it to a CSV file in Azure Blob s
In this tutorial, you learned how to query Avro data to efficiently route messages from Azure IoT Hub to Azure services.
-To learn more about developing solutions with IoT Hub, see the [IoT Hub developer guide](iot-hub-devguide.md).
+* To learn more about message routing in IoT Hub, see [Use IoT Hub message routing](iot-hub-devguide-messages-d2c.md).
-To learn more about message routing in IoT Hub, see [Send and receive messages with IoT Hub](iot-hub-devguide-messaging.md).
+* To learn more about routing query syntax, see [IoT Hub message routing query syntax](iot-hub-devguide-routing-query-syntax.md).
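For a flavor of that routing query syntax, a hedged sketch that matches on the message body and an application property; body-based routing assumes messages are sent with content type `application/json` and UTF-8 encoding, and `processingPath` is an illustrative sender-set property:

```sql
$body.weather.temperature > 50 AND processingPath = 'hot'
```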
iot-hub Iot Hub Restrict Outbound Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-restrict-outbound-network-access.md
Title: Restrict IoT Hub outbound network access and data loss prevention
-description: Developer guide - how to configure IoT Hub to egress to trusted locations only.
+description: This article describes how to configure IoT Hub to egress to trusted locations only.
iot-hub Iot Hub Weather Forecast Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-weather-forecast-machine-learning.md
Last updated 10/26/2021
+
# Weather forecast using the sensor data from your IoT hub in Machine Learning Studio (classic)
iot-hub Query Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/query-jobs.md
Title: Run queries on Azure IoT Hub jobs
-description: Developer guide - retrieve information about device jobs from your Azure IoT hub using the query language.
+description: This article describes how to retrieve information about device jobs from your Azure IoT hub using the query language.
iot-hub Query Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/query-twins.md
Title: Query Azure IoT Hub device twins and module twins
-description: Developer guide - retrieve information about device/module twins from your IoT hub using the query language.
+description: This article describes how to retrieve information about device/module twins from your IoT hub using the query language.
load-balancer Load Balancer Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-basic-upgrade-guidance.md
This section lists out some key differences between these two Load Balancer SKUs
| - | - | - |
| **Backend type** | IP based, NIC based | NIC based |
| **Protocol** | TCP, UDP | TCP, UDP |
-| **[Frontend IP configurations](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)** | Supports up to 600 configurations | Supports up to 200 configurations |
-| **[Backend pool size](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)** | Supports up to 1000 instances | Supports up to 300 instances |
| **Backend pool endpoints** | Any virtual machines or virtual machine scale sets in a single virtual network | Virtual machines in a single availability set or virtual machine scale set |
| **[Health probe types](load-balancer-custom-probe-overview.md#probe-types)** | TCP, HTTP, HTTPS | TCP, HTTP |
| **[Health probe down behavior](load-balancer-custom-probe-overview.md#probe-down-behavior)** | TCP connections stay alive on an instance probe down and on all probes down | TCP connections stay alive on an instance probe down. All TCP connections end when all probes are down |
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
To compare and understand the differences between Basic and Standard SKU, see th
| **Scenario** | Equipped for load-balancing network layer traffic when high performance and ultra-low latency is needed. Routes traffic within and across regions, and to availability zones for high resiliency. | Equipped for small-scale applications that don't need high availability or redundancy. Not compatible with availability zones. |
| **Backend type** | IP based, NIC based | NIC based |
| **Protocol** | TCP, UDP | TCP, UDP |
-| **[Frontend IP Configurations](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)** | Supports up to 600 configurations | Supports up to 200 configurations |
-| **[Backend pool size](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)** | Supports up to 5000 instances | Supports up to 300 instances |
| **Backend pool endpoints** | Any virtual machines or virtual machine scale sets in a single virtual network | Virtual machines in a single availability set or virtual machine scale set |
| **[Health probes](./load-balancer-custom-probe-overview.md#probe-types)** | TCP, HTTP, HTTPS | TCP, HTTP |
| **[Health probe down behavior](./load-balancer-custom-probe-overview.md#probe-down-behavior)** | TCP connections stay alive on an instance probe down __and__ on all probes down. | TCP connections stay alive on an instance probe down. All TCP connections end when all probes are down. |
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
This guide assumes you don't have a managed identity, a storage account or an on
```
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/sdk/endpoints/online/managed/managed-identities
```
-* To follow along with this notebook, access the companion [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb) within in the `sdk/endpoints/online/managed/managed-identities` directory.
+* To follow along with this notebook, access the companion [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb) in the `sdk/endpoints/online/managed/managed-identities` directory.
* Additional Python packages are required for this example:
Install them with the following code:
```
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/sdk/endpoints/online/managed/managed-identities
```
-* To follow along with this notebook, access the companion [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb) within in the `sdk/endpoints/online/managed/managed-identities` directory.
+* To follow along with this notebook, access the companion [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb) in the `sdk/endpoints/online/managed/managed-identities` directory.
* Additional Python packages are required for this example:
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md) - [Troubleshooting online endpoints deployment](./how-to-troubleshoot-online-endpoints.md)-- [Torch serve sample](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-torchserve.sh)
+- [Torch serve sample](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-torchserve-densenet.sh)
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
Previously updated : 11/01/2022 Last updated : 10/06/2022
The main example in this doc uses managed online endpoints for deployment. To us
* (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option, so it's easier to debug issues.
-# [ARM template](#tab/arm)
-
-> [!NOTE]
-> While the Azure CLI and CLI extension for machine learning are used in these steps, they are not the main focus. They are used more as utilities, passing templates to Azure and checking the status of template deployments.
--
-* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
-
-* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
-
- ```azurecli
- az account set --subscription <subscription ID>
- az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
- ```
-
-> [!IMPORTANT]
-> The examples in this document assume that you are using the Bash shell. For example, from a Linux system or [Windows Subsystem for Linux](/windows/wsl/about).
- ## Prepare your system
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
) ```
-# [ARM template](#tab/arm)
-
-### Clone the sample repository
-
-To follow along with this article, first clone the [samples repository (azureml-examples)](https://github.com/azure/azureml-examples). Then, run the following code to go to the samples directory:
-
-```azurecli
-git clone --depth 1 https://github.com/Azure/azureml-examples
-cd azureml-examples
-```
-
-> [!TIP]
-> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
-
-### Set an endpoint name
-
-To set your endpoint name, run the following command (replace `YOUR_ENDPOINT_NAME` with a unique name).
-
-For Unix, run this command:
--
-> [!NOTE]
-> Endpoint names must be unique within an Azure region. For example, in the Azure `westus2` region, there can be only one endpoint with the name `my-endpoint`.
-
-Also set the following environment variables, as they are used in the examples in this article. Replace the values with your Azure subscription ID, the Azure region where your workspace is located, the resource group that contains the workspace, and the workspace name:
-
-```bash
-export SUBSCRIPTION_ID="your Azure subscription ID"
-export LOCATION="Azure region where your workspace is located"
-export RESOURCE_GROUP="Azure resource group that contains your workspace"
-export WORKSPACE="Azure Machine Learning workspace name"
-```
-
-A couple of the template examples require you to upload files to the Azure Blob store for your workspace. The following steps will query the workspace and store this information in environment variables used in the examples:
-
-1. Get an access token:
-
- :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="get_access_token":::
-
-1. Set the REST API version:
-
- :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="api_version":::
-
-1. Get the storage information:
-
- :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="get_storage_details":::
- ## Define the endpoint and deployment
In this article, we first define names of online endpoint and deployment for deb
) ```
-# [ARM template](#tab/arm)
-
-The Azure Resource Manager templates [online-endpoint.json](https://github.com/Azure/azureml-examples/tree/main/arm-templates/online-endpoint.json) and [online-endpoint-deployment.json](https://github.com/Azure/azureml-examples/tree/main/arm-templates/online-endpoint-deployment.json) are used by the steps in this article.
- ### Register your model and environment separately
For more information on registering your model as an asset, see [Register your m
For more information on creating an environment, see [Manage Azure Machine Learning environments with the CLI & SDK (v2)](how-to-manage-environments-v2.md#create-an-environment)
-# [ARM template](#tab/arm)
-
-1. To register the model using a template, you must first upload the model file to an Azure Blob store. The following example uses the `az storage blob upload-batch` command to upload a file to the default storage for your workspace:
-
- :::code language="{language}" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="upload_model":::
-
-1. After uploading the file, use the template to create a model registration. In the following example, the `modelUri` parameter contains the path to the model:
-
- :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_model":::
-
-1. Part of the environment is a conda file that specifies the model dependencies needed to host the model. The following example demonstrates how to read the contents of the conda file into an environment variables:
-
- :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="read_condafile":::
-
-1. The following example demonstrates how to use the template to register the environment. The contents of the conda file from the previous step are passed to the template using the `condaFile` parameter:
-
- :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_environment":::
- ### Use different CPU and GPU instance types
For supported general-purpose and GPU instance types, see [Managed online endpoi
### Use more than one model
-Currently, you can specify only one model per deployment in the YAML. If you've more than one model, when you register the model, copy all the models as files or subdirectories into a folder that you use for registration. In your scoring script, use the environment variable `AZUREML_MODEL_DIR` to get the path to the model root folder. The underlying directory structure is retained. For an example of the scoring script for multi models, see [multimodel-minimal-score.py](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/multimodel-minimal-score.py).
+Currently, you can specify only one model per deployment in the YAML. If you have more than one model, copy all the models as files or subdirectories into the folder that you use for registration when you register the model. In your scoring script, use the environment variable `AZUREML_MODEL_DIR` to get the path to the model root folder. The underlying directory structure is retained. For an example of deploying multiple models to one deployment, see [Deploy multiple models to one deployment](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/multimodel/README.md).
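For illustration, here's a minimal sketch of how a scoring script's `init()` function could resolve several models under `AZUREML_MODEL_DIR` (the `model-a`/`model-b` subfolder names and the pickled-scikit-learn assumption are hypothetical):

```python
import os

import joblib

models = {}

def init():
    # AZUREML_MODEL_DIR points to the registered model's root folder;
    # the folder structure used at registration time is preserved.
    root = os.getenv("AZUREML_MODEL_DIR")
    for name in ("model-a", "model-b"):  # hypothetical subfolder names
        models[name] = joblib.load(os.path.join(root, name, "model.pkl"))
```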
## Understand the scoring script
As noted earlier, the script specified in `code_configuration.scoring_script` mu
# [Python](#tab/python) As noted earlier, the script specified in `CodeConfiguration(scoring_script="score.py")` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/model-1/onlinescoring/score.py).
-# [ARM template](#tab/arm)
-
-As noted earlier, the script specified in `code_configuration.scoring_script` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py).
-
-When using a template for deployment, you must first upload the scoring file(s) to an Azure Blob store and then register it:
-
-1. The following example uses the Azure CLI command `az storage blob upload-batch` to upload the scoring file(s):
-
- :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="upload_code":::
-
-1. The following example demonstrates hwo to register the code using a template:
-
- :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_code":::
- The `init()` function is called when the container is initialized or started. Initialization typically occurs shortly after the deployment is created or updated. Write logic here for global initialization operations like caching the model in memory (as we do in this example). The `run()` function is called for every invocation of the endpoint and should do the actual scoring and prediction. In the example, we extract the data from the JSON input, call the scikit-learn model's `predict()` method, and then return the result.
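As a minimal sketch of that pattern (assuming a pickled scikit-learn model saved as `model.pkl` and a JSON payload with a `data` field; the sample's actual *score.py* is authoritative):

```python
import json
import os

import joblib
import numpy as np

model = None

def init():
    # Runs once when the container starts; cache the model in memory.
    global model
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")
    model = joblib.load(model_path)

def run(raw_data):
    # Runs on every invocation; do the actual scoring and prediction.
    data = np.array(json.loads(raw_data)["data"])
    return model.predict(data).tolist()
```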
First create an endpoint. Optionally, for a local endpoint, you can skip this st
ml_client.online_endpoints.begin_create_or_update(endpoint, local=True) ```
-# [ARM template](#tab/arm)
-
-The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
- Now, create a deployment named `blue` under the endpoint.
ml_client.online_deployments.begin_create_or_update(
The `local=True` flag directs the SDK to deploy the endpoint in the Docker environment.
-# [ARM template](#tab/arm)
-
-The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
- > [!TIP]
The output should appear similar to the following JSON. The `provisioning_state`
ml_client.online_endpoints.get(name=local_endpoint_name, local=True) ```
-The method returns [`ManagedOnlineEndpoint` entity](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint.md). The `provisioning_state` is `Succeeded`.
+The method returns a [`ManagedOnlineEndpoint` entity](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint). The `provisioning_state` is `Succeeded`.
```python ManagedOnlineEndpoint({'public_network_access': None, 'provisioning_state': 'Succeeded', 'scoring_uri': 'http://localhost:49158/score', 'swagger_uri': None, 'name': 'local-10061534497697', 'description': 'this is a sample local endpoint', 'tags': {}, 'properties': {}, 'id': None, 'Resource__source_path': None, 'base_path': '/path/to/your/working/directory', 'creation_context': None, 'serialize': <msrest.serialization.Serializer object at 0x7ffb781bccd0>, 'auth_mode': 'key', 'location': 'local', 'identity': None, 'traffic': {}, 'mirror_traffic': {}, 'kind': None}) ```
-# [ARM template](#tab/arm)
-
-The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
- The following table contains the possible values for `provisioning_state`:
endpoint = ml_client.online_endpoints.get(endpoint_name)
scoring_uri = endpoint.scoring_uri ```
-# [ARM template](#tab/arm)
-
-The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
- ### Review the logs for output from the invoke operation
ml_client.online_deployments.get_logs(
) ```
-# [ARM template](#tab/arm)
-
-The template doesn't support local endpoints. See the Azure CLI or Python tabs for steps to test the endpoint locally.
- ## Deploy your online endpoint to Azure
This deployment might take up to 15 minutes, depending on whether the underlying
ml_client.online_endpoints.begin_create_or_update(endpoint) ```
-# [ARM template](#tab/arm)
-
-1. The following example demonstrates using the template to create an online endpoint:
-
- :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_endpoint":::
-
-1. After the endpoint has been created, the following example demonstrates how to deploy the model to the endpoint:
-
- :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_deployment":::
- > [!TIP]
for endpoint in ml_client.online_endpoints.list():
print(endpoint.name) ```
-The method returns list (iterator) of `ManagedOnlineEndpoint` entities. You can get other information by specifying [parameters](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint.md#parameters).
+The method returns a list (iterator) of `ManagedOnlineEndpoint` entities. You can get other information by specifying [parameters](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint#parameters).
For example, output the list of endpoints like a table:
for endpoint in ml_client.online_endpoints.list():
print(f"{endpoint.kind}\t{endpoint.location}\t{endpoint.name}") ```
-# [ARM template](#tab/arm)
-
-The `show` command contains information in `provisioning_status` for endpoint and deployment:
--
-You can list all the endpoints in the workspace in a table format by using the `list` command:
-
-```azurecli
-az ml online-endpoint list --output table
-```
- ### Check the status of the online deployment
ml_client.online_deployments.get_logs(
name="blue", endpoint_name=online_endpoint_name, lines=50, container_type="storage-initializer" ) ```-
-# [ARM template](#tab/arm)
--
-By default, logs are pulled from inference-server. To see the logs from storage-initializer (it mounts assets like model and code to the container), add the `--container storage-initializer` flag.
- For more information on deployment logs, see [Get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs).
ml_client.online_endpoints.invoke(
) ```
-# [ARM template](#tab/arm)
-
-You can use either the `invoke` command or a REST client of your choice to invoke the endpoint and score some data:
--
-The following example shows how to get the key used to authenticate to the endpoint:
-
-> [!TIP]
-> You can control which Azure Active Directory security principals can get the authentication key by assigning them to a custom role that allows `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/token/action` and `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/listkeys/action`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
-- ### (Optional) Update the deployment
To understand how `begin_create_or_update` works:
The `begin_create_or_update` method also works with local deployments. Use the same method with the `local=True` flag.
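For example, a sketch of a local update, assuming `blue_deployment` is the `ManagedOnlineDeployment` object defined earlier:

```python
# Re-create or update the deployment in the local Docker environment.
ml_client.online_deployments.begin_create_or_update(
    deployment=blue_deployment, local=True
)
```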
-# [ARM template](#tab/arm)
-
-There currently is not an option to update the deployment using an ARM template.
- > [!Note]
If you aren't going use the deployment, you should delete it by running the foll
ml_client.online_endpoints.begin_delete(name=online_endpoint_name) ```
-# [ARM template](#tab/arm)
-- ## Next steps
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
This section shows how you can define a Triton deployment to deploy to a managed
endpoint_name = f"endpoint-{random.randint(0, 10000)}" ```
-1. We use these details above in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. Check the [configuration notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace.
+1. We use these details above in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. Check the [configuration notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace.
```python from azure.ai.ml import MLClient
Once your deployment completes, use the following command to make a scoring requ
keys = ml_client.online_endpoints.list_keys(endpoint_name) auth_key = keys.primary_key
-1. The following scoring code uses the [Triton Inference Server Client](https://github.com/triton-inference-server/client) to submit the image of a peacock to the endpoint. This script is available in the companion notebook to this example - [Deploy a model to online endpoints using Triton](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/triton/single-model/online-endpoints-triton.ipynb).
+1. The following scoring code uses the [Triton Inference Server Client](https://github.com/triton-inference-server/client) to submit the image of a peacock to the endpoint. This script is available in the companion notebook to this example - [Deploy a model to online endpoints using Triton](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/triton/single-model/online-endpoints-triton.ipynb).
```python # Test the blue deployment with some sample data
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
To request an exception from the Azure Machine Learning product team, use the st
| Steps in a pipeline | 30,000 | | Workspaces per resource group | 800 |
-### Azure Machine Learning integration with Synapse
-Synapse spark clusters have a default limit of 12-2000, depending on your subscription offer type. This limit can be increased by submitting a support ticket and requesting for quota increase under the "Machine Learning Service: Spark vCore Quota" category
-
### Virtual machines Each Azure subscription has a limit on the number of virtual machines across all services. Virtual machine cores have a regional total limit and a regional limit per size series. Both limits are separately enforced.
machine-learning How To Secure Kubernetes Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-kubernetes-online-endpoint.md
Title: Configure secure online endpoint with TLS/SSL
-description: Learn about how to use TLS/SSL to configure secure Kubernetes online endpoint
+ Title: Configure a secure online endpoint with TLS/SSL
+description: Learn about how to use TLS/SSL to configure a secure Kubernetes online endpoint.
-# Configure secure online endpoint with TLS/SSL
+# Configure a secure online endpoint with TLS/SSL
This article shows you how to secure a Kubernetes online endpoint that's created through Azure Machine Learning.
-You use [HTTPS](https://en.wikipedia.org/wiki/HTTPS) to restrict access to online endpoints and secure the data that clients submit. HTTPS helps secure communications between a client and an online endpoint by encrypting communications between the two. Encryption uses [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security). TLS is sometimes still referred to as *Secure Sockets Layer* (SSL), which was the predecessor of TLS.
+You use [HTTPS](https://en.wikipedia.org/wiki/HTTPS) to restrict access to online endpoints and help secure the data that clients submit. HTTPS encrypts communications between a client and an online endpoint by using [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security). TLS is sometimes still called *Secure Sockets Layer* (SSL), which was the predecessor of TLS.
> [!TIP]
-> * Specifically, Kubernetes online endpoints support TLS version 1.2 for AKS and Arc Kubernetes.
-> * TLS version 1.3 for Azure Machine Learning Kubernetes Inference is unsupported.
+> * Specifically, Kubernetes online endpoints support TLS version 1.2 for Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes.
+> * TLS version 1.3 for Azure Machine Learning Kubernetes inference is unsupported.
TLS and SSL both rely on *digital certificates*, which help with encryption and identity verification. For more information on how digital certificates work, see the Wikipedia topic [Public key infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure). > [!WARNING] > If you don't use HTTPS for your online endpoints, data that's sent to and from the service might be visible to others on the internet. >
-> HTTPS also enables the client to verify the authenticity of the server that it's connecting to. This feature protects clients against [**man-in-the-middle**](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) attacks.
+> HTTPS also enables the client to verify the authenticity of the server that it's connecting to. This feature protects clients against [man-in-the-middle](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) attacks.
This is the general process to secure an online endpoint:
-1. Get a [domain name.](#get-a-domain-name)
+1. [Get a domain name](#get-a-domain-name).
-1. Get a [digital certificate.](#get-a-tlsssl-certificate)
+1. [Get a digital certificate](#get-a-tlsssl-certificate).
-1. [Configure TLS/SSL in AzureML Extension.](#configure-tlsssl-in-azureml-extension)
+1. [Configure TLS/SSL in the Azure Machine Learning extension](#configure-tlsssl-in-the-azure-machine-learning-extension).
-1. [Update your DNS with FQDN to point to the online endpoint.](#update-your-dns-with-fqdn)
+1. [Update your DNS with a fully qualified domain name (FQDN) to point to the online endpoint](#update-your-dns-with-an-fqdn).
> [!IMPORTANT]
-> You need to purchase your own certificate to get a domain name or TLS/SSL certificate, and then configure them in AzureML Extension. For more detailed information, see the following sections of this article.
+> You need to purchase your own certificate to get a domain name or TLS/SSL certificate, and then configure them in the Azure Machine Learning extension. For more detailed information, see the following sections of this article.
## Get a domain name
-If you don't already own a domain name, purchase one from a *domain name registrar*. The process and price differ among registrars. The registrar provides tools to manage the domain name. You use these tools to map a fully qualified domain name (FQDN) (such as `www.contoso.com`) to the IP address that hosts your online endpoint.
+If you don't already own a domain name, purchase one from a *domain name registrar*. The process and price differ among registrars. The registrar provides tools to manage the domain name. You use these tools to map an FQDN (such as `www.contoso.com`) to the IP address that hosts your online endpoint.
-For more information on how to get the IP address of your online endpoints, see the [Update your DNS with FQDN](#update-your-dns-with-fqdn) section of this article.
+For more information on how to get the IP address of your online endpoints, see the [Update your DNS with an FQDN](#update-your-dns-with-an-fqdn) section of this article.
## Get a TLS/SSL certificate
-There are many ways to get an TLS/SSL certificate (digital certificate). The most common is to purchase one from a *certificate authority* (CA). Regardless of where you get the certificate, you need the following files:
+There are many ways to get a TLS/SSL certificate (digital certificate). The most common is to purchase one from a *certificate authority*. Regardless of where you get the certificate, you need the following files:
-- A **certificate**. The certificate must contain the full certificate chain, and it must be "PEM-encoded."-- A **key**. The key must also be PEM-encoded.
+- A certificate that contains the full certificate chain and is PEM encoded
+- A key that's PEM encoded
> [!NOTE]
-> SSL Key in PEM file with pass phrase protected isn't supported.
+> An SSL key in a PEM file with passphrase protection is not supported.
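As a quick sanity check before deployment (a minimal sketch, not a substitute for proper certificate validation), you can confirm that both files look PEM encoded:

```python
from pathlib import Path

def looks_pem_encoded(path: str) -> bool:
    # PEM files are base64 text wrapped in BEGIN/END markers;
    # DER files are binary and lack these markers.
    text = Path(path).read_text(errors="ignore")
    return "-----BEGIN" in text and "-----END" in text

for file in ("cert.pem", "key.pem"):
    status = "PEM-like" if looks_pem_encoded(file) else "not PEM; convert it with OpenSSL"
    print(f"{file}: {status}")
```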
When you request a certificate, you must provide the FQDN of the address that you plan to use for the online endpoint (for example, `www.contoso.com`). The address that's stamped into the certificate and the address that the clients use are compared to verify the identity of the online endpoint. If those addresses don't match, the client gets an error message.
-For more information on how to configure IP banding with FQDN, see the [Update your DNS with FQDN](#update-your-dns-with-fqdn) section of this article.
+For more information on how to configure IP banding with an FQDN, see the [Update your DNS with an FQDN](#update-your-dns-with-an-fqdn) section of this article.
> [!TIP]
-> If the certificate authority can't provide the certificate and key as PEM-encoded files, you can use a utility such as [**OpenSSL**](https://www.openssl.org/) to change the format.
+> If the certificate authority can't provide the certificate and key as PEM-encoded files, you can use a tool like [OpenSSL](https://www.openssl.org/) to change the format.
> [!WARNING]
-> Use ***self-signed*** certificates only for development. Don't use them in production environments. Self-signed certificates can cause problems in your client applications. For more information, see the documentation for the network libraries that your client application uses.
+> Use *self-signed* certificates only for development. Don't use them in production environments. Self-signed certificates can cause problems in your client applications. For more information, see the documentation for the network libraries that your client application uses.
-## Configure TLS/SSL in AzureML Extension
+## Configure TLS/SSL in the Azure Machine Learning extension
-For a Kubernetes online endpoint which is set to use inference HTTPS for secure connections, you can enable TLS termination with deployment configuration settings when you [deploy the AzureML extension](how-to-deploy-managed-online-endpoints.md) in an Kubernetes cluster.
+For a Kubernetes online endpoint that's set to use inference HTTPS for secure connections, you can enable TLS termination with deployment configuration settings when you [deploy the Azure Machine Learning extension](how-to-deploy-managed-online-endpoints.md) in a Kubernetes cluster.
-At AzureML extension deployment time, the config `allowInsecureConnections` by default will be `False`, and you would need to specify either `sslSecret` config setting or combination of `sslKeyPemFile` and `sslCertPemFile` config-protected settings to ensure successful extension deployment, otherwise you can set `allowInsecureConnections=True` to support HTTP and disable TLS termination.
+At deployment time for the Azure Machine Learning extension, the `allowInsecureConnections` configuration setting is `False` by default. To ensure successful extension deployment, you need to specify either the `sslSecret` configuration setting or a combination of `sslKeyPemFile` and `sslCertPemFile` configuration-protected settings. Otherwise, you can set `allowInsecureConnections=True` to support HTTP and disable TLS termination.
> [!NOTE]
-> To support HTTPS online endpoint, `allowInsecureConnections` must be set to `False`.
+> To support the HTTPS online endpoint, `allowInsecureConnections` must be set to `False`.
-To enable an HTTPS endpoint for real-time inference, you need to provide both PEM-encoded TLS/SSL certificate and key. There are two ways to specify the certificate and key at AzureML extension deployment time:
-1. Specify `sslSecret` config setting.
-1. Specify combination of `sslCertPemFile` and `slKeyPemFile` config-protected settings.
+To enable an HTTPS endpoint for real-time inference, you need to provide a PEM-encoded TLS/SSL certificate and key. There are two ways to specify the certificate and key at deployment time for the Azure Machine Learning extension:
+
+- Specify the `sslSecret` configuration setting.
+- Specify a combination of `sslCertPemFile` and `sslKeyPemFile` configuration-protected settings.
### Configure sslSecret The best practice is to save the certificate and key in a Kubernetes secret in the `azureml` namespace.
-To configure `sslSecret`, you need to save a Kubernetes Secret in your Kubernetes cluster in `azureml` namespace to store **cert.pem** (PEM-encoded TLS/SSL cert) and **key.pem** (PEM-encoded TLS/SSL key).
+To configure `sslSecret`, you need to save a Kubernetes secret in your Kubernetes cluster in the `azureml` namespace to store *cert.pem* (PEM-encoded TLS/SSL certificate) and *key.pem* (PEM-encoded TLS/SSL key).
-Below is a sample YAML definition of an TLS/SSL secret:
+The following code is a sample YAML definition of a TLS/SSL secret:
``` apiVersion: v1
metadata:
type: Opaque ```
-For more information on configuring [an sslSecret](reference-kubernetes.md#sample-yaml-definition-of-kubernetes-secret-for-tlsssl).
+For more information on configuring `sslSecret`, see [Reference for configuring a Kubernetes cluster for Azure Machine Learning](reference-kubernetes.md#sample-yaml-definition-of-kubernetes-secret-for-tlsssl).
-After saving the secret in your cluster, you can specify the sslSecret to be the name of this Kubernetes secret with the following CLI command (this command will work only if you are using AKS):
+After you save the secret in your cluster, you can use the following Azure CLI command to specify `sslSecret` as the name of this Kubernetes secret. (This command will work only if you're using AKS.)
-<!--CLI command-->
```azurecli az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config inferenceRouterServiceType=LoadBalancer sslSecret=<Kubernetes secret name> sslCname=<ssl cname> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster ``` ### Configure sslCertPemFile and sslKeyPemFile
-You can specify the `sslCertPemFile` config to be the path to TLS/SSL certificate file(PEM-encoded), and the `sslKeyPemFile` config to be the path to TLS/SSL key file (PEM-encoded).
+You can specify the `sslCertPemFile` configuration setting to be the path to the PEM-encoded TLS/SSL certificate file, and the `sslKeyPemFile` configuration setting to be the path to the PEM-encoded TLS/SSL key file.
-The following example (assuming you are using AKS) demonstrates how to use Azure CLI to specify .pem files to AzureML extension that uses a TLS/SSL certificate that you purchased:
+The following example demonstrates how to use the Azure CLI to specify PEM files to the Azure Machine Learning extension that uses a TLS/SSL certificate that you purchased. The example assumes that you're using AKS.
-<!--CLI command-->
```azurecli az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableInference=True inferenceRouterServiceType=LoadBalancer sslCname=<ssl cname> --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster ``` > [!NOTE]
-> 1. The PEM file with pass phrase protection is not supported.
-> 1. Both `sslCertPemFIle` and `sslKeyPemFIle` are using config-protected parameter, and do not configure sslSecret and sslCertPemFile/sslKeyPemFile at the same time.
-
+> - A PEM file with passphrase protection is not supported.
+> - Both `sslCertPemFile` and `sslKeyPemFile` use configuration-protected parameters. Don't configure `sslSecret` and `sslCertPemFile`/`sslKeyPemFile` at the same time.
-## Update your DNS with FQDN
+## Update your DNS with an FQDN
-For model deployment on Kubernetes online endpoint with custom certificate, you must update your DNS record to point to the IP address of the online endpoint. This IP address is provided by AzureML inference router service(`azureml-fe`), for more information about `azureml-fe`, see the [Managed AzureML inference router](how-to-kubernetes-inference-routing-azureml-fe.md).
+For model deployment on a Kubernetes online endpoint with a custom certificate, you must update your DNS record to point to the IP address of the online endpoint. The Azure Machine Learning inference router service (`azureml-fe`) provides this IP address. For more information about `azureml-fe`, see [Managed Azure Machine Learning inference router](how-to-kubernetes-inference-routing-azureml-fe.md).
-You can follow following steps to update DNS record for your custom domain name:
+To update the DNS record for your custom domain name:
-1. Get online endpoint IP address from scoring URI, which is usually in the format of `http://104.214.29.152:80/api/v1/service/<service-name>/score`. In this example, the IP address is 104.214.29.152.
+1. Get the online endpoint's IP address from the scoring URI, which is usually in the format of `http://104.214.29.152:80/api/v1/service/<service-name>/score`. In this example, the IP address is 104.214.29.152.
- <!-- where to find out your IP address-->
- Once you have configured your custom domain name, the IP address in scoring URI would be replaced by that specific domain name. For Kubernetes clusters that using `LoadBalancer` as Inference Router Service, the `azureml-fe` will be exposed externally using a cloud provider's load balancer and TLS/SSL termination, and the IP address of Kubernetes online endpoint is the external IP of the `azureml-fe` service deployed in the cluster.
+ After you configure your custom domain name, it replaces the IP address in the scoring URI. For Kubernetes clusters that use `LoadBalancer` as the inference router service, `azureml-fe` is exposed externally through a cloud provider's load balancer and TLS/SSL termination. The IP address of the Kubernetes online endpoint is the external IP address of the `azureml-fe` service deployed in the cluster.
- If you use AKS, you can easily get the IP address from [Azure portal](https://portal.azure.com/#home). Go to your AKS resource page, navigate to **Service and ingresses** and then find the **azureml-fe** service under the **azuerml** namespace, then you can find the IP address in the **External IP** column.
+ If you use AKS, you can get the IP address from the [Azure portal](https://portal.azure.com/#home). Go to your AKS resource page, go to **Services and ingresses**, and then find the **azureml-fe** service under the **azureml** namespace. Then you can find the IP address in the **External IP** column.
- :::image type="content" source="media/how-to-secure-kubernetes-online-endpoint/get-ip-address-from-aks-ui.png" alt-text="Screenshot of adding new extension to the Arc-enabled Kubernetes cluster from Azure portal.":::
+ :::image type="content" source="media/how-to-secure-kubernetes-online-endpoint/get-ip-address-from-aks-ui.png" alt-text="Screenshot of adding a new extension to the Azure Arc-enabled Kubernetes cluster from the Azure portal.":::
- In addition, you can run this Kubernetes command `kubectl describe svc azureml-fe -n azureml` in your cluster to get the IP address from the **LoadBalancer Ingress** parameter in the output.
+ In addition, you can run the Kubernetes command `kubectl describe svc azureml-fe -n azureml` in your cluster to get the IP address from the `LoadBalancer Ingress` parameter in the output.
> [!NOTE]
- > For Kubernetes clusters that using either `nodePort` or `clusterIP` as Inference Router Service, you need to set up your own load balancing solution and TLS/SSL termination for `azureml-fe`, and get the IP address of the `azureml-fe` service in cluster scope.
-
+ > For Kubernetes clusters that use either `nodePort` or `clusterIP` as the inference router service, you need to set up your own load-balancing solution and TLS/SSL termination for `azureml-fe`. You also need to get the IP address of the `azureml-fe` service in the cluster scope.
1. Use the tools from your domain name registrar to update the DNS record for your domain name. The record maps the FQDN (for example, `www.contoso.com`) to the IP address. The record must point to the IP address of the online endpoint. > [!TIP]
- > Microsoft does not responsible for updating the DNS for your custom DNS name or certificate. You must update it with your domain name registrar.
+ > Microsoft is not responsible for updating the DNS for your custom DNS name or certificate. You must update it with your domain name registrar.
+1. After the DNS record update, you can validate DNS resolution by using the `nslookup custom-domain-name` command. If the DNS record is correctly updated, the custom domain name points to the IP address of the online endpoint. A scripted version of this check is sketched after these steps.
-1. After DNS record update, you can validate DNS resolution using `nslookup custom-domain-name` command. If DNS record is correctly updated, the custom domain name will point to the IP address of online endpoint.
-
- There can be a delay of minutes or hours before clients can resolve the domain name, depending on the registrar and the "time to live" (TTL) that's configured for the domain name.
+ There can be a delay of minutes or hours before clients can resolve the domain name, depending on the registrar and the time to live (TTL) that's configured for the domain name.
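For convenience, a small script can combine the IP-extraction and DNS-validation steps above (a sketch; the scoring URI and FQDN below are placeholders to replace with your own values):

```python
import socket
from urllib.parse import urlparse

# Hypothetical scoring URI; replace it with your endpoint's actual URI.
scoring_uri = "http://104.214.29.152:80/api/v1/service/my-service/score"
endpoint_ip = urlparse(scoring_uri).hostname
print(f"Endpoint IP: {endpoint_ip}")

# After the DNS record update, confirm that the custom domain resolves to that IP.
fqdn = "www.contoso.com"  # replace with your FQDN
resolved_ip = socket.gethostbyname(fqdn)
print(f"{fqdn} resolves to {resolved_ip}: "
      f"{'match' if resolved_ip == endpoint_ip else 'mismatch'}")
```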
For more information on DNS resolution with Azure Machine Learning, see [How to use your workspace with a custom DNS server](how-to-custom-dns.md). - ## Update the TLS/SSL certificate
-TLS/SSL certificates expire and must be renewed. Typically this happens every year. Use the information in the following steps to update and renew your certificate for models deployed to Kubernetes (AKS and Arc Kubernetes).
+TLS/SSL certificates expire and must be renewed. Typically, this happens every year. Use the information in the following steps to update and renew your certificate for models deployed to Kubernetes (AKS and Azure Arc-enabled Kubernetes):
-1. Use the documentation provided by the certificate authority to renew the certificate. This process creates new certificate files.
+1. Use the documentation from the certificate authority to renew the certificate. This process creates new certificate files.
-1. Update your AzureML extension and specify the new certificate files with this az-k8s extension update command:
+1. Update your Azure Machine Learning extension and specify the new certificate files by using the `az k8s-extension update` command.
- <!--Update sslSecret-->
- If you used a Kubernetes Secret to configure TLS/SSL before, you need to first update the Kubernetes Secret with new `cert.pem` and `key.pem` configuration in your Kubernetes cluster, and then run the extension update command to update the certificate:
+ If you used a Kubernetes secret to configure TLS/SSL before, you need to first update the Kubernetes secret with the new *cert.pem* and *key.pem* configuration in your Kubernetes cluster. Then run the extension update command to update the certificate:
```azurecli az k8s-extension update --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config inferenceRouterServiceType=LoadBalancer sslSecret=<Kubernetes secret name> sslCname=<ssl cname> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster ```
- <!--CLI command-->
- If you directly configured the PEM files in extension deployment command before, you need to run extension update command with specifying the new PEM files path,
+
+ If you directly configured the PEM files in the extension deployment command before, you need to run the extension update command and specify the new PEM file paths:
```azurecli az k8s-extension update --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
TLS/SSL certificates expire and must be renewed. Typically this happens every ye
## Disable TLS
-To disable TLS for a model deployed to Kubernetes, update the AzureML extension with `allowInsercureconnection` to be `True`, and then remove sslCname config, also remove sslSecret or sslPem config settings, run CLI command in your Kubernetes cluster (assuming you are using AKS), then perform an update:
+To disable TLS for a model deployed to Kubernetes:
-<!--CLI command-->
-```azurecli
- az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableInference=True inferenceRouterServiceType=LoadBalancer allowInsercureconnection=True --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
-```
+1. Update the Azure Machine Learning extension with `allowInsecureConnections` set to `True`.
+1. Remove the `sslCname` configuration setting, along with the `sslSecret` or `sslCertPemFile`/`sslKeyPemFile` configuration settings.
+1. Run the following Azure CLI command in your Kubernetes cluster, and then perform an update. This command assumes that you're using AKS.
-> [!WARNING]
-> By default, AzureML extension deployment expects config settings for HTTPS support. HTTP support is only recommended for development or testing purposes, and it is conveniently provided through config setting `allowInsecureConnections=True`.
+ ```azurecli
+ az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+ ```
+> [!WARNING]
+> By default, the Azure Machine Learning extension deployment expects configuration settings for HTTPS support. We recommend HTTP support only for development or testing purposes. The `allowInsecureConnections=True` configuration setting provides HTTP support.
## Next steps Learn how to: - [Consume a machine learning model deployed as an online endpoint](how-to-deploy-managed-online-endpoints.md#invoke-the-local-endpoint-to-score-data-by-using-your-model)-- [How to secure Kubernetes inferencing environment](how-to-secure-kubernetes-inferencing-environment.md)-- [How to use your workspace with a custom DNS server](how-to-custom-dns.md)
+- [Secure a Kubernetes inferencing environment](how-to-secure-kubernetes-inferencing-environment.md)
+- [Use your workspace with a custom DNS server](how-to-custom-dns.md)
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
In this article, we've provided the training script *train_iris.py*. In practice
> - downloads and extracts the training data using `iris = datasets.load_iris()`; and > - trains a model, then saves and registers it.
-To use and access your own data, see [how to train with datasets](v1/how-to-train-with-datasets.md) to make data available during training.
+To use and access your own data, see [how to read and write data in a job](how-to-read-write-data-v2.md) to make data available during training.
To use the training script, first create a directory where you will store the file.
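For orientation, here's a condensed sketch of the pattern that *train_iris.py* follows (assuming scikit-learn and joblib; the file in the samples repository is the authoritative version):

```python
import joblib
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Download and extract the training data.
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42
)

# Train a model and report its accuracy.
model = SVC(kernel="linear")
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Save the trained model so the job can register it afterward.
joblib.dump(model, "model.joblib")
```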
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | China East 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | China North 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
-| China North 3 |:heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| China North 3 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| East Asia (Hong Kong) | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | East US 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
network-watcher Network Insights Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-topology.md
Previously updated : 09/09/2022 Last updated : 10/26/2022
To view a topology, follow these steps:
5. In the **Networks** screen that appears, select **Topology**. 6. Select **Scope** to define the scope of the Topology. 7. In the **Select scope** pane, select the list of **Subscriptions**, **Resource groups**, and **Locations** of the resources for which you want to view the topology. Select **Save**.
+
+ :::image type="content" source="./media/network-insights-topology/topology-scope-inline.png" alt-text="Screenshot of selecting the scope of the topology." lightbox="./media/network-insights-topology/topology-scope-expanded.png":::
The duration to render the topology may vary depending on the number of subscriptions selected. 8. Select the [**Resource type**](#supported-resource-types) that you want to include in the topology and select **Apply**. The topology appears, containing the resources according to the specified scope and resource type.
+ :::image type="content" source="./media/network-insights-topology/topology-start-screen-inline.png" alt-text="Screenshot of the generated resource topology." lightbox="./media/network-insights-topology/topology-start-screen-expanded.png":::
+ Each edge of the topology represents an association between each of the resources. In the topology, similar types of resources are grouped together. ## Add regions
To add a region, follow these steps:
1. Hover on **Regions** under **Azure Regions**. 2. From the list of **Hidden Resources**, select the regions to be added and select **Add to View**.
+ :::image type="content" source="./media/network-insights-topology/add-resources-inline.png" alt-text="Screenshot of the add resources and regions pane." lightbox="./media/network-insights-topology/add-resources-expanded.png":::
+ You can view the resources in the added region as part of the topology. ## Drilldown resources To drill down to the basic unit of each network, select the plus sign on each resource. When you hover on the resource, you can see the details of that resource. Selecting a resource displays a pane on the right with a summary of the resource.
+ :::image type="content" source="./media/network-insights-topology/resource-details-inline.png" alt-text="Screenshot of the details of each resource." lightbox="./media/network-insights-topology/resource-details-expanded.png":::
+
+ Drilling down into Azure resources such as Application Gateways and Firewalls displays the resource view diagram of that resource.
+ :::image type="content" source="./media/network-insights-topology/drill-down-inline.png" alt-text="Screenshot of drilling down a resource." lightbox="./media/network-insights-topology/drill-down-expanded.png":::
+ ## Integration with diagnostic tools
-When you drill down to a VM within the topology, the summary pane contains the **Insights + Diagnostics** section from where you can find the next hop. Follow these steps to find the next hop.
+When you drill down to a VM within the topology, the summary pane contains the **Insights + Diagnostics** section, where you can find the next hop.
+
+ :::image type="content" source="./media/network-insights-topology/resource-summary-inline.png" alt-text="Screenshot of the summary and insights of each resource." lightbox="./media/network-insights-topology/resource-summary-expanded.png":::
+
+Follow these steps to find the next hop.
1. Select **Next hop** and enter the destination IP address. 2. Select **Check Next Hop**. The [Next hop](network-watcher-next-hop-overview.md) capability checks whether the destination IP address is reachable from the source VM.
+ :::image type="content" source="./media/network-insights-topology/next-hop-inline.png" alt-text="Screenshot of the next hop option in the summary and insights tab." lightbox="./media/network-insights-topology/next-hop-expanded.png":::
## Next steps [Learn more](/azure/network-watcher/connection-monitor-overview) about connectivity-related metrics.
network-watcher View Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-network-topology.md
# View the topology of an Azure virtual network > [!IMPORTANT]
-> Try the new [Topology](network-insights-topology.md) experience which offers visualization of Azure resources for ease of inventory management and monitoring network at scale. Leverage it to visualize resources and their dependencies across subscriptions, regions and locations. [Click](https://ms.portal.azure.com/#view/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/~/overview) to navigate to the experience.
+> Try the new [Topology (Preview)](network-insights-topology.md) experience, which offers visualization of Azure resources for ease of inventory management and monitoring your network at scale. Use it to visualize resources and their dependencies across subscriptions, regions, and locations. [Select this link](https://ms.portal.azure.com/#view/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/~/overview) to navigate to the experience.
In this article, you learn how to view resources in a Microsoft Azure virtual network, and the relationships between the resources. For example, a virtual network contains subnets. Subnets contain resources, such as Azure Virtual Machines (VM). VMs have one or more network interfaces. Each subnet can have a network security group and a route table associated to it. The topology capability of Azure Network Watcher enables you to view all of the resources in a virtual network, the resources associated to resources in a virtual network, and the relationships between the resources.
openshift Howto Add Update Pull Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-add-update-pull-secret.md
First, modify the Samples Operator configuration file. Then, you can run the fol
oc edit configs.samples.operator.openshift.io/cluster -o yaml ```
-Change the `spec.architectures.managementState` and `status.architecture.managementState` values from `Removed` to `Managed`.
+Change the `spec.architectures.managementState` value from `Removed` to `Managed`.
The following YAML snippet shows only the relevant sections of the edited YAML file:
spec:
architectures: - x86_64 managementState: Managed
-status:
- architectures:
-
- ...
-
- managementState: Managed
- version: 4.3.27
``` Second, run the following command to edit the Operator Hub configuration file:
Second, run the following command to edit the Operator Hub configuration file:
oc edit operatorhub cluster -o yaml ```
-Change the `Spec.Sources.Disabled` and `Status.Sources.Disabled` values from `true` to `false` for any sources you want enabled.
+Change the `Spec.Sources.Disabled` value from `true` to `false` for any sources you want enabled.
The following YAML snippet shows only the relevant sections of the edited YAML file:
Spec:
Name: certified-operators Disabled: false Name: redhat-operators
-Status:
- Sources:
- Disabled: false
- Name: certified-operators
- Status: Success
- Disabled: false
- Name: community-operators
- Status: Success
- Disabled: false
- Name: redhat-operators
- Status: Success
-Events: <none>
``` Save the file to apply your edits.
openshift Howto Create Private Cluster 4X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md
If you choose to install and use the CLI locally, this tutorial requires that yo
az provider register -n Microsoft.RedHatOpenShift --wait ```
-1. Register the `Microsoft.Compute` resource provider:
+1. Register the `Microsoft.Compute` resource provider (if you haven't already):
```azurecli-interactive az provider register -n Microsoft.Compute --wait ```
-1. Register the `Microsoft.Storage` resource provider:
+1. Register the `Microsoft.Network` resource provider (if you haven't already):
+
+ ```azurecli-interactive
+ az provider register -n Microsoft.Network --wait
+ ```
+
+1. Register the `Microsoft.Storage` resource provider (if you haven't already):
```azurecli-interactive az provider register -n Microsoft.Storage --wait
After executing the `az aro create` command, it normally takes about 35 minutes
>[!IMPORTANT] > If you choose to specify a custom domain, for example **foo.example.com**, the OpenShift console will be available at a URL such as `https://console-openshift-console.apps.foo.example.com`, instead of the built-in domain `https://console-openshift-console.apps.<random>.<location>.aroapp.io`. >
-> By default OpenShift uses self-signed certificates for all of the routes created on `*.apps.<random>.<location>.aroapp.io`. If you choose Custom DNS, after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom CA for your ingress controller](https://docs.openshift.com/container-platform/4.3/authentication/certificates/replacing-default-ingress-certificate.html) and [custom CA for your API server](https://docs.openshift.com/container-platform/4.3/authentication/certificates/api-server.html).
+> By default OpenShift uses self-signed certificates for all of the routes created on `*.apps.<random>.<location>.aroapp.io`. If you choose Custom DNS, after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom certificate for your ingress controller](https://docs.openshift.com/container-platform/4.8/security/certificates/replacing-default-ingress-certificate.html) and [custom certificate for your API server](https://docs.openshift.com/container-platform/4.8/security/certificates/api-server.html).
## Connect to the private cluster
openshift Howto Service Principal Credential Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-service-principal-credential-rotation.md
To check the expiration date of service principal credentials run the following:
# Service principal expiry in ISO 8601 UTC format SP_ID=$(az aro show --name MyManagedCluster --resource-group MyResourceGroup \ --query servicePrincipalProfile.clientId -o tsv)
-az ad sp credential list --id $SP_ID --query "[].endDate" -o tsv
+az ad app credential list --id $SP_ID --query "[].endDate" -o tsv
``` If the service principal credentials are expired please update using one of the two credential rotation methods.
openshift Quickstart Openshift Arm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md
param aadClientSecret string
@description('The ObjectID of the Resource Provider Service Principal') param rpObjectId string
-var contribRole = '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c'
+var contributorRoleDefinitionId = resourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')
+var resourceGroupId = '/subscriptions/${subscription().subscriptionId}/resourceGroups/aro-${domain}-${location}'
+var masterSubnetId = resourceId('Microsoft.Network/virtualNetworks/subnets', clusterVnetName, 'master')
+var workerSubnetId = resourceId('Microsoft.Network/virtualNetworks/subnets', clusterVnetName, 'worker')
resource clusterVnetName_resource 'Microsoft.Network/virtualNetworks@2020-05-01' = { name: clusterVnetName
resource clusterVnetName_resource 'Microsoft.Network/virtualNetworks@2020-05-01'
} }
-resource clusterVnetName_Microsoft_Authorization_id_name_aadObjectId 'Microsoft.Network/virtualNetworks/providers/roleAssignments@2018-09-01-preview' = {
- name: '${clusterVnetName}/Microsoft.Authorization/${guid(resourceGroup().id, deployment().name, aadObjectId)}'
+resource clusterVnetName_Microsoft_Authorization_id_name_aadObjectId 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = {
+ name: guid(aadObjectId, clusterVnetName_resource.id, contributorRoleDefinitionId)
+ scope: clusterVnetName_resource
properties: {
- roleDefinitionId: contribRole
+ roleDefinitionId: contributorRoleDefinitionId
principalId: aadObjectId
+ principalType: 'ServicePrincipal'
}
- dependsOn: [
- clusterVnetName_resource
- ]
}
-resource clusterVnetName_Microsoft_Authorization_id_name_rpObjectId 'Microsoft.Network/virtualNetworks/providers/roleAssignments@2018-09-01-preview' = {
- name: '${clusterVnetName}/Microsoft.Authorization/${guid(resourceGroup().id, deployment().name, rpObjectId)}'
+resource clusterVnetName_Microsoft_Authorization_id_name_rpObjectId 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = {
+ name: guid(rpObjectId, clusterVnetName_resource.id, contributorRoleDefinitionId)
+ scope: clusterVnetName_resource
properties: {
- roleDefinitionId: contribRole
+ roleDefinitionId: contributorRoleDefinitionId
principalId: rpObjectId
+ principalType: 'ServicePrincipal'
}
- dependsOn: [
- clusterVnetName_resource
- ]
} resource clusterName_resource 'Microsoft.RedHatOpenShift/OpenShiftClusters@2020-04-30' = {
resource clusterName_resource 'Microsoft.RedHatOpenShift/OpenShiftClusters@2020-
properties: { clusterProfile: { domain: domain
- resourceGroupId: '/subscriptions/${subscription().subscriptionId}/resourceGroups/aro-${domain}-${location}'
+ resourceGroupId: resourceGroupId
pullSecret: pullSecret } networkProfile: {
resource clusterName_resource 'Microsoft.RedHatOpenShift/OpenShiftClusters@2020-
} masterProfile: { vmSize: masterVmSize
- subnetId: resourceId('Microsoft.Network/virtualNetworks/subnets', clusterVnetName, 'master')
+ subnetId: masterSubnetId
} workerProfiles: [ { name: 'worker' vmSize: workerVmSize diskSizeGB: workerVmDiskSize
- subnetId: resourceId('Microsoft.Network/virtualNetworks/subnets', clusterVnetName, 'worker')
+ subnetId: workerSubnetId
count: workerCount } ]
az group create --name $RESOURCEGROUP --location $LOCATION
- Azure CLI ```azurecli-interactive
-az ad sp create-for-rbac --name "sp-$RG_NAME-${RANDOM}" --role Contributor > app-service-principal.json
+az ad sp create-for-rbac --name "sp-$RG_NAME-${RANDOM}" > app-service-principal.json
SP_CLIENT_ID=$(jq -r '.appId' app-service-principal.json)
SP_CLIENT_SECRET=$(jq -r '.password' app-service-principal.json)
SP_OBJECT_ID=$(az ad sp show --id $SP_CLIENT_ID | jq -r '.id')
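With the service principal values captured, the template can be deployed in one step. This is a hedged sketch: the template file name (`azuredeploy.bicep`), the `aadClientId` parameter, and the `$ARO_RP_SP_OBJECT_ID` variable are illustrative assumptions; match them to your actual template and the Resource Provider object ID captured earlier.

```azurecli-interactive
# Hypothetical sketch: pass the collected identifiers to the Bicep template.
az deployment group create \
  --resource-group $RESOURCEGROUP \
  --template-file azuredeploy.bicep \
  --parameters aadClientId=$SP_CLIENT_ID \
               aadClientSecret=$SP_CLIENT_SECRET \
               aadObjectId=$SP_OBJECT_ID \
               rpObjectId=$ARO_RP_SP_OBJECT_ID
```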
az aro delete --resource-group $RESOURCEGROUP --name $ARO_CLUSTER_NAME
``` > [!TIP]
-> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Red Hat OpenShift (ARO) repo](https://github.com/Azure/OpenShift).
## Next steps
openshift Support Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-lifecycle.md
Last updated 06/16/2021
# Support lifecycle for Azure Red Hat OpenShift 4
-Red Hat releases minor versions of Red Hat OpenShift Container Platform (OCP) roughly every three months. These releases include new features and improvements. Patch releases are more frequent (typically weekly) and are only intended for critical bug fixes within a minor version. These patch releases may include fixes for security vulnerabilities or major bugs.
+Red Hat releases minor versions of Red Hat OpenShift Container Platform (OCP) roughly every four months. These releases include new features and improvements. Patch releases are more frequent (typically weekly) and are only intended for critical bug fixes within a minor version. These patch releases may include fixes for security vulnerabilities or major bugs.
Azure Red Hat OpenShift is built from specific releases of OCP. This article covers the versions of OCP that are supported for Azure Red Hat OpenShift and details about upgrades, deprecations, and support policy.
openshift Tutorial Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-create-cluster.md
Next, you will create a virtual network containing two empty subnets. If you hav
--resource-group $RESOURCEGROUP \ --vnet-name aro-vnet \ --name master-subnet \
- --address-prefixes 10.0.0.0/23 \
- --service-endpoints Microsoft.ContainerRegistry
+ --address-prefixes 10.0.0.0/23
``` 4. **Add an empty subnet for the worker nodes.**
Next, you will create a virtual network containing two empty subnets. If you hav
--resource-group $RESOURCEGROUP \ --vnet-name aro-vnet \ --name worker-subnet \
- --address-prefixes 10.0.2.0/23 \
- --service-endpoints Microsoft.ContainerRegistry
- ```
-
-5. **[Disable subnet private endpoint policies](../private-link/disable-private-link-service-network-policy.md) on the master subnet.** This is required for the service to be able to connect to and manage the cluster.
-
- ```azurecli-interactive
- az network vnet subnet update \
- --name master-subnet \
- --resource-group $RESOURCEGROUP \
- --vnet-name aro-vnet \
- --disable-private-link-service-network-policies true
+ --address-prefixes 10.0.2.0/23
```

## Create the cluster
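With both subnets in place, cluster creation follows the same CLI pattern. A minimal sketch, assuming the resource group and `aro-vnet` created above; the `$CLUSTER` variable is an assumption.

```azurecli-interactive
# Hypothetical sketch: create the cluster on the master and worker subnets.
az aro create \
  --resource-group $RESOURCEGROUP \
  --name $CLUSTER \
  --vnet aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet
```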
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Relay (Microsoft.Relay/namespaces) / namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
| Azure Event Grid (Microsoft.EventGrid/topics) / topic | privatelink.eventgrid.azure.net | eventgrid.azure.net |
| Azure Event Grid (Microsoft.EventGrid/domains) / domain | privatelink.eventgrid.azure.net | eventgrid.azure.net |
-| Azure Web Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.net | azurewebsites.net |
+| Azure Web Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.net </br> scm.privatelink.azurewebsites.net | azurewebsites.net </br> scm.azurewebsites.net |
| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.azureml.ms<br/>privatelink.notebooks.azure.net | api.azureml.ms<br/>notebooks.azure.net<br/>instances.azureml.ms<br/>aznbcontent.net |
| SignalR (Microsoft.SignalRService/SignalR) / signalR | privatelink.service.signalr.net | service.signalr.net |
| Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net |
| Cognitive Services (Microsoft.CognitiveServices/accounts) / account | privatelink.cognitiveservices.azure.com | cognitiveservices.azure.com |
-| Azure File Sync (Microsoft.StorageSync/storageSyncServices) / afs | privatelink.{region}.afs.azure.net | {region}.afs.azure.net |
+| Azure File Sync (Microsoft.StorageSync/storageSyncServices) / afs | {region}.privatelink.afs.azure.net | {region}.afs.azure.net |
| Azure Data Factory (Microsoft.DataFactory/factories) / dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net |
| Azure Data Factory (Microsoft.DataFactory/factories) / portal | privatelink.adf.azure.com | adf.azure.com |
| Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net |
For Azure services, use the recommended zone names as described in the following
| Microsoft Purview (Microsoft.Purview) / portal | privatelink.purviewstudio.azure.com | purview.azure.com |
| Azure Digital Twins (Microsoft.DigitalTwins) / digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
| Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.net | azurehdinsight.net |
-| Azure Arc (Microsoft.HybridCompute) / hybridcompute | privatelink.his.arc.azure.com<br />privatelink.guestconfiguration.azure.com | his.arc.azure.com<br/>guestconfiguration.azure.com |
+| Azure Arc (Microsoft.HybridCompute) / hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> kubernetesconfiguration.azure.com |
| Azure Media Services (Microsoft.Media) / keydelivery, liveevent, streamingendpoint | privatelink.media.azure.net | media.azure.net |
| Azure Data Explorer (Microsoft.Kusto) | privatelink.{region}.kusto.windows.net | {region}.kusto.windows.net |
| Azure Static Web Apps (Microsoft.Web/staticSites) / staticSites | privatelink.azurestaticapps.net </br> privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net </br> {partitionId}.azurestaticapps.net |
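To use a recommended zone name, the zone must exist and be linked to the virtual network that hosts the private endpoint. A hedged sketch using the Azure Web Apps zone; the link and network names are assumptions.

```azurecli-interactive
# Hypothetical sketch: create a recommended private DNS zone and link it to a virtual network.
az network private-dns zone create \
  --resource-group $RESOURCEGROUP \
  --name "privatelink.azurewebsites.net"

az network private-dns link vnet create \
  --resource-group $RESOURCEGROUP \
  --zone-name "privatelink.azurewebsites.net" \
  --name webapp-dns-link \
  --virtual-network my-vnet \
  --registration-enabled false
```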
purview Available Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/available-metadata.md
Title: Available metadata for Power BI in the Microsoft Purview governance portal description: This reference article provides a list of metadata that is available for a Power BI tenant in the Microsoft Purview governance portal.
purview Catalog Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-asset-details.md
Title: Asset details page in the Microsoft Purview Data Catalog description: View relevant information and take action on assets in the data catalog
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-lineage-user-guide.md
Title: Data Catalog lineage user guide description: This article provides an overview of the catalog lineage feature of Microsoft Purview. Last updated 09/20/2022
purview Concept Asset Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-asset-normalization.md
Title: Asset normalization description: Learn how Microsoft Purview prevents duplicate assets in your data map through asset normalization
purview Concept Best Practices Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-automation.md
Previously updated : 06/20/2022 Last updated : 11/03/2022 # Microsoft Purview automation best practices
This article provides a summary of the options available, and guidance on what t
## Tools
-| Tool Type | Tool | Scenario | Management | Catalog | Scanning |
-| | | | | | |
-**Resource Management** | <ul><li><a href="/azure/templates/" target="_blank">ARM Templates</a></li><li><a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/purview_account" target="_blank">Terraform</a></li></ul> | Infrastructure as Code | ✓ | | |
-**Command Line** | <ul><li><a href="/cli/azure/purview" target="_blank">Azure CLI</a></li><li><a href="/powershell/module/az.purview" target="_blank">Azure PowerShell</a></li></ul> | Interactive | ✓ | | |
-**API** | <ul><li><a href="/rest/api/purview/" target="_blank">REST API</a></li></ul> | On-Demand | ✓ | ✓ | ✓ |
-**Streaming** (Atlas Kafka) | <ul><li><a href="/azure/purview/manage-kafka-dotnet" target="_blank">Apache Kafka</a></li></ul> | Real Time | | ✓ | |
-**Streaming** (Diagnostic Logs) | <ul><li><a href="/azure/azure-monitor/essentials/diagnostic-settings?tabs=CMD#destinations" target="_blank">Event Hubs</a></li></ul> | Real Time | | | ✓ |
-**SDK** | <ul><li><a href="/dotnet/api/overview/azure" target="_blank">.NET</a></li><li><a href="/java/api/overview/azure" target="_blank">Java</a></li><li><a href="/javascript/api/overview/azure" target="_blank">JavaScript</a></li><li><a href="/python/api/overview/azure" target="_blank">Python</a></li></ul> | Custom Development | ✓ | ✓ | ✓ |
+| Tool Type | Tool | Scenario | Management | Catalog | Scanning | Logs |
+| | | | | | | |
+**Resource Management** | <ul><li><a href="/azure/templates/microsoft.purview/accounts" target="_blank">ARM Templates</a></li><li><a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/purview_account" target="_blank">Terraform</a></li></ul> | Infrastructure as Code | ✓ | | | |
+**Command Line** | <ul><li><a href="/cli/azure/service-page/azure%20purview" target="_blank">Azure CLI</a></li></ul> | Interactive | ✓ | | | |
+**Command Line** | <ul><li><a href="/powershell/module/az.purview" target="_blank">Azure PowerShell</a></li></ul> | Interactive | ✓ | ✓ | | |
+**API** | <ul><li><a href="/rest/api/purview/" target="_blank">REST API</a></li></ul> | On-Demand | ✓ | ✓ | ✓ | |
+**Streaming** (Apache Atlas) | <ul><li><a href="/azure/purview/manage-kafka-dotnet" target="_blank">Event Hubs</a></li></ul> | Real-Time | | ✓ | | |
+**Monitoring** | <ul><li><a href="/azure/azure-monitor/essentials/diagnostic-settings?tabs=CMD#destinations" target="_blank">Azure Monitor</a></li></ul> | Monitoring | | | | ✓ |
+**SDK** | <ul><li><a href="/dotnet/api/overview/azure/purviewresourceprovider" target="_blank">.NET</a></li><li><a href="/java/api/overview/azure/purview" target="_blank">Java</a></li><li><a href="/javascript/api/overview/azure/purview" target="_blank">JavaScript</a></li><li><a href="/python/api/overview/azure/purview" target="_blank">Python</a></li></ul> | Custom Development | ✓ | ✓ | ✓ | |
## Resource Management

[Azure Resource Manager](../azure-resource-manager/management/overview.md) is a deployment and management service that enables customers to create, update, and delete resources in Azure. When deploying Azure resources repeatedly, ARM templates can be used to ensure consistency; this approach is referred to as Infrastructure as Code.
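As a concrete illustration of the Infrastructure as Code approach, the sketch below deploys a `Microsoft.Purview/accounts` resource from a local ARM template; the template file name and parameter are assumptions.

```azurecli-interactive
# Hypothetical sketch: deploy a Purview account from an ARM template.
az deployment group create \
  --resource-group $RESOURCEGROUP \
  --template-file purview-account.json \
  --parameters accountName=$PURVIEW_ACCOUNT
```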
When to use?
* Required operations not available via Azure CLI, Azure PowerShell, or native client libraries.
* Custom application development or process automation.
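A hedged sketch of calling the REST API from the command line; the collections endpoint and api-version are assumptions, so confirm them against the REST API reference.

```azurecli-interactive
# Hypothetical sketch: acquire a token, then call the Purview data plane.
TOKEN=$(az account get-access-token --resource "https://purview.azure.net" --query accessToken -o tsv)
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://$PURVIEW_ACCOUNT.purview.azure.com/account/collections?api-version=2019-11-01-preview"
```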
-## Streaming (Atlas Kafka)
+## Streaming (Apache Atlas)
Each Microsoft Purview account can enable a fully managed event hub that is accessible via the Atlas Kafka endpoint found via the Azure portal > Microsoft Purview Account > Properties. To enable this Event Hubs namespace, you can follow these steps:
Once the namespace is enabled, Microsoft Purview events can be monitored by cons
When to use? * Applications or processes that need to publish or consume Apache Atlas events in real time.
-## Streaming (Diagnostic Logs)
+## Monitoring
Microsoft Purview can send platform logs and metrics via "Diagnostic settings" to one or more destinations (Log Analytics Workspace, Storage Account, or Azure Event Hubs). [Available metrics](./how-to-monitor-with-azure-monitor.md#available-metrics) include `Data Map Capacity Units`, `Data Map Storage Size`, `Scan Canceled`, `Scan Completed`, `Scan Failed`, and `Scan Time Taken`. Once configured, Microsoft Purview automatically sends these events to the destination as a JSON payload. From there, application subscribers that need to consume and act on these events can do so with the option of orchestrating downstream logic. When to use?
-* Applications or processes that need to consume diagnostic events in real time.
+* Applications or processes that need to consume diagnostic events.
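A hedged sketch of wiring up a diagnostic setting with the Azure CLI; the setting name and category group are assumptions, so list the available categories for your account first.

```azurecli-interactive
# Hypothetical sketch: route Microsoft Purview platform logs to a Log Analytics workspace.
PURVIEW_ID=$(az resource show --resource-group $RESOURCEGROUP --name $PURVIEW_ACCOUNT \
  --resource-type "Microsoft.Purview/accounts" --query id -o tsv)
az monitor diagnostic-settings create \
  --name purview-to-workspace \
  --resource $PURVIEW_ID \
  --workspace $LOG_ANALYTICS_WORKSPACE_ID \
  --logs '[{"categoryGroup":"allLogs","enabled":true}]'
```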
## SDK

Microsoft provides Azure SDKs to programmatically manage and interact with Azure services. Microsoft Purview client libraries are available in several languages (.NET, Java, JavaScript, and Python), designed to be consistent, approachable, and idiomatic.
+* [.NET](/dotnet/api/overview/azure/purviewresourceprovider)
+* [Java](/java/api/overview/azure/purview)
+* [JavaScript](/javascript/api/overview/azure/purview)
+* [Python](/python/api/overview/azure/purview)
+ When to use?
* Recommended over the REST API: the native client libraries (where available) follow the standard conventions of the target language and feel natural to the developer.
-**Azure SDK for .NET**
-* [Docs](/dotnet/api/azure.analytics.purview.account?view=azure-dotnet-preview&preserve-view=true) | [NuGet](https://www.nuget.org/packages/Azure.Analytics.Purview.Account/1.0.0-beta.1) Azure.Analytics.Purview.Account
-* [Docs](/dotnet/api/azure.analytics.purview.administration?view=azure-dotnet-preview&preserve-view=true) | [NuGet](https://www.nuget.org/packages/Azure.Analytics.Purview.Administration/1.0.0-beta.1) Azure.Analytics.Purview.Administration
-* [Docs](/dotnet/api/azure.analytics.purview.catalog?view=azure-dotnet-preview&preserve-view=true) | [NuGet](https://www.nuget.org/packages/Azure.Analytics.Purview.Catalog/1.0.0-beta.2) Azure.Analytics.Purview.Catalog
-* [Docs](/dotnet/api/azure.analytics.purview.scanning?view=azure-dotnet-preview&preserve-view=true) | [NuGet](https://www.nuget.org/packages/Azure.Analytics.Purview.Scanning/1.0.0-beta.2) Azure.Analytics.Purview.Scanning
-* [Docs](/dotnet/api/microsoft.azure.management.purview?view=azure-dotnet-preview&preserve-view=true) | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.Purview/) Microsoft.Azure.Management.Purview
-
-**Azure SDK for Java**
-* [Docs](/java/api/com.azure.analytics.purview.account?view=azure-java-preview&preserve-view=true) | [Maven](https://search.maven.org/artifact/com.azure/azure-analytics-purview-account/1.0.0-beta.1/jar) com.azure.analytics.purview.account
-* Docs | [Maven](https://search.maven.org/artifact/com.azure/azure-analytics-purview-administration/1.0.0-beta.1/jar) com.azure.analytics.purview.administration
-* [Docs](/java/api/com.azure.analytics.purview.catalog?view=azure-java-preview&preserve-view=true) | [Maven](https://search.maven.org/artifact/com.azure/azure-analytics-purview-catalog/1.0.0-beta.2/jar) com.azure.analytics.purview.catalog
-* [Docs](/java/api/com.azure.analytics.purview.scanning?view=azure-java-preview&preserve-view=true) | [Maven](https://search.maven.org/artifact/com.azure/azure-analytics-purview-scanning/1.0.0-beta.2/jar) com.azure.analytics.purview.scanning
-* [Docs](/java/api/com.azure.resourcemanager.purview?view=azure-java-preview&preserve-view=true) | [Maven](https://search.maven.org/artifact/com.azure.resourcemanager/azure-resourcemanager-purview/1.0.0-beta.1/jar) com.azure.resourcemanager.purview
-
-**Azure SDK for JavaScript**
-* [Docs](/javascript/api/overview/azure/purview-account-rest-readme?view=azure-node-preview&preserve-view=true) | [npm](https://www.npmjs.com/package/@azure-rest/purview-account) @azure-rest/purview-account
-* [Docs](/javascript/api/overview/azure/purview-administration-rest-readme?view=azure-node-preview&preserve-view=true) | [npm](https://www.npmjs.com/package/@azure-rest/purview-administration) @azure-rest/purview-administration
-* [Docs](/javascript/api/overview/azure/purview-catalog-rest-readme?view=azure-node-preview&preserve-view=true) | [npm](https://www.npmjs.com/package/@azure-rest/purview-catalog) @azure-rest/purview-catalog
-* [Docs](/javascript/api/overview/azure/purview-scanning-rest-readme?view=azure-node-preview&preserve-view=true) | [npm](https://www.npmjs.com/package/@azure-rest/purview-scanning) @azure-rest/purview-scanning
-* [Docs](/javascript/api/@azure/arm-purview/?view=azure-node-preview&preserve-view=true) | [npm](https://www.npmjs.com/package/@azure/arm-purview) @azure/arm-purview
-
-**Azure SDK for Python**
-* [Docs](/python/api/azure-purview-account/?view=azure-python-preview&preserve-view=true) | [PyPi](https://pypi.org/project/azure-purview-account/) azure-purview-account
-* [Docs](/python/api/azure-purview-administration/?view=azure-python-preview&preserve-view=true) | [PyPi](https://pypi.org/project/azure-purview-administration/) azure-purview-administration
-* [Docs](/python/api/azure-purview-catalog/?view=azure-python-preview&preserve-view=true) | [PyPi](https://pypi.org/project/azure-purview-catalog/) azure-purview-catalog
-* [Docs](/python/api/azure-purview-scanning/?view=azure-python-preview&preserve-view=true) | [PyPi](https://pypi.org/project/azure-purview-scanning/) azure-purview-scanning
-* [Docs](/python/api/azure-mgmt-purview/?view=azure-python&preserve-view=true) | [PyPi](https://pypi.org/project/azure-mgmt-purview/) azure-mgmt-purview
- ## Next steps
-* [Microsoft Purview REST API](/rest/api/purview)
+* [Microsoft Purview REST API](/rest/api/purview)
purview Concept Business Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-business-glossary.md
Title: Understand business glossary features in Microsoft Purview description: This article explains what the business glossary is in Microsoft Purview.
purview Concept Data Lineage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-data-lineage.md
Title: Data lineage in Microsoft Purview description: Describes the concepts for data lineage. Last updated 09/27/2021
purview Concept Elastic Data Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-elastic-data-map.md
Title: Elastic data map description: This article explains the concepts of the Elastic Data Map in Microsoft Purview
purview Concept Resource Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-resource-sets.md
Title: Understanding resource sets description: This article explains what resource sets are and how Microsoft Purview creates them.
purview Create Microsoft Purview Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-portal.md
For more information about the governance capabilities of Microsoft Purview, for
1. You can choose to enable the optional Event Hubs namespace by selecting the toggle. It's disabled by default. Enable this option if you want to programmatically monitor your Microsoft Purview account by using Event Hubs and Atlas Kafka:
   - [Use Event Hubs and .NET to send and receive Atlas Kafka topics messages](manage-kafka-dotnet.md)
- - [Publish and consume events for Microsoft Purview with Atlas Kafka](concept-best-practices-automation.md#streaming-atlas-kafka)
+ - [Publish and consume events for Microsoft Purview with Atlas Kafka](concept-best-practices-automation.md#streaming-apache-atlas)
:::image type="content" source="media/create-catalog-portal/event-hubs-namespace.png" alt-text="Screenshot showing the Event Hubs namespace toggle highlighted under the Managed resources section of the Create Microsoft Purview account page.":::
purview How To Browse Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-browse-catalog.md
Title: 'How to: browse the Data Catalog' description: This article gives an overview of how to browse the Microsoft Purview data catalog by asset type
purview How To Certify Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-certify-assets.md
Title: Asset certification in the Microsoft Purview data catalog description: How to certify assets in the Microsoft Purview data catalog
purview How To Create Import Export Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-import-export-glossary.md
Title: Create, import, export, and delete glossary terms description: Learn how to create, import, export, and manage business glossary terms in Microsoft Purview.
purview How To Lineage Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-azure-synapse-analytics.md
Title: Metadata and lineage from Azure Synapse Analytics description: This article describes how to connect Azure Synapse Analytics and Microsoft Purview to track data lineage.
purview How To Lineage Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-powerbi.md
Title: Metadata and Lineage from Power BI description: This article describes the data lineage extraction from Power BI source.
purview How To Lineage Spark Atlas Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-spark-atlas-connector.md
Title: Metadata and Lineage from Apache Atlas Spark connector description: This article describes the data lineage extraction from Spark using Atlas Spark connector.
purview How To Link Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-link-azure-data-factory.md
Title: Connect to Azure Data Factory description: This article describes how to connect Azure Data Factory and Microsoft Purview to track data lineage.
purview How To Link Azure Data Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-link-azure-data-share.md
Title: Connect to Azure Data Share description: This article describes how to connect an Azure Data Share account with Microsoft Purview to search assets and track data lineage.
purview How To Managed Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-managed-attributes.md
Title: Managed attributes in the Microsoft Purview Data Catalog description: Apply business context to assets using managed attributes
purview How To Resource Set Pattern Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-resource-set-pattern-rules.md
Title: How to create resource set pattern rules description: Learn how to create a resource set pattern rule to overwrite how assets get grouped into resource sets
purview How To Search Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-search-catalog.md
Title: 'How to: search the Data Catalog' description: This article gives an overview of how to search the Microsoft Purview data catalog.
purview Reference Microsoft Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-microsoft-purview-glossary.md
Title: Microsoft Purview governance portal product glossary description: A glossary defining the terminology used throughout the Microsoft Purview governance portal
purview Register Scan Azure Mysql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-mysql-database.md
Title: 'Connect to and manage Azure Database for MySQL' description: This guide describes how to connect to Azure Database for MySQL in Microsoft Purview, and use Microsoft Purview's features to scan and manage your Azure Database for MySQL source.
purview Register Scan Power Bi Tenant Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md
Title: Connect to and manage a Power BI tenant (cross-tenant) description: This guide describes how to connect to a Power BI tenant in a cross-tenant scenario. You use Microsoft Purview to scan and manage your Power BI tenant source.
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Title: Connect to and manage a Power BI tenant (same tenant) description: This guide describes how to connect to a Power BI tenant in the same tenant as Microsoft Purview, and use Microsoft Purview's features to scan and manage your Power BI tenant source.
purview Tutorial Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-register-scan-on-premises-sql-server.md
Previously updated : 09/27/2021 Last updated : 11/03/2022
This tutorial assumes the machine where you'll install your self-hosted integrat
:::image type="content" source="media/tutorial-register-scan-on-premises-sql-server/successfully-registered.png" alt-text="successfully registered.":::
-## Set up SQL authentication
+## Set up authentication
-There is only one way to set up authentication for SQL server on-premises:
+There are two ways to set up authentication for SQL Server on-premises:
- SQL Authentication
+- Windows Authentication
+
+This tutorial includes steps to use SQL authentication. For more information about scanning on-premises SQL Server with Windows authentication, see [Set up SQL Server authentication](register-scan-on-premises-sql-server.md#set-up-sql-server-authentication).
### SQL authentication
If you would like to delete your Microsoft Purview account after completing this
## Next steps > [!div class="nextstepaction"]
-> [Use Microsoft Purview REST APIs](tutorial-using-rest-apis.md)
+> [Use Microsoft Purview REST APIs](tutorial-using-rest-apis.md)
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
Previously updated : 06/08/2022 Last updated : 11/01/2022 # Data encryption models
To obtain a key for use in encrypting or decrypting data at rest the service ide
## Server-side encryption using customer-managed keys in customer-controlled hardware
-Some Azure services enable the Host Your Own Key (HYOK) key management model. This management mode is useful in scenarios where there is a need to encrypt the data at rest and manage the keys in a proprietary repository outside of Microsoft's control. In this model, the service must retrieve the key from an external site. Performance and availability guarantees are impacted, and configuration is more complex. Additionally, since the service does have access to the DEK during the encryption and decryption operations the overall security guarantees of this model are similar to when the keys are customer-managed in Azure Key Vault. As a result, this model is not appropriate for most organizations unless they have specific key management requirements. Due to these limitations, most Azure services do not support server-side encryption using server-managed keys in customer-controlled hardware.
+Some Azure services enable the Host Your Own Key (HYOK) key management model. This management model is useful in scenarios where there is a need to encrypt the data at rest and manage the keys in a proprietary repository outside of Microsoft's control. In this model, the service must use the key from an external site to decrypt the Data Encryption Key (DEK). Performance and availability guarantees are impacted, and configuration is more complex. Additionally, since the service does have access to the DEK during the encryption and decryption operations, the overall security guarantees of this model are similar to when the keys are customer-managed in Azure Key Vault. As a result, this model is not appropriate for most organizations unless they have specific key management requirements. Due to these limitations, most Azure services do not support server-side encryption using customer-managed keys in customer-controlled hardware. One of two keys in [Double Key Encryption](/microsoft-365/compliance/double-key-encryption) follows this model.
### Key Access
-When server-side encryption using service-managed keys in customer-controlled hardware is used, the keys are maintained on a system configured by the customer. Azure services that support this model provide a means of establishing a secure connection to a customer supplied key store.
+When server-side encryption using customer-managed keys in customer-controlled hardware is used, the key encryption keys are maintained on a system configured by the customer. Azure services that support this model provide a means of establishing a secure connection to a customer supplied key store.
**Advantages**
security Trusted Hardware Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/trusted-hardware-identity-management.md
THIM defines the Azure security baseline for Azure Confidential computing (ACC)
## Frequently asked questions
-**The "next update" date of the Azure-internal caching service API, used by Microsoft Azure Attestation, seems to be out of date. Is it still in operation and can it be used?**
+### The "next update" date of the Azure-internal caching service API, used by Microsoft Azure Attestation, seems to be out of date. Is it still in operation and can it be used?
-The "tcbinfo" field contains the TCB information. The THIM service by default provides an older tcbinfo -- updating to the latest tcbinfo from Intel would cause attestation failures for those customers who have not migrated to the latest Intel SDK, and could results in outages.
+The "tcbinfo" field contains the TCB information. The THIM service by default provides an older tcbinfo -- updating to the latest tcbinfo from Intel would cause attestation failures for those customers who haven't migrated to the latest Intel SDK, and could results in outages.
-Open Enclave SDK and Microsoft Azure Attestation do not look at nextUpdate date, however, and will pass attestation.
+However, Open Enclave SDK and Microsoft Azure Attestation don't look at the nextUpdate date and will pass attestation.
### What is the Azure DCAP Library?
Azure Data Center Attestation Primitives (DCAP), a replacement for Intel Quote P
### Why are there different baselines between THIM and Intel?
-THIM and Intel provide different baseline levels of the trusted computing base. While Intel can be viewed as having the latest and greatest, this imposes requirements upon the consumer to ensure that all the requirements are satisfied, thus leading to a potential breakage of customers if they have not updated to the specified requirements. THIM takes a slower approach to updating the TCB baseline to allow customers to make the necessary changes at their own pace. This approach, while does provide an older TCB baseline, ensures that customers will not break if they have not been able to meet the requirements of the new TCB baseline. This reason is why THIM's TCB baseline is of a different version from Intel's. We are customer-focused and want to empower the customer to meet the requirements imposed by the new TCB baseline on their pace, instead of forcing them to update and causing them a disruption that would require reprioritization of their workstreams.
+THIM and Intel provide different baseline levels of the trusted computing base. While Intel can be viewed as having the latest and greatest, this imposes requirements upon the consumer to ensure that all the requirements are satisfied, thus leading to a potential breakage of customers if they haven't updated to the specified requirements. THIM takes a slower approach to updating the TCB baseline to allow customers to make the necessary changes at their own pace. While this approach does provide an older TCB baseline, it ensures that customers won't break if they haven't been able to meet the requirements of the new TCB baseline. This is why THIM's TCB baseline is of a different version from Intel's. We're customer-focused and want to empower customers to meet the requirements imposed by the new TCB baseline at their own pace, instead of forcing them to update and causing a disruption that would require reprioritization of their workstreams.
THIM is also introducing a new feature that will enable customers to select their own custom baseline. This feature will allow customers to decide between using the newest TCB from Intel or an older one, ensuring that the TCB version they enforce is compliant with their specific configuration. This new feature will be reflected in a future iteration of the THIM documentation.
The certificates are fetched and cached in THIM service using platform manifest
To retrieve the certificate, you must install the [Azure DCAP library](#what-is-the-azure-dcap-library), which replaces Intel QPL. This library directs the fetch requests to the THIM service running in Azure cloud. To download the latest DCAP packages, see [Where can I download the latest DCAP packages?](#where-can-i-download-the-latest-dcap-packages)
-### How do I request collateral in a Confidential Virtual Machine (CVM)?**
+### How do I request collateral in a Confidential Virtual Machine (CVM)?
Use the following sample in a CVM guest for requesting AMD collateral that includes the VCEK certificate and certificate chain. For details on this collateral and where it originates from, see [Versioned Chip Endorsement Key (VCEK) Certificate and KDS Interface Specification](https://www.amd.com/system/files/TechDocs/57230.pdf) (from <amd.com>).
Use the following sample in a CVM guest for requesting AMD collateral that inclu
GET "http://169.254.169.254/metadat/certification" ```
-##### Request body
+#### Request body
| Name | Type | Description |
|--|--|--|
| Metadata | Boolean | Setting to True allows for collateral to be returned |
-##### Sample request
+#### Sample request
```bash curl "http://169.254.169.254/metadata/certification" -H "Metadata: true" ```
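For readability, the JSON response can be piped through a tool such as `jq`. This is a hypothetical convenience on top of the documented sample; `jq` availability inside the guest is an assumption.

```bash
# Hypothetical sketch: pretty-print the collateral JSON.
curl -s -H "Metadata: true" "http://169.254.169.254/metadata/certification" | jq '.'
```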
-##### Responses
+#### Responses
| Name | Description |
|--|--|
| 200 OK | Lists available collateral in http body within JSON format. For details on the keys in the JSON, please see Definitions |
| Other Status Codes | Error response describing why the operation failed |
-##### Definitions
+#### Definitions
| Key | Description |
|--|--|
sentinel Anomalies Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/anomalies-reference.md
Microsoft Sentinel uses two different models to create baselines and detect anom
- [UEBA anomalies](#ueba-anomalies) - [Machine learning-based anomalies](#machine-learning-based-anomalies)
-> [!NOTE]
-> Anomalies are in **PREVIEW**.
- ## UEBA anomalies Sentinel UEBA detects anomalies based on dynamic baselines created for each entity across various data inputs. Each entity's baseline behavior is set according to its own historical activities, those of its peers, and those of the organization as a whole. Anomalies can be triggered by the correlation of different attributes such as action type, geo-location, device, resource, ISP, and more.
sentinel Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-built-in.md
Built-in detections include:
| <a name="fusion"></a>**Fusion**<br>(some detections in Preview) | Microsoft Sentinel uses the Fusion correlation engine, with its scalable machine learning algorithms, to detect advanced multistage attacks by correlating many low-fidelity alerts and events across multiple products into high-fidelity and actionable incidents. Fusion is enabled by default. Because the logic is hidden and therefore not customizable, you can only create one rule with this template. <br><br>The Fusion engine can also correlate alerts produced by [scheduled analytics rules](#scheduled) with those from other systems, producing high-fidelity incidents as a result. | | **Machine learning (ML) behavioral analytics** | ML behavioral analytics templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. <br><br>Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type. | | **Threat Intelligence** | Take advantage of threat intelligence produced by Microsoft to generate high fidelity alerts and incidents with the **Microsoft Threat Intelligence Analytics** rule. This unique rule is not customizable, but when enabled, will automatically match Common Event Format (CEF) logs, Syslog data or Windows DNS events with domain, IP and URL threat indicators from Microsoft Threat Intelligence. Certain indicators will contain additional context information through MDTI (**Microsoft Defender Threat Intelligence**).<br><br>For more information on how to enable this rule, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md).<br>For more details on MDTI, see [What is Microsoft Defender Threat Intelligence](/../defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti)
-| <a name="anomaly"></a>**Anomaly**<br>(Preview) | Anomaly rule templates use machine learning to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed. <br><br>While the configurations of out-of-the-box rules can't be changed or fine-tuned, you can duplicate a rule and then change and fine-tune the duplicate. In such cases, run the duplicate in **Flighting** mode and the original concurrently in **Production** mode. Then compare results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. <br><br>For more information, see [Use customizable anomalies to detect threats in Microsoft Sentinel](soc-ml-anomalies.md) and [Work with anomaly detection analytics rules in Microsoft Sentinel](work-with-anomaly-rules.md). |
+| <a name="anomaly"></a>**Anomaly** | Anomaly rule templates use machine learning to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed. <br><br>While the configurations of out-of-the-box rules can't be changed or fine-tuned, you can duplicate a rule and then change and fine-tune the duplicate. In such cases, run the duplicate in **Flighting** mode and the original concurrently in **Production** mode. Then compare results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. <br><br>For more information, see [Use customizable anomalies to detect threats in Microsoft Sentinel](soc-ml-anomalies.md) and [Work with anomaly detection analytics rules in Microsoft Sentinel](work-with-anomaly-rules.md). |
| <a name="scheduled"></a>**Scheduled** | Scheduled analytics rules are based on built-in queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules. <br><br>Several new scheduled analytics rule templates produce alerts that are correlated by the Fusion engine with alerts from other systems to produce high-fidelity incidents. For more information, see [Advanced multistage attack detection](configure-fusion-rules.md#configure-scheduled-analytics-rules-for-fusion-detections).<br><br>**Tip**: Rule scheduling options include configuring the rule to run every specified number of minutes, hours, or days, with the clock starting when you enable the rule. <br><br>We recommend being mindful of when you enable a new or edited analytics rule to ensure that the rules will get the new stack of incidents in time. For example, you might want to run a rule in synch with when your SOC analysts begin their workday, and enable the rules then.| | <a name="nrt"></a>**Near-real-time (NRT)**<br>(Preview) | NRT rules are limited set of scheduled rules, designed to run once every minute, in order to supply you with information as up-to-the-minute as possible. <br><br>They function mostly like scheduled rules and are configured similarly, with some limitations. For more information, see [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md). | > [!IMPORTANT]
-> - The rule templates so indicated above are currently in **PREVIEW**, as are some of the **Fusion** detection templates (see [Advanced multistage attack detection in Microsoft Sentinel](fusion.md) to see which ones). See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> - By creating and enabling any rules based on the **ML behavior analytics** templates, **you give Microsoft permission to copy ingested data outside of your Microsoft Sentinel workspace's geography** as necessary for processing by the machine learning engines and models.
->
+> The rule templates so indicated above are currently in **PREVIEW**, as are some of the **Fusion** detection templates (see [Advanced multistage attack detection in Microsoft Sentinel](fusion.md) to see which ones). See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Use built-in analytics rules
sentinel Soc Ml Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/soc-ml-anomalies.md
Title: Use customizable anomalies to detect threats in Microsoft Sentinel | Micr
description: This article explains how to use the new customizable anomaly detection capabilities in Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 11/02/2022 # Use customizable anomalies to detect threats in Microsoft Sentinel
-> [!IMPORTANT]
->
-> - Customizable anomalies are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## What are customizable anomalies? With attackers and defenders constantly fighting for advantage in the cybersecurity arms race, attackers are always finding ways to evade detection. Inevitably, though, attacks will still result in unusual behavior in the systems being attacked. Microsoft Sentinel's customizable, machine learning-based anomalies can identify this behavior with analytics rule templates that can be put to work right out of the box. While anomalies don't necessarily indicate malicious or even suspicious behavior by themselves, they can be used to improve detections, investigations, and threat hunting:
sentinel Work With Anomaly Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-anomaly-rules.md
Title: Work with anomaly detection analytics rules in Microsoft Sentinel | Micro
description: This article explains how to view, create, manage, assess, and fine-tune anomaly detection analytics rules in Microsoft Sentinel. Previously updated : 01/30/2022 Last updated : 11/02/2022 # Work with anomaly detection analytics rules in Microsoft Sentinel
-> [!IMPORTANT]
->
-> - Anomaly rules are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Microsoft Sentinel's [customizable anomalies feature](soc-ml-anomalies.md) provides [built-in anomaly templates](detect-threats-built-in.md#anomaly) for immediate value out-of-the-box. These anomaly templates were developed to be robust by using thousands of data sources and millions of events, but this feature also enables you to change thresholds and parameters for the anomalies easily within the user interface. Anomaly rules are enabled, or activated, by default, so they will generate anomalies out-of-the-box. You can find and query these anomalies in the **Anomalies** table in the **Logs** section.
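A hedged sketch of querying that table from the Azure CLI; `$WORKSPACE_ID` is the Log Analytics workspace GUID, and the `RuleName` column is an assumption.

```azurecli-interactive
# Hypothetical sketch: count recent anomalies per rule.
az monitor log-analytics query \
  --workspace $WORKSPACE_ID \
  --analytics-query "Anomalies | where TimeGenerated > ago(7d) | summarize count() by RuleName" \
  --output table
```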
## View customizable anomaly rule templates
-Microsoft Sentinel's [customizable anomalies feature](soc-ml-anomalies.md) provides [built-in anomaly templates](detect-threats-built-in.md#anomaly) for immediate value out-of-the-box. These anomaly templates were developed to be robust by using thousands of data sources and millions of events, but this feature also enables you to change thresholds and parameters for the anomalies easily within the user interface. Anomaly rules are enabled, or activated, by default, so they will generate anomalies out-of-the-box. You can find and query these anomalies in the **Anomalies** table in the **Logs** section.
- You can now find anomaly rules displayed in a grid in the **Anomalies** tab in the **Analytics** page. The list can be filtered by the following criteria: - **Status** - whether the rule is enabled or disabled.
site-recovery Asr Arm Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/asr-arm-templates.md
Title: Azure Resource Manager Templates description: Azure Resource Manager templates for using Azure Site Recovery features. Last updated 02/04/2021 # Azure Resource Manager templates for Azure Site Recovery
site-recovery Avs Tutorial Dr Drill Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-dr-drill-azure.md
Title: Run a disaster recovery drill from Azure VMware Solution to Azure with Azure Site Recovery description: Learn how to run a disaster recovery drill from Azure VMware Solution private cloud to Azure, with Azure Site Recovery. Last updated 09/30/2020
site-recovery Avs Tutorial Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-failback.md
Title: Fail back Azure VMware Solution VMs from Azure with Azure Site Recovery description: Learn how to fail back to the Azure VMware Solution private cloud after failover to Azure, during disaster recovery. Last updated 09/30/2020
site-recovery Avs Tutorial Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-failover.md
Title: Fail over Azure VMware Solution VMs to Azure with Site Recovery description: Learn how to fail over Azure VMware Solution VMs to Azure in Azure Site Recovery Last updated 09/30/2020 # Fail over Azure VMware Solution VMs
site-recovery Avs Tutorial Prepare Avs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-prepare-avs.md
Title: Prepare Azure VMware Solution for disaster recovery to Azure Site Recovery description: Learn how to prepare Azure VMware Solution servers for disaster recovery to Azure using the Azure Site Recovery service. Last updated 09/29/2020
site-recovery Avs Tutorial Prepare Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-prepare-azure.md
Title: Prepare Azure Site Recovery resources for disaster recovery of Azure VMware Solution VMs description: Learn how to prepare Azure resources for disaster recovery of Azure VMware Solution machines using Azure Site Recovery. Last updated 09/29/2020
site-recovery Avs Tutorial Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-replication.md
Title: Set up Azure Site Recovery for Azure VMware Solution VMs description: Learn how to set up disaster recovery to Azure for Azure VMware Solution VMs with Azure Site Recovery. Last updated 09/29/2020
site-recovery Avs Tutorial Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-reprotect.md
Title: Reprotect Azure VMs to an Azure VMware Solution private cloud with Azure Site Recovery description: Learn how to reprotect Azure VMware Solution VMs after failover to Azure with Azure Site Recovery. Last updated 09/30/2020
site-recovery Azure Stack Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-stack-site-recovery.md
Title: Replicate Azure Stack VMs to Azure using Azure Site Recovery | Microsoft
description: Learn how to set up disaster recovery to Azure for Azure Stack VMs with the Azure Site Recovery service. Last updated 08/05/2019 # Replicate Azure Stack VMs to Azure
site-recovery Azure To Azure About Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-about-networking.md
Title: About networking in Azure VM disaster recovery with Azure Site Recovery description: Provides an overview of networking for replication of Azure VMs using Azure Site Recovery. Last updated 3/13/2020 # About networking in Azure VM disaster recovery
site-recovery Azure To Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-architecture.md
Last updated 4/28/2022
site-recovery Azure To Azure Autoupdate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-autoupdate.md
Title: Automatic update of the Mobility service in Azure Site Recovery description: Overview of automatic update of the Mobility service when replicating Azure VMs by using Azure Site Recovery. Last updated 04/02/2020 # Automatic update of the Mobility service in Azure-to-Azure replication
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
Title: Common questions about Azure VM disaster recovery with Azure Site Recovery description: This article answers common questions about Azure VM disaster recovery when you use Azure Site Recovery. Last updated 04/28/2022
site-recovery Azure To Azure Customize Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-customize-networking.md
Title: Customize networking configurations for a failover VM | Microsoft Docs description: Provides an overview of customizing networking configurations for a failover VM in the replication of Azure VMs using Azure Site Recovery. Last updated 10/01/2021 # Customize networking configurations of the target Azure VM
site-recovery Azure To Azure Enable Global Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-enable-global-disaster-recovery.md
Last updated 08/09/2021 # Enable global disaster recovery using Azure Site Recovery
site-recovery Azure To Azure Enable Replication Added Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-enable-replication-added-disk.md
Title: Enable replication for an added Azure VM disk in Azure Site Recovery description: This article describes how to enable replication for a disk added to an Azure VM that's enabled for disaster recovery with Azure Site Recovery Last updated 04/29/2019
site-recovery Azure To Azure Exclude Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-exclude-disks.md
Title: Exclude Azure VM disks from replication with Azure Site Recovery and Azure PowerShell description: Learn how to exclude disks of Azure virtual machines during Azure Site Recovery by using Azure PowerShell. Last updated 02/18/2019
site-recovery Azure To Azure How To Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-policy.md
Title: Enable Azure Site Recovery for your VMs by using Azure Policy description: Learn how to enable policy support to help protect your VMs by using Azure Site Recovery. Last updated 07/25/2021
site-recovery Azure To Azure How To Enable Replication Ade Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms.md
Title: Enable replication for encrypted Azure VMs in Azure Site Recovery description: This article describes how to configure replication for Azure Disk Encryption-enabled VMs from one Azure region to another by using Site Recovery. Last updated 10/19/2022
Use the following procedure to replicate Azure Disk Encryption-enabled VMs to an
:::image type="Availability option" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/availability-option.png" alt-text="Screenshot of availability option.":::
- 1. **Capacity reservation**: Capacity Reservation lets you purchase capacity in the recovery region, and then failover to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](https://learn.microsoft.com/azure/virtual-machines/capacity-reservation-overview).
+ 1. **Capacity reservation**: Capacity Reservation lets you purchase capacity in the recovery region, and then fail over to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](../virtual-machines/capacity-reservation-overview.md).
Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. On triggering Failover, the new VM will be created in the assigned Capacity Reservation Group. :::image type="Capacity reservation" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/capacity-reservation.png" alt-text="Screenshot of capacity reservation.":::
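A hedged sketch of creating a capacity reservation group to assign in the dialog above; the group name and the `$RECOVERY_REGION` variable are assumptions.

```azurecli-interactive
# Hypothetical sketch: create a capacity reservation group in the recovery region.
az capacity reservation group create \
  --name my-recovery-crg \
  --resource-group $RESOURCEGROUP \
  --location $RECOVERY_REGION
```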
site-recovery Azure To Azure How To Enable Replication Cmk Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks.md
Title: Enable replication of encrypted Azure VMs in Azure Site Recovery description: This article describes how to configure replication for VMs with customer-managed key (CMK) enabled disks from one Azure region to another by using Site Recovery. Last updated 10/19/2022
As an example, the primary Azure region is East Asia, and the secondary region i
:::image type="Availability option" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/availability-option.png" alt-text="Screenshot of availability option.":::
- 1. **Capacity reservation**: Capacity Reservation lets you purchase capacity in the recovery region, and then failover to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](https://learn.microsoft.com/azure/virtual-machines/capacity-reservation-overview).
+ 1. **Capacity reservation**: Capacity Reservation lets you purchase capacity in the recovery region, and then fail over to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](../virtual-machines/capacity-reservation-overview.md).
Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. On triggering Failover, the new VM will be created in the assigned Capacity Reservation Group. :::image type="Capacity reservation" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/capacity-reservation.png" alt-text="Screenshot of capacity reservation.":::
site-recovery Azure To Azure How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-private-endpoints.md
Title: Enable replication for private endpoints in Azure Site Recovery description: This article describes how to configure replication for VMs with private endpoints from one Azure region to another by using Site Recovery.--++ Last updated 07/14/2020
site-recovery Azure To Azure How To Enable Replication S2d Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-s2d-vms.md
Title: Replicate Azure VMs running Storage Spaces Direct with Azure Site Recovery description: Learn how to replicate Azure VMs running Storage Spaces Direct using Azure Site Recovery.-+ Last updated 01/29/2019
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
Title: Configure replication for Azure VMs in Azure Site Recovery description: Learn how to configure replication to another region for Azure VMs, using Site Recovery.-+ Last updated 10/19/2022
Use the following procedure to replicate Azure VMs to another Azure region. As a
:::image type="Availability option" source="./media/azure-to-azure-how-to-enable-replication/availability-option.png" alt-text="Screenshot of availability option.":::
- 1. **Capacity reservation**: Capacity Reservation lets you purchase capacity in the recovery region, and then failover to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](https://learn.microsoft.com/azure/virtual-machines/capacity-reservation-overview).
+ 1. **Capacity reservation**: Capacity Reservation lets you purchase capacity in the recovery region, and then fail over to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](../virtual-machines/capacity-reservation-overview.md).
Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. On triggering Failover, the new VM will be created in the assigned Capacity Reservation Group. :::image type="Capacity reservation" source="./media/azure-to-azure-how-to-enable-replication/capacity-reservation.png" alt-text="Screenshot of capacity reservation.":::
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
Title: Enable Zone to Zone Disaster Recovery for Azure Virtual Machines description: This article describes when and how to use Zone to Zone Disaster Recovery for Azure virtual machines.-+ Last updated 03/23/2022-+
site-recovery Azure To Azure How To Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md
Title: Reprotect Azure VMs to the primary region with Azure Site Recovery | Microsoft Docs description: Describes how to reprotect Azure VMs after failover, from the secondary to the primary region, using Azure Site Recovery. -+ Last updated 11/27/2018-+ # Reprotect failed over Azure VMs to the primary region
site-recovery Azure To Azure Move Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-move-overview.md
Title: Moving Azure VMs to another region with Azure Site Recovery description: Using Azure Site Recovery to move Azure VMs from one Azure region to another.-+ Last updated 01/28/2019-+
site-recovery Azure To Azure Network Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-network-mapping.md
Title: Map virtual networks between two regions in Azure Site Recovery description: Learn about mapping virtual networks between two Azure regions for Azure VM disaster recovery with Azure Site Recovery.-+ Last updated 10/15/2019-+ # Set up network mapping and IP addressing for VNets
site-recovery Azure To Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-powershell.md
Title: Disaster recovery for Azure VMs using Azure PowerShell and Azure Site Recovery description: Learn how to set up disaster recovery for Azure virtual machines with Azure Site Recovery using Azure PowerShell. -+ Last updated 3/29/2019-+
site-recovery Azure To Azure Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-quickstart.md
description: Quickly set up disaster recovery to another Azure region for an Azu
Last updated 05/02/2022 + # Quickstart: Set up disaster recovery to a secondary Azure region for an Azure VM
site-recovery Azure To Azure Replicate After Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-replicate-after-migration.md
Last updated 11/14/2019+ # Set up disaster recovery for Azure VMs after migration to Azure
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery
description: Summarizes support for Azure VM disaster recovery to a secondary region with Azure Site Recovery. Last updated 05/05/2022--++ # Support matrix for Azure VM disaster recovery between Azure regions
site-recovery Azure To Azure Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md
Title: Troubleshoot Azure VM replication in Azure Site Recovery description: Troubleshoot errors when replicating Azure virtual machines for disaster recovery.-+ Last updated 04/29/2022-+ # Troubleshoot Azure-to-Azure VM replication errors
site-recovery Azure To Azure Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-network-connectivity.md
Title: Troubleshoot connectivity for Azure to Azure disaster recovery with Azure Site Recovery description: Troubleshoot connectivity issues in Azure VM disaster recovery-+ Last updated 04/06/2020
site-recovery Azure To Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-replication.md
Title: Troubleshoot replication of Azure VMs with Azure Site Recovery description: Troubleshoot replication in Azure VM disaster recovery with Azure Site Recovery-+ Last updated 04/03/2020
site-recovery Azure To Azure Tutorial Dr Drill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-dr-drill.md
Last updated 11/05/2020 + #Customer intent: As an Azure admin, I want to run a drill to check that VM disaster recovery is working.
site-recovery Azure To Azure Tutorial Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-enable-replication.md
description: In this tutorial, set up disaster recovery for Azure VMs to another
Last updated 10/19/2022 + #Customer intent: As an Azure admin, I want to set up disaster recovery for my Azure VMs, so that they're available in a secondary region if the primary region becomes unavailable. # Tutorial: Set up disaster recovery for Azure VMs
site-recovery Azure To Azure Tutorial Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-migrate.md
Title: Move Azure VMs to a different Azure region with Azure Site Recovery description: Use Azure Site Recovery to move Azure VMs from one Azure region to another. -+ Last updated 01/28/2019-+
site-recovery Azure Vm Disaster Recovery With Accelerated Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking.md
Title: Enable accelerated networking for Azure VM disaster recovery with Azure S
description: Describes how to enable Accelerated Networking with Azure Site Recovery for Azure virtual machine disaster recovery documentationcenter: ''-+ Last updated 04/08/2019-+ # Accelerated Networking with Azure virtual machine disaster recovery
site-recovery Azure Vm Disaster Recovery With Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-expressroute.md
Title: Integrate Azure ExpressRoute with Azure VM disaster recovery using Azure Site Recovery description: Describes how to set up disaster recovery for Azure VMs using Azure Site Recovery and Azure ExpressRoute -+ Last updated 07/25/2021-+ # Integrate ExpressRoute with disaster recovery for Azure VMs
site-recovery Concepts Expressroute With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-expressroute-with-site-recovery.md
Title: About using ExpressRoute with Azure Site Recovery description: Describes how to use Azure ExpressRoute with the Azure Site Recovery service for disaster recovery and migration. -+ Last updated 10/13/2019-+ # Azure ExpressRoute with Azure Site Recovery
site-recovery Concepts Multiple Ip Address Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-multiple-ip-address-failover.md
Title: Configure failover of multiple IP addresses with Azure Site Recovery description: Describes how to configure the failover of secondary IP configs for Azure VMs -+ Last updated 11/01/2021-+ # Configure failover of multiple IP addresses with Azure Site Recovery
site-recovery Concepts Network Security Group With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-network-security-group-with-site-recovery.md
Title: Network Security Groups with Azure Site Recovery | Microsoft Docs description: Describes how to use Network Security Groups with Azure Site Recovery for disaster recovery and migration-+ Last updated 04/08/2019-+ # Network Security Groups with Azure Site Recovery
site-recovery Concepts On Premises To Azure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-on-premises-to-azure-networking.md
Title: Connect to Azure VMs after on-premises failover with Azure Site Recovery description: Describes how to connect to Azure VMs after failover from on-premises to Azure using Azure Site Recovery-+ Last updated 07/26/2022-+ # Connect to Azure VMs after failover from on-premises
site-recovery Concepts Public Ip Address With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-public-ip-address-with-site-recovery.md
Title: Assign public IP addresses after failover with Azure Site Recovery description: Describes how to set up public IP addresses with Azure Site Recovery and Azure Traffic Manager for disaster recovery and migration -+ Last updated 04/08/2019-+ # Set up public IP addresses after failover
site-recovery Concepts Traffic Manager With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-traffic-manager-with-site-recovery.md
Title: Azure Traffic Manager with Azure Site Recovery | Microsoft Docs description: Describes how to use Azure Traffic Manager with Azure Site Recovery for disaster recovery and migration -+ Last updated 04/08/2019-+ # Azure Traffic Manager with Azure Site Recovery
site-recovery Configure Mobility Service Proxy Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/configure-mobility-service-proxy-settings.md
Title: Configure Mobility Service Proxy Settings for Azure to Azure Disaster Recovery | Microsoft Docs description: Provides details on how to configure mobility service when customers use a proxy in their source environment. -+ Last updated 03/18/2020-+ # Configure Mobility Service Proxy Settings for Azure to Azure Disaster Recovery
site-recovery Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/delete-vault.md
Title: Delete an Azure Site Recovery vault description: Learn how to delete a Recovery Services vault configured for Azure Site Recovery-+ Last updated 11/05/2019-+
site-recovery Encryption Feature Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/encryption-feature-deprecation.md
Title: Deprecation of Azure Site Recovery data encryption feature | Microsoft Docs description: Details regarding the Azure Site Recovery data encryption feature -+ Last updated 11/15/2019-+ # Deprecation of Site Recovery data encryption feature
site-recovery File Server Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/file-server-disaster-recovery.md
Title: Protect a file server by using Azure Site Recovery description: This article describes how to protect a file server by using Azure Site Recovery -+ Last updated 07/31/2019-+ # Protect a file server by using Azure Site Recovery
site-recovery How To Enable Replication Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-enable-replication-proximity-placement-groups.md
Title: Replicate Azure VMs running in a proximity placement group description: Learn how to replicate Azure VMs running in proximity placement groups by using Azure Site Recovery.-+ Last updated 02/11/2021
site-recovery How To Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-move-from-classic-to-modernized-vmware-disaster-recovery.md
Title: How to move from classic to modernized VMware disaster recovery? description: This article describes how to move from classic to modernized VMware disaster recovery.-+ Last updated 07/15/2022
site-recovery Hybrid How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hybrid-how-to-enable-replication-private-endpoints.md
Title: Enable replication for on-premises machines with private endpoints description: This article describes how to configure replication for on-premises machines by using private endpoints in Site Recovery. --++ Last updated 09/21/2022
site-recovery Hyper V Azure Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-failback.md
Title: Fail back Hyper-V VMs from Azure with Azure Site Recovery description: How to fail back Hyper-V VMs to an on-premises site from Azure with Azure Site Recovery. -+ Last updated 09/12/2019-+
site-recovery Hyper V Azure Powershell Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-powershell-resource-manager.md
Title: Hyper-V VM disaster recovery using Azure Site Recovery and PowerShell description: Automate disaster recovery of Hyper-V VMs to Azure with the Azure Site Recovery service using PowerShell and Azure Resource Manager.-+ Last updated 01/10/2020-+ ms.tool: azure-powershell
site-recovery Hyper V Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-support-matrix.md
description: Summarizes the supported components and requirements for Hyper-V VM
Last updated 7/14/2020--++
site-recovery Hyper V Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-troubleshoot.md
Title: Troubleshoot Hyper-V disaster recovery with Azure Site Recovery description: Describes how to troubleshoot disaster recovery issues with Hyper-V to Azure replication using Azure Site Recovery -+ Last updated 04/14/2019-+ # Troubleshoot Hyper-V to Azure replication and failover
site-recovery Hyper V Deployment Planner Analyze Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-deployment-planner-analyze-report.md
Title: Analyze the Hyper-V Deployment Planner report in Azure Site Recovery description: This article describes how to analyze a report generated by the Azure Site Recovery Deployment Planner for disaster recovery of Hyper-V VMs to Azure. -+ Last updated 10/21/2019-+ # Analyze the Azure Site Recovery Deployment Planner report
site-recovery Hyper V Deployment Planner Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-deployment-planner-cost-estimation.md
Title: Review the Azure Site Recovery Deployment Planner cost estimation report for disaster recovery of Hyper-V VMs to Azure | Microsoft Docs description: This article describes how to review the cost estimation report generated by the Azure Site Recovery Deployment Planner for Hyper-V disaster recovery to Azure. -+ Last updated 4/9/2019-+ # Cost estimation report by Azure Site Recovery Deployment Planner
site-recovery Hyper V Deployment Planner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-deployment-planner-overview.md
Title: Deployment Planner for Hyper-V disaster recovery with Azure Site Recovery description: Learn about the Azure Site Recovery Deployment Planner for Hyper-V disaster recovery to Azure.-+ Last updated 3/13/2020-+ # About the Azure Site Recovery Deployment Planner for Hyper-V disaster recovery to Azure
site-recovery Hyper V Deployment Planner Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-deployment-planner-run.md
Title: Run the Hyper-V Deployment Planner in Azure Site Recovery description: This article describes how to run the Azure Site Recovery Deployment Planner for Hyper-V disaster recovery to Azure.-+ Last updated 04/09/2019-+
site-recovery Hyper V Exclude Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-exclude-disk.md
Title: Exclude Hyper-V VM disks from disaster recovery to Azure with Azure Site Recovery description: How to exclude Hyper-V VM disks from replication to Azure with Azure Site Recovery.-+ -+ Last updated 11/12/2019
site-recovery Hyper V Vmm Performance Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-performance-results.md
Title: Test Hyper-V VM replication to a secondary site with VMM using Azure Site Recovery description: This article provides information about performance testing for replication of Hyper-V VMs in VMM clouds to a secondary site using Azure Site Recovery.-+ Last updated 12/27/2018-+ # Test results for Hyper-V replication to a secondary site
site-recovery Hyper V Vmm Powershell Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-powershell-resource-manager.md
Title: Set up Hyper-V (with VMM) disaster recovery to a secondary site with Azure Site Recovery/PowerShell description: Describes how to set up disaster recovery of Hyper-V VMs in VMM clouds to a secondary VMM site using Azure Site Recovery and PowerShell. -+ Last updated 1/10/2020-+
site-recovery Hyper V Vmm Recovery Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-recovery-script.md
Title: Add a script to a recovery plan in Azure Site Recovery description: Learn how to add a VMM script to a recovery plan for disaster recovery of Hyper-V VMs in VMM clouds. -+ Last updated 11/27/2018-+ # Add a VMM script to a recovery plan
site-recovery Hyper V Vmm Test Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-test-failover.md
Title: Run a Hyper-V disaster recovery drill to a secondary site with Azure Site Recovery description: Learn how to run a DR drill for Hyper-V VMs in VMM clouds to a secondary on-premises datacenter using Azure Site Recovery.-+ Last updated 11/27/2018-+ # Run a DR drill for Hyper-V VMs to a secondary site
site-recovery Monitoring High Churn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitoring-high-churn.md
Title: Monitoring churn patterns on virtual machines description: Learn how to monitor churn patterns on Virtual Machines protected using Azure Site Recovery-+ Last updated 09/09/2020-+ # Monitoring churn patterns on virtual machines
site-recovery Move Azure Vms Avset Azone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-azure-VMs-AVset-Azone.md
Title: Move VMs to an Azure region with availability zones using Azure Site Recovery description: Learn how to move VMs to an availability zone in a different region with Site Recovery -+ Last updated 01/28/2019-+
site-recovery Move Azure Vms Cross Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-azure-VMs-cross-region.md
Title: Move Azure VMs to another region with Azure Site Recovery description: Use Azure Site Recovery to move Azure IaaS VMs from one Azure region to another. -+ Last updated 01/28/2019-+
site-recovery Move Vaults Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-vaults-across-regions.md
Title: Move an Azure Site Recovery vault to another region description: Describes how to move a Recovery Services vault (Azure Site Recovery) to another Azure region -+ Last updated 07/31/2019-+
site-recovery Physical Azure Set Up Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-azure-set-up-source.md
Title: Set up the configuration server for disaster recovery of physical servers to Azure using Azure Site Recovery | Microsoft Docs description: This article describes how to set up the on-premises configuration server for disaster recovery of on-premises physical servers to Azure. -+ Last updated 07/03/2019-+ # Set up the configuration server for disaster recovery of physical servers to Azure
site-recovery Physical Azure Set Up Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-azure-set-up-target.md
Title: Set up the target environment for physical servers in Azure Site Recovery description: This article describes how to set up the target Azure environment for disaster recovery of physical servers using Azure Site Recovery.-+ Last updated 11/27/2018-+ # Prepare target (VMware to Azure)
site-recovery Physical Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-manage-configuration-server.md
Title: Manage the configuration server for physical servers in Azure Site Recovery description: This article describes how to manage the Azure Site Recovery configuration server for physical server disaster recovery to Azure. -+ Last updated 07/27/2022-+ # Manage the configuration server for physical server disaster recovery
site-recovery Physical Server Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-server-enable-replication.md
Title: Enable replication for a physical server – Modernized description: This article describes how to enable physical server replication for disaster recovery using the Azure Site Recovery service-+ -+ Last updated 10/20/2022
site-recovery Quickstart Create Vault Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/quickstart-create-vault-bicep.md
Title: Quickstart to create an Azure Recovery Services vault using Bicep. description: In this quickstart, you learn how to create an Azure Recovery Services vault using Bicep.--++ Last updated 06/27/2022
site-recovery Region Move Cross Geos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/region-move-cross-geos.md
Title: Move Azure VMs between government and public regions with Azure Site Recovery description: Use Azure Site Recovery to move Azure VMs between Azure government and public regions.-+ Last updated 04/16/2019-+ # Move Azure VMs between Azure Government and Public regions
site-recovery Service Updates How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/service-updates-how-to.md
Title: Updates and component upgrades in Azure Site Recovery description: Provides an overview of Azure Site Recovery service updates, and component upgrades.-+ -+ Last updated 08/11/2021 # Service updates in Site Recovery
site-recovery Site Recovery Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-active-directory.md
Title: Set up Active Directory/DNS disaster recovery with Azure Site Recovery description: This article describes how to implement a disaster recovery solution for Active Directory and DNS with Azure Site Recovery.-+ Last updated 04/01/2020-+ # Set up disaster recovery for Active Directory and DNS
site-recovery Site Recovery Backup Interoperability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-backup-interoperability.md
Title: Support for using Azure Site Recovery with Azure Backup description: Provides an overview of how Azure Site Recovery and Azure Backup can be used together.-+ Last updated 10/15/2019-+ # Support for using Site Recovery with Azure Backup
site-recovery Site Recovery Citrix Xenapp And Xendesktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-citrix-xenapp-and-xendesktop.md
Title: Set up Citrix XenDesktop/XenApp disaster recovery with Azure Site Recovery description: This article describes how to set up disaster recovery for Citrix XenDesktop and XenApp deployments using Azure Site Recovery.-+ Last updated 11/27/2018-+ # End of support for disaster recovery of Citrix workloads
site-recovery Site Recovery Deployment Planner History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner-history.md
Title: Azure Site Recovery Deployment Planner Version History description: Lists the different Site Recovery Deployment Planner versions, their fixes and known limitations, along with their release dates. -+ Last updated 6/4/2020-+ # Azure Site Recovery Deployment Planner Version History
site-recovery Site Recovery Deployment Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner.md
Title: Azure Site Recovery Deployment Planner for VMware disaster recovery description: Learn about the Azure Site Recovery Deployment Planner for disaster recovery of VMware VMs to Azure.-+ -+ Last updated 04/06/2022
site-recovery Site Recovery Dynamicsax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-dynamicsax.md
Title: Disaster recovery of Dynamics AX with Azure Site Recovery description: Learn how to set up disaster recovery for Dynamics AX with Azure Site Recovery-+ Last updated 11/27/2018
site-recovery Site Recovery Extension Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-extension-troubleshoot.md
Title: Troubleshoot the Azure VM extension for disaster recovery with Azure Site Recovery description: Troubleshoot issues with the Azure VM extension for disaster recovery with Azure Site Recovery.-+ Last updated 11/27/2018
site-recovery Site Recovery Failover To Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-failover-to-azure-troubleshoot.md
Title: 'Troubleshoot failover to Azure failures | Microsoft Docs' description: This article describes ways to troubleshoot common errors in failing over to Azure-+ Last updated 01/08/2020-+ # Troubleshoot errors when failing over VMware VM or physical machine to Azure
site-recovery Site Recovery Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-iis.md
Title: Set up disaster recovery for an IIS web app using Azure Site Recovery description: Learn how to replicate IIS web farm virtual machines using Azure Site Recovery.-+ Last updated 11/27/2018-+ # Set up disaster recovery for a multi-tier IIS-based web application
site-recovery Site Recovery Ipconfig Cmdlet Parameter Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-ipconfig-cmdlet-parameter-deprecation.md
Title: Deprecation of IPConfig parameters for the cmdlet New-AzRecoveryServicesAsrVMNicConfig | Microsoft Docs description: Details about deprecation of IPConfig parameters of the cmdlet New-AzRecoveryServicesAsrVMNicConfig and information about the use of new cmdlet New-AzRecoveryServicesAsrVMNicIPConfig -+ Last updated 04/30/2021-+ # Deprecation of IP Config parameters for the cmdlet New-AzRecoveryServicesAsrVMNicConfig
site-recovery Site Recovery Manage Network Interfaces On Premises To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-manage-network-interfaces-on-premises-to-azure.md
Title: Manage network adapters for on-premises disaster recovery with Azure Site Recovery description: Describes how to manage network interfaces for on-premises disaster recovery to Azure with Azure Site Recovery-+ Last updated 4/9/2019-+ # Manage VM network interfaces for on-premises disaster recovery to Azure
site-recovery Site Recovery Manage Registration And Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-manage-registration-and-protection.md
Title: Remove servers and disable protection | Microsoft Docs description: This article describes how to unregister servers from a Site Recovery vault, and to disable protection for virtual machines and physical servers.-+ Last updated 06/18/2019-+
site-recovery Site Recovery Plan Capacity Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-plan-capacity-vmware.md
Title: Plan capacity for VMware disaster recovery with Azure Site Recovery description: This article can help you plan capacity and scaling when you set up disaster recovery of VMware VMs to Azure by using Azure Site Recovery.-+ -+ Last updated 08/19/2021
site-recovery Site Recovery Retain Ip Azure Vm Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-retain-ip-azure-vm-failover.md
Title: Keep IP addresses after Azure VM failover with Azure Site Recovery
description: Describes how to retain IP addresses when failing over Azure VMs for disaster recovery to a secondary region with Azure Site Recovery Last updated 07/25/2021-+ -+ # Retain IP addresses during failover
site-recovery Site Recovery Role Based Linked Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-role-based-linked-access-control.md
Title: Manage Azure role-based access control in Azure Site Recovery
description: This article describes how to apply Azure role-based access control (Azure RBAC) to manage Azure Site Recovery access. Last updated 04/08/2019-+ -+ # Manage Site Recovery access with Azure role-based access control (Azure RBAC)
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
Title: Add Azure Automation runbooks to Site Recovery recovery plans description: Learn how to extend recovery plans with Azure Automation for disaster recovery using Azure Site Recovery.-+ -+ Last updated 08/10/2022
site-recovery Site Recovery Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-sap.md
Title: Set up SAP NetWeaver disaster recovery with Azure Site Recovery description: Learn how to set up disaster recovery for SAP NetWeaver with Azure Site Recovery.-+ Last updated 11/27/2018
site-recovery Site Recovery Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-sharepoint.md
Title: Disaster recovery for a multi-tier SharePoint app using Azure Site Recovery description: This article describes how to set up disaster recovery for a multi-tier SharePoint application using Azure Site Recovery capabilities.-+ Last updated 6/27/2019-+ # Set up disaster recovery for a multi-tier SharePoint application using Azure Site Recovery
site-recovery Site Recovery Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-sql.md
Title: Set up disaster recovery for SQL Server with Azure Site Recovery description: This article describes how to set up disaster recovery for SQL Server by using SQL Server and Azure Site Recovery. -+ Last updated 08/02/2019-+ # Set up disaster recovery for SQL Server
site-recovery Site Recovery Vmware Deployment Planner Analyze Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-vmware-deployment-planner-analyze-report.md
Title: Analyze the Deployment Planner report for VMware disaster recovery with Azure Site Recovery description: This article describes how to analyze the report generated by the Recovery Deployment Planner for VMware disaster recovery to Azure, using Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Site Recovery Vmware Deployment Planner Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-vmware-deployment-planner-cost-estimation.md
Title: Review cost estimations in the Azure Site Recovery Deployment Planner description: This article describes how to review the cost estimations in the Azure Site Recovery Deployment Planner for VMware disaster recovery.-+ -+ Last updated 05/27/2021
site-recovery Site Recovery Vmware Deployment Planner Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-vmware-deployment-planner-run.md
Title: Run the Deployment Planner for VMware disaster recovery with Azure Site Recovery description: This article describes how to run Azure Site Recovery Deployment Planner for VMware disaster recovery to Azure.-+ -+ Last updated 05/27/2021
site-recovery Site To Site Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-to-site-deprecation.md
Title: Deprecation of disaster recovery between customer-managed sites (with VMM) using Azure Site Recovery | Microsoft Docs description: Details about the upcoming deprecation of DR between customer-owned sites using Hyper-V and between sites managed by SCVMM to Azure, and alternate options -+ Last updated 02/25/2020-+ # Deprecation of disaster recovery between customer-managed sites (with VMM) using Azure Site Recovery
site-recovery Upgrade 2012R2 To 2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/upgrade-2012R2-to-2016.md
Title: Upgrade Windows Server/System Center VMM 2012 R2 to Windows Server 2016-Azure Site Recovery description: Learn how to upgrade Windows Server 2012 R2 hosts & SCVMM 2012 R2 that are configured with Azure Site Recovery, to Windows Server 2016 & SCVMM 2016. -+ Last updated 12/03/2018-+ # Upgrade Windows Server/System Center 2012 R2 VMM to Windows Server/VMM 2016
site-recovery Vmware Azure Deploy Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-deploy-configuration-server.md
Title: Deploy the configuration server in Azure Site Recovery description: This article describes how to deploy a configuration server for VMware disaster recovery with Azure Site Recovery -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Disaster Recovery Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-disaster-recovery-powershell.md
Title: Set up VMware disaster recovery using PowerShell in Azure Site Recovery description: Learn how to set up replication and failover to Azure for disaster recovery of VMware VMs using PowerShell in Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-enable-replication.md
Title: Enable VMware VMs for disaster recovery using Azure Site Recovery description: This article describes how to enable VMware VM replication for disaster recovery using the Azure Site Recovery service-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Exclude Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-exclude-disk.md
Title: Exclude VMware VM disks from disaster recovery to Azure with Azure Site Recovery description: How to exclude VMware VM disks from replication to Azure with Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-failback.md
Title: Fail back VMware VMs/physical servers from Azure with Azure Site Recovery description: Learn how to fail back to the on-premises site after failover to Azure, during disaster recovery of VMware VMs and physical servers to Azure.-+
site-recovery Vmware Azure Install Linux Master Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-linux-master-target.md
Title: Install a master target server for Linux VM failback with Azure Site Recovery description: Learn how to set up a Linux master target server for failback to an on-premises site during disaster recovery of VMware VMs to Azure using Azure Site Recovery. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Install Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-mobility-service.md
Title: Prepare source machines to install the Mobility Service through push installation for disaster recovery of VMware VMs and physical servers to Azure | Microsoft Docs description: Learn how to prepare your server to install Mobility agent through push installation for disaster recovery of VMware VMs and physical servers to Azure using the Azure Site Recovery service. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-configuration-server.md
Title: Manage the configuration server for disaster recovery with Azure Site Recovery description: Learn about the common tasks to manage an on-premises configuration server for disaster recovery of VMware VMs and physical servers to Azure with Azure Site Recovery.-+ -+ Last updated 08/03/2022
site-recovery Vmware Azure Manage Process Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-process-server.md
Title: Manage a process server for VMware VMs/physical server disaster recovery in Azure Site Recovery description: This article describes how to manage a process server for disaster recovery of VMware VMs/physical servers using Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Manage Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-vcenter.md
Title: Manage VMware vCenter servers in Azure Site Recovery description: This article describes how to add and manage VMware vCenter for disaster recovery of VMware VMs to Azure with Azure Site Recovery. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Mobility Install Configuration Mgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-mobility-install-configuration-mgr.md
Title: Automate Mobility Service installation for disaster recovery in Azure Site Recovery description: How to automatically install the Mobility Service for VMware/physical server disaster recovery with Azure Site Recovery. -+ -+ Last updated 05/02/2022
site-recovery Vmware Azure Multi Tenant Csp Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-multi-tenant-csp-disaster-recovery.md
Title: Set up VMware disaster recovery to Azure in a multi-tenancy environment using Site Recovery and the Cloud Solution Provider (CSP) program | Microsoft Docs description: Describes how to set up VMware disaster recovery in a multi-tenant environment with Azure Site Recovery.-+ Last updated 11/27/2018-+
site-recovery Vmware Azure Multi Tenant Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-multi-tenant-overview.md
Title: VMware VM multi-tenant disaster recovery with Azure Site Recovery description: Provides an overview of Azure Site Recovery support for VMware disaster recovery to Azure in a multi-tenant environment through the CSP program.-+ Last updated 11/27/2018-+ # Overview of multi-tenant support for VMware disaster recovery to Azure with CSP
site-recovery Vmware Azure Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-reprotect.md
Title: Reprotect VMware VMs to an on-premises site with Azure Site Recovery description: Learn how to reprotect VMware VMs after failover to Azure with Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Set Up Process Server Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-process-server-azure.md
Title: Set up a process server for VMware/physical failback in Azure Site Recovery description: This article describes how to set up a process server in Azure, to fail back Azure VMs to VMware. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Set Up Process Server Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-process-server-scale.md
Title: Set up a scale-out process server during disaster recovery of VMware VMs and physical servers with Azure Site Recovery | Microsoft Docs description: This article describes how to set up a scale-out process server during disaster recovery of VMware VMs and physical servers.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Set Up Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-replication.md
Title: Set up replication policies for VMware disaster recovery with Azure Site Recovery | Microsoft Docs description: Describes how to configure replication settings for VMware disaster recovery to Azure with Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Set Up Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-source.md
Title: Set up source settings for VMware disaster recovery to Azure with Azure Site Recovery description: This article describes how to set up your on-premises environment to replicate VMware VMs to Azure with Azure Site Recovery. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Set Up Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-target.md
Title: Prepare the VMware VM replication target in Azure Site Recovery description: This article describes how to prepare your target Azure environment for VMware VM replication to Azure. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Troubleshoot Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-configuration-server.md
Title: Troubleshoot issues with the configuration server during disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery | Microsoft Docs description: This article provides troubleshooting information for deploying the configuration server for disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Troubleshoot Failback Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-failback-reprotect.md
Title: Troubleshoot failback in VMware VM disaster recovery with Azure Site Recovery description: This article describes ways to troubleshoot failback and reprotection issues during VMware VM disaster recovery to Azure with Azure Site Recovery.-+ Last updated 11/27/2018-+
site-recovery Vmware Azure Troubleshoot Push Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-push-install.md
Title: Troubleshoot Mobility Service push installation with Azure Site Recovery description: Troubleshoot Mobility Services installation errors when enabling replication for disaster recovery with Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-replication.md
Title: Troubleshoot replication issues for disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery | Microsoft Docs description: This article provides troubleshooting information for common replication issues during disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery.-+ Last updated 05/02/2022-+ # Troubleshoot replication issues for VMware VMs and physical servers
site-recovery Vmware Azure Troubleshoot Vcenter Discovery Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-vcenter-discovery-failures.md
Title: Troubleshoot VMware vCenter discovery failures in Azure Site Recovery description: This article describes how to troubleshoot VMware vCenter discovery failures in Azure Site Recovery. -+ -+ Last updated 05/27/2021
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
Title: Manage the Mobility agent for VMware/physical servers with Azure Site Recovery description: Manage Mobility Service agent for disaster recovery of VMware VMs and physical servers to Azure using the Azure Site Recovery service.-+ -+ Last updated 05/27/2021
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Title: About the Mobility service for disaster recovery of VMware VMs and physical servers with Azure Site Recovery | Microsoft Docs description: Learn about the Mobility service agent for disaster recovery of VMware VMs and physical servers to Azure using the Azure Site Recovery service.-+ -+ Last updated 09/21/2022
site-recovery Vmware Physical Secondary Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-secondary-architecture.md
description: This article provides an overview of components and architecture us
Last updated 11/12/2019+ # Architecture for VMware/physical server replication to a secondary on-premises site
site-recovery Vmware Physical Secondary Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-secondary-disaster-recovery.md
Last updated 11/05/2019+ # Set up disaster recovery of on-premises VMware virtual machines or physical servers to a secondary site
site-recovery Vmware Physical Secondary Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-secondary-support-matrix.md
Last updated 11/14/2019+ # Support matrix for disaster recovery of VMware VMs and physical servers to a secondary site
spring-apps How To Configure Health Probes Graceful Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-health-probes-graceful-termination.md
Title: How to configure health probes and graceful termination period for apps hosted in Azure Spring Apps
-description: Shows you how to customize apps running in Azure Spring Apps with health probes and graceful termination period.
+description: Learn how to customize apps running in Azure Spring Apps with health probes and graceful termination period.
This article shows you how to customize apps running in Azure Spring Apps with health probes and graceful termination periods.
-A probe is a diagnostic performed periodically by Azure Spring Apps on an app instance. To perform a diagnostic, Azure Spring Apps either executes an arbitrary command of your choice within the app instance, establishes a TCP socket connection, or makes an HTTP request.
+A probe is a diagnostic activity performed periodically by Azure Spring Apps on an app instance. To perform a diagnostic, Azure Spring Apps takes one of the following actions:
-Azure Spring Apps uses liveness probes to determine when to restart an application. For example, liveness probes could catch a deadlock, where an application is running but unable to make progress. Restarting the application in such a state can help to make the application more available despite bugs.
+- Executes an arbitrary command of your choice within the app instance.
+- Establishes a TCP socket connection.
+- Makes an HTTP request.
-Azure Spring Apps uses readiness probes to determine when an app instance is ready to start accepting traffic. One use of this signal is to control which app instances are used as backends for the application. When an app instance isn't ready, it's removed from Kubernetes Service Discovery. For more information, see [Discover and register your Spring Boot applications](how-to-service-registration.md).
+Azure Spring Apps offers default health probe rules for every application. This article shows you how to customize your application with three kinds of health probes:
-Azure Spring Apps uses startup probes to determine when an application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. You can use this behavior to adopt liveness checks on slow starting applications, preventing them from getting killed before they're up and running.
+- *Liveness probes* determine when to restart an application. For example, liveness probes can identify a deadlock, such as when an application is running but unable to make progress. Restarting the application in a deadlock state can make the application available despite errors.
-Azure Spring Apps offers default health probe rules for every application. This article shows you how to customize your application with three kinds of health probes.
+- *Readiness probes* determine when an app instance is ready to start accepting traffic. For example, readiness probes can control which app instances are used as backends for the application. When an app instance isn't ready, it's removed from Kubernetes service discovery. For more information, see [Discover and register your Spring Boot applications](how-to-service-registration.md).
+
+- *Startup probes* determine when an application has started. A startup probe disables liveness and readiness checks until startup succeeds, ensuring that liveness and readiness probes don't interfere with application startup. You can use startup probes to perform liveness checks on slow starting applications, preventing the app from terminating before it's up and running.
## Prerequisites -- The [Azure Spring Apps extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI.
+- [Azure CLI](/cli/azure/install-azure-cli) with the Azure Spring Apps extension. Use the following command to remove previous versions and install the latest extension. If you previously installed the spring-cloud extension, uninstall it to avoid configuration and version mismatches.
+
+ ```azurecli
+ az extension remove --name spring
+ az extension add --name spring
+ az extension remove --name spring-cloud
+ ```
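
As an optional sanity check after running the commands above, you can list installed CLI extensions to confirm that `spring` is present and `spring-cloud` is gone:

```azurecli
# List installed CLI extensions; expect to see "spring" and not "spring-cloud".
az extension list --query "[].name" --output tsv
```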
## Configure health probes and graceful termination for applications
-The following sections describe the properties available for configuration and how to set the properties using the Azure CLI.
+The following sections describe how to configure health probes and graceful termination using the Azure CLI.
### Graceful termination
-The following table describes the property available for configuring graceful termination.
+The following table describes the `terminationGracePeriodSeconds` property, which you can use to configure graceful termination.
-| Property name | Description |
-|-||
-| terminationGracePeriodSeconds | The grace period is the duration in seconds after the processes running in the app instance are sent a termination signal and before the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. The value must be a non-negative integer. The value zero indicates to stop immediately via the kill signal (with no opportunity to shut down). If this value is nil, the default grace period will be used instead. The default value is 90 seconds. |
+| Property name | Description |
+|-||
+| `terminationGracePeriodSeconds` | The duration in seconds after processes running in the app instance are sent a termination signal before they're forcibly halted. Set this value longer than the expected cleanup time for your process. The value must be a non-negative integer. Setting the grace period to *0* stops the app instance immediately via the kill signal, with no opportunity to shut down. If the value is *nil*, Azure Spring Apps uses the default grace period. The default value is *90*. |
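
As a sketch of applying this setting from the CLI: newer versions of the Azure Spring Apps extension expose a termination grace period flag on `az spring app create` and `az spring app update`. The flag name below is an assumption based on the property name; confirm it with `az spring app update --help`.

```azurecli
# Sketch only: give the app 120 seconds to finish cleanup on shutdown.
# NOTE: the flag name is an assumption; verify with `az spring app update --help`.
az spring app update \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --name <application-name> \
    --termination-grace-period-seconds 120
```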
### Health probe properties
-The following table describes the properties available for configuring health probes.
+The following table describes the properties you can use to configure health probes.
-| Property name | Description |
-||-|
-| initialDelaySeconds | The number of seconds after the app instance has started before probes are initiated. The default value is 0 seconds. The minimum value is 0. |
-| periodSeconds | How often (in seconds) to perform the probe. The default value is 10 seconds. The minimum value is 1 second. |
-| timeoutSeconds | The number of seconds after which the probe times out. The default value is 1 second. The minimum value is 1 second. |
-| failureThreshold | The minimum number of consecutive failures for the probe to be considered failed after having succeeded. The default value is 3. The minimum value is 1. |
-| successThreshold | The minimum number of consecutive successes for the probe to be considered successful after having failed. The default value is 1. The value must be 1 for liveness and startup. The minimum value is 1. |
+| Property name | Description |
+||--|
+| `initialDelaySeconds` | The number of seconds after the app instance has started before probes are initiated. The default value is *0*, the minimum value. |
+| `periodSeconds` | The frequency in seconds to perform the probe. The default value is *10*. The minimum value is *1*. |
+| `timeoutSeconds` | The number of seconds until the probe times out. The default value is *1*, the minimum value. |
+| `failureThreshold` | The minimum number of consecutive failures for the probe to be considered failed after having succeeded. The default value is *3*. The minimum value is *1*. |
+| `successThreshold` | The minimum number of consecutive successes for the probe to be considered successful after having failed. The default value is *1*. The value must be *1* for liveness and startup. The minimum value is *1*. |
### Probe action properties
-There are three different ways to check an app instance using a probe. Each probe must define exactly one of these three probe actions:
+There are three ways you can check an app instance using a probe. Each probe must define one of the following probe actions:
- `HTTPGetAction` Performs an HTTP GET request against the app instance on a specified path. The diagnostic is considered successful if the response has a status code greater than or equal to 200 and less than 400. | Property name | Description |
- ||--|
- | scheme | The scheme to use for connecting to the host. Defaults to HTTP. |
- | path | The path to access on the HTTP server of the app instance, such as `/healthz`. |
+ |-||
+ | `scheme` | The scheme to use for connecting to the host. The default is *HTTP*. |
+ | `path` | The path to access on the HTTP server of the app instance, such as */healthz*. |
- `ExecAction` Executes a specified command inside the app instance. The diagnostic is considered successful if the command exits with a status code of 0.

  | Property name | Description |
- ||-|
- | command | The command line to execute inside the app instance. The working directory for the command is root ('/') in the app instance's filesystem. The command is run using `exec`, not inside a shell, so traditional shell instructions won't work. To use a shell, you need to explicitly call out to that shell. An exit status of 0 is treated as live/healthy and non-zero is unhealthy. |
+ |-|--|
+ | `command` | The command to execute inside the app instance. The working directory for the command is the root directory (*/*) in the app instance's file system. Because the command is run using `exec` rather than inside a shell, shell instructions won't work. To use a shell, explicitly call out to the shell. An exit status of *0* is treated as live/healthy, and non-zero is unhealthy. |
- `TCPSocketAction` Performs a TCP check against the app instance.
- There are no available properties to be customized for now.
+ There are no available properties for the `TCPSocketAction` action.
+
+### Customize your application
+
+#### [Azure portal](#tab/azure-portal)
+
+Use the following steps to customize your application using the Azure portal.
+
+1. Under **Settings**, select **Apps**, and then select the application from the list.
+
+ :::image type="content" source="media/how-to-configure-health-probes-graceful-termination/select-app.jpg" lightbox="media/how-to-configure-health-probes-graceful-termination/select-app.jpg" alt-text="Screenshot of Azure portal showing the Apps page.":::
+
+1. Select **Configuration** in the left navigation pane, select **Health probes**, and then specify the health probe properties.
+
+ :::image type="content" source="media/how-to-configure-health-probes-graceful-termination/probe-config.jpg" lightbox="media/how-to-configure-health-probes-graceful-termination/probe-config.jpg" alt-text="Screenshot of the Azure portal Configuration page showing the Health probes tab.":::
+
+1. To set the termination grace period, select **General settings**, and specify a value in the **Termination grace period** box.
-### Customize your application by using the Azure CLI
+ :::image type="content" source="media/how-to-configure-health-probes-graceful-termination/termination-grace-period-config.jpg" lightbox="media/how-to-configure-health-probes-graceful-termination/termination-grace-period-config.jpg" alt-text="Screenshot of the Azure portal Configuration page showing the General settings tab.":::
-The following steps show you how to customize your application.
+#### [Azure CLI](#tab/azure-cli)
-1. Use the following command to create an application with liveness probe and readiness probe:
+Use the following steps to customize your application using the Azure CLI.
+
+1. Use the following command to create an application with a liveness probe and readiness probe:
    ```azurecli
    az spring app create \
        --resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <service-instance-name> \
        --name <application-name> \
        --enable-liveness-probe true \
        --liveness-probe-config <path-to-liveness-probe-json-file> \
The following steps show you how to customize your application.
} ```
- > [!NOTE]
- > Azure Spring Apps also support two more kinds of probe actions, as shown in the following JSON file examples:
- >
- > ```json
- > "probeAction": {
- > "type": "HTTPGetAction",
- > "scheme": "HTTP",
- > "path": "/anyPath"
- > }
- > ```
- >
- > and
- >
- > ```json
- > "probeAction": {
- > "type": "ExecAction",
- > "command": ["cat", "/tmp/healthy"]
- > }
- > ```
-
-1. Optionally, protect slow starting containers with a startup probe by using the following command:
+ The following example shows an `HTTPGetAction` action:
+
+ ```json
+ "probeAction": {
+ "type": "HTTPGetAction",
+ "scheme": "HTTP",
+ "path": "/anyPath"
+ }
+ ```
+
+ The following example shows an `ExecAction` action:
+
+ ```json
+ "probeAction": {
+ "type": "ExecAction",
+ "command": ["cat", "/tmp/healthy"]
+ }
+ ```
+
+1. Optionally, use the following command to protect slow starting containers with a startup probe:
    ```azurecli
    az spring app update \
        --resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <service-instance-name> \
        --name <application-name> \
        --enable-startup-probe true \
        --startup-probe-config <path-to-startup-probe-json-file>
    ```
-1. Optionally, disable any specific health probe using the following command:
+1. Optionally, use the following command to disable a health probe:
    ```azurecli
    az spring app update \
        --resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <service-instance-name> \
        --name <application-name> \
        --enable-liveness-probe false
    ```
-1. Optionally, set the termination grace period seconds using the following command:
+1. Optionally, use the following command to set the termination grace period:
    ```azurecli
    az spring app update \
        --resource-group <resource-group-name> \
- --service <Azure-Spring-Cloud-instance-name> \
+ --service <service-instance-name> \
        --name <application-name> \
        --grace-period <termination-grace-period-seconds>
    ```
-## Use best practices
+
-Use the following best practices when adding your own persistent storage to Azure Spring Apps.
+## Best practices
-- Use liveness and readiness probe together. The reason for this recommendation is that Azure Spring Apps provides two approaches for service discovery at the same time. When the readiness probe fails, the app instance will be removed only from Kubernetes Service Discovery. A properly configured liveness probe can remove the issued app instance from Eureka Service Discovery to avoid unexpected cases. For more information about Service Discovery, see [Discover and register your Spring Boot applications](how-to-service-registration.md).-- When an app instance starts, the first check is done after the delay specified by `initialDelaySeconds`, and subsequent checks happen periodically, with the period length specified by `periodSeconds`. If the app has failed to respond to the requests for a number of times defined by `failureThreshold`, the app instance will be restarted. Be sure your application can start fast enough, or update these parameters, so the total timeout `initialDelaySeconds + periodSeconds * failureThreshold` is longer than the start time of your application.-- For Spring Boot applications, Spring Boot shipped with the [Health Groups](https://docs.spring.io/spring-boot/docs/2.2.x/reference/html/production-ready-features.html#health-groups) support, allowing developers to select a subset of health indicators and group them under a single, correlated, health status. For more information, see [Liveness and Readiness Probes with Spring Boot](https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot) on the Spring Blog.
+Use the following best practices when configuring health probes and graceful termination in Azure Spring Apps:
- The following examples show Liveness and Readiness probes with Spring Boot:
+- Use liveness and readiness probes together. Azure Spring Apps provides two approaches to service discovery at the same time. When the readiness probe fails, the app instance is removed only from Kubernetes service discovery. A properly configured liveness probe can also remove the failing app instance from Eureka service discovery, avoiding unexpected behavior. For more information about service discovery, see [Discover and register your Spring Boot applications](how-to-service-registration.md).
- - Liveness probe:
+- When an app instance starts, the first check occurs after the delay specified by `initialDelaySeconds`. Subsequent checks occur periodically, according to the period length specified by `periodSeconds`. If the app fails to respond to the number of consecutive requests specified by `failureThreshold`, the app instance is restarted. Make sure your application can start fast enough, or update these parameters, so that the total timeout `initialDelaySeconds + periodSeconds * failureThreshold` is longer than the start time of your application. For example, with the liveness probe settings shown later in this article (`initialDelaySeconds` of *30*, `periodSeconds` of *10*, and `failureThreshold` of *30*), the application must start within 30 + 10 * 30 = 330 seconds.
- ```json
- "probe": {
- "initialDelaySeconds": 30,
- "periodSeconds": 10,
- "timeoutSeconds": 1,
- "failureThreshold": 30,
- "successThreshold": 1,
- "probeAction": {
- "type": "HTTPGetAction",
- "scheme": "HTTP",
- "path": "/actuator/health/liveness"
- }
- }
- ```
+- Spring Boot ships with [Health Groups](https://docs.spring.io/spring-boot/docs/2.2.x/reference/html/production-ready-features.html#health-groups) support, allowing developers to select a subset of health indicators and group them under a single, correlated health status. For more information, see [Liveness and Readiness Probes with Spring Boot](https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot) on the Spring Blog.
- - Readiness probe:
+ The following example shows a liveness probe with Spring Boot:
- ```json
- "probe": {
- "initialDelaySeconds": 0,
- "periodSeconds": 10,
- "timeoutSeconds": 1,
- "failureThreshold": 3,
- "successThreshold": 1,
- "probeAction": {
- "type": "HTTPGetAction",
- "scheme": "HTTP",
- "path": "/actuator/health/readiness"
- }
- }
- ```
+ ```json
+ "probe": {
+ "initialDelaySeconds": 30,
+ "periodSeconds": 10,
+ "timeoutSeconds": 1,
+ "failureThreshold": 30,
+ "successThreshold": 1,
+ "probeAction": {
+ "type": "HTTPGetAction",
+ "scheme": "HTTP",
+ "path": "/actuator/health/liveness"
+ }
+ }
+ ```
+
+ The following example shows a readiness probe with Spring Boot:
+
+ ```json
+ "probe": {
+ "initialDelaySeconds": 0,
+ "periodSeconds": 10,
+ "timeoutSeconds": 1,
+ "failureThreshold": 3,
+ "successThreshold": 1,
+ "probeAction": {
+ "type": "HTTPGetAction",
+ "scheme": "HTTP",
+ "path": "/actuator/health/readiness"
+ }
+ }
+ ```
-## FAQs
+## Frequently asked questions
-The following list shows frequently asked questions (FAQ) about using health probes with Azure Spring Apps.
+This section provides answers to frequently asked questions about using health probes with Azure Spring Apps.
-- I received 400 response when I created applications with customized health probes. What does this mean?
+- I received a 400 response when I created applications with customized health probes. What does this mean?
- *The error message will point out which probe is responsible for the provision failure. Be sure the health probe rules are correct and the timeout is long enough for the application to be in the running state.*
+ *The error message points out which probe is responsible for the provision failure. Make sure that the health probe rules are correct and that the timeout is long enough for the application to be in the running state.*
-- What's the default probe settings for existing application?
+- What are the default probe settings for an existing application?
*The following example shows the default settings:*
storage Archive Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-cost-estimation.md
+
+ Title: Estimate the cost of archiving data (Azure Blob Storage)
+
+description: Learn how to calculate the cost of storing and maintaining data in the archive storage tier.
+Last updated : 11/02/2022
+# Estimate the cost of archiving data
+
+The archive tier is an offline tier for storing data that is rarely accessed. The archive access tier has the lowest storage cost. However, it has higher data retrieval costs and higher latency than the hot and cool tiers.
+
+This article explains how to calculate the cost of using archive storage and then presents a few example scenarios.
+
+## Calculate costs
+
+The cost to archive data is derived from these three components:
+
+- Cost to write data to the archive tier
+- Cost to store data in the archive tier
+- Cost to rehydrate data from the archive tier
+
+The following sections show you how to calculate each component.
+
+This article uses fictitious prices in all calculations. You can find these sample prices in the [Sample prices](#sample-prices) section at the end of this article. These prices are meant only as examples, and shouldn't be used to calculate your costs.
+
+For official prices, see [Azure Blob Storage pricing](/pricing/details/storage/blobs/) or [Azure Data Lake Storage pricing](/pricing/details/storage/data-lake/). For more information about how to choose the correct pricing page, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md).
+
+#### The cost to write
+
+You can calculate the cost of writing to the archive tier by multiplying the <u>number of write operations</u> by the <u>price of each operation</u>. The price of an operation depends on which operations you use to write data to the archive tier.
+
+###### Put Blob
+
+If you use the [Put Blob](/rest/api/storageservices/put-blob) operation, then the number of operations is the same as the number of blobs. For example, if you plan to write 30,000 blobs to the archive tier, then that will require 30,000 operations. Each operation is charged the price of an **archive** write operation.
+
+> [!TIP]
+> Operations are billed per 10,000. Therefore, if the price per 10,000 operations is $0.10, then the price of a single operation is $0.10 / 10,000 = $0.00001.
+
+###### Put Block and Put Block List
+
+If you upload a blob by using the [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list) operations, then an upload requires multiple operations, and each of those operations is charged separately. Each [Put Block](/rest/api/storageservices/put-block) operation is charged at the price of a **hot** write operation. The number of [Put Block](/rest/api/storageservices/put-block) operations that you need depends on the block size that you specify when you upload the data. For example, if the blob size is 100 MiB and you choose a block size of 10 MiB when you upload that blob, you would use 10 [Put Block](/rest/api/storageservices/put-block) operations. Blocks are written (committed) to the archive tier by using the [Put Block List](/rest/api/storageservices/put-block-list) operation. That operation is charged the price of an **archive** write operation. Therefore, to upload a single blob, your cost is (<u>number of blocks</u> * <u>price of a hot write operation</u>) + <u>price of an archive write operation</u>.
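
As a rough sketch, the following Python snippet computes the cost of a block-based upload. It uses the fictitious [Sample prices](#sample-prices) from this article, and it assumes a hot write operation is priced at $0.10 per 10,000 operations (hot write prices aren't listed in the sample prices table, so treat that value as a placeholder):

```python
import math

# Fictitious sample prices from this article; don't use them to calculate your costs.
HOT_WRITE_PRICE = 0.10 / 10_000      # assumed price per Put Block (hot write operation)
ARCHIVE_WRITE_PRICE = 0.10 / 10_000  # sample price per Put Block List (archive write operation)

def block_upload_cost(blob_size_mib: float, block_size_mib: float) -> float:
    """Cost to upload one blob to the archive tier via Put Block + Put Block List."""
    num_blocks = math.ceil(blob_size_mib / block_size_mib)
    return num_blocks * HOT_WRITE_PRICE + ARCHIVE_WRITE_PRICE

# A 100 MiB blob uploaded in 10 MiB blocks: 10 Put Block calls + 1 Put Block List call.
print(f"${block_upload_cost(100, 10):.5f}")  # $0.00011
```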
+
+> [!NOTE]
+> If you're not using an SDK or the REST API directly, you might have to investigate which operations your data transfer tool is using to upload files. You might be able to determine this by reaching out to the tool provider or by using storage logs.
+
+###### Set Blob Tier
+
+If you use the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation to move a blob from the cool or hot tier to the archive tier, you're charged the price of an **archive** write operation.
+
+#### The cost to store
+
+You can calculate the storage costs by multiplying the <u>size of the data</u> in GB by the <u>price of archive storage</u>.
+
+For example (assuming the sample pricing), if you plan to store 10 TB of archived blobs, the capacity cost is $0.00099 * 10 * 1024 = $10.14 per month.
+
+#### The cost to rehydrate
+
+Blobs in the archive tier are offline and can't be read or modified. To read or modify data in an archived blob, you must first rehydrate the blob to an online tier (either the hot or cool tier).
+
+You can calculate the cost to rehydrate data by adding the <u>cost to retrieve data</u> to the <u>cost of reading the data</u>.
+
+Assuming sample pricing, the cost of retrieving 1 GB of data from the archive tier would be 1 * $0.02 = $0.02.
+
+Read operations are billed per 10,000. Therefore, if the cost per 10,000 operations is $5.00, then the cost of a single operation is $5.00 / 10,000 = $0.0005. The cost of reading 1000 blobs at standard priority is 1000 * $0.0005 = $0.50.
+
+In this example, the total cost to rehydrate (retrieving + reading) would be $0.02 + $0.50 = $0.52.
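
The following Python sketch reproduces this rehydration math by using the fictitious [Sample prices](#sample-prices) from this article:

```python
# Fictitious sample prices from this article; don't use them to calculate your costs.
RETRIEVAL_PRICE_PER_GB = 0.02  # archive data retrieval, standard priority
READ_PRICE = 5.00 / 10_000     # archive read operation

def rehydrate_cost(size_gb: float, blob_count: int) -> float:
    """Cost to rehydrate = cost to retrieve the data + cost to read the blobs."""
    return size_gb * RETRIEVAL_PRICE_PER_GB + blob_count * READ_PRICE

print(f"${rehydrate_cost(1, 1000):.2f}")  # $0.52, matching the example above
```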
+
+> [!NOTE]
+> If you set the rehydration priority to high, then the data retrieval and read rates increase.
+
+If you plan to rehydrate data, you should try to avoid an early deletion fee. To review your options, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+
+## Scenario: One-time data backup
+
+This scenario assumes that you plan to remove on-premises tapes or file servers by migrating backup data to cloud storage. If you don't expect users to access that data often, then it might make sense to migrate that data directly to the archive tier. In the first month, you'd assume the cost of writing data to the archive tier. In the remaining months, you'd pay only for the cost to store the data and the cost to rehydrate data as needed for the occasional read operation.
+
+Using the [Sample prices](#sample-prices) that appear in this article, the following table demonstrates three months of spending.
+
+This scenario assumes an initial ingest of 2,000,000 files totaling 102,400 GB in size to archive. It also assumes a one-time read each month of about 1% of archived capacity. The operation used in this scenario is the [Put Blob](/rest/api/storageservices/put-blob) operation.
+
+<br>
+<table>
+ <tr>
+ <th>Cost factor</th>
+ <th>January</th>
+ <th>February</th>
+ <th>March</th>
+ <th>Projected annual</th>
+ </tr>
+ <tr>
+ <td>Write transactions</td>
+ <td>2,000,000</td>
+ <td>0</td>
+ <td>0</td>
+ <td>2,000,000</td>
+ </tr>
+ <tr>
+ <td>Price of a single write operation</td>
+ <td>$0.00001</td>
+ <td>$0.00001</td>
+ <td>$0.00001</td>
+ <td>$0.00001</td>
+ </tr>
+ <tr bgcolor="beige">
+ <td>Cost to write (transactions * price of a write operation)</td>
+ <td>$20.00</td>
+ <td>$0.00</td>
+ <td>$0.00</td>
+ <td>$20.00</td>
+ </tr>
+ <tr>
+ <td>Total file size (GB)</td>
+ <td>102,400</td>
+ <td>102,400</td>
+ <td>102,400</td>
+ <td>1,228,800</td>
+ </tr>
+ <tr>
+ <td>Data prices (pay-as-you-go)</td>
+ <td>$0.00099</td>
+ <td>$0.00099</td>
+ <td>$0.00099</td>
+ <td>$0.00099</td>
+ </tr>
+ <tr bgcolor="beige">
+ <td>Cost to store (file size * data price)</td>
+ <td>$101.38</td>
+ <td>$101.38</td>
+ <td>$101.38</td>
+ <td>$1,216.51</td>
+ </tr>
+ <tr>
+ <td>Data retrieval size</td>
+ <td>1,024</td>
+ <td>1,024</td>
+ <td>1,024</td>
+ <td>12,288</td>
+ </tr>
+ <tr>
+ <td>Price of data retrieval</td>
+ <td>$0.02</td>
+ <td>$0.02</td>
+ <td>$0.02</td>
+ <td>$0.02</td>
+ </tr>
+ <tr>
+ <td>Number of read transactions (File count * 1%)</td>
+ <td>20,000</td>
+ <td>20,000</td>
+ <td>20,000</td>
+ <td>240,000</td>
+ </tr>
+ <tr>
+ <td>Price of a single read operation</td>
+ <td>$0.0005</td>
+ <td>$0.0005</td>
+ <td>$0.0005</td>
+ <td>$0.0005</td>
+ </tr>
+ <tr bgcolor="beige">
+ <td>Cost to rehydrate (cost to retrieve + cost to read)</td>
+ <td>$30.48</td>
+ <td>$30.48</td>
+ <td>$30.48</td>
+ <td>$365.76</td>
+ </tr>
+ <tr>
+ <td><strong>Total cost</strong></td>
+ <td><strong>$151.86</strong></td>
+ <td><strong>$131.86</strong></td>
+ <td><strong>$131.86</strong></td>
+ <td><strong>$1,602.27</strong></td>
+ </tr>
+</table>
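
As a quick check, the following Python sketch reproduces the totals in this table from the fictitious [Sample prices](#sample-prices):

```python
# Fictitious sample prices from this article; don't use them to calculate your costs.
WRITE_PRICE = 0.10 / 10_000  # archive write operation
STORAGE_PRICE = 0.00099      # per GB per month
RETRIEVAL_PRICE = 0.02       # per GB, standard priority
READ_PRICE = 5.00 / 10_000   # archive read operation

files, size_gb = 2_000_000, 102_400
write_cost = files * WRITE_PRICE               # $20.00, charged in January only
store_cost = size_gb * STORAGE_PRICE           # ~$101.38 per month
rehydrate = (size_gb * 0.01 * RETRIEVAL_PRICE  # retrieve ~1% of archived capacity
             + files * 0.01 * READ_PRICE)      # read ~1% of archived blobs

print(f"January: ${write_cost + store_cost + rehydrate:,.2f}")        # $151.86
print(f"Annual:  ${write_cost + 12 * (store_cost + rehydrate):,.2f}") # $1,602.27
```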
++
+## Scenario: Continuous tiering
+
+This scenario assumes that you plan to periodically move data to the archive tier. Perhaps you're using [Blob Storage inventory reports](blob-inventory.md) to gauge which blobs are accessed less frequently, and then using [lifecycle management policies](lifecycle-management-overview.md) to automate the archival process.
+
+Each month, you'd assume the cost of writing to the archive tier. The cost to store and then rehydrate data would increase over time as you archive more blobs.
+
+Using the [Sample prices](#sample-prices) that appear in this article, the following table demonstrates three months of spending.
+
+This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in size to archive. It also assumes a one-time read each month of about 1% of archived capacity. The operation used in this scenario is the [Put Blob](/rest/api/storageservices/put-blob) operation.
+<br><br>
+
+<table>
+ <tr>
+ <th>Cost factor</th>
+ <th>January</th>
+ <th>February</th>
+ <th>March</th>
+ <th>Projected annual</th>
+ </tr>
+ <tr>
+ <td>Write transactions</td>
+ <td>200,000</td>
+ <td>200,000</td>
+ <td>200,000</td>
+ <td>2,400,000</td>
+ </tr>
+ <tr>
+ <td>Price of a single write operation</td>
+ <td>$0.00001</td>
+ <td>$0.00001</td>
+ <td>$0.00001</td>
+ <td>$0.00001</td>
+ </tr>
+ <tr bgcolor="beige">
+ <td>Cost to write (transactions * price of a write operation)</td>
+ <td>$2.00</td>
+ <td>$2.00</td>
+ <td>$2.00</td>
+ <td>$24.00</td>
+ </tr>
+ <tr>
+ <td>Total file size (GB)</td>
+ <td>10,240</td>
+ <td>20,480</td>
+ <td>30,720</td>
+ <td>122,880</td>
+ </tr>
+ <tr>
+ <td>Data prices (pay-as-you-go)</td>
+ <td>$0.00099</td>
+ <td>$0.00099</td>
+ <td>$0.00099</td>
+ <td>$0.00099</td>
+ </tr>
+ <tr bgcolor="beige">
+ <td>Cost to store (file size * data price)</td>
+ <td>$10.14</td>
+ <td>$20.28</td>
+ <td>$30.41</td>
+ <td>$790.73</td>
+ </tr>
+ <tr>
+ <td>Price of data retrieval</td>
+ <td>$0.02</td>
+ <td>$0.02</td>
+ <td>$0.02</td>
+ <td>$0.02</td>
+ </tr>
+ <tr>
+ <td>Number of read transactions (File count * 1% storage read)</td>
+ <td>2,000</td>
+ <td>4,000</td>
+ <td>6,000</td>
+ <td>156,000</td>
+ </tr>
+ <tr>
+ <td>Price of a single read operation</td>
+ <td>$0.0005</td>
+ <td>$0.0005</td>
+ <td>$0.0005</td>
+ <td>$0.0005</td>
+ </tr>
+ <tr bgcolor="beige">
+ <td>Cost to rehydrate (cost to retrieve + cost to read)</td>
+ <td>$3.05</td>
+ <td>$6.10</td>
+ <td>$9.14</td>
+ <td>$237.74</td>
+ </tr>
+ <tr>
+ <td><strong>Total cost</strong></td>
+ <td><strong>$15.19</strong></td>
+ <td><strong>$28.37</strong></td>
+ <td><strong>$41.56</strong></td>
+ <td><strong>$1,052.48</strong></td>
+ </tr>
+</table>
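
The following Python sketch reproduces this table's monthly costs and projected annual total from the fictitious [Sample prices](#sample-prices):

```python
# Fictitious sample prices from this article; don't use them to calculate your costs.
WRITE_PRICE = 0.10 / 10_000  # archive write operation
STORAGE_PRICE = 0.00099      # per GB per month
RETRIEVAL_PRICE = 0.02       # per GB, standard priority
READ_PRICE = 5.00 / 10_000   # archive read operation

monthly_files, monthly_gb = 200_000, 10_240
annual_total = 0.0
for month in range(1, 13):
    stored_gb = monthly_gb * month                        # archived capacity grows each month
    cost = (monthly_files * WRITE_PRICE                   # write this month's new blobs
            + stored_gb * STORAGE_PRICE                   # store the cumulative capacity
            + stored_gb * 0.01 * RETRIEVAL_PRICE          # retrieve ~1% of capacity
            + monthly_files * month * 0.01 * READ_PRICE)  # read ~1% of archived blobs
    annual_total += cost
    if month <= 3:
        print(f"Month {month}: ${cost:.2f}")              # $15.19, $28.37, $41.56
print(f"Projected annual: ${annual_total:,.2f}")          # $1,052.48
```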
++
+## Archive versus cool
+
+Archive storage is the lowest-cost tier. However, it can take up to 15 hours to rehydrate 10 GiB files. To learn more, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md). The archive tier might not be the best fit if your workloads must read data quickly. The cool tier offers near real-time read latency at a lower price than the hot tier. Understanding your access requirements will help you choose between the cool and archive tiers.
+
+The following table compares the cost of archive storage with the cost of cool storage by using the [Sample prices](#sample-prices) that appear in this article. This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in size. It also assumes one read each month of about 10% of stored capacity (1,024 GB), and read transactions equal to 10% of total transactions (20,000).
+<br><br>
+
+<table>
+ <tr>
+ <th>Cost factor</th>
+ <th>Archive</th>
+ <th>Cool</th>
+ </tr>
+ <tr>
+ <td>Write transactions</td>
+ <td>200,000</td>
+ <td>200,000</td>
+ </tr>
+ <tr>
+ <td>Price of a single write operation</td>
+ <td>$0.00001</td>
+ <td>$0.00001</td>
+ </tr>
+ <tr bgcolor="beige">
+ <td>Cost to write (transactions * price of a write operation)</td>
+ <td>$2.00</td>
+ <td>$2.00</td>
+ </tr>
+ <tr>
+ <td>Total file size (GB)</td>
+ <td>10,240</td>
+ <td>10,240</td>
+ </tr>
+ <tr>
+ <td>Data prices (pay-as-you-go)</td>
+ <td>$0.00099</td>
+ <td>$0.0152</td>
+ </tr>
+ <tr bgcolor="beige">
+ <td>Cost to store (file size * data price)</td>
+ <td>$10.14</td>
+ <td>$155.65</td>
+ </tr>
+ <tr>
+ <td>Data retrieval size</td>
+ <td>1,024</td>
+ <td>1,024</td>
+ </tr>
+ <tr>
+ <td>Price of data retrieval per GB</td>
+ <td>$0.02</td>
+ <td>$0.01</td>
+ </tr>
+ <tr>
+ <td>Number of read transactions</td>
+ <td>20,000</td>
+ <td>20,000</td>
+ </tr>
+ <tr>
+ <td>Price of a single read operation</td>
+ <td>$0.0005</td>
+ <td>$0.000001</td>
+ </tr>
+ <tr bgcolor="beige">
+ <td>Cost to rehydrate (cost to retrieve + cost to read)</td>
+ <td>$30.48</td>
+ <td>$10.26</td>
+ </tr>
+ <tr>
+ <td><strong>Monthly cost</strong></td>
+ <td><strong>$42.62</strong></td>
+ <td><strong>$167.91</strong></td>
+ </tr>
+</table>
+
+The following chart shows the impact on monthly spending given various read percentages. This chart assumes a monthly ingest of 1,000,000 files totaling 10,240 GB in size.
+
+For example, the second pair of bars assumes that workloads read 100,000 files (**10%** of 1,000,000 files) and 1,024 GB (**10%** of 10,240 GB). Assuming the sample pricing, the estimated monthly cost of cool storage is **$175.99** and the estimated monthly cost of archive storage is **$90.62**.
+
+This chart shows a break-even point at or around the 25% read level. Beyond that level, the cost of archive storage rises above the cost of cool storage.
+
+> [!div class="mx-imgBorder"]
+> ![Cool versus archive monthly spending](./media/archive-cost-estimation/cool-versus-archive-monthly-spending.png)
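
You can approximate the chart with a short Python sketch that applies the fictitious [Sample prices](#sample-prices) at any read percentage. The monthly ingest values (1,000,000 files, 10,240 GB) match the chart's assumptions:

```python
# Fictitious sample prices from this article; don't use them to calculate your costs.
WRITE_PRICE = 0.10 / 10_000  # same write price for both tiers
PRICES = {
    "archive": {"store": 0.00099, "retrieve": 0.02, "read": 5.00 / 10_000},
    "cool":    {"store": 0.0152,  "retrieve": 0.01, "read": 0.01 / 10_000},
}
FILES, SIZE_GB = 1_000_000, 10_240  # monthly ingest assumed by the chart

def monthly_cost(tier: str, read_fraction: float) -> float:
    """Write + store + rehydrate cost for one month at a given read percentage."""
    p = PRICES[tier]
    return (FILES * WRITE_PRICE + SIZE_GB * p["store"]
            + SIZE_GB * read_fraction * p["retrieve"]
            + FILES * read_fraction * p["read"])

print(f"10% reads: archive ${monthly_cost('archive', 0.10):.2f}, "
      f"cool ${monthly_cost('cool', 0.10):.2f}")  # ~$90.62 vs ~$175.99
```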
+
+## Sample prices
+
+This article uses the following fictitious prices.
+
+> [!IMPORTANT]
+> These prices are meant only as examples, and should not be used to calculate your costs.
+
+| Price factor | Archive | Cool |
+|-|-|--|
+| Price of write transactions (per 10,000) | $0.10 | $0.10 |
+| Price of a single write operation (cost / 10,000) | $0.00001 | $0.00001 |
+| Data prices (pay-as-you-go) | $0.00099 | $0.0152 |
+| Price of read transactions (per 10,000) | $5.00 | $0.01 |
+| Price of a single read operation (cost / 10,000) | $0.0005 | $0.000001 |
+| Price of high priority read transactions (per 10,000) | $50.00 | N/A |
+| Price of data retrieval (per GB) | $0.02 | $0.01 |
+| Price of high priority data retrieval (per GB) | $0.10 | N/A |
+
+For official prices, see [Azure Blob Storage pricing](/pricing/details/storage/blobs/) or [Azure Data Lake Storage pricing](/pricing/details/storage/data-lake/).
+
+For more information about how to choose the correct pricing page, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md).
+
+## Next steps
+
+- [Set a blob's access tier](access-tiers-online-manage.md)
+- [Archive a blob](archive-blob.md)
+- [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
storage Data Lake Storage Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-known-issues.md
Previously updated : 10/14/2021 Last updated : 11/03/2022
Use only versions `1.6.0` or higher.
<a id="explorer-in-portal"></a>
-## Storage Explorer in the Azure portal
+## Storage browser in the Azure portal
-ACLs are not yet supported.
+In the storage browser that appears in the Azure portal, you can't access a file or folder by specifying a path. Instead, you must browse through folders to reach a file. Therefore, if an ACL grants a user read access to a file but not read access to all folders leading up to the file, that user won't be able to view the file in the storage browser.
<a id="third-party-apps"></a>
storage Storage Files Active Directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md
Previously updated : 10/04/2022 Last updated : 11/03/2022
To learn how to enable on-premises Active Directory Domain Services authenticati
To learn how to enable Azure AD DS authentication for Azure file shares, see [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md).
-To learn how to enable Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files (preview)](storage-files-identity-auth-azure-active-directory-enable.md).
+To learn how to enable Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
## Applies to

| File share type | SMB | NFS |
It's helpful to understand some key terms relating to identity-based authenticat
- **Server Message Block (SMB) protocol**
- SMB is an industry-standard network file-sharing protocol. SMB is also known as Common Internet File System or CIFS. For more information on SMB, see [Microsoft SMB Protocol and CIFS Protocol Overview](/windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview).
+ SMB is an industry-standard network file-sharing protocol. SMB is also known as Common Internet File System (CIFS). For more information on SMB, see [Microsoft SMB Protocol and CIFS Protocol Overview](/windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview).
- **Azure Active Directory (Azure AD)**
- Azure AD is Microsoft's multi-tenant cloud-based directory and identity management service. Azure AD combines core directory services, application access management, and identity protection into a single solution. Storing FSLogix profiles on Azure file shares for Azure AD-joined VMs is currently in public preview. For more information, see [Create a profile container with Azure Files and Azure Active Directory (preview)](../../virtual-desktop/create-profile-container-azure-ad.md).
+ Azure AD is Microsoft's multi-tenant cloud-based directory and identity management service. Azure AD combines core directory services, application access management, and identity protection into a single solution.
- **Azure Active Directory Domain Services (Azure AD DS)**
Deprecating and replacing scattered on-premises file servers is a common problem
### Lift and shift applications to Azure
-When you lift and shift applications to the cloud, you want to keep the same authentication model for your data. As we extend the identity-based access control experience to Azure file shares, it eliminates the need to change your application to modern auth methods and expedite cloud adoption. Azure file shares provide the option to integrate with either Azure AD DS or on-premises AD DS for authentication. If your plan is to be 100% cloud native and minimize the efforts managing cloud infrastructures, Azure AD DS would be a better fit as a fully managed domain service. If you need full compatibility with AD DS capabilities, you may want to consider extending your AD DS environment to cloud by self-hosting domain controllers on VMs. Either way, we provide the flexibility to choose the domain services that suits your business needs.
+When you lift and shift applications to the cloud, you want to keep the same authentication model for your data. Extending the identity-based access control experience to Azure file shares eliminates the need to change your application to use modern authentication methods and expedites cloud adoption. Azure file shares provide the option to integrate with either Azure AD DS or on-premises AD DS for authentication. If your plan is to be 100% cloud native and minimize the effort of managing cloud infrastructure, Azure AD DS would be a better fit as a fully managed domain service. If you need full compatibility with AD DS capabilities, you may want to consider extending your AD DS environment to the cloud by self-hosting domain controllers on VMs. Either way, we provide the flexibility to choose the domain service that best suits your business needs.
### Backup and disaster recovery (DR)
If you are keeping your primary file storage on-premises, Azure file shares can
## Supported scenarios
-This section summarizes the supported Azure file shares authentication scenarios for Azure AD DS, on-premises AD DS, and Azure AD Kerberos for hybrid identities (preview). We recommend selecting the domain service that you adopted for your client environment for integration with Azure Files. If you have AD DS already setup on-premises or in Azure where your devices are domain-joined to your AD, you should use AD DS for Azure file shares authentication. Similarly, if you've already adopted Azure AD DS, you should use that for authenticating to Azure file shares.
+This section summarizes the supported Azure file shares authentication scenarios for Azure AD DS, on-premises AD DS, and Azure AD Kerberos for hybrid identities. We recommend selecting the domain service that you adopted for your client environment for integration with Azure Files. If you have AD DS already set up on-premises or in Azure where your devices are domain-joined to your AD, you should use AD DS for Azure file shares authentication. Similarly, if you've already adopted Azure AD DS, you should use that for authenticating to Azure file shares.
- **On-premises AD DS authentication:** On-premises AD DS-joined or Azure AD DS-joined Windows machines can access Azure file shares with on-premises Active Directory credentials that are synched to Azure AD over SMB. Your client must have line of sight to your AD DS.
- **Azure AD DS authentication:** Azure AD DS-joined Windows machines can access Azure file shares with Azure AD credentials over SMB.
-- **Azure AD Kerberos for hybrid identities (preview):** Using Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs.
+- **Azure AD Kerberos for hybrid identities:** Using Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. You can also use this feature to store FSLogix profiles on Azure file shares for Azure AD-joined VMs. For more information, see [Create a profile container with Azure Files and Azure Active Directory](../../virtual-desktop/create-profile-container-azure-ad.md).
### Restrictions

-- Azure AD DS and on-premises AD DS authentication don't support authentication against computer accounts (machine accounts). You can consider using a service logon account instead.
-- Neither Azure AD DS authentication nor on-premises AD DS authentication is supported against Azure AD-joined devices or Azure AD-registered devices.
+- On-premises AD DS authentication and Azure AD DS authentication don't support assigning share-level permissions to computer accounts (machine accounts) using Azure RBAC because computer accounts can't be synced to Azure AD. You can either [use a default share-level permission](storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities) to allow computer accounts to access the share, or consider using a service logon account instead.
+- Neither on-premises AD DS authentication nor Azure AD DS authentication is supported against Azure AD-joined devices or Azure AD-registered devices.
- Identity-based authentication isn't supported with Network File System (NFS) shares.

## Advantages of identity-based authentication
The following diagram represents the workflow for Azure AD DS authentication to
:::image type="content" source="media/storage-files-active-directory-overview/Files-Azure-AD-DS-Diagram.png" alt-text="Diagram":::
-### Azure AD Kerberos for hybrid identities (preview)
+### Azure AD Kerberos for hybrid identities
Enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring access control lists (ACLs) and permissions might require line-of-sight to the domain controller.
-For more information on this preview feature, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
+For more information on this feature, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
## Access control

Azure Files enforces authorization on user access to both the share and the directory/file levels. Share-level permission assignment can be performed on Azure AD users or groups managed through Azure RBAC. With Azure RBAC, the credentials you use for file access should be available or synced to Azure AD. You can assign Azure built-in roles like Storage File Data SMB Share Reader to users or groups in Azure AD to grant read access to an Azure file share.
storage Storage Files Identity Ad Ds Assign Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md
Previously updated : 09/19/2022 Last updated : 11/03/2022 ms.devlang: azurecli
ms.devlang: azurecli
# Part two: assign share-level permissions to an identity
-Before you begin this article, make sure you've completed the previous article, [Enable AD DS authentication for your account](storage-files-identity-ad-ds-enable.md).
-
-Once you've enabled Active Directory Domain Services (AD DS) authentication on your storage account, you must configure share-level permissions in order to get access to your file shares. There are two ways you can assign share-level permissions. You can assign them to specific Azure AD users/user groups and you can assign them to all authenticated identities as a default share level permission.
+Once you've enabled an Active Directory (AD) source for your storage account, you must configure share-level permissions in order to get access to your file share. There are two ways you can assign share-level permissions. You can assign them to [specific Azure AD users/groups](#share-level-permissions-for-specific-azure-ad-users-or-groups), and you can assign them to all authenticated identities as a [default share-level permission](#share-level-permissions-for-all-authenticated-identities).
> [!IMPORTANT]
-> Full administrative control of a file share, including the ability to take ownership of a file, requires using the storage account key. Administrative control is not supported with Azure AD credentials.
+> Full administrative control of a file share, including the ability to take ownership of a file, requires using the storage account key. Full administrative control isn't supported with Active Directory Domain Services (AD DS) or Azure AD authentication.
## Applies to

| File share type | SMB | NFS |
Once you've enabled Active Directory Domain Services (AD DS) authentication on y
## Which configuration should you use
-Most users should assign share-level permissions to specific Azure AD users or groups and then use Windows ACLs for granular access control at the directory and file level. This is the most stringent and secure configuration.
+Most users should assign share-level permissions to specific Azure AD users or groups, and then use Windows ACLs for granular access control at the directory and file level. This is the most stringent and secure configuration.
-There are three scenarios where we instead recommend using default share-level permissions assigned to all authenticated identities:
+There are three scenarios where we instead recommend using a [default share-level permission](#share-level-permissions-for-all-authenticated-identities) assigned to all authenticated identities:
-- If you are unable to sync your on-premises AD DS to Azure AD, you can alternatively use a default share-level permission. Assigning a default share-level permission allows you to work around the sync requirement as you don't need to specify the permission to identities in Azure AD. Then you can use Windows ACLs for granular permission enforcement on your files and directories.
- - Identities that are tied to an AD but aren't synching to Azure AD DS can also leverage the default share-level permission. This could include standalone Managed Service Accounts (sMSA), group Managed Service Accounts (gMSA), and computer service accounts.
+- If you are unable to sync your on-premises AD DS to Azure AD, you can use a default share-level permission. Assigning a default share-level permission allows you to work around the sync requirement because you don't need to specify the permission to identities in Azure AD. Then you can use Windows ACLs for granular permission enforcement on your files and directories.
+ - Identities that are tied to an AD but aren't syncing to Azure AD can also leverage the default share-level permission. This could include standalone Managed Service Accounts (sMSA), group Managed Service Accounts (gMSA), and computer accounts.
- The on-premises AD DS you're using is synched to a different Azure AD than the Azure AD the file share is deployed in.
- - This is typical when you are managing multi-tenant environments. Using the default share-level permission allows you to bypass the requirement for a Azure AD hybrid identity. You can still use Windows ACLs on your files and directories for granular permission enforcement.
-- You prefer to enforce authentication only using Windows ACLS at the file and directory level.
+ - This is typical when you're managing multi-tenant environments. Using a default share-level permission allows you to bypass the requirement for an Azure AD hybrid identity. You can still use Windows ACLs on your files and directories for granular permission enforcement.
+- You prefer to enforce authentication only using Windows ACLs at the file and directory level.
+
+> [!NOTE]
+> Because computer accounts don't have an identity in Azure AD, you can't configure Azure role-based access control (RBAC) for them. However, computer accounts can access a file share by using a [default share-level permission](#share-level-permissions-for-all-authenticated-identities).
## Share-level permissions
-The following table lists the share-level permissions and how they align with the built-in Azure role-based access control (RBAC) roles:
+The following table lists the share-level permissions and how they align with the built-in Azure RBAC roles:
|Supported built-in roles |Description |
|||
az role assignment create --role "<role-name>" --assignee <user-principal-name>
You can add a default share-level permission on your storage account, instead of configuring share-level permissions for Azure AD users or groups. A default share-level permission assigned to your storage account applies to all file shares contained in the storage account.
-When you set a default share-level permission, all authenticated users and groups will have the same permission. Authenticated users or groups are identified as the identity can be authenticated against the on-premises AD DS the storage account is associated with. The default share level permission is set to **None** at initialization, implying that no access is allowed to files & directories in Azure file share.
+When you set a default share-level permission, all authenticated users and groups will have the same permission. Authenticated users or groups are identities that can be authenticated against the on-premises AD DS that the storage account is associated with. The default share-level permission is set to **None** at initialization, implying that no access is allowed to files or directories in the Azure file share.
# [Portal](#tab/azure-portal)
-You can't currently assign permissions to the storage account with the Azure portal. Use either the Azure PowerShell module or the Azure CLI, instead.
+To configure default share-level permissions on your storage account using the [Azure portal](https://portal.azure.com), follow these steps.
+
+1. In the Azure portal, go to the storage account that contains your file share(s) and select **Data storage > File shares**.
+1. You must enable an AD source on your storage account before assigning a default share-level permission. If you've already done so, you can skip this step. To enable an AD source, select **Set up** under the desired AD source.
+1. After you've configured an AD source, **Step 2: Set share-level permissions** will be available for configuration. Select **Enable permissions for all authenticated users and groups**.
+1. Select the appropriate role to be enabled as the default [share permission](#share-level-permissions) from the dropdown list. You can also change an existing default permission to a different role.
+1. Select **Save**.
# [Azure PowerShell](#tab/azure-powershell)
-You can use the following script to configure default share-level permissions on your storage account. You can enable default share level permission only on storage accounts associated with a directory service for Files authentication.
+You can use the following script to configure default share-level permissions on your storage account. You can enable default share-level permission only on storage accounts associated with a directory service for Azure Files authentication.
-Before running the following script, make sure your Az.Storage module is version 3.7.0 or newer.
+Before running the following script, make sure your Az.Storage module is version 3.7.0 or newer. We suggest updating to the latest version.
```azurepowershell
$defaultPermission = "None|StorageFileDataSmbShareContributor|StorageFileDataSmbShareReader|StorageFileDataSmbShareElevatedContributor" # Set the default permission of your choice
$account.AzureFilesIdentityBasedAuth
# [Azure CLI](#tab/azure-cli)
-You can use the following script to configure default share-level permissions on your storage account. You can enable default share level permission only on storage accounts associated with a directory service for Files authentication.
+You can use the following script to configure default share-level permissions on your storage account. You can enable default share-level permission only on storage accounts associated with a directory service for Azure Files authentication.
Before running the following script, make sure your Azure CLI is version 2.24.1 or newer.
storageAccountName="YourStorageAccountName"
resourceGroupName="YourResourceGroupName"
defaultPermission="None|StorageFileDataSmbShareContributor|StorageFileDataSmbShareReader|StorageFileDataSmbShareElevatedContributor" # Set the default permission of your choice

az storage account update --name $storageAccountName --resource-group $resourceGroupName --default-share-permission $defaultPermission
```
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
Previously updated : 10/04/2022 Last updated : 11/03/2022
We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the domain service you choose. This article focuses on enabling and configuring on-premises AD DS for authentication with Azure file shares.
-If you're new to Azure Files, we recommend reading our [planning guide](storage-files-planning.md) before reading the following series of articles.
+If you're new to Azure Files, we recommend reading our [planning guide](storage-files-planning.md).
## Applies to

| File share type | SMB | NFS |
If you're new to Azure Files, we recommend reading our [planning guide](storage-
## Supported scenarios and restrictions

-- AD DS identities used for Azure Files on-premises AD DS authentication must be synced to Azure AD or use a default share-level permission. Password hash synchronization is optional.
+- AD DS identities used for Azure Files on-premises AD DS authentication must be synced to Azure AD or [use a default share-level permission](storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities). Password hash synchronization is optional.
- Supports Azure file shares managed by Azure File Sync.
- Supports Kerberos authentication with AD with [AES 256 encryption](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption) (recommended) and RC4-HMAC. AES 128 Kerberos encryption is not yet supported.
- Supports single sign-on experience.
- Only supported on clients running OS versions Windows 8/Windows Server 2012 or newer.
- Only supported against the AD forest that the storage account is registered to. You can only access Azure file shares with the AD DS credentials from a single forest by default. If you need to access your Azure file share from a different forest, make sure that you have the proper forest trust configured. See the [FAQ](storage-files-faq.md#ad-ds--azure-ad-ds-authentication) for details.
-- Doesn't support authentication against computer accounts created in AD DS.
+- Doesn't support assigning share-level permissions to computer accounts (machine accounts) using Azure RBAC. You can either [use a default share-level permission](storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities) to allow computer accounts to access the share, or consider using a service logon account instead.
- Doesn't support authentication against Network File System (NFS) file shares.
- Doesn't support using CNAME to mount file shares.
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
Title: Use Azure Active Directory to authorize access to Azure files over SMB for hybrid identities using Kerberos authentication (preview)
-description: Learn how to enable identity-based Kerberos authentication for hybrid user identities over Server Message Block (SMB) for Azure Files through Azure Active Directory. Your users can then access Azure file shares by using their Azure AD credentials (preview).
+ Title: Use Azure Active Directory to authorize access to Azure files over SMB for hybrid identities using Kerberos authentication
+description: Learn how to enable identity-based Kerberos authentication for hybrid user identities over Server Message Block (SMB) for Azure Files through Azure Active Directory. Your users can then access Azure file shares by using their Azure AD credentials.
This article focuses on enabling and configuring Azure AD for authenticating [hy
> [!IMPORTANT]
> Azure Files authentication with Azure Active Directory Kerberos is currently in public preview.
-> This preview version is provided without a service level agreement, and isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
For more information on supported options and considerations, see [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md). For more information about Azure AD Kerberos, see [Deep dive: How Azure AD Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889).
Azure AD Kerberos authentication only supports using AES-256 encryption.
Azure Files authentication with Azure AD Kerberos is available in Azure public cloud in [all Azure regions](https://azure.microsoft.com/global-infrastructure/locations/) except China and Government clouds.
-## Enable Azure AD Kerberos authentication for hybrid user accounts (preview)
+## Enable Azure AD Kerberos authentication for hybrid user accounts
-To enable Azure AD Kerberos authentication on Azure Files for hybrid user accounts (preview), use the Azure portal.
+To enable Azure AD Kerberos authentication on Azure Files for hybrid user accounts, use the Azure portal.
1. Sign in to the Azure portal and select the storage account you want to enable Azure AD Kerberos authentication for. 1. Under **Data storage**, select **File shares**.
For more information, see these resources:
- [Potential errors when enabling Azure AD Kerberos authentication for hybrid users](storage-troubleshoot-windows-file-connection-problems.md#potential-errors-when-enabling-azure-ad-kerberos-authentication-for-hybrid-users)
- [Overview of Azure Files identity-based authentication support for SMB access](storage-files-active-directory-overview.md)
- [Enable AD DS authentication to Azure file shares](storage-files-identity-ad-ds-enable.md)
-- [Create a profile container with Azure Files and Azure Active Directory (preview)](../../virtual-desktop/create-profile-container-azure-ad.md)
+- [Create a profile container with Azure Files and Azure Active Directory](../../virtual-desktop/create-profile-container-azure-ad.md)
- [FAQ](storage-files-faq.md)
storage Storage Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md
Title: Planning for an Azure Files deployment | Microsoft Docs
+ Title: Planning for an Azure Files deployment
description: Understand planning for an Azure Files deployment. You can either direct mount an Azure file share, or cache Azure file shares on-premises with Azure File Sync.
When deploying Azure file shares into storage accounts, we recommend:
To access an Azure file share, the user of the file share must be authenticated and authorized to access the share. This is done based on the identity of the user accessing the file share. Azure Files integrates with four main identity providers:

- **On-premises Active Directory Domain Services (AD DS, or on-premises AD DS)**: Azure storage accounts can be domain joined to a customer-owned Active Directory Domain Services, just like a Windows Server file server or NAS device. You can deploy a domain controller on-premises, in an Azure VM, or even as a VM in another cloud provider; Azure Files is agnostic to where your domain controller is hosted. Once a storage account is domain-joined, the end user can mount a file share with the user account they signed into their PC with. AD-based authentication uses the Kerberos authentication protocol.
- **Azure Active Directory Domain Services (Azure AD DS)**: Azure AD DS provides a Microsoft-managed domain controller that can be used for Azure resources. Domain joining your storage account to Azure AD DS provides similar benefits to domain joining it to a customer-owned Active Directory. This deployment option is most useful for application lift-and-shift scenarios that require AD-based permissions. Since Azure AD DS provides AD-based authentication, this option also uses the Kerberos authentication protocol.
-- **Azure Active Directory (Azure AD) Kerberos for hybrid identities (preview)**: Azure AD Kerberos allows you to use Azure AD to authenticate [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This configuration uses Azure AD to issue Kerberos tickets to access the file share with the SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs.
+- **Azure Active Directory (Azure AD) Kerberos for hybrid identities**: Azure AD Kerberos allows you to use Azure AD to authenticate [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This configuration uses Azure AD to issue Kerberos tickets to access the file share with the SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs.
- **Azure storage account key**: Azure file shares may also be mounted with an Azure storage account key. To mount a file share this way, the storage account name is used as the username and the storage account key is used as the password. Using the storage account key to mount the Azure file share is effectively an administrator operation, because the mounted file share will have full permissions to all of the files and folders on the share, even if ACLs are set on them. When using the storage account key to mount over SMB, the NTLMv2 authentication protocol is used. For customers migrating from on-premises file servers, or creating new file shares in Azure Files intended to behave like Windows file servers or NAS appliances, domain joining your storage account to **Customer-owned Active Directory** is the recommended option. To learn more about domain joining your storage account to a customer-owned Active Directory, see [Azure Files Active Directory overview](storage-files-active-directory-overview.md).
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
description: Learn how to mount an Azure file share over SMB on Linux and review
Previously updated : 10/21/2022 Last updated : 11/03/2022
The recommended way to mount an Azure file share on Linux is using SMB 3.1.1. By
| Distribution | SMB 3.1.1 | SMB 3.0 |
| --- | --- | --- |
-| Linux kernel version | <ul><li>Basic 3.1.1 support: 4.17</li><li>Default mount: 5.0</li><li>AES-128-GCM encryption: 5.3</li></ul> | <ul><li>Basic 3.0 support: 3.12</li><li>AES-128-CCM encryption: 4.11</li></ul> |
+| Linux kernel version | <ul><li>Basic 3.1.1 support: 4.17</li><li>Default mount: 5.0</li><li>AES-128-GCM encryption: 5.3</li><li>AES-256-GCM encryption: 5.10</li></ul> | <ul><li>Basic 3.0 support: 3.12</li><li>AES-128-CCM encryption: 4.11</li></ul> |
| [Ubuntu](https://wiki.ubuntu.com/Releases) | AES-128-GCM encryption: 18.04.5 LTS+ | AES-128-CCM encryption: 16.04.4 LTS+ |
| [Red Hat Enterprise Linux (RHEL)](https://access.redhat.com/articles/3078) | <ul><li>Basic: 8.0+</li><li>Default mount: 8.2+</li><li>AES-128-GCM encryption: 8.2+</li></ul> | 7.5+ |
| [Debian](https://www.debian.org/releases/) | Basic: 10+ | AES-128-CCM encryption: 10+ |
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
After enabling Azure AD Kerberos authentication, you'll need to explicitly grant
## Potential errors when enabling Azure AD Kerberos authentication for hybrid users
-You might encounter the following errors when trying to enable Azure AD Kerberos authentication for hybrid user accounts, which is currently in public preview.
+You might encounter the following errors when trying to enable Azure AD Kerberos authentication for hybrid user accounts.
### Error - Grant admin consent disabled
When enabling Azure AD Kerberos authentication, you might encounter this error i
- Has no start date, or has a start date before 2019-01-01 - Sets a restriction on service principal passwords, which either disallows custom passwords or sets a maximum password lifetime of less than 365.5 days
-There is currently no workaround for this error during the public preview.
+There is currently no workaround for this error.
#### Cause 2: an application already exists for the storage account
If you don't want to rotate the service principal password every six months, you
1. [Disable Azure AD Kerberos](storage-files-identity-auth-azure-active-directory-enable.md#disable-azure-ad-authentication-on-your-storage-account) 1. [Delete the existing application](#cause-2-an-application-already-exists-for-the-storage-account)
-1. [Reconfigure Azure AD Kerberos via the Azure portal](storage-files-identity-auth-azure-active-directory-enable.md#enable-azure-ad-kerberos-authentication-for-hybrid-user-accounts-preview)
+1. [Reconfigure Azure AD Kerberos via the Azure portal](storage-files-identity-auth-azure-active-directory-enable.md#enable-azure-ad-kerberos-authentication-for-hybrid-user-accounts)
Once you've reconfigured Azure AD Kerberos, the new experience will auto-create and manage the newly created application.
synapse-analytics Get Started Analyze Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-data-explorer.md
In this article, you'll learn the basic steps to load and analyze data with Data
1. Paste in the following command, and select **Run** to ingest data into StormEvents table. ```Kusto
- .ingest into table StormEvents 'https://kustosamplefiles.blob.core.windows.net/samplefiles/StormEvents.csv?sv=2019-12-12&ss=b&srt=o&sp=r&se=2022-09-05T02:23:52Z&st=2020-09-04T18:23:52Z&spr=https&sig=VrOfQMT1gUrHltJ8uhjYcCequEcfhjyyMX%2FSc3xsCy4%3D' with (ignoreFirstRecord=true)
+ .ingest into table StormEvents 'https://kustosamples.blob.core.windows.net/samplefiles/StormEvents.csv' with (ignoreFirstRecord=true)
``` 1. After ingestion completes, paste in the following query, select the query in the window, and select **Run**.
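As an illustration of the kind of query that step runs (a sketch, not necessarily the article's exact query), a simple aggregation over the newly ingested table confirms the load; `EventType` is a column in the standard StormEvents sample:

```Kusto
// Count ingested rows and break them down by event type.
StormEvents
| summarize EventCount = count() by EventType
| sort by EventCount desc
| take 10
```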
synapse-analytics How To Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/how-to-monitor-using-azure-monitor.md
Title: How to monitor Synapse Analytics using Azure Monitor description: Learn how to monitor your Synapse Analytics workspace using Azure Monitor metrics, alerts, and logs Previously updated : 11/30/2020 Last updated : 11/02/2022 # Use Azure Monitor with your Azure Synapse Analytics workspace
To access these metrics, complete the instructions in [Azure Monitor data platfo
Here are some of the metrics emitted by workspaces:
-| **Metric** | **Metric category, display name** | **Unit** | **Aggregation types** | **Description** |
-|--||-|-|--|
-| IntegrationActivityRunsEnded | Integration, Activity runs metric | Count | Sum (default), Count | The total number of activity runs that occurred/ended within a 1-minute window. </br></br> Use the Result dimension of this metric to filter by Succeeded, Failed, or Cancelled final state.|
-| IntegrationPipelineRunsEnded | Integration, Pipeline runs metric | Count | Sum (default), Count | The total number of pipeline runs that occurred/ended within a 1-minute window. </br></br> Use the Result dimension of this metric to filter by Succeeded, Failed, or Cancelled final state. |
-| IntegrationTriggerRunsEnded | Integration, Trigger runs metric | Count | Sum (default), Count | The total number of trigger runs that occurred/ended within a 1-minute window. </br></br> Use the Result dimension of this metric to filter by Succeeded, Failed, or Cancelled final state. |
-| BuiltinSqlPoolDataProcessedBytes | Built-in SQL pool, Data processed (bytes) | Byte | Sum (default) | Amount of data processed by the built-in serverless SQL pool. |
-| BuiltinSqlPoolLoginAttempts | Built-in SQL pool, Login attempts | Count | Sum (default) | Number of login attempts for the built-in serverless SQL pool. |
-| BuiltinSqlPoolDataRequestsEnded | Built-in SQL pool, Requests ended (bytes) | Count | Sum (default) | Number of ended SQL requests for the built-in serverless SQL pool. </br></br> Use the Result dimension of this metric to filter by final state. |
+| **Metric** | **Metric category, display name** | **Unit** | **Aggregation types** | **Description** |
+| --- | --- | --- | --- | --- |
+| IntegrationActivityRunsEnded | Integration, Activity runs metric | Count | Sum (default), Count | The total number of activity runs that occurred/ended within a 1-minute window.<br /><br />Use the Result dimension of this metric to filter by Succeeded, Failed, or Cancelled final state. |
+| IntegrationPipelineRunsEnded | Integration, Pipeline runs metric | Count | Sum (default), Count | The total number of pipeline runs that occurred/ended within a 1-minute window.<br /><br />Use the Result dimension of this metric to filter by Succeeded, Failed, or Cancelled final state. |
+| IntegrationTriggerRunsEnded | Integration, Trigger runs metric | Count | Sum (default), Count | The total number of trigger runs that occurred/ended within a 1-minute window.<br /><br />Use the Result dimension of this metric to filter by Succeeded, Failed, or Cancelled final state. |
+| BuiltinSqlPoolDataProcessedBytes | Built-in SQL pool, Data processed (bytes) | Byte | Sum (default) | Amount of data processed by the built-in serverless SQL pool. |
+| BuiltinSqlPoolLoginAttempts | Built-in SQL pool, Login attempts | Count | Sum (default) | Number of login attempts for the built-in serverless SQL pool. |
+| BuiltinSqlPoolDataRequestsEnded | Built-in SQL pool, Requests ended (bytes) | Count | Sum (default) | Number of ended SQL requests for the built-in serverless SQL pool.<br /><br />Use the Result dimension of this metric to filter by final state. |
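When these platform metrics are routed to a Log Analytics workspace (see the diagnostic settings section later in this article), they land in the standard `AzureMetrics` table. A query over that table might look like the following sketch; the `MICROSOFT.SYNAPSE` resource provider filter is an assumption to verify against your own records:

```Kusto
// Hourly total of ended pipeline runs over the last day.
AzureMetrics
| where TimeGenerated > ago(1d)
| where ResourceProvider == "MICROSOFT.SYNAPSE" // assumption: confirm the provider string in your workspace
| where MetricName == "IntegrationPipelineRunsEnded"
| summarize TotalRuns = sum(Total) by bin(TimeGenerated, 1h)
| render timechart
```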
### Dedicated SQL pool metrics
-Here are some of the metrics emitted by dedicated SQL pools:
-
-| **Metric** | **Display name** | **Unit** | **Aggregation types** | **Description** |
-|--||-|-|--|
-| DWULimit | DWU limit | Count | Max (default), Min, Avg | Configured size of the SQL pool |
-| DWUUsed | DWU used | Count | Max (default), Min, Avg | Represents a high-level representation of usage across the SQL pool. Measured by DWU limit * DWU percentage |
-| DWUUsedPercent | DWU used percentage | Percent | Max (default), Min, Avg | Represents a high-level representation of usage across the SQL pool. Measured by taking the maximum between CPU percentage and Data IO percentage |
-| ConnectionsBlockedByFirewall | Connections blocked by firewall | Count | Sum (default) | Count of connections blocked by firewall rules. Revisit access control policies for your SQL pool and monitor these connections if the count is high |
-| AdaptiveCacheHitPercent | Adaptive cache hit percentage | Percent | Max (default), Min, Avg | Measures how well workloads are utilizing the adaptive cache. Use this metric with the cache hit percentage metric to determine whether to scale for additional capacity or rerun workloads to hydrate the cache |
-| AdaptiveCacheUsedPercent | Adaptive cache used percentage | Percent | Max (default), Min, Avg | Measures how well workloads are utilizing the adaptive cache. Use this metric with the cache used percentage metric to determine whether to scale for additional capacity or rerun workloads to hydrate the cache |
-| LocalTempDBUsedPercent | Local tempdb used percentage | Percent | Max (default), Min, Avg | Local tempdb utilization across all compute nodes - values are emitted every five minute |
-| MemoryUsedPercent | Memory used percentage | Percent | Max (default), Min, Avg | Memory utilization across all nodes in the SQL pool |
-| CPUPercent | CPU used percentage | Percent | Max (default), Min, Avg | CPU utilization across all nodes in the SQL pool |
-| Connections | Connections | Count | Sum (default) | Count of total logins to the SQL pool |
-| ActiveQueries | Active queries | Count | Sum (default) | The active queries. Using this metric unfiltered and unsplit displays all active queries running on the system |
-| QueuedQueries | Queued queries | Count | Sum (default) | Cumulative count of requests queued after the max concurrency limit was reached |
-| WLGActiveQueries | Workload group active queries | Count | Sum (default) | The active queries within the workload group. Using this metric unfiltered and unsplit displays all active queries running on the system |
-| WLGActiveQueriesTimeouts | Workload group query timeouts | Count | Sum (default) | Queries for the workload group that have timed out. Query timeouts reported by this metric are only once the query has started executing (it does not include wait time due to locking or resource waits) |
-| WLGQueuedQueries | Workload group queued queries | Count | Sum (default) | Cumulative count of requests queued after the max concurrency limit was reached |
-| WLGAllocationBySystemPercent | Workload group allocation by system percent | Percent | Max (default), Min, Avg, Sum | The percentage allocation of resources relative to the entire system |
-| WLGAllocationByEffectiveCapResourcePercent | Workload group allocation by max resource percent | Percent | Max (default), Min, Avg | Displays the percentage allocation of resources relative to the effective cap resource percent per workload group. This metric provides the effective utilization of the workload group |
-| WLGEffectiveCapResourcePercent | Effective cap resource percent | Percent | Max (default), Min, Avg | The effective cap resource percent for the workload group. If there are other workload groups with min_percentage_resource > 0, the effective_cap_percentage_resource is lowered proportionally |
-| WLGEffectiveMinResourcePercent | Effective min resource percent | Percent | Max (default), Min, Avg, Sum | The effective min resource percentage setting allowed considering the service level and the workload group settings. The effective min_percentage_resource can be adjusted higher on lower service levels |
+Here are some of the metrics emitted by dedicated SQL pools created in Azure Synapse workspaces. For metrics emitted by dedicated SQL pools (formerly SQL Data Warehouse), see [Monitoring resource utilization and query activity](../sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity.md).
+
+| **Metric** | **Display name** | **Unit** | **Aggregation types** | **Description** |
+| --- | --- | --- | --- | --- |
+| DWULimit | DWU limit | Count | Max (default), Min, Avg | Configured size of the SQL pool |
+| DWUUsed | DWU used | Count | Max (default), Min, Avg | A high-level representation of usage across the SQL pool. Measured by DWU limit * DWU percentage |
+| DWUUsedPercent | DWU used percentage | Percent | Max (default), Min, Avg | A high-level representation of usage across the SQL pool. Measured by taking the maximum of CPU percentage and Data IO percentage |
+| ConnectionsBlockedByFirewall | Connections blocked by firewall | Count | Sum (default) | Count of connections blocked by firewall rules. Revisit access control policies for your SQL pool and monitor these connections if the count is high |
+| AdaptiveCacheHitPercent | Adaptive cache hit percentage | Percent | Max (default), Min, Avg | Measures how well workloads are utilizing the adaptive cache. Use this metric with the cache hit percentage metric to determine whether to scale for additional capacity or rerun workloads to hydrate the cache |
+| AdaptiveCacheUsedPercent | Adaptive cache used percentage | Percent | Max (default), Min, Avg | Measures how well workloads are utilizing the adaptive cache. Use this metric with the cache used percentage metric to determine whether to scale for additional capacity or rerun workloads to hydrate the cache |
+| LocalTempDBUsedPercent | Local `tempdb` used percentage | Percent | Max (default), Min, Avg | Local `tempdb` utilization across all compute nodes - values are emitted every five minutes |
+| MemoryUsedPercent | Memory used percentage | Percent | Max (default), Min, Avg | Memory utilization across all nodes in the SQL pool |
+| CPUPercent | CPU used percentage | Percent | Max (default), Min, Avg | CPU utilization across all nodes in the SQL pool |
+| Connections | Connections | Count | Sum (default) | Count of total logins to the SQL pool |
+| ActiveQueries | Active queries | Count | Sum (default) | The active queries. Using this metric unfiltered and unsplit displays all active queries running on the system |
+| QueuedQueries | Queued queries | Count | Sum (default) | Cumulative count of requests queued after the max concurrency limit was reached |
+| WLGActiveQueries | Workload group active queries | Count | Sum (default) | The active queries within the workload group. Using this metric unfiltered and unsplit displays all active queries running on the system |
+| WLGActiveQueriesTimeouts | Workload group query timeouts | Count | Sum (default) | Queries for the workload group that have timed out. Query timeouts are reported by this metric only once the query has started executing (they do not include wait time due to locking or resource waits) |
+| WLGQueuedQueries | Workload group queued queries | Count | Sum (default) | Cumulative count of requests queued after the max concurrency limit was reached |
+| WLGAllocationBySystemPercent | Workload group allocation by system percent | Percent | Max (default), Min, Avg, Sum | The percentage allocation of resources relative to the entire system |
+| WLGAllocationByEffectiveCapResourcePercent | Workload group allocation by max resource percent | Percent | Max (default), Min, Avg | Displays the percentage allocation of resources relative to the effective cap resource percent per workload group. This metric provides the effective utilization of the workload group |
+| WLGEffectiveCapResourcePercent | Effective cap resource percent | Percent | Max (default), Min, Avg | The effective cap resource percent for the workload group. If there are other workload groups with min_percentage_resource > 0, the effective_cap_percentage_resource is lowered proportionally |
+| WLGEffectiveMinResourcePercent | Effective min resource percent | Percent | Max (default), Min, Avg, Sum | The effective min resource percentage setting allowed considering the service level and the workload group settings. The effective min_percentage_resource can be adjusted higher on lower service levels |
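As one practical use of these metrics, the following sketch flags 15-minute windows where DWU utilization approaches capacity, assuming the metrics are exported to a Log Analytics workspace; the 85 percent threshold is an arbitrary example value:

```Kusto
// Peak DWU utilization per 15-minute window, flagging windows near capacity.
AzureMetrics
| where TimeGenerated > ago(1d)
| where MetricName == "DWUUsedPercent"
| summarize PeakDwuPercent = max(Maximum) by bin(TimeGenerated, 15m), Resource
| where PeakDwuPercent > 85 // example threshold; tune to your workload
```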
### Apache Spark pool metrics Here are some of the metrics emitted by Apache Spark pools:
-| **Metric** | **Metric category, display name** | **Unit** | **Aggregation types** | **Description** |
-|--||-|-|--|
-| BigDataPoolApplicationsEnded | Ended Apache Spark applications | Count | Sum (default) | Number of Apache Spark pool applications ended |
-| BigDataPoolAllocatedCores | Number of vCores allocated to the Apache Spark pool | Count | Max (default), Min, Avg | Allocated vCores for an Apache Spark Pool |
-| BigDataPoolAllocatedMemory | Amount of memory (GB) allocated to the Apache Spark pool | Count | Max (default), Min, Avg | Allocated Memory for Apache Spark Pool (GB) |
+| **Metric** | **Metric category, display name** | **Unit** | **Aggregation types** | **Description** |
+| --- | --- | --- | --- | --- |
+| BigDataPoolApplicationsEnded | Ended Apache Spark applications | Count | Sum (default) | Number of Apache Spark pool applications ended |
+| BigDataPoolAllocatedCores | Number of vCores allocated to the Apache Spark pool | Count | Max (default), Min, Avg | Allocated vCores for an Apache Spark Pool |
+| BigDataPoolAllocatedMemory | Amount of memory (GB) allocated to the Apache Spark pool | Count | Max (default), Min, Avg | Allocated Memory for Apache Spark Pool (GB) |
| BigDataPoolApplicationsActive | Active Apache Spark applications | Count | Max (default), Min, Avg | Number of active Apache Spark pool applications | ## Alerts
Sign in to the Azure portal and select **Monitor** > **Alerts** to create alerts
1. Define the **alert condition** to specify when the alert should fire.
- > [!NOTE]
+ > [!NOTE]
> Make sure to select **All** in the **Filter by resource type** drop-down list. 1. Define the **alert details** to further specify how the alert should be configured.
Sign in to the Azure portal and select **Monitor** > **Alerts** to create alerts
Here are the logs emitted by Azure Synapse Analytics workspaces:
-| Log Analytics table name | Log category name | Description |
-|--|--|-|
-| SynapseGatewayApiRequests | GatewayApiRequests | Azure Synapse gateway API requests. |
-| SynapseRbacOperations | SynapseRbacOperations | Azure Synapse role-based access control (SRBAC) operations. |
-| SynapseBuiltinSqlReqsEnded | BuiltinSqlReqsEnded | Azure Synapse built-in serverless SQL pool ended requests. |
-| SynapseIntegrationPipelineRuns | IntegrationPipelineRuns | Azure Synapse integration pipeline runs. |
-| SynapseIntegrationActivityRuns | IntegrationActivityRuns | Azure Synapse integration activity runs. |
-| SynapseIntegrationTriggerRuns | IntegrationTriggerRuns | Azure Synapse integration trigger runs. |
+| Log Analytics table name | Log category name | Description |
+| --- | --- | --- |
+| SynapseGatewayApiRequests | GatewayApiRequests | Azure Synapse gateway API requests. |
+| SynapseRbacOperations | SynapseRbacOperations | Azure Synapse role-based access control (SRBAC) operations. |
+| SynapseBuiltinSqlReqsEnded | BuiltinSqlReqsEnded | Azure Synapse built-in serverless SQL pool ended requests. |
+| SynapseIntegrationPipelineRuns | IntegrationPipelineRuns | Azure Synapse integration pipeline runs. |
+| SynapseIntegrationActivityRuns | IntegrationActivityRuns | Azure Synapse integration activity runs. |
+| SynapseIntegrationTriggerRuns | IntegrationTriggerRuns | Azure Synapse integration trigger runs. |
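For example, a query along these lines lists recent failed pipeline runs from the `SynapseIntegrationPipelineRuns` table. This is a sketch: the `Status`, `PipelineName`, and `RunId` column names are assumptions to check against the schema in your Log Analytics workspace:

```Kusto
// Failed integration pipeline runs in the last 24 hours (column names assumed).
SynapseIntegrationPipelineRuns
| where TimeGenerated > ago(24h)
| where Status == "Failed"
| project TimeGenerated, PipelineName, RunId, Status
| sort by TimeGenerated desc
```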
### Dedicated SQL pool logs Here are the logs emitted by dedicated SQL pools:
-| Log Analytics table name | Log category name | Description |
-|-|--|-|
+| Log Analytics table name | Log category name | Description |
+| --- | --- | --- |
| SynapseSqlPoolExecRequests | ExecRequests | Information about SQL requests/queries in an Azure Synapse dedicated SQL pool. |
| SynapseSqlPoolDmsWorkers | DmsWorkers | Information about workers completing DMS steps in an Azure Synapse dedicated SQL pool. |
| SynapseSqlPoolRequestSteps | RequestSteps | Information about request steps that compose a given SQL request/query in an Azure Synapse dedicated SQL pool. |
For more information on these logs, see the following information:
Here is the log emitted by Apache Spark pools:
-| Log Analytics table name | Log category name | Description |
-|--||--|
+| Log Analytics table name | Log category name | Description |
+| --- | --- | --- |
| SynapseBigDataPoolApplicationsEnded | BigDataPoolAppsEnded | Information about ended Apache Spark applications | ### Diagnostic settings
Use diagnostic settings to configure diagnostic logs for non-compute resources.
With Azure Monitor diagnostic settings, you can route diagnostic logs for analysis to multiple different targets. * **Storage account**: Save your diagnostic logs to a storage account for auditing or manual inspection. You can use the diagnostic settings to specify the retention time in days.
-* **Event Hub**: Stream the logs to Azure Event Hubs. The logs become input to a partner service/custom analytics solution like Power BI.
+* **Event Hubs**: Stream the logs to Azure Event Hubs. The logs become input to a partner service/custom analytics solution like Power BI.
* **Log Analytics workspace**: Analyze the logs with Log Analytics. The Azure Synapse integration with Log Analytics is useful in the following scenarios: * You want to write complex queries on a rich set of metrics that are published by Azure Synapse to Log Analytics. You can create custom alerts on these queries via Azure Monitor. * You want to monitor across workspaces. You can route data from multiple workspaces to a single Log Analytics workspace.
-You can also use a storage account or Event Hub namespace that isn't in the subscription of the resource that emits logs. The user who configures the setting must have appropriate Azure role-based access control (Azure RBAC) access to both subscriptions.
+You can also use a storage account or Event Hubs namespace that isn't in the subscription of the resource that emits logs. The user who configures the setting must have appropriate Azure role-based access control (Azure RBAC) access to both subscriptions.
#### Configure diagnostic settings
Create or add diagnostic settings for your workspace, dedicated SQL pool, or Apa
1. Give your setting a name, select **Send to Log Analytics**, and then select a workspace from **Log Analytics workspace**.
- > [!NOTE]
+ > [!NOTE]
> Because an Azure log table can't have more than 500 columns, we **highly recommended** you select _Resource-Specific mode_. For more information, see [AzureDiagnostics Logs reference](/azure/azure-monitor/reference/tables/azurediagnostics). 1. Select **Save**.
After a few moments, the new setting appears in your list of settings for your w
## Next steps
-For more information on monitoring pipeline runs, see the [Monitor pipeline runs in Synapse Studio](how-to-monitor-pipeline-runs.md) article.
+- For more information on monitoring pipeline runs, see the [Monitor pipeline runs in Synapse Studio](how-to-monitor-pipeline-runs.md) article.
-For more information on monitoring Apache Spark applications, see the [Monitor Apache Spark applications in Synapse Studio](apache-spark-applications.md) article.
+- For more information on monitoring Apache Spark applications, see the [Monitor Apache Spark applications in Synapse Studio](apache-spark-applications.md) article.
-For more information on monitoring SQL requests, see the [Monitor SQL requests in Synapse Studio](how-to-monitor-sql-requests.md) article.
+- For more information on monitoring SQL requests, see the [Monitor SQL requests in Synapse Studio](how-to-monitor-sql-requests.md) article.
synapse-analytics Sql Data Warehouse Concept Resource Utilization Query Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity.md
Title: Manageability and monitoring - query activity, resource utilization
+ Title: Manageability and monitoring - query activity, resource utilization
description: Understand planning for an Azure Files deployment. Learn what capabilities are available to manage and monitor Azure Synapse Analytics. Use the Azure portal and Dynamic Management Views (DMVs) to understand query activity and resource utilization of your data warehouse. Previously updated : 04/04/2022 Last updated : 11/02/2022
-# Monitoring resource utilization and query activity in Azure Synapse Analytics
+# Monitor resource utilization and query activity in Azure Synapse Analytics
Azure Synapse Analytics provides a rich monitoring experience within the Azure portal to surface insights regarding your data warehouse workload. The Azure portal is the recommended tool when monitoring your data warehouse as it provides configurable retention periods, alerts, recommendations, and customizable charts and dashboards for metrics and logs. The portal also integrates with other Azure monitoring services, such as Azure Monitor (logs) with Log Analytics, to provide a holistic monitoring experience for not only your data warehouse but your entire Azure analytics platform. This article describes what monitoring capabilities are available to optimize and manage your analytics platform with Synapse SQL. ## Resource utilization
-The following metrics are available in the Azure portal for Synapse SQL. These metrics are surfaced through [Azure Monitor](../../azure-monitor/data-platform.md?bc=%2fazure%2fsynapse-analytics%2fsql-data-warehouse%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2fsql-data-warehouse%2ftoc.json#metrics).
-
-| Metric Name | Description | Aggregation Type |
-| -- | | - |
-| CPU percentage | CPU utilization across all nodes for the data warehouse | Avg, Min, Max |
-| Data IO percentage | IO Utilization across all nodes for the data warehouse | Avg, Min, Max |
-| Memory percentage | Memory utilization (SQL Server) across all nodes for the data warehouse | Avg, Min, Max |
-| Active Queries | Number of active queries executing on the system | Sum |
-| Queued Queries | Number of queued queries waiting to start executing | Sum |
-| Successful Connections | Number of successful connections (logins) against the database | Sum, Count |
-| Failed Connections | Number of failed connections (logins) against the database | Sum, Count |
-| Blocked by Firewall | Number of logins to the data warehouse which was blocked | Sum, Count |
-| DWU limit | Service level objective of the data warehouse | Avg, Min, Max |
-| DWU percentage | Maximum between CPU percentage and Data IO percentage | Avg, Min, Max |
-| DWU used | DWU limit * DWU percentage | Avg, Min, Max |
-| Cache hit percentage | (cache hits / (cache hits + cache miss)) * 100, where cache hits are the sum of all columnstore segments hits in the local SSD cache and cache miss is the columnstore segments misses in the local SSD cache summed across all nodes | Avg, Min, Max |
-| Cache used percentage | (cache used / cache capacity) * 100 where cache used is the sum of all bytes in the local SSD cache across all nodes and cache capacity is the sum of the storage capacity of the local SSD cache across all nodes | Avg, Min, Max |
-| Local tempdb percentage | Local tempdb utilization across all compute nodes - values are emitted every five minutes | Avg, Min, Max |
+The following metrics are available for dedicated SQL pools (formerly SQL Data Warehouse). For dedicated SQL pools created in Azure Synapse workspaces, see [Use Azure Monitor with your Azure Synapse Analytics workspace](../monitoring/how-to-monitor-using-azure-monitor.md).
+
+These metrics are surfaced through [Azure Monitor](../../azure-monitor/data-platform.md?bc=%2fazure%2fsynapse-analytics%2fsql-data-warehouse%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2fsql-data-warehouse%2ftoc.json#metrics).
+
+| Metric Name | Description | Aggregation Type |
+| --- | --- | --- |
+| CPU percentage | CPU utilization across all nodes for the data warehouse | Avg, Min, Max |
+| Data IO percentage | IO Utilization across all nodes for the data warehouse | Avg, Min, Max |
+| Memory percentage | Memory utilization (SQL Server) across all nodes for the data warehouse | Avg, Min, Max |
+| Active Queries | Number of active queries executing on the system | Sum |
+| Queued Queries | Number of queued queries waiting to start executing | Sum |
+| Successful Connections | Number of successful connections (logins) against the database | Sum, Count |
+| Failed Connections: User Errors | Number of connections (logins) to the database that failed due to user errors | Sum, Count |
+| Failed Connections: System Errors | Number of connections (logins) to the database that failed due to system errors | Sum, Count |
+| Blocked by Firewall | Number of logins to the data warehouse that were blocked by firewall rules | Sum, Count |
+| DWU limit | Service level objective of the data warehouse | Avg, Min, Max |
+| DWU percentage | Maximum between CPU percentage and Data IO percentage | Avg, Min, Max |
+| DWU used | DWU limit * DWU percentage | Avg, Min, Max |
+| Cache hit percentage | (cache hits / (cache hits + cache miss)) * 100, where cache hits are the sum of all columnstore segments hits in the local SSD cache and cache miss is the columnstore segments misses in the local SSD cache summed across all nodes | Avg, Min, Max |
+| Cache used percentage | (cache used / cache capacity) * 100 where cache used is the sum of all bytes in the local SSD cache across all nodes and cache capacity is the sum of the storage capacity of the local SSD cache across all nodes | Avg, Min, Max |
+| Local `tempdb` percentage | Local `tempdb` utilization across all compute nodes - values are emitted every five minutes | Avg, Min, Max |
Things to consider when viewing metrics and setting alerts: -- DWU used represents only a **high-level representation of usage** across the SQL pool and is not meant to be a comprehensive indicator of utilization. To determine whether to scale up or down, consider all factors which can be impacted by DWU such as concurrency, memory, tempdb, and adaptive cache capacity. We recommend [running your workload at different DWU settings](sql-data-warehouse-manage-compute-overview.md#finding-the-right-size-of-data-warehouse-units) to determine what works best to meet your business objectives.
+- DWU used is only a **high-level representation of usage** across the SQL pool and is not meant to be a comprehensive indicator of utilization. To determine whether to scale up or down, consider all factors that can be impacted by DWU, such as concurrency, memory, `tempdb`, and adaptive cache capacity. We recommend [running your workload at different DWU settings](sql-data-warehouse-manage-compute-overview.md#finding-the-right-size-of-data-warehouse-units) to determine what works best to meet your business objectives.
- Failed and successful connections are reported for a particular data warehouse - not for the server itself.-- Memory percentage reflects utilization even if the data warehouse is in idle state - it does not reflect active workload memory consumption. Use and track this metric along with others (`tempdb`, gen2 cache) to make a holistic decision on if scaling for additional cache capacity will increase workload performance to meet your requirements.
+- Memory percentage reflects utilization even if the data warehouse is in an idle state - it does not reflect active workload memory consumption. Use and track this metric along with others (`tempdb`, Gen2 cache) to make a holistic decision on whether scaling for additional cache capacity will increase workload performance to meet your requirements.
## Query activity For a programmatic experience when monitoring Synapse SQL via T-SQL, the service provides a set of Dynamic Management Views (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
-To view the list of DMVs that apply to Synapse SQL, review [dedicated SQL pool DMVs](../sql/reference-tsql-system-views.md#dedicated-sql-pool-dynamic-management-views-dmvs).
+To view the list of DMVs that apply to Synapse SQL, review [dedicated SQL pool DMVs](../sql/reference-tsql-system-views.md#dedicated-sql-pool-dynamic-management-views-dmvs).
-## Metrics and diagnostics logging
+## Metrics and diagnostics logging
Both metrics and logs can be exported to Azure Monitor, specifically the [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) component and can be programmatically accessed through [log queries](../../azure-monitor/logs/log-analytics-tutorial.md?bc=%2fazure%2fsynapse-analytics%2fsql-data-warehouse%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2fsql-data-warehouse%2ftoc.json). The log latency for Synapse SQL is about 10-15 minutes.
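Given that latency, a minimal log query such as the following sketch can confirm that data is flowing, by showing when each metric last arrived in the standard `AzureMetrics` table:

```Kusto
// Most recent record per metric; useful for eyeballing the ~10-15 minute ingestion latency.
AzureMetrics
| where TimeGenerated > ago(4h)
| summarize LastSeen = max(TimeGenerated) by MetricName
| extend MinutesAgo = datetime_diff('minute', now(), LastSeen)
| sort by MinutesAgo asc
```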
virtual-desktop Configure Device Redirections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-device-redirections.md
Configuring device redirection for your Azure Virtual Desktop environment allows
## Supported device redirection
-Each client supports different kinds of device redirections. Check out [Compare the clients](/windows-server/remote/remote-desktop-services/clients/remote-desktop-app-compare) for the full list of supported device redirections for each client.
+Each client supports different kinds of device redirections. Check out [Compare the clients](compare-remote-desktop-clients.md) for the full list of supported device redirections for each client.
>[!IMPORTANT] >You can only enable redirections with binary settings that apply both to and from the remote machine. The service doesn't currently support one-way blocking of redirections from only one side of the connection. ## Customizing RDP properties for a host pool
-To learn more about customizing RDP properties for a host pool using PowerShell or the Azure portal, check out [RDP properties](customize-rdp-properties.md). For the full list of supported RDP properties, see [Supported RDP file settings](/windows-server/remote/remote-desktop-services/clients/rdp-files?context=%2fazure%2fvirtual-desktop%2fcontext%2fcontext).
+To learn more about customizing RDP properties for a host pool using PowerShell or the Azure portal, check out [RDP properties](customize-rdp-properties.md). For the full list of supported RDP properties, see [Supported RDP file settings](rdp-properties.md).
## Setup device redirection
virtual-desktop Client Features Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-windows.md
To subscribe to a workspace with a link:
1. Enter your user account, then select **Sign in**. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.
+## Azure Virtual Desktop (HostApp)
+
+The Azure Virtual Desktop (HostApp) is a platform component containing a set of predefined user interfaces and APIs that Azure Virtual Desktop developers can use to deploy and manage Remote Desktop connections to their Azure Virtual Desktop resources. If this application is required on a device for another application to work correctly, it will automatically be downloaded by the other application. There should be no need for user interaction.
+
+The purpose of the Azure Virtual Desktop (HostApp) is to provide core functionality to other client apps in the Microsoft Store. This is known as the *Hosted App Model*. For more information, see [Hosted App Model](https://blogs.windows.com/windowsdeveloper/2020/03/19/hosted-app-model/).
+ ## Provide feedback If you want to provide feedback to us on the Remote Desktop client for Windows, you can do so by selecting the button that looks like a smiley face emoji in the client app, as shown in the following image. This will open the **Feedback Hub**.
virtual-desktop Connect Android Chrome Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-android-chrome-os.md
The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop
You can find a list of all the Remote Desktop clients you can use to connect to Azure Virtual Desktop at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
-If you want to connect to Remote Desktop Services instead of Azure Virtual Desktop or a local PC, see [Connect to Remote Desktop Services with the Remote Desktop client for Android and Chrome OS](/windows-server/remote/remote-desktop-services/clients/remote-desktop-android).
+If you want to connect to Remote Desktop Services or a remote PC instead of Azure Virtual Desktop, see [Connect to Remote Desktop Services with the Remote Desktop client for Android and Chrome OS](/windows-server/remote/remote-desktop-services/clients/remote-desktop-android).
## Prerequisites
virtual-desktop Connect Ios Ipados https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-ios-ipados.md
The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop
You can find a list of all the Remote Desktop clients you can use to connect to Azure Virtual Desktop at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
-If you want to connect to Remote Desktop Services instead of Azure Virtual Desktop or a local PC, see [Connect to Remote Desktop Services with the Remote Desktop client for iOS and iPadOS](/windows-server/remote/remote-desktop-services/clients/remote-desktop-ios).
+If you want to connect to Remote Desktop Services or a remote PC instead of Azure Virtual Desktop, see [Connect to Remote Desktop Services with the Remote Desktop client for iOS and iPadOS](/windows-server/remote/remote-desktop-services/clients/remote-desktop-ios).
## Prerequisites
virtual-desktop Connect Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-macos.md
The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop
You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
-If you want to connect to Remote Desktop Services instead of Azure Virtual Desktop or a local PC, see [Connect to Remote Desktop Services with the Remote Desktop client for macOS](/windows-server/remote/remote-desktop-services/clients/remote-desktop-mac).
+If you want to connect to Remote Desktop Services or a remote PC instead of Azure Virtual Desktop, see [Connect to Remote Desktop Services with the Remote Desktop client for macOS](/windows-server/remote/remote-desktop-services/clients/remote-desktop-mac).
## Prerequisites
virtual-desktop Connect Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-microsoft-store.md
The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop
You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
-If you want to connect to Remote Desktop Services instead of Azure Virtual Desktop or a local PC, see [Connect to Remote Desktop Services with the Remote Desktop client for Windows (Microsoft Store)](/windows-server/remote/remote-desktop-services/clients/windows).
+If you want to connect to Remote Desktop Services or a remote PC instead of Azure Virtual Desktop, see [Connect to Remote Desktop Services with the Remote Desktop client for Windows (Microsoft Store)](/windows-server/remote/remote-desktop-services/clients/windows).
## Prerequisites
virtual-desktop Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows.md
The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop
You can find a list of all the Remote Desktop clients you can use to connect to Azure Virtual Desktop at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
-If you want to connect to Remote Desktop Services instead of Azure Virtual Desktop or a local PC, see [Connect to Remote Desktop Services with the Remote Desktop client for Windows](/windows-server/remote/remote-desktop-services/clients/windowsdesktop).
- ## Prerequisites Before you can access your resources, you'll need to meet the prerequisites: - Internet access. -- A device running one of the following versions of Windows:
+- A device running one of the following supported versions of Windows:
- Windows 11
+ - Windows 11 IoT Enterprise
- Windows 10 - Windows 10 IoT Enterprise - Windows 7
+ - Windows Server 2019
+ - Windows Server 2016
+ - Windows Server 2012 R2
- Download the Remote Desktop client installer, choosing the correct version for your device: - [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2068602) *(most common)* - [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2098960) - [Windows on Arm](https://go.microsoft.com/fwlink/?linkid=2098961)
+- .NET Framework 4.6.2 or later. You may need to install this on Windows 7, Windows Server 2012 R2, Windows Server 2016, and some versions of Windows 10. To download the latest version, see [Download .NET Framework](https://dotnet.microsoft.com/download/dotnet-framework).
+
+- You can't sign in using the built-in Administrator user account.
+ > [!IMPORTANT] > Extended support for using Windows 7 to connect to Azure Virtual Desktop ends on January 10, 2023.
virtual-desktop Remote Desktop Clients Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/remote-desktop-clients-overview.md
There are many features you can use to enhance your remote experience, such as:
Some features are only available with certain clients, so it's important to check [Compare the features of the Remote Desktop clients](../compare-remote-desktop-clients.md?toc=%2Fazure%2Fvirtual-desktop%2Fusers%2Ftoc.json) to understand the differences when connecting to Azure Virtual Desktop.
-If you want information on Remote Desktop Services instead, see [Remote Desktop clients for Remote Desktop Services](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).
+> [!TIP]
+> You can also use most versions of the Remote Desktop client to connect to [Remote Desktop Services](/windows-server/remote/remote-desktop-services/welcome-to-rds) in Windows Server or to a remote PC, as well as to Azure Virtual Desktop. If you want information on Remote Desktop Services instead, see [Remote Desktop clients for Remote Desktop Services](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).
Here's a list of the Remote Desktop client apps and our documentation for connecting to Azure Virtual Desktop, where you can find download links, what's new, and learn how to install and use each client.
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
+
+ Title: What's new in the Remote Desktop client for Windows
+description: Learn about recent changes to the Remote Desktop client for Windows
+ Last updated : 11/03/2022
+# What's new in the Remote Desktop client for Windows
+
+You can find more detailed information about the Windows Desktop client at [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows](users/connect-windows.md) and [Use features of the Remote Desktop client for Windows when connecting to Azure Virtual Desktop](users/client-features-windows.md). You'll find the latest updates for the available clients in this article.
+
+## Supported client versions
+
+The client can be configured to enable Windows Insider releases. The following table lists the current versions available for each release:
+
+| Release | Latest version | Minimum supported version |
+||-||
+| Public | 1.2.3577 | 1.2.1672 |
+| Insider | 1.2.3667 | 1.2.1672 |
+
+## Updates for version 1.2.3667 (Insider)
+
+*Date published: 10/25/2022*
+
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+
+- Added User Datagram Protocol (UDP) support to the client's ARM64 platform.
+- Fixed an issue where the tooltip didn't disappear when the user moved the mouse cursor away from the tooltip area.
+- Fixed an issue where the application crashes when calling reset manually from the command line.
+- Fixed an issue where the client stops responding when disconnecting, which prevents the user from launching another connection.
+- Fixed an issue where the client stops responding when coming out of sleep mode.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+## Updates for version 1.2.3577
+
+*Date published: 10/10/2022*
+
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
+
+Fixed a bug related to tracing that was blocking reconnections.
+
+## Updates for version 1.2.3576
+
+*Date published: 10/6/2022*
+
+Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE58YFH), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE59ekJ), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE58YFG)
+
+Fixed a bug that affected users of some third-party plugins.
+
+## Updates for version 1.2.3575
+
+*Date published: 10/4/2022*
+
+Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios.
+
+## Updates for version 1.2.3574
+
+*Date published: 10/4/2022*
+
+- Added a banner warning users running the client on Windows 7 that support for Windows 7 will end on January 10, 2023.
+- Added a page to the installer warning users running the client on Windows 7 that support for Windows 7 will end on January 10, 2023.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates to multimedia redirection (MMR) for Azure Virtual Desktop, including the following:
+ - MMR now works in browsers running as a RemoteApp and supports up to 30 sites. For more information, see [Understanding multimedia redirection for Azure Virtual Desktop](/azure/virtual-desktop/multimedia-redirection-intro).
+ - MMR introduces better diagnostic tools with the new status icon and one-click Tracelog. For more information, see [Multimedia redirection for Azure Virtual Desktop (preview)](/azure/virtual-desktop/multimedia-redirection).
+
+## Updates for version 1.2.3497
+
+*Date published: 9/20/2022*
+
+- Accessibility improvements through increased color contrast in the virtual desktop connection blue bar.
+- Updated connection information dialog to distinguish between Websocket (renamed from TCP), RDP Shortpath for managed networks, and RDP Shortpath for public networks.
+- Fixed bugs.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates to Teams for Azure Virtual Desktop, including the following:
+ - Fixed an issue that caused calls to disconnect when using a microphone with a high sample rate (192 kHz).
+- Resolved a connectivity issue with older RDP stacks.
+
+## Updates for version 1.2.3496
+
+*Date published: 9/08/2022*
+
+- Reverted to version 1.2.3401 build to avoid a connectivity issue with older RDP stacks.
+
+## Updates for version 1.2.3401
+
+*Date published: 8/02/2022*
+
+- Fixed an issue where the narrator was announcing the Tenant Expander button as "on" or "off" instead of "expanded" or "collapsed."
+- Fixed an issue where the text size didn't change when the user adjusted the text size system setting.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+## Updates for version 1.2.3317
+
+*Date published: 7/12/2022*
+
+- Fixed the vulnerability known as [CVE-2022-30221](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-30221).
+
+## Updates for version 1.2.3316
+
+*Date published: 7/06/2022*
+
+- Fixed an issue where the service couldn't render RemoteApp windows while RemoteFX Advanced Graphics were disabled.
+- Fixed an issue that happened when a user tried to connect to an Azure Virtual Desktop endpoint while using the Remote Desktop Services Transport Layer Security protocol (RDSTLS) with CredSSP disabled, which caused the Windows Desktop client to not prompt the user for credentials. Because the client couldn't authenticate, it would get stuck in an infinite loop of failed connection attempts.
+- Fixed an issue that happened when users tried to connect to an Azure Active Directory (Azure AD)-joined Azure Virtual Desktop endpoint from a client machine joined to the same Azure AD tenant while the Credential Security Support Provider protocol (CredSSP) was disabled.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates to Teams for Azure Virtual Desktop, including the following:
+ - Better noise suppression during calls.
+ - A diagnostic overlay now appears when you press **Shift+Ctrl+Semicolon (;)** during calls. The diagnostic overlay only works with version 1.17.2205.23001 or later of the Remote Desktop WebRTC Redirector Service. You can download the latest version of the service [here](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4YM8L).
+
+## Updates for version 1.2.3213
+
+*Date published: 6/02/2022*
+
+- Reduced flicker when an application is restored to full-screen mode from a minimized state in a single-monitor configuration.
+- The client now shows an error message when the user tries to open a connection from the UI, but the connection doesn't launch.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates to Teams for Azure Virtual Desktop, including the following:
+ - The new hardware encoding feature increases the video quality (resolution and framerate) of the outgoing camera during Teams calls. Because this feature uses the underlying hardware on the PC and not just software, we're being extra careful to ensure broad compatibility before turning the feature on by default for all users. Therefore, this feature is currently off by default. To get an early preview of the feature, you can enable it on your local machine by creating a registry key at **Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Terminal Server Client\Default\AddIns\WebRTC Redirector\\(DWORD)UseHardwareEncoding** and setting it to **1**. To disable the feature, set the key to **0**.
+
+## Updates for version 1.2.3130
+
+*Date published: 05/10/2022*
+
+- Fixed the vulnerability known as [CVE-2022-22017](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-22017).
+- Fixed the vulnerability known as [CVE-2022-26940](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-26940).
+- Fixed the vulnerability known as [CVE-2022-22015](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-22015).
+- Fixed an issue where the [Class Identifier (CLSID)-based registration of the dynamic virtual channel (DVC) plug-in](/windows/win32/termserv/dvc-plug-in-registration) wasn't working.
+
+## Updates for version 1.2.3128
+
+*Date published: 5/03/2022*
+
+- Improved Narrator application experience.
+- Accessibility improvements.
+- Fixed a regression that prevented subsequent connections after reconnecting to an existing session with the group policy object (GPO) "User Configuration\Administrative Templates\System\Ctrl+Alt+Del Options\Remove Lock Computer" enabled.
+- Added an error message for when a user selects a credential type for smart card or Windows Hello for Business but the required smart card redirection is disabled in the RDP file.
+- Improved diagnostics for User Datagram Protocol (UDP)-based Remote Desktop Protocol (RDP) transport protocols.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates to Teams for Azure Virtual Desktop, including updating the WebRTC stack from version M88 to M98. M98 provides better reliability and performances when making audio and video calls.
+
+## Updates for version 1.2.3004
+
+*Date published: 3/29/2022*
+
+- Fixed an issue where Narrator didn't announce grid or list views correctly.
+- Fixed an issue where the msrdc.exe process might take a long time to exit after closing the last Azure Virtual Desktop connection if customers have set a very short token expiration policy.
+- Updated the error message that appears when users are unable to subscribe to their feed.
+- Updated the disconnect dialog boxes that appear when the user locks their remote session or puts their local computer in sleep mode so that they're informational only.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- [Multimedia redirection for Azure Virtual Desktop (preview)](/azure/virtual-desktop/multimedia-redirection) now has an update that gives it more site and media control compatibility.
+- Improved connection reliability for Teams on Azure Virtual Desktop.
+
+## Updates for version 1.2.2927
+
+*Date published: 3/15/2022*
+
+Fixed an issue where the number pad didn't work on initial focus.
+
+## Updates for version 1.2.2925
+
+*Date published: 03/08/2022*
+
+- Fixed the vulnerability known as [CVE-2022-21990](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-21990).
+- Fixed the vulnerability known as [CVE-2022-24503](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-24503).
+- Fixed an issue where background updates could close active remote connections.
+
+## Updates for version 1.2.2924
+
+*Date published: 02/23/2022*
+
+- The Desktop client now supports Ctrl+Alt+arrow key keyboard shortcuts during desktop sessions.
+- Improved graphics performance with certain mouse types.
+- Fixed an issue that caused the client to randomly crash when something ends a RemoteApp connection.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates to Teams for Azure Virtual Desktop, including the following:
+ - The background blur feature is rolling out this week for Windows endpoints.
+ - Fixed an issue that caused the screen to turn black during Teams video calls.
+
+## Updates for version 1.2.2860
+
+*Date published: 02/15/2022*
+
+- Improved stability of Azure Active Directory authentication.
+- Fixed an issue that was preventing users from opening multiple .RDP files from different host pools.
+
+## Updates for version 1.2.2851
+
+*Date published: 01/25/2022*
+
+- Fixed an issue that caused a redirected camera to give incorrect error codes when camera access was restricted in the Privacy settings on the client device. This update should give accurate error messages in apps using the redirected camera.
+- Fixed an issue where the Azure Active Directory credential prompt appeared in the wrong monitor.
+- Fixed an issue where the background refresh and update tasks were repeatedly registered with the task scheduler, which caused the background and update task times to change without user input.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates to Teams for Azure Virtual Desktop, including the following:
+ - In September 2021, we released a preview of our GPU render path optimizations but left them disabled by default. After extensive testing, we've now enabled them by default. These GPU render path optimizations reduce endpoint-to-endpoint latency and solve some performance issues. You can manually disable these optimizations by setting the registry key **HKEY_CURRENT_USER\SOFTWARE\Microsoft\Terminal Server Client\IsSwapChainRenderingEnabled** to **00000000** (see the sketch after this list).
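A minimal PowerShell sketch of the disable step, assuming the value is a DWORD (as the **00000000**/**00000001** values suggest):

```powershell
# A sketch only: disable the GPU render path optimizations for the current user.
# Assumes IsSwapChainRenderingEnabled is a DWORD value, per the values above.
$keyPath = 'HKCU:\SOFTWARE\Microsoft\Terminal Server Client'
New-ItemProperty -Path $keyPath -Name 'IsSwapChainRenderingEnabled' -Value 0 -PropertyType DWord -Force
```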
+
+## Updates for version 1.2.2691
+
+*Date published: 01/12/2022*
+
+- Fixed the vulnerability known as [CVE-2019-0887](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2019-0887).
+- Fixed the vulnerability known as [CVE-2022-21850](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-21850).
+- Fixed the vulnerability known as [CVE-2022-21851](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-21851).
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+## Updates for version 1.2.2688
+
+*Date published: 12/09/2021*
+
+- Fixed an issue where some users were unable to subscribe using the "subscribe with URL" option after updating to version 1.2.2687.0.
+
+## Updates for version 1.2.2687
+
+*Date published: 12/02/2021*
+
+- Improved manual refresh functionality to acquire new user tokens, which ensures the service can accurately update user access to resources.
+- Fixed an issue where the service sometimes pasted empty frames when a user tried to copy an image from a remotely running Internet Explorer browser to a locally running Word document.
+- Fixed the vulnerability known as [CVE-2021-38665](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-38665).
+- Fixed the vulnerability known as [CVE-2021-38666](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-38666).
+- Fixed the vulnerability known as [CVE-2021-1669](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-1669).
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Fixed a usability issue where the Windows Desktop client would sometimes prompt for a password (Azure Active Directory prompt) after the device went into sleep mode.
+- Fixed an issue where the client didn't automatically expand and display interactive sign-in messages set by admins when a user signs in to their virtual machine.
+- Fixed a reliability issue that appeared in version 1.2.2686 where the client stopped responding when users tried to launch new connections.
+- Updates to Teams for Azure Virtual Desktop, including the following:
+ - The notification volume level on the client device is now the same as the host device.
+ - Fixed an issue where the device volume was low in Azure Virtual Desktop sessions.
+ - Fixed a multi-monitor screen sharing issue where screen sharing didn't appear correctly when moving from one monitor to the other.
+ - Resolved an issue that sometimes caused screen sharing to incorrectly show a black screen.
+ - Increased the reliability of the camera stack when resizing the Teams app or turning the camera on or off.
+ - Fixed a memory leak that caused issues like high memory usage or video freezing when reconnecting with Azure Virtual Desktop.
+ - Fixed an issue that caused Remote Desktop connections to stop responding.
+
+## Updates for version 1.2.2606
+
+*Date published: 11/9/2021*
+
+- Fixed the vulnerability known as [CVE-2021-38665](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-38665).
+- Fixed the vulnerability known as [CVE-2021-38666](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-38666).
+- Fixed an issue where the service sometimes pasted empty frames when a user tried to copy an image from a remotely running Internet Explorer browser to a locally running Word document.
+
+## Updates for version 1.2.2600
+
+*Date published: 10/26/2021*
+
+- Updates to Teams for Azure Virtual Desktop, including improvements to camera performance during video calls.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+## Updates for version 1.2.2459
+
+*Date published: 09/28/2021*
+
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Fixed an issue that caused the client to prompt for credentials a second time after closing a credential prompt window while subscribing.
+- Updates to Teams for Azure Virtual Desktop, including the following:
+ - Fixed an issue that made the video screen turn black and crash during calls in the Chrome browser.
+ - Reduced end-to-end latency and some performance issues by optimizing the GPU render path in the Windows Desktop client. To enable the new render path, add the registry key **HKEY_CURRENT_USER\SOFTWARE\Microsoft\Terminal Server Client\IsSwapChainRenderingEnabled** and set its value to **00000001**. To disable the new render path and revert to the original path, either set the key's value to **00000000** or delete the key (see the sketch after this list).
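A minimal PowerShell sketch of both steps, assuming the value is a DWORD:

```powershell
# A sketch only: enable the new GPU render path for the current user.
$keyPath = 'HKCU:\SOFTWARE\Microsoft\Terminal Server Client'
New-ItemProperty -Path $keyPath -Name 'IsSwapChainRenderingEnabled' -Value 1 -PropertyType DWord -Force

# To revert to the original render path, delete the value again.
Remove-ItemProperty -Path $keyPath -Name 'IsSwapChainRenderingEnabled' -ErrorAction SilentlyContinue
```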
+
+## Updates for version 1.2.2322
+
+*Date published: 08/24/2021*
+
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Added updates to Teams on Azure Virtual Desktop, including:
+ - Fixed an issue that caused the screen to turn black when Direct X wasn't available for hardware decoding.
+ - Fixed a software decoding and camera preview issue that happened when falling back to software decode.
+- [Multimedia redirection for Azure Virtual Desktop](/azure/virtual-desktop/multimedia-redirection) is now in public preview.
+
+## Updates for version 1.2.2223
+
+*Date published: 08/10/2021*
+
+- Fixed the security vulnerability known as [CVE-2021-34535](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-34535).
+
+## Updates for version 1.2.2222
+
+*Date published: 07/27/2021*
+
+- The client also updates in the background when the auto-update feature is enabled, no remote connection is active, and MSRDCW.exe isn't running (see the sketch after this list).
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Fixed an ICE inversion parameter issue that prevented some Teams calls from connecting.
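Because a background update only proceeds once MSRDCW.exe has exited, one quick way to check that condition is to look for the process. A sketch, assuming the default process name `msrdcw`:

```powershell
# A sketch only: check whether the client UI (MSRDCW.exe) is still running.
if (Get-Process -Name 'msrdcw' -ErrorAction SilentlyContinue) {
    'MSRDCW.exe is running; the background update waits until it exits.'
} else {
    'MSRDCW.exe is not running; a background update can proceed.'
}
```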
+
+## Updates for version 1.2.2130
+
+*Date published: 06/22/2021*
+
+- Windows Virtual Desktop has been renamed to Azure Virtual Desktop. Learn more about the name change at [our announcement on our blog](https://azure.microsoft.com/blog/azure-virtual-desktop-the-desktop-and-app-virtualization-platform-for-the-hybrid-workplace/).
+- Fixed an issue where the client would ask for authentication after the user ended their session and closed the window.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Fixed an issue with Logitech C270 cameras where Teams only showed a black screen in the camera settings and while sharing images during calls.
+
+## Updates for version 1.2.2061
+
+*Date published: 05/25/2021*
+
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates to Teams on Azure Virtual Desktop, including the following:
+ - Resolved a black screen video issue; the fix also addressed a mismatch in video resolutions with Teams Server.
+ - Teams on Azure Virtual Desktop now changes resolution and bitrate in accordance with what Teams Server expects.
+
+## Updates for version 1.2.1954
+
+*Date published: 05/13/2021*
+
+- Fixed the vulnerability known as [CVE-2021-31186](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-31186).
+
+## Updates for version 1.2.1953
+
+*Date published: 05/06/2021*
+
+- Fixed an issue that caused the client to crash when users selected "Disconnect all sessions" in the system tray.
+- Fixed an issue where the client wouldn't switch to full screen on a single monitor with a docking station.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates to Teams on Azure Virtual Desktop, including the following:
+ - Added hardware acceleration for video processing of outgoing video streams for Windows 10-based clients.
+ - When joining a meeting with both a front-facing and rear-facing or external camera, the front-facing camera will be selected by default.
+ - Fixed an issue that made Teams on Azure Virtual Desktop crash while loading on x86-based machines.
+ - Fixed an issue that caused striations during screen sharing.
+ - Fixed an issue that prevented some people in meetings from seeing incoming video or screen sharing.
+
+## Updates for version 1.2.1844
+
+*Date published: 03/23/2021*
+
+- Updated background installation functionality to perform silently for the client auto-update feature.
+- Fixed an issue where the client forwarded multiple attempts to launch a desktop to the same session. Depending on your group policy configuration, the session host can now allow the creation of multiple sessions for the same user on the same session host or disconnect the previous connection by default. This behavior wasn't consistent before version 1.2.1755.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates for Teams on Azure Virtual Desktop, including the following:
+ - We've offloaded video processing (XVP) to reduce CPU utilization by 5-10% (depending on CPU generation). Combined with the hardware decode feature from February's update, we've now reduced the total CPU utilization by 10-20% (depending on CPU generation).
+ - We've added XVP and hardware decode, which allows older machines to display more incoming video streams smoothly in 2x2 mode.
+ - We've also updated the WebRTC stack from version M74 to M88. M88 has better reliability, AV sync performance, and fewer transient issues.
+ - We've replaced our software H264 encoder with OpenH264. OpenH264 is an open-source codec that increases video quality of the outgoing camera stream.
+ - The client now ships with 2x2 mode, which shows up to four incoming video streams simultaneously.
+
+## Updates for version 1.2.1755
+
+*Date published: 02/23/2021*
+
+- Added the Experience Monitor access point to the system tray icon.
+- Fixed an issue where entering an email address into the "Subscribe to a Workspace" tab caused the application to stop responding.
+- Fixed an issue where the client sometimes didn't send Event Hubs and Diagnostics events.
+- Updates to Teams on Azure Virtual Desktop, including:
+ - Improved audio and video sync performance and added hardware accelerated decode that decreases CPU utilization on the client.
+ - Addressed the most prevalent causes of black screen issues when a user joins a call or meeting with their video turned on, when a user performs screen sharing, and when a user toggles their camera on and off.
+ - Improved quality of active speaker switching in single video view by reducing the time it takes for the video to appear and reducing intermittent black screens when switching video streams to another user.
+ - Fixed an issue where hardware devices with special characters would sometimes not be available in Teams.
+
+## Updates for version 1.2.1672
+
+*Date published: 01/26/2021*
+
+- Added support for the screen capture protection feature for Windows 10 endpoints. To learn more, see [Session host security best practices](/azure/virtual-desktop/security-guide#session-host-security-best-practices).
+- Added support for proxies that require authentication for feed subscription.
+- The client now shows a notification with an option to retry if an update didn't successfully download.
+- Addressed some accessibility issues with keyboard focus and high-contrast mode.
+
+## Updates for version 1.2.1525
+
+*Date published: 12/01/2020*
+
+- Added List view for remote resources so that longer app names are readable.
+- Added a notification icon that appears when an update for the client is available.
+
+## Updates for version 1.2.1446
+
+*Date published: 10/27/2020*
+
+- Added the auto-update feature, which allows the client to install the latest updates automatically.
+- The client now distinguishes between different feeds in the Connection Center.
+- Fixed an issue where the subscription account didn't match the account the user signed in with.
+- Fixed an issue where some users couldn't access remote apps through a downloaded file.
+- Fixed an issue with Smartcard redirection.
+
+## Updates for version 1.2.1364
+
+*Date published: 09/22/2020*
+
+- Fixed an issue where single sign-on (SSO) didn't work on Windows 7.
+- Fixed the connection failure that happened when calling or joining a Teams call while another app had an audio stream open in exclusive mode and media optimization for Teams was enabled.
+- Fixed a failure to enumerate audio or video devices in Teams when media optimization for Teams is enabled.
+- Added a "Need help with settings?" link to the desktop settings page.
+- Fixed an issue with the "Subscribe" button that happened when using high-contrast dark themes.
+
+## Updates for version 1.2.1275
+
+*Date published: 08/25/2020*
+
+- Added functionality to auto-detect sovereign clouds from the user's identity.
+- Added functionality to enable custom URL subscriptions for all users.
+- Fixed an issue with app pinning on the feed taskbar.
+- Fixed a crash when subscribing with URL.
+- Improved experience when dragging remote app windows with touch or pen.
+- Fixed an issue with localization.
+
+## Updates for version 1.2.1186
+
+*Date published: 07/28/2020*
+
+- You can now be subscribed to Workspaces with multiple user accounts, using the overflow menu (**...**) option on the command bar at the top of the client. To differentiate Workspaces, the Workspace titles now include the username, as do all app shortcut titles.
+- Added additional information to subscription error messages to improve troubleshooting.
+- The collapsed/expanded state of Workspaces is now preserved during a refresh.
+- Added a **Send Diagnostics and Close** button to the **Connection information** dialog.
+- Fixed an issue with the CTRL + SHIFT keys in remote sessions.
+
+## Updates for version 1.2.1104
+
+*Date published: 06/23/2020*
+
+- Updated the automatic discovery logic for the **Subscribe** option to support the Azure Resource Manager-integrated version of Azure Virtual Desktop. Customers with only Azure Virtual Desktop resources should no longer need to provide consent for Azure Virtual Desktop (classic).
+- Improved support for high-DPI devices with scale factor up to 400%.
+- Fixed an issue where the disconnect dialog didn't appear.
+- Fixed an issue where command bar tooltips would remain visible longer than expected.
+- Fixed a crash when you tried to subscribe immediately after a refresh.
+- Fixed a crash from incorrect parsing of date and time in some languages.
+
+## Updates for version 1.2.1026
+
+*Date published: 05/27/2020*
+
+- When subscribing, you can now choose your account instead of typing your email address.
+- Added a new **Subscribe with URL** option that allows you to specify the URL of the Workspace you are subscribing to or leverage email discovery when available in cases where we can't automatically find your resources. This is similar to the subscription process in the other Remote Desktop clients. This can be used to subscribe directly to Azure Virtual Desktop workspaces.
+- Added support to subscribe to a Workspace using a new URI scheme that can be sent in an email to users or added to a support website.
+- Added a new **Connection information** dialog that provides client, network, and server details for desktop and app sessions. You can access the dialog from the connection bar in full screen mode or from the System menu when windowed.
+- Desktop sessions launched in windowed mode now always maximize instead of going full screen when maximizing the window. Use the **Full screen** option from the system menu to enter full screen.
+- The Unsubscribe prompt now displays a warning icon and shows the workspace names as a bulleted list.
+- Added the details section to additional error dialogs to help diagnose issues.
+- Added a timestamp to the details section of error dialogs.
+- Fixed an issue where the RDP file setting **desktop size ID** didn't work properly.
+- Fixed an issue where the **Update the resolution on resize** display setting didn't apply after launching the session.
+- Fixed localization issues in the desktop settings panel.
+- Fixed the size of the focus box when tabbing through controls on the desktop settings panel.
+- Fixed an issue causing the resource names to be difficult to read in high contrast mode.
+- Fixed an issue causing the update notification in the action center to be shown more than once a day.
+
+## Updates for version 1.2.945
+
+*Date published: 04/28/2020*
+
+- Added new display settings options for desktop connections available when right-clicking a desktop icon on the Connection Center.
+ - There are now three display configuration options: **All displays**, **Single display** and **Select displays**.
+ - We now only show available settings when a display configuration is selected.
+ - In **Select displays** mode, a new **Maximize to current displays** option allows you to dynamically change the displays used for the session without reconnecting. When enabled, maximizing the session causes it to go full screen on all displays touched by the session window.
+ - We've added a new **Single display when windowed** option for all displays and select displays modes. This option switches your session automatically to a single display when you exit full screen mode, and automatically returns to multiple displays when you maximize the window.
+- We've added a new **Display settings** group to the system menu that appears when you right-click the title bar of a windowed desktop session. This will let you change some settings dynamically during a session. For example, you can change the new **Single display mode when windowed** and **Maximize to current displays** settings.
+- When you exit full screen, the session window will return to its original location when you first entered full screen.
+- The background refresh for Workspaces has been changed to every four hours instead of every hour. A refresh now happens automatically when launching the client.
+- Resetting your user data from the About page now redirects to the Connection Center when completed instead of closing the client.
+- The items in the system menu for desktop connections were reordered and the Help topic now points to the client documentation.
+- Addressed some accessibility issues with tab navigation and screen readers.
+- Fixed an issue where the Azure Active Directory authentication dialog appeared behind the session window.
+- Fixed a flickering and shrinking issue when dragging a desktop session window between displays of different scale factors.
+- Fixed an error that occurred when redirecting cameras.
+- Fixed multiple crashes to improve reliability.
+
+## Updates for version 1.2.790
+
+*Date published: 03/24/2020*
+
+- Renamed the "Update" action for Workspaces to "Refresh" for consistency with other Remote Desktop clients.
+- You can now refresh a Workspace directly from its context menu.
+- Manually refreshing a Workspace now ensures all local content is updated.
+- You can now reset the client's user data from the About page without needing to uninstall the app.
+- You can also reset the client's user data by running `msrdcw.exe /reset`, with an optional `/f` parameter to skip the prompt (see the sketch after this list).
+- We now automatically look for a client update when navigating to the About page.
+- Updated the color of the buttons for consistency.
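A minimal sketch of the reset command; the install path below is an assumption (a typical default location), so adjust it for your installation:

```powershell
# A sketch only: reset the client's user data, skipping the confirmation prompt.
# The install path is an assumed default; adjust it for your installation.
& "$env:ProgramFiles\Remote Desktop\msrdcw.exe" /reset /f
```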
+
+## Updates for version 1.2.675
+
+*Date published: 02/25/2020*
+
+- Connections to Azure Virtual Desktop are now blocked if the RDP file is missing the signature or one of the signscope properties has been modified.
+- When a Workspace is empty or has been removed, the Connection Center no longer appears to be empty.
+- Added the activity ID and error code on disconnect messages to improve troubleshooting. You can copy the dialog message with **Ctrl+C**.
+- Fixed an issue that caused the desktop connection settings to not detect displays.
+- Client updates no longer automatically restart the PC.
+- Windowless icons should no longer appear on the taskbar.
+
+## Updates for version 1.2.605
+
+*Date published: 01/29/2020*
+
+- You can now select which displays to use for desktop connections. To change this setting, right-click the icon of the desktop connection and select **Settings**.
+- Fixed an issue where the connection settings didn't display the correct available scale factors.
+- Fixed an issue where Narrator couldn't read the dialog shown while a connection was being initiated.
+- Fixed an issue where the wrong user name displayed when the Azure Active Directory and Active Directory names didn't match.
+- Fixed an issue that made the client stop responding when initiating a connection while not connected to a network.
+- Fixed an issue that caused the client to stop responding when attaching a headset.
+
+## Updates for version 1.2.535
+
+*Date published: 12/04/2019*
+
+- You can now access information about updates directly from the more options button on the command bar at the top of the client.
+- You can now report feedback from the command bar of the client.
+- The Feedback option is now only shown if the Feedback Hub is available.
+- Ensured the update notification is not shown when notifications are disabled through policy.
+- Fixed an issue that prevented some RDP files from launching.
+- Fixed a crash on startup of the client caused by corruption of some persistent settings.
+
+## Updates for version 1.2.431
+
+*Date published: 11/12/2019*
+
+- The 32-bit and ARM64 versions of the client are now available!
+- The client now saves any changes you make to the connection bar (such as its position, size, and pinned state) and applies those changes across sessions.
+- Updated gateway information and connection status dialogs.
+- Addressed an issue that caused two credential prompts to appear at the same time while trying to connect after the Azure Active Directory token expired.
+- On Windows 7, users are now properly prompted for credentials if they had saved credentials when the server disallows it.
+- The Azure Active Directory prompt now appears in front of the connection window when reconnecting.
+- Items pinned to the taskbar are now updated during a feed refresh.
+- Improved scrolling on the Connection Center when using touch.
+- Removed the empty line from the resolution drop-down menu.
+- Removed unnecessary entries in Windows Credential Manager.
+- Desktop sessions are now properly sized when exiting full screen.
+- The RemoteApp disconnection dialog now appears in the foreground when you resume your session after entering sleep mode.
+- Addressed accessibility issues like keyboard navigation.
+
+## Updates for version 1.2.247
+
+*Date published: 09/17/2019*
+
+- Improved the fallback languages for localized versions. (For example, FR-CA will properly display in French instead of English.)
+- When removing a subscription, the client now properly removes the saved credentials from Credential Manager.
+- The client update process is now unattended once started and the client will relaunch once completed.
+- The client can now be used on Windows 10 in S mode.
+- Fixed an issue that caused the update process to fail for users with a space in their username.
+- Fixed a crash that happened when authenticating during a connection.
+- Fixed a crash that happened when closing the client.
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resize-vm.md
$virtualMachines | Start-AzVM
```
+## Limitations
+
+You can't resize a VM that has a local temp disk to a VM size with no local temp disk, or vice versa.
+
+The only combinations allowed for resizing are:
+
+- VM (with local temp disk) -> VM (with local temp disk); and
+- VM (with no local temp disk) -> VM (with no local temp disk).
+
+If you're interested in a workaround, see [How do I migrate from a VM size with local temp disk to a VM size with no local temp disk?](azure-vms-no-temp-disk.yml#how-do-i-migrate-from-a-vm-size-with-local-temp-disk-to-a-vm-size-with-no-local-temp-disk).
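As an illustration of an allowed combination, here's a minimal Az PowerShell sketch that resizes between two sizes that both have a local temp disk; the resource group, VM, and size names are placeholders:

```powershell
# A sketch only: resize within the same temp-disk family (both the current and
# target sizes have a local temp disk). Names below are placeholders.
$vm = Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'
$vm.HardwareProfile.VmSize = 'Standard_DS3_v2'   # target size that also has a local temp disk
Update-AzVM -ResourceGroupName 'myResourceGroup' -VM $vm
```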
+++ ## Next steps
virtual-machines Dbms Guide Ha Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms-guide-ha-ibm.md
sudo crm configure primitive rsc_ip_db2ptr_<b>PTR</b> IPaddr2 \
params ip="<b>10.100.0.10</b>" # Configure probe port for Azure load Balancer
-sudo crm configure primitive rsc_nc_db2ptr_<b>PTR</b> azure-lb port=<b>62500</b>
+sudo crm configure primitive rsc_nc_db2ptr_<b>PTR</b> azure-lb port=<b>62500</b> \
+ op monitor timeout=20s interval=10
sudo crm configure group g_ip_db2ptr_<b>PTR</b> rsc_ip_db2ptr_<b>PTR</b> rsc_nc_db2ptr_<b>PTR</b>
virtual-machines Expose Sap Odata To Power Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-odata-to-power-query.md
Integrations between SAP products and the Microsoft 365 portfolio range from cus
- [Export from SAP List Viewer (ALV) to Microsoft Excel](https://help.sap.com/docs/ABAP_PLATFORM_NEW/b1c834a22d05483b8a75710743b5ff26/4ec38f8788d22b90e10000000a42189d.html)
-The mechanism described in this article uses the standard built-in OData capabilities of Power Query and puts emphasis for SAP landscapes deployed on Azure. Address on-premises landscapes with the Azure API Management [self-hosted Gateway](../../../api-management/self-hosted-gateway-overview.md).
+The mechanism described in this article uses the standard [built-in OData capabilities of Power Query](/power-query/connectors/odatafeed) and places emphasis on SAP landscapes deployed on Azure. Address on-premises landscapes with the Azure API Management [self-hosted Gateway](../../../api-management/self-hosted-gateway-overview.md).
-For more information on which Microsoft products support Power Query, see [the Power Query documentation](/power-query/power-query-what-is-power-query#where-can-you-use-power-query).
+For more information on which Microsoft products support Power Query in general, see [the Power Query documentation](/power-query/power-query-what-is-power-query#where-can-you-use-power-query).
## Setup considerations
honoring the SAP named user mapping.
## SAP OData access via other Power Query enabled applications and services
-Above example shows the flow for Excel Desktop, but the approach is applicable to **any** Power Query enabled Microsoft product. For more information which products support Power Query, see [the Power Query documentation](/power-query/power-query-what-is-power-query#where-can-you-use-power-query). Popular consumers are [Power BI](/power-bi/connect-dat), [Power Automate](/flow/) and [Dynamics 365](/power-query/power-query-what-is-power-query#where-can-you-use-power-query).
+The example above shows the flow for Excel Desktop, but the approach is applicable to **any** Power Query OData-enabled Microsoft product. For more information on the OData connector of Power Query and which products support it, see the [Power Query Connectors documentation](/power-query/connectors/odatafeed). For more information about which products support Power Query in general, see [the Power Query documentation](/power-query/power-query-what-is-power-query#where-can-you-use-power-query).
+
+Popular consumers are [Power BI](/power-bi/connect-data/desktop-connect-odata), [Excel for the web](https://www.office.com/launch/excel), [Power Apps (Dataflows)](/power-apps/maker/data-platform/create-and-use-dataflows), and [Analysis Services](/analysis-services/analysis-services-overview).
## Tackle SAP write-back scenarios with Power Automate
The highlighted button triggers a flow that forwards the OData PATCH request to
## Next steps
-[Learn from where you can use Power Query](/power-query/power-query-what-is-power-query#where-can-you-use-power-query)
+[Learn from where you can use OData with Power Query](/power-query/connectors/odatafeed)
[Work with SAP OData APIs in Azure API Management](../../../api-management/sap-api.md)
virtual-machines High Availability Guide Suse Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid.md
vm-windows Previously updated : 03/25/2022 Last updated : 11/03/2022
This documentation assumes that:
params ip=10.3.1.16 \ op monitor interval=10 timeout=20
- sudo crm configure primitive nc_NW2_ASCS azure-lb port=62010
+ sudo crm configure primitive nc_NW2_ASCS azure-lb port=62010 \
+ op monitor timeout=20s interval=10
sudo crm configure group g-NW2_ASCS fs_NW2_ASCS nc_NW2_ASCS vip_NW2_ASCS \ meta resource-stickiness=3000
This documentation assumes that:
params ip=10.3.1.13 \ op monitor interval=10 timeout=20
- sudo crm configure primitive nc_NW3_ASCS azure-lb port=62020
+ sudo crm configure primitive nc_NW3_ASCS azure-lb port=62020 \
+ op monitor timeout=20s interval=10
sudo crm configure group g-NW3_ASCS fs_NW3_ASCS nc_NW3_ASCS vip_NW3_ASCS \ meta resource-stickiness=3000
This documentation assumes that:
params ip=10.3.1.17 \ op monitor interval=10 timeout=20
- sudo crm configure primitive nc_NW2_ERS azure-lb port=62112
+ sudo crm configure primitive nc_NW2_ERS azure-lb port=62112 \
+ op monitor timeout=20s interval=10
sudo crm configure group g-NW2_ERS fs_NW2_ERS nc_NW2_ERS vip_NW2_ERS
This documentation assumes that:
params ip=10.3.1.19 \ op monitor interval=10 timeout=20
- sudo crm configure primitive nc_NW3_ERS azure-lb port=62122
+ sudo crm configure primitive nc_NW3_ERS azure-lb port=62122 \
+ op monitor timeout=20s interval=10
sudo crm configure group g-NW3_ERS fs_NW3_ERS nc_NW3_ERS vip_NW3_ERS ```
virtual-machines High Availability Guide Suse Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs-azure-files.md
vm-windows Previously updated : 01/24/2022 Last updated : 11/03/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
params ip=10.90.90.10 \ op monitor interval=10 timeout=20
- sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000
+ sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000 \
+ op monitor timeout=20s interval=10
sudo crm configure group g-NW1_ASCS fs_NW1_ASCS nc_NW1_ASCS vip_NW1_ASCS \
- meta resource-stickiness=3000
+ meta resource-stickiness=3000
``` Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
The following items are prefixed with either **[A]** - applicable to all nodes,
params ip=10.90.90.9 \ op monitor interval=10 timeout=20
- sudo crm configure primitive nc_NW1_ERS azure-lb port=62101
+ sudo crm configure primitive nc_NW1_ERS azure-lb port=62101 \
+ op monitor timeout=20s interval=10
sudo crm configure group g-NW1_ERS fs_NW1_ERS nc_NW1_ERS vip_NW1_ERS ```
virtual-machines High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs-simple-mount.md
vm-windows Previously updated : 11/01/2022 Last updated : 11/03/2022
The instructions in this section are applicable only if you're using Azure NetAp
params ip=10.27.0.9 \ op monitor interval=10 timeout=20
- sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000
+ sudo crm configure primitive nc_NW1_ASCS azure-lb port=62000 \
+ op monitor timeout=20s interval=10
sudo crm configure group g-NW1_ASCS nc_NW1_ASCS vip_NW1_ASCS \
- meta resource-stickiness=3000
+ meta resource-stickiness=3000
``` Make sure that the cluster status is OK and that all resources are started. It isn't important which node the resources are running on.
The instructions in this section are applicable only if you're using Azure NetAp
params ip=10.27.0.10 \ op monitor interval=10 timeout=20
- sudo crm configure primitive nc_NW1_ERS azure-lb port=62101
+ sudo crm configure primitive nc_NW1_ERS azure-lb port=62101 \
+ op monitor timeout=20s interval=10
sudo crm configure group g-NW1_ERS nc_NW1_ERS vip_NW1_ERS ```
virtual-machines High Availability Guide Suse Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs.md
vm-windows Previously updated : 10/25/2022 Last updated : 11/03/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo crm configure primitive vip_<b>NW1</b>_nfs IPaddr2 \ params ip=<b>10.0.0.4</b> op monitor interval=10 timeout=20
- sudo crm configure primitive nc_<b>NW1</b>_nfs azure-lb port=<b>61000</b>
+ sudo crm configure primitive nc_<b>NW1</b>_nfs azure-lb port=<b>61000</b> \
+ op monitor timeout=20s interval=10
sudo crm configure group g-<b>NW1</b>_nfs \ fs_<b>NW1</b>_sapmnt exportfs_<b>NW1</b> nc_<b>NW1</b>_nfs vip_<b>NW1</b>_nfs
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo crm configure primitive vip_<b>NW2</b>_nfs IPaddr2 \ params ip=<b>10.0.0.5</b> op monitor interval=10 timeout=20
- sudo crm configure primitive nc_<b>NW2</b>_nfs azure-lb port=<b>61001</b>
+ sudo crm configure primitive nc_<b>NW2</b>_nfs azure-lb port=<b>61001</b> \
+ op monitor timeout=20s interval=10
sudo crm configure group g-<b>NW2</b>_nfs \ fs_<b>NW2</b>_sapmnt exportfs_<b>NW2</b> nc_<b>NW2</b>_nfs vip_<b>NW2</b>_nfs
virtual-machines High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md
vm-windows Previously updated : 10/20/2022 Last updated : 11/03/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
params ip=<b>10.0.0.7</b> \ op monitor interval=10 timeout=20
- sudo crm configure primitive nc_<b>NW1</b>_ASCS azure-lb port=620<b>00</b>
+ sudo crm configure primitive nc_<b>NW1</b>_ASCS azure-lb port=620<b>00</b> \
+ op monitor timeout=20s interval=10
sudo crm configure group g-<b>NW1</b>_ASCS fs_<b>NW1</b>_ASCS nc_<b>NW1</b>_ASCS vip_<b>NW1</b>_ASCS \ meta resource-stickiness=3000
The following items are prefixed with either **[A]** - applicable to all nodes,
params ip=<b>10.0.0.8</b> \ op monitor interval=10 timeout=20
- sudo crm configure primitive nc_<b>NW1</b>_ERS azure-lb port=621<b>02</b>
+ sudo crm configure primitive nc_<b>NW1</b>_ERS azure-lb port=621<b>02</b> \
+ op monitor timeout=20s interval=10
sudo crm configure group g-<b>NW1</b>_ERS fs_<b>NW1</b>_ERS nc_<b>NW1</b>_ERS vip_<b>NW1</b>_ERS </code></pre>
virtual-machines Os Upgrade Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/os-upgrade-hana-large-instance.md
The OS configuration can drift from the recommended settings over time. This dri
To have proper network performance and system stability, ensure the appropriate OS-specific versions of the eNIC and fNIC drivers are installed per the following compatibility table (the table lists the latest compatible driver versions). Servers are delivered to customers with compatible versions. However, drivers can get rolled back to default versions during OS/kernel patching, so ensure the appropriate driver version is running after OS/kernel patching operations.
- | OS Vendor | OS Package Version | Firmware Version | eNIC Driver | fNIC Driver |
+ | OS Vendor | OS Package Version | Firmware Version | eNIC Driver | fNIC Driver |
|---|---|---|---|---|
| SuSE | SLES 12 SP2 | 3.2.3i | 2.3.0.45 | 1.6.0.37 |
| SuSE | SLES 12 SP3 | 3.2.3i | 2.3.0.43 | 1.6.0.36 |
- | SuSE | SLES 12 SP4 | 3.2.3i | 4.0.0.14 | 2.0.0.63 |
+ | SuSE | SLES 12 SP4 | 3.2.3i | 4.0.0.14 | 2.0.0.63 |
| SuSE | SLES 12 SP5 | 3.2.3i | 4.0.0.14 | 2.0.0.63 |
| Red Hat | RHEL 7.6 | 3.2.3i | 3.1.137.5 | 2.0.0.50 |
- | SuSE | SLES 12 SP4 | 4.1.1b | 4.0.0.6 | 2.0.0.60 |
+ | SuSE | SLES 12 SP4 | 4.1.1b | 4.0.0.6 | 2.0.0.60 |
| SuSE | SLES 12 SP5 | 4.1.1b | 4.0.0.6 | 2.0.0.59 |
| SuSE | SLES 15 SP1 | 4.1.1b | 4.0.0.8 | 2.0.0.60 |
| SuSE | SLES 15 SP2 | 4.1.1b | 4.0.0.8 | 2.0.0.60 |
| Red Hat | RHEL 7.6 | 4.1.1b | 4.0.0.8 | 2.0.0.60 |
+ | Red Hat | RHEL 8.2 | 4.1.1b | 4.0.0.8 | 2.0.0.60 |
| SuSE | SLES 12 SP4 | 4.1.3d | 4.0.0.13 | 2.0.0.69 |
| SuSE | SLES 12 SP5 | 4.1.3d | 4.0.0.13 | 2.0.0.69 |
| SuSE | SLES 15 SP1 | 4.1.3d | 4.0.0.13 | 2.0.0.69 |
+ | Red Hat | RHEL 8.2 | 4.1.3d | 4.0.0.13 | 2.0.0.69 |
virtual-machines Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-suse.md
vm-windows Previously updated : 09/07/2022 Last updated : 11/03/2022
Create a dummy file system cluster resource, which will monitor and report failu
params ip="10.23.0.27" sudo crm configure primitive rsc_nc_HN1_HDB03 azure-lb port=62503 \
+ op monitor timeout=20s interval=10 \
meta resource-stickiness=0 sudo crm configure group g_ip_HN1_HDB03 rsc_ip_HN1_HDB03 rsc_nc_HN1_HDB03
virtual-network-manager Concept Security Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md
Security admin rules are similar to NSG rules in structure and the parameters th
## Network intent policies and security admin rules
- A network intent policy is applied to some network services to ensure the network traffic is working as needed for these services. By default, deployed security admin rules aren't applied on virtual networks with services that use network intent policies such as SQL managed instance service. If you deploy a service in a virtual network with existing security admin rules, those security admin rules will be removed from those virtual networks.
+A network intent policy is applied to some network services to ensure that network traffic works as those services require. By default, a security admin configuration doesn't apply security admin rules to virtual networks that contain services that use network intent policies, such as SQL Managed Instance. With this default option, if you deploy a service that uses network intent policies into a virtual network with existing security admin rules applied, those rules are removed from that virtual network. Alternatively, you can configure the security admin configuration to apply security admin rules to those virtual networks, except for rules with a "deny" action. With either option, your security admin rules won't block traffic to or from virtual networks with services that use network intent policies, ensuring that these services continue to function as expected.
If you need to apply security admin rules on virtual networks with services that use network intent policies, contact AVNMFeatureRegister@microsoft.com to enable this functionality. Overriding the default behavior described above could break the network intent policies created for those services. For example, creating a deny admin rule can block some traffic allowed by the SQL managed instance service, which is defined by their network intent policies. Make sure to review your environment before applying a security admin configuration. For an example of how to allow the traffic of services that use network intent policies, see [How can I explicitly allow SQLMI traffic before having deny rules](faq.md#how-can-i-explicitly-allow-azure-sql-managed-instance-traffic-before-having-deny-rules) ## Security admin fields
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
Deploy a network manager instance with the defined scope and access you need.
1. Select **+ Create a resource** and search for **Network Manager**. Then select **Create** to begin setting up Azure Virtual Network Manager.
-1. On the *Basics* tab, enter or select the following information:
+1. On the **Basics** tab, enter or select the following information:
:::image type="content" source="./media/create-virtual-network-manager-portal/network-manager-basics.png" alt-text="Screenshot of Create a network manager Basics page.":::
Create five virtual networks using the portal. This example creates virtual netw
1. From the **Home** screen, select **+ Create a resource** and search for **Virtual network**. Then select **Create** to begin configuring the virtual network.
-1. On the *Basics* tab, enter or select the following information.
+1. On the **Basics** tab, enter or select the following information.
:::image type="content" source="./media/create-virtual-network-manager-portal/create-mesh-vnet-basic.png" alt-text="Screenshot of create a virtual network basics page.":::
Virtual Network Manager applies configurations to groups of VNets by placing the
:::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
-1. On the *Create a network group* page, enter a **Name** for the network group. This example will use the name **myNetworkGroup**. Select **Add** to create the network group.
+1. On the **Create a network group** page, enter a **Name** for the network group. This example will use the name **myNetworkGroup**. Select **Add** to create the network group.
:::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
In this task, you'll manually add three virtual networks for your Mesh configura
:::image type="content" source="./media/create-virtual-network-manager-portal/add-static-member.png" alt-text="Screenshot of add a virtual network f.":::
-1. On the *Manually add members* page, select three virtual networks created previously (VNetA, VNetB, and VNetC). Then select **Add** to add the 3 virtual networks to the network group.
+1. On the **Manually add members** page, select three virtual networks created previously (VNetA, VNetB, and VNetC). Then select **Add** to add the 3 virtual networks to the network group.
:::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page.":::
-1. On the **Network Group** page under *Settings*, select **Group Members** to view the membership of the group you manually selected.
+1. On the **Network Group** page under **Settings**, select **Group Members** to view the membership of the group you manually selected.
:::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png"::: ### Create Azure Policy for dynamic membership
Using [Azure Policy](concept-azure-policy-integration.md), you'll define a condi
## Create a configuration Now that the Network Group is created, and has the correct VNets, create a mesh network topology configuration. Replace <subscription_id> with your subscription and follow the steps below:
-1. Select **Configurations** under *Settings*, then select **+ Create**.
+1. Select **Configurations** under **Settings**, then select **+ Create**.
:::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of configuration creation screen for Network Manager.":::
Now that the Network Group is created, and has the correct VNets, create a mesh
:::image type="content" source="./media/create-virtual-network-manager-portal/configuration-menu.png" alt-text="Screenshot of configuration drop-down menu.":::
-1. On the *Basics* page, enter the following information, and select **Next: Topology >**.
+1. On the **Basics** page, enter the following information, and select **Next: Topology >**.
:::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration.png" alt-text="Screenshot of add a connectivity configuration page.":::
Now that the Network Group is created, and has the correct VNets, create a mesh
| Description | *(Optional)* Provide a description about this connectivity configuration. |
-1. On the *Topology* tab, select the *Mesh* topology if not selected, and leave the **Enable mesh connectivity across regions** unchecked. Cross-region connectivity isn't required for this set up since all the virtual networks are in the same region.
+1. On the **Topology** tab, select the **Mesh** topology if not selected, and leave the **Enable mesh connectivity across regions** unchecked. Cross-region connectivity isn't required for this set up since all the virtual networks are in the same region.
:::image type="content" source="./media/create-virtual-network-manager-portal/topology-configuration.png" alt-text="Screenshot of topology selection for network group connectivity configuration.":::
Now that the Network Group is created, and has the correct VNets, create a mesh
To have your configurations applied to your environment, you'll need to commit the configuration by deployment. You'll need to deploy the configuration to the **West US** region where the virtual networks are deployed.
-1. Select **Deployments** under *Settings*, then select **Deploy configurations**.
+1. Select **Deployments** under **Settings**, then select **Deploy configurations**.
:::image type="content" source="./media/create-virtual-network-manager-portal/deployments.png" alt-text="Screenshot of deployments page in Network Manager.":::
To have your configurations applied to your environment, you'll need to commit t
## Verify configuration deployment Use the **Network Manager** section for each virtual machine to verify whether configuration was deployed in the steps below:
-1. Select **Refresh** on the *Deployments* page to see the updated status of the configuration that you committed.
+1. Select **Refresh** on the **Deployments** page to see the updated status of the configuration that you committed.
:::image type="content" source="./media/create-virtual-network-manager-portal/deployment-status.png" alt-text="Screenshot of refresh button for updated deployment status.":::
If you no longer need Azure Virtual Network Manager, you'll need to make sure al
1. Select **Next** and select **Deploy** to complete the deployment removal.
-1. To delete a configuration, select **Configurations** under *Settings* from the left pane of Azure Virtual Network Manager. Select the checkbox next to the configuration you want to remove and then select **Delete** at the top of the resource page. Select **Yes** to confirm the configuration deletion.
+1. To delete a configuration, select **Configurations** under **Settings** from the left pane of Azure Virtual Network Manager. Select the checkbox next to the configuration you want to remove and then select **Delete** at the top of the resource page. Select **Yes** to confirm the configuration deletion.
:::image type="content" source="./media/create-virtual-network-manager-portal/delete-configuration.png" alt-text="Screenshot of delete button for a connectivity configuration.":::
-1. To delete a network group, select **Network Groups** under *Settings* from the left pane of Azure Virtual Network Manager. Select the checkbox next to the network group you want to remove and then select **Delete** at the top of the resource page.
+1. To delete a network group, select **Network Groups** under **Settings** from the left pane of Azure Virtual Network Manager. Select the checkbox next to the network group you want to remove and then select **Delete** at the top of the resource page.
:::image type="content" source="./media/create-virtual-network-manager-portal/delete-network-group.png" alt-text="Screenshot of delete a network group button.":::
virtual-network-manager How To Configure Cross Tenant Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-cli.md
Now that the virtual network is in the network group, configurations will be app
```azurecli # Delete static member group
-az network manager group static-member delete --network-group-name "CrossTenantNetworkGroup" --network-manager-name " myAVNM" --resource-group "myRG" --static-member-name "fabrikamVnet"
+az network manager group static-member delete --network-group-name "CrossTenantNetworkGroup" --network-manager-name "myAVNM" --resource-group "myRG" --static-member-name "targetVnet01"
# Delete scope connections az network manager scope-connection delete --resource-group "myRG" --network-manager-name "myAVNM" --name "ToTargetManagedTenant"
virtual-network Virtual Network Tcpip Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tcpip-performance-tuning.md
But the TCP header value for TCP window size is only 2 bytes long, which means t
The scale factor is also a setting that you can configure in an operating system. Here's the formula for calculating the TCP window size by using scale factors:
-`TCP window size = TCP window size in bytes \* (2^scale factor)`
+`TCP window size = TCP window size in bytes * (2^scale factor)`
Here's the calculation for a window scale factor of 3 and a window size of 65,535:
-`65,535 \* (2^3) = 262,140 bytes`
+`65,535 * (2^3) = 524,280 bytes`
A scale factor of 14 (the maximum allowed) results in a TCP window size of 1,073,725,440 bytes (8.5 gigabits).
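To verify the arithmetic, a small PowerShell sketch that computes the effective window size for both scale factors used above:

```powershell
# TCP window size = base window in bytes * (2 ^ scale factor)
$baseWindowBytes = 65535
foreach ($scaleFactor in 3, 14) {
    $windowBytes = $baseWindowBytes * [math]::Pow(2, $scaleFactor)
    'Scale factor {0,2}: {1:N0} bytes' -f $scaleFactor, $windowBytes
}
# Prints 524,280 bytes for scale factor 3 and 1,073,725,440 bytes for 14.
```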
Still, these packet types are indications that TCP throughput isn't achieving it
## Next steps
-Now that you've learned about TCP/IP performance tuning for Azure VMs, you might want to read about other considerations for [planning virtual networks](./virtual-network-vnet-plan-design-arm.md) or [learn more about connecting and configuring virtual networks](./index.yml).
+Now that you've learned about TCP/IP performance tuning for Azure VMs, you might want to read about other considerations for [planning virtual networks](./virtual-network-vnet-plan-design-arm.md) or [learn more about connecting and configuring virtual networks](./index.yml).
vmware-cloudsimple Access Cloudsimple Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/access-cloudsimple-portal.md
Last updated 06/04/2019 -+
vmware-cloudsimple Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/account.md
Last updated 08/14/2019 -+
vmware-cloudsimple Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-ad.md
Last updated 08/15/2019 -+
vmware-cloudsimple Azure Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-application-gateway.md
Last updated 08/16/2019 -+
vmware-cloudsimple Azure Create Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-create-vm.md
Last updated 08/16/2019 -+
vmware-cloudsimple Azure Expressroute Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-expressroute-connection.md
Last updated 08/14/2019 -+
vmware-cloudsimple Azure Manage Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-manage-vm.md
Last updated 08/16/2019 -+
vmware-cloudsimple Azure Subscription Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-subscription-mapping.md
Last updated 08/14/2019 -+
vmware-cloudsimple Backup Workloads Veeam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/backup-workloads-veeam.md
Last updated 08/16/2019 -+
vmware-cloudsimple Cloudsimple Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-security.md
Last updated 08/20/2019 -+
vmware-cloudsimple Cloudsimple Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-service.md
Last updated 08/20/2019 -+
vmware-cloudsimple Configure Server Vrealize Automation Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/configure-server-vrealize-automation-endpoint.md
Last updated 08/19/2019 -+
vmware-cloudsimple Create Cloudsimple Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/create-cloudsimple-service.md
Last updated 08/19/2019 -+
vmware-cloudsimple Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/create-private-cloud.md
Last updated 08/19/2019 -+
vmware-cloudsimple Create Vlan Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/create-vlan-subnet.md
Last updated 08/15/2019 -+
vmware-cloudsimple Delete Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/delete-private-cloud.md
Last updated 08/06/2019 -+
vmware-cloudsimple Disaster Recovery Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/disaster-recovery-site-recovery-manager.md
Last updated 08/20/2019 -+
vmware-cloudsimple Disaster Recovery Zerto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/disaster-recovery-zerto.md
Last updated 08/20/2019 -+
vmware-cloudsimple Dns Dhcp Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/dns-dhcp-setup.md
Last updated 08/16/2019 -+
vmware-cloudsimple Ensuring High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/ensuring-high-availability.md
Last updated 08/20/2019 -+
vmware-cloudsimple Escalate Private Cloud Privileges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/escalate-private-cloud-privileges.md
Last updated 06/05/2019 -+
vmware-cloudsimple Escalate Privileges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/escalate-privileges.md
Last updated 08/16/2019 -+
vmware-cloudsimple Expand Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/expand-private-cloud.md
Last updated 06/06/2019 -+
vmware-cloudsimple Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/firewall.md
Last updated 08/15/2019 -+
vmware-cloudsimple High Availability Vpn Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/high-availability-vpn-connection.md
Last updated 08/14/2019 -+
vmware-cloudsimple Horizon Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/horizon-guide.md
Last updated 08/20/2019 -+
vmware-cloudsimple Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/index.md
Last updated 08/20/2019 -+
vmware-cloudsimple Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/key-concepts.md
Last updated 04/24/2019 -+
vmware-cloudsimple Learn Private Cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/learn-private-cloud-permissions.md
Last updated 08/16/2019 -+
vmware-cloudsimple Load Balancers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/load-balancers.md
Last updated 08/20/2019 -+
vmware-cloudsimple Manage Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/manage-private-cloud.md
Last updated 06/10/2019 -+
vmware-cloudsimple Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/migrate-workloads.md
Last updated 08/20/2019 -+
vmware-cloudsimple Migration Layer 2 Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/migration-layer-2-vpn.md
Last updated 08/19/2019 -+
vmware-cloudsimple Monitor Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/monitor-activity.md
Last updated 08/13/2019 -+
vmware-cloudsimple On Premises Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/on-premises-connection.md
Last updated 08/14/2019 -+
vmware-cloudsimple On Premises Dns Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/on-premises-dns-setup.md
Last updated 08/14/2019 -+
vmware-cloudsimple Oracle Real Application Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/oracle-real-application-clusters.md
Last updated 08/06/2019 -+
vmware-cloudsimple Private Cloud Dns Forwarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/private-cloud-dns-forwarding.md
Last updated 02/29/2020 -+
vmware-cloudsimple Private Cloud Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/private-cloud-secure.md
Last updated 08/19/2019 -+
vmware-cloudsimple Public Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/public-ips.md
Last updated 08/15/2019 -+
vmware-cloudsimple Quickstart Create Private Cloud Vmware Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/quickstart-create-private-cloud-vmware-virtual-machine.md
Last updated 08/16/2019 -+
vmware-cloudsimple Set Up Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/set-up-vpn.md
Last updated 08/14/2019 -+
vmware-cloudsimple Set Vcenter Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/set-vcenter-identity.md
Last updated 08/15/2019 -+
vmware-cloudsimple Shrink Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/shrink-private-cloud.md
Last updated 07/01/2019 -+
vmware-cloudsimple Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/users.md
Last updated 08/14/2019 -+
vmware-cloudsimple Vcenter Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/vcenter-access.md
Last updated 08/30/2019 -+
vmware-cloudsimple Virtual Network Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/virtual-network-connection.md
Last updated 08/14/2019 -+
vmware-cloudsimple Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/vpn-gateway.md
Last updated 08/14/2019 -+
vmware-cloudsimple Vsan Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/vsan-encryption.md
Last updated 08/19/2019 -+
vpn-gateway Openvpn Azure Ad Tenant Multi App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md
Last updated 10/25/2022
+# Configure P2S for access based on users and groups - Azure AD authentication
In this section, you generate and download the Azure VPN Client profile configuration package.
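As a minimal sketch of that step, the profile configuration package can be generated with the Azure CLI; the resource group and gateway names below are placeholders, not values taken from this changelog:

```azurecli
# Generate the VPN client profile configuration package for a P2S gateway.
# <resource-group> and <gateway-name> are placeholders for your own values.
az network vnet-gateway vpn-client generate \
    --resource-group <resource-group> \
    --name <gateway-name>

# The command returns a URL; download and extract the zip from that URL
# to obtain the profile configuration files.
```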
## Next steps
* To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
* For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).