Updates from: 07/19/2022 01:08:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Enable Authentication Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app.md
if __name__ == "__main__":
## Step 6: Run your web app
-In the Terminal, run the app by entering the following command, which runs the Flask development server. The development server looks for `app.py` by default. Then, open your browser and navigate to the web app URL: <http://localhost:5000>.
+In the Terminal, run the app by entering the following command, which runs the Flask development server. The development server looks for `app.py` by default. Then, open your browser and navigate to the web app URL: `http://localhost:5000`.
# [Linux](#tab/linux)
active-directory-b2c Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/error-codes.md
Previously updated : 06/16/2021 Last updated : 07/18/2022
The following errors can be returned by the Azure Active Directory B2C service.
| Error code | Message | Notes |
| - | - | -- |
+| `AADB2C90001` | This user already exists, and profile '{0}' does not allow the same user to be created again. | [Sign-up flow](add-sign-up-and-sign-in-policy.md) |
| `AADB2C90002` | The CORS resource '{0}' returned a 404 not found. | [Hosting the page content](customize-ui-with-html.md#hosting-the-page-content) |
| `AADB2C90006` | The redirect URI '{0}' provided in the request is not registered for the client ID '{1}'. | [Register a web application](tutorial-register-applications.md), [Sending authentication requests](openid-connect.md#send-authentication-requests) |
| `AADB2C90007` | The application associated with client ID '{0}' has no registered redirect URIs. | [Register a web application](tutorial-register-applications.md), [Sending authentication requests](openid-connect.md#send-authentication-requests) |
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
Previously updated : 04/12/2022 Last updated : 07/18/2022
Page layout packages are periodically updated to include fixes and improvements
Azure AD B2C page layout uses the following versions of the [jQuery library](https://jquery.com/) and the [Handlebars templates](https://handlebarsjs.com/):
-|Element |Page layout version range |jQuery version |Handlebars Runtime version |Handlebars Compliler version |
+|Element |Page layout version range |jQuery version |Handlebars Runtime version |Handlebars Compiler version |
|---|---|---|---|---|
|multifactor |>= 1.2.4 | 3.5.1 | 4.7.6 |4.7.7 |
| |< 1.2.4 | 3.4.1 |4.0.12 |2.0.1 |
## Self-asserted page (selfasserted)
+**2.1.14**
+- Fixed WCAG 2.1 accessibility bug for the TOTP multifactor authentication screens.
+**2.1.10**
+- Corrected the tab index
- Fixed an accessibility bug to show inline error messages only on form submission. **2.1.6**-- Fixed password error get cleared when typing too quickly on a different field.
+- Fixed *password error gets cleared when typing too quickly on a different field*.
**2.1.5**
- Fixed a cursor jump issue on iOS when editing in the middle of the text.
**2.1.1** -- Added a UXString `heading` in addition to `intro` to display on the page as a title. This is hidden by default.
+- Added a UXString `heading` in addition to `intro` to display on the page as a title. This message is hidden by default.
- Added support for saving passwords to iCloud Keychain.
- Added support for using policy or the QueryString parameter `pageFlavor` to select the layout (classic, oceanBlue, or slateGray).
- Added disclaimers on self-asserted page.
- Initial release
-## Unified sign-in sign-up page with password reset link (unifiedssp)
+## Unified sign-in and sign-up page with password reset link (unifiedssp)
> [!TIP]
> If you localize your page to support multiple locales or languages in a user flow, the [localization IDs](localization-string-ids.md) article provides the list of localization IDs that you can use for the page version you select.
- Updates to the UI elements and CSS classes **2.1.5**-- Fixed an issue on tab order when idp selector template is used on sign in page.
+- Fixed an issue on tab order when idp selector template is used on sign-in page.
- Fixed an encoding issue on sign-in link text.

**2.1.4**
- Allowing the "forgot password" link to use as claims exchange. For more information, see [Self-service password reset](add-password-reset-policy.md#self-service-password-reset-recommended). **2.1.1**-- Added a UXString `heading` in addition to `intro` to display on the page as a title. This is hidden by default.
+- Added a UXString `heading` in addition to `intro` to display on the page as a title. This message is hidden by default.
- Added support for using policy or the QueryString parameter `pageFlavor` to select the layout (classic, oceanBlue, or slateGray).
- Added support for saving passwords to iCloud Keychain.
- Focus is now placed on the first error field when multiple fields have errors.
- Added support for multiple sign-up links.
- Added support for user input validation according to the predicate rules defined in the policy.
-- When the [sign-in option](sign-in-options.md) is set to Email, the sign-in header presents "Sign in with your sign in name". The username field presents "Sign in name". For more information, see [localization](localization-string-ids.md#sign-up-or-sign-in-page-elements).
+- When the [sign-in option](sign-in-options.md) is set to Email, the sign-in header presents "Sign in with your sign-in name". The username field presents "Sign in name". For more information, see [localization](localization-string-ids.md#sign-up-or-sign-in-page-elements).
**1.2.0**
**1.2.2**
- Fixed an issue with auto-filling the verification code when using iOS.
- Fixed an issue with redirecting a token to the relying party from Android Webview.
-- Added a UXString `heading` in addition to `intro` to display on the page as a title. This is hidden by default.
+- Added a UXString `heading` in addition to `intro` to display on the page as a title. This message is hidden by default.
- Added support for using policy or the QueryString parameter `pageFlavor` to select the layout (classic, oceanBlue, or slateGray).

**1.2.1**
- 'Confirm Code' button removed
- The input field for the code now only takes input up to six (6) characters
-- The page will automatically attempt to verify the code entered when a 6-digit code is entered, without any button having to be clicked
+- The page automatically attempts to verify the code as soon as a six-digit code is entered, without any button having to be clicked
- If the code is wrong, the input field is automatically cleared
- After three (3) attempts with an incorrect code, B2C sends an error back to the relying party
- Accessibility fixes
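The verification-code behavior listed above (auto-verify at six digits, clear the field on a wrong code, error to the relying party after three attempts) can be sketched as follows. This is an illustrative model only, not the actual Azure AD B2C page script:

```python
# Illustrative sketch only: models the one-time-code behavior described
# above; it is not the real B2C page implementation.
def verify_code(entries, correct_code="123456"):
    """Simulate how the page handles a sequence of code entries."""
    wrong = 0
    for code in entries:
        if len(code) != 6:
            continue  # the page waits until six characters are entered
        if code == correct_code:
            return "verified"  # auto-verified, no button click needed
        wrong += 1  # wrong code: the input field is cleared, user retries
        if wrong >= 3:
            return "error_to_relying_party"  # error after three attempts
    return "pending"
```

For example, three consecutive wrong six-digit codes produce `error_to_relying_party`, while a short entry leaves the flow pending.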
**1.1.0**
- Accessibility fix
-- Removed the default message when there is no contact from the policy
+- Removed the default message when there's no contact from the policy
- Default CSS removed

**1.0.0**
active-directory Partner Driven Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/partner-driven-integrations.md
Previously updated : 07/08/2022 Last updated : 07/18/2022
Popular third party applications, such as Dropbox, Snowflake, and Workplace by F
**Option 2 - Implement a SCIM compliant API for your application:** If your line-of-business application supports the [SCIM](https://aka.ms/scimoverview) standard, it can easily be integrated with the [Azure AD SCIM client](use-scim-to-provision-users-and-groups.md).
+ [![Diagram showing implementation of a SCIM compliant API for your application.](media/partner-driven-integrations/scim-compliant-api-1.png)](media/partner-driven-integrations/scim-compliant-api-1.png#lightbox)
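As a hedged illustration of Option 2, the sketch below builds the kind of minimal JSON body a SCIM 2.0 `/Users` endpoint accepts when Azure AD provisions a user. The schema URN comes from RFC 7643; the user names and addresses are made-up sample values:

```python
import json

# Hypothetical sample payload for a SCIM 2.0 /Users create request.
# The schema URN is defined by RFC 7643; the user values are invented.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@contoso.com",
    "active": True,
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "emails": [{"value": "alice@contoso.com", "primary": True}],
}

# Serialize the payload as it would appear on the wire.
body = json.dumps(scim_user, indent=2)
```

A SCIM-compliant API would accept this body on `POST /Users` and return the created resource with a server-assigned `id`.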
+ **Option 3 - Use Microsoft Graph:** Many new applications use Microsoft Graph to retrieve users, groups, and other resources from Azure Active Directory. You can learn more about when to use [SCIM and Graph](scim-graph-scenarios.md).
+ **Option 4 - Use partner-driven connectors:** In cases where an application doesn't support SCIM, partners have built gateways between the Azure AD SCIM client and target applications. **This document serves as a place for partners to attest to integrations that are compatible with Azure Active Directory, and for customers to discover these partner-driven integrations.** These gateways are built, maintained, and owned by the third-party vendor.
+ [![Diagram showing gateways between the Azure AD SCIM client and target applications.](media/partner-driven-integrations/partner-driven-connectors-1.png)](media/partner-driven-integrations/partner-driven-connectors-1.png#lightbox)
+ ## Available partner-driven integrations
+ The descriptions and lists of applications below are provided by the partners themselves. You can use the lists of supported applications to identify a partner that you may want to contact to learn more.
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
Previously updated : 05/06/2022 Last updated : 07/18/2022
To view or copy BitLocker keys, you need to be the owner of the device or have o
- Security Administrator
- Security Reader
-## Device-list filtering (preview)
+## View and filter your devices (preview)
-Previously, you could filter the device list only by activity and enabled state. In this preview, you can filter the device list by these device attributes:
+In this preview, you can scroll infinitely, reorder columns, and select all devices. You can filter the device list by these device attributes:
- Enabled state
- Compliant state
- Activity timestamp
- OS
- Device type (printer, secure VM, shared device, registered device)
+- MDM
+- Extension attributes
+- Administrative unit
+- Owner
-To enable the preview filtering functionality in the **All devices** view:
+To enable the preview in the **All devices** view:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to **Azure Active Directory** > **Devices**.
-1. Select the banner that says **Try out the new devices filtering improvements. Click to enable the preview.**
-
- ![Enable filtering preview functionality](./media/device-management-azure-portal/device-filter-preview-enable.png)
+2. Go to **Azure Active Directory** > **Devices** > **All devices**.
+3. Select the **Preview features** button.
+4. Turn on the toggle that says **Enhanced devices list experience**. Select **Apply**.
+5. Refresh your browser.
-You can now add filters to your **All devices** view.
+You can now experience the enhanced **All devices** view.
## Download devices
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
You can enforce Conditional Access policies, such as multifactor authentication
> > Remote desktop using Windows Hello for Business authentication is available only for deployments that use a certificate trust model. It's currently not available for a key trust model.
-> [!WARNING]
-> The per-user **Enabled/Enforced Azure AD Multi-Factor Authentication** setting is not supported for the Azure Windows VM Sign-In app.
-
## Log in by using Azure AD credentials to a Windows VM

> [!IMPORTANT]
You might see the following error message when you initiate a remote desktop con
![Screenshot of the message that says the sign-in method you're trying to use isn't allowed.](./media/howto-vm-sign-in-azure-ad-windows/mfa-sign-in-method-required.png)
-If you've configured a Conditional Access policy that requires MFA before you can access the resource, you need to ensure that the Windows 10 or later PC that's initiating the remote desktop connection to your VM signs in by using a strong authentication method such as Windows Hello. If you don't use a strong authentication method for your remote desktop connection, you'll see the error.
+If you've configured a Conditional Access policy that requires MFA or legacy per-user Enabled/Enforced Azure AD MFA before you can access the resource, you need to ensure that the Windows 10 or later PC that's initiating the remote desktop connection to your VM signs in by using a strong authentication method such as Windows Hello. If you don't use a strong authentication method for your remote desktop connection, you'll see the error.
Another MFA-related error message is the one described previously: "Your credentials did not work."

![Screenshot of the message that says your credentials didn't work.](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
-> [!WARNING]
-> The legacy per-user **Enabled/Enforced Azure AD Multi-Factor Authentication** setting is not supported for the Azure Windows VM Sign-In app. This setting causes sign-in to fail with the "Your credentials did not work" error message.
-
-You can resolve the problem by removing the per-user MFA setting through these commands:
-
-```
-
-# Get StrongAuthenticationRequirements configure on a user
-(Get-MsolUser -UserPrincipalName username@contoso.com).StrongAuthenticationRequirements
-
-# Clear StrongAuthenticationRequirements from a user
-$mfa = @()
-Set-MsolUser -UserPrincipalName username@contoso.com -StrongAuthenticationRequirements $mfa
-
-# Verify StrongAuthenticationRequirements are cleared from the user
-(Get-MsolUser -UserPrincipalName username@contoso.com).StrongAuthenticationRequirements
-
-```
- If you haven't deployed Windows Hello for Business and if that isn't an option for now, you can configure a Conditional Access policy that excludes the Azure Windows VM Sign-In app from the list of cloud apps that require MFA. To learn more about Windows Hello for Business, see [Windows Hello for Business overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification).

> [!NOTE]
active-directory Road To The Cloud Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-migrate.md
Based on the app dependencies, you have three migration options:
>[!NOTE]
>* Utilize Azure AD Domain Services if the dependencies are aligned with [Common deployment scenarios for Azure AD Domain Services](../../active-directory-domain-services/scenarios.md).
>* To validate whether Azure AD DS is a good fit, you might use tools like Service Map in the [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview) and [Automatic Dependency Mapping with Service Map and Live Maps](https://techcommunity.microsoft.com/t5/system-center-blog/automatic-dependency-mapping-with-service-map-and-live-maps/ba-p/351867).
->* Validate your SQL server instantiations can be [migrated to a different domain](https://social.technet.microsoft.com/wiki/contents/articles/24960.migrating-sql-server-to-new-domain.aspx). If your SQL service is running in virtual machines, [use this guidance](/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
+>* Validate your SQL server instantiations can be [migrated to a different domain](https://social.technet.microsoft.com/wiki/contents/articles/24960.migrating-sql-server-to-new-domain.aspx). If your SQL service is running in virtual machines, [use this guidance](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
#### Implement approach #2
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Because of modern browser [3rd party cookie restrictions such as Safari ITP](../
**Service category:** Device Management
**Product capability:** Device Lifecycle Management
-Previously, the only filters you could use were "Enabled" and "Activity date." Now, you can [filter your list of devices on more properties](../devices/device-management-azure-portal.md#device-list-filtering-preview), including OS type, join type, compliance, and more. These additions should simplify locating a particular device.
+Previously, the only filters you could use were "Enabled" and "Activity date." Now, you can [filter your list of devices on more properties](../devices/device-management-azure-portal.md), including OS type, join type, compliance, and more. These additions should simplify locating a particular device.
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
Title: 'Azure AD Connect: Cloud authentication via staged rollout | Microsoft Docs'
-description: This article explains how to migrate from federated authentication, to cloud authentication, by using a staged rollout.
+ Title: 'Azure AD Connect: Cloud authentication via Staged Rollout | Microsoft Docs'
+description: This article explains how to migrate from federated authentication to cloud authentication by using a Staged Rollout.
-# Migrate to cloud authentication using staged rollout
+# Migrate to cloud authentication using Staged Rollout
-Staged rollout allows you to selectively test groups of users with cloud authentication capabilities like Azure AD Multi-Factor Authentication (MFA), Conditional Access, Identity Protection for leaked credentials, Identity Governance, and others, before cutting over your domains. This article discusses how to make the switch. Before you begin the staged rollout, however, you should consider the implications if one or more of the following conditions is true:
+Staged Rollout allows you to selectively test groups of users with cloud authentication capabilities like Azure AD Multi-Factor Authentication (MFA), Conditional Access, Identity Protection for leaked credentials, Identity Governance, and others, before cutting over your domains. This article discusses how to make the switch. Before you begin the Staged Rollout, however, you should consider the implications if one or more of the following conditions is true:
- You're currently using an on-premises Multi-Factor Authentication server.
- You're using smart cards for authentication.
Before you try this feature, we suggest that you review our guide on choosing the right authentication method. For more information, see the "Comparing methods" table in [Choose the right authentication method for your Azure Active Directory hybrid identity solution](./choose-ad-authn.md#comparing-methods).
-For an overview of the feature, view this "Azure Active Directory: What is staged rollout?" video:
+For an overview of the feature, view this "Azure Active Directory: What is Staged Rollout?" video:
>[!VIDEO https://www.microsoft.com/videoplayer/embed/RE3inQJ]
- You have configured all the appropriate tenant-branding and conditional access policies you need for users who are being migrated to cloud authentication. -- If you plan to use Azure AD Multi-Factor Authentication, we recommend that you use [combined registration for self-service password reset (SSPR) and Multi-Factor Authentication](../authentication/concept-registration-mfa-sspr-combined.md) to have your users register their authentication methods once. Note- when using SSPR to reset password or change password using MyProfile page while in Staged rollout, Azure AD Connect needs to sync the new password hash which can take up to 2 minutes after reset.
+- If you plan to use Azure AD Multi-Factor Authentication, we recommend that you use [combined registration for self-service password reset (SSPR) and Multi-Factor Authentication](../authentication/concept-registration-mfa-sspr-combined.md) to have your users register their authentication methods once. Note: when using SSPR to reset or change a password from the MyProfile page while in Staged Rollout, Azure AD Connect needs to sync the new password hash, which can take up to two minutes after the reset.
-- To use the staged rollout feature, you need to be a global administrator on your tenant.
+- To use the Staged Rollout feature, you need to be a global administrator on your tenant.
- To enable *seamless SSO* on a specific Active Directory forest, you need to be a domain administrator.
## Supported scenarios
-The following scenarios are supported for staged rollout. The feature works only for:
+The following scenarios are supported for Staged Rollout. The feature works only for:
- Users who are provisioned to Azure AD by using Azure AD Connect. It does not apply to cloud-only users.
- User sign-in traffic on browsers and *modern authentication* clients. Applications or cloud services that use legacy authentication will fall back to federated authentication flows. An example might be Exchange online with modern authentication turned off, or Outlook 2010, which does not support modern authentication.
-- Group size is currently limited to 50,000 users. If you have groups that are larger then 50,000 users, it is recommended to split this group over multiple groups for staged rollout.
+- Group size is currently limited to 50,000 users. If you have groups that are larger than 50,000 users, it is recommended to split this group over multiple groups for Staged Rollout.
- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition without line-of-sight to the federation server for Windows 10 version 1903 and newer, when the user's UPN is routable and the domain suffix is verified in Azure AD.
-- Autopilot enrollment is supported in Staged rollout with Windows 10 version 1909 or later.
+- Autopilot enrollment is supported in Staged Rollout with Windows 10 version 1909 or later.
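The 50,000-member limit noted above implies splitting very large groups into several Staged Rollout groups. A minimal sketch of that chunking (the threshold matches the documented limit; the input data is illustrative):

```python
# Illustrative sketch: split a large user list into pilot groups that
# each stay under the 50,000-member Staged Rollout group limit.
def split_into_groups(users, max_size=50_000):
    return [users[i:i + max_size] for i in range(0, len(users), max_size)]
```

For example, 120,000 users would yield three groups of 50,000, 50,000, and 20,000 members.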
## Unsupported scenarios
-The following scenarios are not supported for staged rollout:
+The following scenarios are not supported for Staged Rollout:
- Legacy authentication such as POP3 and SMTP is not supported.
-- Certain applications send the "domain_hint" query parameter to Azure AD during authentication. These flows will continue, and users who are enabled for staged rollout will continue to use federation for authentication.
+- Certain applications send the "domain_hint" query parameter to Azure AD during authentication. These flows will continue, and users who are enabled for Staged Rollout will continue to use federation for authentication.
<!-- -->
- You can use a maximum of 10 groups per feature. That is, you can use 10 groups each for *password hash sync*, *pass-through authentication*, and *seamless SSO*.
- Nested groups are *not supported*.
- - Dynamic groups are *not supported* for staged rollout.
+ - Dynamic groups are *not supported* for Staged Rollout.
- Contact objects inside the group will block the group from being added.
-- When you first add a security group for staged rollout, you're limited to 200 users to avoid a UX time-out. After you've added the group, you can add more users directly to it, as required.
+- When you first add a security group for Staged Rollout, you're limited to 200 users to avoid a UX time-out. After you've added the group, you can add more users directly to it, as required.
- While users are in Staged Rollout with Password Hash Synchronization (PHS), by default no password expiration is applied. Password expiration can be applied by enabling "EnforceCloudPasswordPolicyForPasswordSyncedUsers". When "EnforceCloudPasswordPolicyForPasswordSyncedUsers" is enabled, the password expiration policy is set to 90 days from the time the password was set on-prem, with no option to customize it. To learn how to set 'EnforceCloudPasswordPolicyForPasswordSyncedUsers', see [Password expiration policy](./how-to-connect-password-hash-synchronization.md#enforcecloudpasswordpolicyforpasswordsyncedusers).
-- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for Windows 10 version older than 1903. This scenario will fall back to the WS-Trust endpoint of the federation server, even if the user signing in is in scope of staged rollout.
+- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for Windows 10 version older than 1903. This scenario will fall back to the WS-Trust endpoint of the federation server, even if the user signing in is in scope of Staged Rollout.
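The 90-day expiration rule described above can be expressed as a small helper. This is purely an illustration of the stated policy, not an Azure AD API:

```python
from datetime import datetime, timedelta

# Illustration only: with EnforceCloudPasswordPolicyForPasswordSyncedUsers
# enabled, the cloud password expires 90 days after the on-prem set time.
def cloud_password_expiry(password_set_on: datetime) -> datetime:
    return password_set_on + timedelta(days=90)
```

A password set on-prem on January 1, 2022 would therefore expire on April 1, 2022.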
-- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for all versions, when userΓÇÖs on-premises UPN is not routable. This scenario will fall back to the WS-Trust endpoint while in staged rollout mode, but will stop working when staged migration is complete and user sign-on is no longer relying on federation server.
+- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for all versions, when the user's on-premises UPN is not routable. This scenario will fall back to the WS-Trust endpoint while in Staged Rollout mode, but will stop working when staged migration is complete and user sign-on is no longer relying on the federation server.
- If you have a non-persistent VDI setup with Windows 10, version 1903 or later, you must remain on a federated domain. Moving to a managed domain isn't supported on non-persistent VDI. For more information, see [Device identity and desktop virtualization](../devices/howto-device-identity-virtual-desktop-infrastructure.md).
-- If you have a Windows Hello for Business hybrid certificate trust with certs that are issued via your federation server acting as Registration Authority or smartcard users, the scenario isn't supported on a staged rollout.
+- If you have a Windows Hello for Business hybrid certificate trust with certs that are issued via your federation server acting as Registration Authority or smartcard users, the scenario isn't supported on a Staged Rollout.
>[!NOTE]
- >You still need to make the final cutover from federated to cloud authentication by using Azure AD Connect or PowerShell. Staged rollout doesn't switch domains from federated to managed. For more information about domain cutover, see [Migrate from federation to password hash synchronization](./migrate-from-federation-to-cloud-authentication.md) and [Migrate from federation to pass-through authentication](./migrate-from-federation-to-cloud-authentication.md).
+ >You still need to make the final cutover from federated to cloud authentication by using Azure AD Connect or PowerShell. Staged Rollout doesn't switch domains from federated to managed. For more information about domain cutover, see [Migrate from federation to password hash synchronization](./migrate-from-federation-to-cloud-authentication.md) and [Migrate from federation to pass-through authentication](./migrate-from-federation-to-cloud-authentication.md).
-## Get started with staged rollout
+## Get started with Staged Rollout
-To test the *password hash sync* sign-in by using staged rollout, follow the pre-work instructions in the next section.
+To test the *password hash sync* sign-in by using Staged Rollout, follow the pre-work instructions in the next section.
For information about which PowerShell cmdlets to use, see [Azure AD 2.0 preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#staged_rollout).
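Alongside the PowerShell cmdlets linked above, Microsoft Graph exposes a `featureRolloutPolicy` resource for Staged Rollout. The sketch below only constructs a candidate request body for `POST /policies/featureRolloutPolicies`; the endpoint, property names, and `feature` value shown are assumptions to verify against current Graph documentation before use:

```python
import json

# Assumed shape of a Microsoft Graph featureRolloutPolicy request body.
# Property and feature names are assumptions; check the Graph docs.
policy = {
    "displayName": "PHS Staged Rollout pilot",  # illustrative name
    "description": "Roll out password hash sync to a pilot group",
    "feature": "passwordHashSync",  # assumed enum value
    "isEnabled": True,
    "isAppliedToOrganization": False,
}
body = json.dumps(policy)
```

You would send this body with an authenticated Graph client, then add your pilot security group to the created policy.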
![Screenshot of the AADConnect Troubleshooting log](./media/how-to-connect-staged-rollout/staged-2.png)
-If you want to test *pass-through authentication* sign-in by using staged rollout, enable it by following the pre-work instructions in the next section.
+If you want to test *pass-through authentication* sign-in by using Staged Rollout, enable it by following the pre-work instructions in the next section.
## Pre-work for pass-through authentication
1. Make sure that you've configured your [Smart Lockout settings](../authentication/howto-password-smart-lockout.md) appropriately. Doing so helps ensure that your users' on-premises Active Directory accounts don't get locked out by bad actors.
-We recommend enabling *seamless SSO* irrespective of the sign-in method (*password hash sync* or *pass-through authentication*) you select for staged rollout. To enable *seamless SSO*, follow the pre-work instructions in the next section.
+We recommend enabling *seamless SSO* irrespective of the sign-in method (*password hash sync* or *pass-through authentication*) you select for Staged Rollout. To enable *seamless SSO*, follow the pre-work instructions in the next section.
## Pre-work for seamless SSO
-Enable *seamless SSO* on the Active Directory forests by using PowerShell. If you have more than one Active Directory forest, enable it for each forest individually. *Seamless SSO* is triggered only for users who are selected for staged rollout. It doesn't affect your existing federation setup.
+Enable *seamless SSO* on the Active Directory forests by using PowerShell. If you have more than one Active Directory forest, enable it for each forest individually. *Seamless SSO* is triggered only for users who are selected for Staged Rollout. It doesn't affect your existing federation setup.
Enable *seamless SSO* by doing the following:
9. For a complete walkthrough, you can also download our [deployment plans](https://aka.ms/SeamlessSSODPDownload) for *seamless SSO*.
-## Enable staged rollout
+## Enable Staged Rollout
To roll out a specific feature (*pass-through authentication*, *password hash sync*, or *seamless SSO*) to a select set of users in a group, follow the instructions in the next sections.
-### Enable a staged rollout of a specific feature on your tenant
+### Enable a Staged Rollout of a specific feature on your tenant
You can roll out these options:
Do the following:
1. To access the UX, sign in to the [Azure AD portal](https://aka.ms/stagedrolloutux).
-2. Select the **Enable staged rollout for managed user sign-in** link.
+2. Select the **Enable Staged Rollout for managed user sign-in** link.
For example, if you want to enable **Password Hash Sync** and **Seamless single sign-on**, slide both controls to **On**.
Do the following:
>[!NOTE]
- >The members in a group are automatically enabled for staged rollout. Nested and dynamic groups are not supported for staged rollout.
+ >The members in a group are automatically enabled for Staged Rollout. Nested and dynamic groups are not supported for Staged Rollout.
>When adding a new group, users in the group (up to 200 users for a new group) will be updated to use managed auth immediately.
>When you edit a group (adding or removing users), it can take up to 24 hours for changes to take effect.
>Seamless SSO will apply only if users are in the Seamless SSO group and also in either a PTA or PHS group.

## Auditing
-We've enabled audit events for the various actions we perform for staged rollout:
+We've enabled audit events for the various actions we perform for Staged Rollout:
-- Audit event when you enable a staged rollout for *password hash sync*, *pass-through authentication*, or *seamless SSO*.
+- Audit event when you enable a Staged Rollout for *password hash sync*, *pass-through authentication*, or *seamless SSO*.
>[!NOTE]
- >An audit event is logged when *seamless SSO* is turned on by using staged rollout.
+ >An audit event is logged when *seamless SSO* is turned on by using Staged Rollout.
![The "Create rollout policy for feature" pane - Activity tab](./media/how-to-connect-staged-rollout/staged-7.png)
- Audit event when a group is added to *password hash sync*, *pass-through authentication*, or *seamless SSO*.

 >[!NOTE]
- >An audit event is logged when a group is added to *password hash sync* for staged rollout.
+ >An audit event is logged when a group is added to *password hash sync* for Staged Rollout.
 ![The "Add a group to feature rollout" pane - Activity tab](./media/how-to-connect-staged-rollout/staged-9.png)

 ![The "Add a group to feature rollout" pane - Modified Properties tab](./media/how-to-connect-staged-rollout/staged-10.png)

-- Audit event when a user who was added to the group is enabled for staged rollout.
+- Audit event when a user who was added to the group is enabled for Staged Rollout.
![The "Add user to feature rollout" pane - Activity tab](media/how-to-connect-staged-rollout/staged-11.png)
To test the sign-in with *password hash sync* or *pass-through authentication* (username and password sign-in), do the following:
-1. On the extranet, go to the [Apps page](https://myapps.microsoft.com) in a private browser session, and then enter the UserPrincipalName (UPN) of the user account that's selected for staged rollout.
+1. On the extranet, go to the [Apps page](https://myapps.microsoft.com) in a private browser session, and then enter the UserPrincipalName (UPN) of the user account that's selected for Staged Rollout.
- Users who've been targeted for staged rollout are not redirected to your federated login page. Instead, they're asked to sign in on the Azure AD tenant-branded sign-in page.
+ Users who've been targeted for Staged Rollout are not redirected to your federated login page. Instead, they're asked to sign in on the Azure AD tenant-branded sign-in page.
1. Ensure that the sign-in successfully appears in the [Azure AD sign-in activity report](../reports-monitoring/concept-sign-ins.md) by filtering with the UserPrincipalName.

To test sign-in with *seamless SSO*:
-1. On the intranet, go to the [Apps page](https://myapps.microsoft.com) in a private browser session, and then enter the UserPrincipalName (UPN) of the user account that's selected for staged rollout.
+1. On the intranet, go to the [Apps page](https://myapps.microsoft.com) in a private browser session, and then enter the UserPrincipalName (UPN) of the user account that's selected for Staged Rollout.
- Users who've been targeted for staged rollout of *seamless SSO* are presented with a "Trying to sign you in ..." message before they're silently signed in.
+ Users who've been targeted for Staged Rollout of *seamless SSO* are presented with a "Trying to sign you in ..." message before they're silently signed in.
1. Ensure that the sign-in successfully appears in the [Azure AD sign-in activity report](../reports-monitoring/concept-sign-ins.md) by filtering with the UserPrincipalName.
- To track user sign-ins that still occur on Active Directory Federation Services (AD FS) for selected staged rollout users, follow the instructions at [AD FS troubleshooting: Events and logging](/windows-server/identity/ad-fs/troubleshooting/ad-fs-tshoot-logging#types-of-events). Check vendor documentation about how to check this on third-party federation providers.
+ To track user sign-ins that still occur on Active Directory Federation Services (AD FS) for selected Staged Rollout users, follow the instructions at [AD FS troubleshooting: Events and logging](/windows-server/identity/ad-fs/troubleshooting/ad-fs-tshoot-logging#types-of-events). Check vendor documentation about how to check this on third-party federation providers.
>[!NOTE]
- >While users are in Staged rollout with PHS, changing passwords might take up to 2 minutes to take effect due to sync time. Make sure to set expectations with your users to avoid helpdesk calls after they changed their password.
+ >While users are in Staged Rollout with PHS, changing passwords might take up to 2 minutes to take effect due to sync time. Make sure to set expectations with your users to avoid helpdesk calls after they changed their password.
## Monitoring
-You can monitor the users and groups added or removed from staged rollout and users sign-ins while in staged rollout, using the new Hybrid Auth workbooks in the Azure portal.
+You can monitor the users and groups added to or removed from Staged Rollout, and user sign-ins while in Staged Rollout, using the new Hybrid Auth workbooks in the Azure portal.
![Hybrid Auth workbooks](./media/how-to-connect-staged-rollout/staged-13.png)
-## Remove a user from staged rollout
+## Remove a user from Staged Rollout
-Removing a user from the group disables staged rollout for that user. To disable the staged rollout feature, slide the control back to **Off**.
+Removing a user from the group disables Staged Rollout for that user. To disable the Staged Rollout feature, slide the control back to **Off**.
## Frequently asked questions
A: Yes, you can use this feature in your production tenant, but we recommend tha
A: No, this feature is designed for testing cloud authentication. After successfully testing a few groups of users, you should cut over to cloud authentication. We do not recommend using a permanent mixed state, because this approach could lead to unexpected authentication flows.
-**Q: Can I use PowerShell to perform staged rollout?**
+**Q: Can I use PowerShell to perform Staged Rollout?**
-A: Yes. To learn how to use PowerShell to perform staged rollout, see [Azure AD Preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#staged_rollout).
+A: Yes. To learn how to use PowerShell to perform Staged Rollout, see [Azure AD Preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#staged_rollout).
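As a sketch of what the PowerShell approach looks like, the AzureADPreview module exposes feature rollout policy cmdlets. The display name and the group object ID below are placeholders; verify the cmdlet parameters against the linked Azure AD Preview reference before use.

```powershell
# Requires the AzureADPreview module
Install-Module AzureADPreview
Connect-AzureAD

# Create a Staged Rollout policy for pass-through authentication
$policy = New-AzureADMSFeatureRolloutPolicy `
    -Feature PassthroughAuthentication `
    -DisplayName "PTA Staged Rollout" `
    -IsEnabled $true

# Add a pilot group to the policy (replace with your group's object ID)
Add-AzureADMSFeatureRolloutPolicyDirectoryObject `
    -Id $policy.Id -RefObjectId "<pilot-group-object-id>"
```

Other supported `-Feature` values cover password hash sync and seamless SSO; see the linked reference for the exact names.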
## Next steps

- [Azure AD 2.0 preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#staged_rollout)
- [Change the sign-in method to password hash synchronization](./migrate-from-federation-to-cloud-authentication.md)
- [Change sign-in method to pass-through authentication](./migrate-from-federation-to-cloud-authentication.md)
-- [Staged rollout interactive guide](https://mslearn.cloudguides.com/en-us/guides/Test%20migration%20to%20cloud%20authentication%20using%20staged%20rollout%20in%20Azure%20AD)
+- [Staged Rollout interactive guide](https://mslearn.cloudguides.com/en-us/guides/Test%20migration%20to%20cloud%20authentication%20using%20staged%20rollout%20in%20Azure%20AD)
active-directory Tutorial Manage Access Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-access-security.md
Previously updated : 02/24/2022 Last updated : 07/18/2022 # Customer intent: As an administrator of an Azure AD tenant, I want to manage access to my applications and make sure they are secure.
In this tutorial, the administrator can find the basic steps to configure the ap
1. Set **Enable policy** to **On**.
1. To apply the Conditional Access policy, select **Create**.
-## Test multi-factor authentication
+### Test multi-factor authentication
1. Open a new browser window in InPrivate or incognito mode and browse to the URL of the application.
1. Sign in with the user account that you assigned to the application. You're required to register for and use Azure AD Multi-Factor Authentication. Follow the prompts to complete the process and verify you successfully sign into the Azure portal.
Juan wants to make sure that certain terms and conditions are known to users bef
1. For **Enforce with conditional access policy templates**, select **Custom policy**.
1. Select **Create**.
-## Add the terms of use to the policy
+### Add the terms of use to the policy
1. In the left menu of the tenant overview, select **Security**.
-1. Select **Conditional Access**, and then select the *MFA Pilot* policy.
+1. Select **Conditional Access**, and then **Policies**. From the list of policies, select the *MFA Pilot* policy.
1. Under **Access controls** and **Grant**, select the controls selected link.
1. Select *My TOU*.
1. Select **Require all the selected controls**, and then choose **Select**.
The My Apps portal enables administrators and users to manage the applications u
> [!NOTE]
> Applications only appear in a user's My Apps portal after the user is assigned to the application and the application is configured to be visible to users. See [Configure application properties](add-application-portal-configure.md) to learn how to make the application visible to users.
+By default, all applications are listed together on a single page. But you can use collections to group together related applications and present them on a separate tab, making them easier to find. For example, you can use collections to create logical groupings of applications for specific job roles, tasks, projects, and so on. In this section, you create a collection and assign it to users and groups.
+
1. Open the Azure portal.
1. Go to **Azure Active Directory**, and then select **Enterprise Applications**.
1. Under **Manage**, select **Collections**.
The My Apps portal enables administrators and users to manage the applications u
1. Select the **Users and groups** tab. Select **+ Add users and groups**, and then in the **Add users and groups** page, select the users or groups you want to assign the collection to. Or use the Search box to find users or groups. When you're finished selecting users and groups, choose **Select**.
1. Select **Review + Create**, and then select **Create**. The properties for the new collection appear.
+### Check the collection in the My Apps portal
+
+1. Open a new browser window in InPrivate or incognito mode and browse to the [My Apps](https://myapps.microsoft.com/) portal.
+1. Sign in with the user account that you assigned to the application.
+1. Check that the collection you created appears in the My Apps portal.
+1. Close the browser window.
+
## Clean up resources

You can keep the resources for future use. If you're not going to continue to use the resources created in this tutorial, delete them with the following steps.
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with AWS Single Sign-on'
-description: Learn how to configure single sign-on between Azure Active Directory and AWS Single Sign-on.
+ Title: 'Tutorial: Azure AD SSO integration with AWS IAM Identity Center (successor to AWS Single Sign-On)'
+description: Learn how to configure single sign-on between Azure Active Directory and AWS IAM Identity Center (successor to AWS Single Sign-On).
Previously updated : 03/10/2022 Last updated : 07/15/2022
-# Tutorial: Azure AD SSO integration with AWS Single Sign-on
+# Tutorial: Azure AD SSO integration with AWS IAM Identity Center
-In this tutorial, you'll learn how to integrate AWS Single Sign-on with Azure Active Directory (Azure AD). When you integrate AWS Single Sign-on with Azure AD, you can:
+In this tutorial, you'll learn how to integrate AWS IAM Identity Center (successor to AWS Single Sign-On) with Azure Active Directory (Azure AD). When you integrate AWS IAM Identity Center with Azure AD, you can:
-* Control in Azure AD who has access to AWS Single Sign-on.
-* Enable your users to be automatically signed-in to AWS Single Sign-on with their Azure AD accounts.
+* Control in Azure AD who has access to AWS IAM Identity Center.
+* Enable your users to be automatically signed-in to AWS IAM Identity Center with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate AWS Single Sign-on with Azure Ac
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* AWS Single Sign-on single sign-on (SSO) enabled subscription.
+* AWS IAM Identity Center enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* AWS Single Sign-on supports **SP and IDP** initiated SSO.
+* AWS IAM Identity Center supports **SP and IDP** initiated SSO.
-* AWS Single Sign-on supports [**Automated user provisioning**](./aws-single-sign-on-provisioning-tutorial.md).
+* AWS IAM Identity Center supports [**Automated user provisioning**](./aws-single-sign-on-provisioning-tutorial.md).
-## Add AWS Single Sign-on from the gallery
+## Add AWS IAM Identity Center from the gallery
-To configure the integration of AWS Single Sign-on into Azure AD, you need to add AWS Single Sign-on from the gallery to your list of managed SaaS apps.
+To configure the integration of AWS IAM Identity Center into Azure AD, you need to add AWS IAM Identity Center from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **AWS Single Sign-on** in the search box.
-1. Select **AWS Single Sign-on** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **AWS IAM Identity Center** in the search box.
+1. Select **AWS IAM Identity Center** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for AWS Single Sign-on
+## Configure and test Azure AD SSO for AWS IAM Identity Center
-Configure and test Azure AD SSO with AWS Single Sign-on using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS Single Sign-on.
+Configure and test Azure AD SSO with AWS IAM Identity Center using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS IAM Identity Center.
-To configure and test Azure AD SSO with AWS Single Sign-on, perform the following steps:
+To configure and test Azure AD SSO with AWS IAM Identity Center, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure AWS Single Sign-on SSO](#configure-aws-single-sign-on-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create AWS Single Sign-on test user](#create-aws-single-sign-on-test-user)** - to have a counterpart of B.Simon in AWS Single Sign-on that is linked to the Azure AD representation of user.
+1. **[Configure AWS IAM Identity Center SSO](#configure-aws-iam-identity-center-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create AWS IAM Identity Center test user](#create-aws-iam-identity-center-test-user)** - to have a counterpart of B.Simon in AWS IAM Identity Center that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **AWS Single Sign-on** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **AWS IAM Identity Center** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
a. Click **Upload metadata file**.
- b. Click on **folder logo** to select metadata file which is explained to download in **[Configure AWS Single Sign-on SSO](#configure-aws-single-sign-on-sso)** section and click **Add**.
+ b. Click the **folder logo**, select the metadata file that you downloaded as described in the **[Configure AWS IAM Identity Center SSO](#configure-aws-iam-identity-center-sso)** section, and then click **Add**.
![image2](common/browse-upload-metadata.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://portal.sso.<REGION>.amazonaws.com/saml/assertion/<ID>`

> [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [AWS Single Sign-on Client support team](mailto:aws-sso-partners@amazon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [AWS IAM Identity Center Client support team](mailto:aws-sso-partners@amazon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. AWS Single Sign-on application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. AWS IAM Identity Center application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/edit-attribute.png) > [!NOTE]
- > If ABAC is enabled in AWS SSO, the additional attributes may be passed as session tags directly into AWS accounts.
+ > If ABAC is enabled in AWS IAM Identity Center, the additional attributes may be passed as session tags directly into AWS accounts.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate(Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up AWS Single Sign-on** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up AWS IAM Identity Center** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AWS Single Sign-on.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AWS IAM Identity Center.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **AWS Single Sign-on**.
+1. In the applications list, select **AWS IAM Identity Center**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure AWS Single Sign-on SSO
+## Configure AWS IAM Identity Center SSO
-1. To automate the configuration within AWS Single Sign-on, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+1. To automate the configuration within AWS IAM Identity Center, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
![My apps extension](common/install-myappssecure-extension.png)
-2. After adding extension to the browser, click on **Set up AWS Single Sign-on** will direct you to the AWS Single Sign-on application. From there, provide the admin credentials to sign into AWS Single Sign-on. The browser extension will automatically configure the application for you and automate steps 3-10.
+2. After adding the extension to the browser, clicking **Set up AWS IAM Identity Center** directs you to the AWS IAM Identity Center application. From there, provide the admin credentials to sign in to AWS IAM Identity Center. The browser extension will automatically configure the application for you and automate steps 3-10.
![Setup configuration](common/setup-sso.png)
-3. If you want to setup AWS Single Sign-on manually, in a different web browser window, sign in to your AWS Single Sign-on company site as an administrator.
+3. If you want to set up AWS IAM Identity Center manually, in a different web browser window, sign in to your AWS IAM Identity Center company site as an administrator.
-1. Go to the **Services -> Security, Identity, & Compliance -> AWS Single Sign-On**.
+1. Go to the **Services -> Security, Identity, & Compliance -> AWS IAM Identity Center**.
2. In the left navigation pane, choose **Settings**.
-3. On the **Settings** page, find **Identity source** and click on **Change**.
+3. On the **Settings** page, find **Identity source**, click the **Actions** pull-down menu, and select **Change identity source**.
![Screenshot for Identity source change service](./media/aws-single-sign-on-tutorial/settings.png)
-4. On the Change identity source, choose **External identity provider**.
+4. On the Change identity source page, choose **External identity provider**.
![Screenshot for selecting external identity provider section](./media/aws-single-sign-on-tutorial/external-identity-provider.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot for download and upload metadata section](./media/aws-single-sign-on-tutorial/upload-metadata.png)
- a. In the **Service provider metadata** section, find **AWS SSO SAML metadata** and select **Download metadata file** to download the metadata file and save it on your computer and use this metadata file to upload on Azure portal.
+ a. In the **Service provider metadata** section, find **AWS SSO SAML metadata** and select **Download metadata file** to save the metadata file on your computer. You'll upload this metadata file in the Azure portal.
- b. Copy **AWS SSO Sign-in URL** value, paste this value into the **Sign on URL** text box in the **Basic SAML Configuration section** in the Azure portal.
+ b. Copy **AWS access portal sign-in URL** value, paste this value into the **Sign on URL** text box in the **Basic SAML Configuration section** in the Azure portal.
- c. In the **Identity provider metadata** section, choose **Browse** to upload the metadata file which you have downloaded from the Azure portal.
+ c. In the **Identity provider metadata** section, select **Choose file** to upload the metadata file which you have downloaded from the Azure portal.
d. Choose **Next: Review**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
9. Click **Change identity source**.
-### Create AWS Single Sign-on test user
+### Create AWS IAM Identity Center test user
-1. Open the **AWS SSO console**.
+1. Open the **AWS IAM Identity Center console**.
2. In the left navigation pane, choose **Users**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
f. In the Display name field, enter `Jane Doe`.
- g. Choose **Next: Groups**.
+ g. Choose **Next**, and then **Next** again.
> [!NOTE]
 - >Make sure the username entered in AWS SSO matches the user's Azure AD sign-in name. This will help you avoid any authentication problems.
 + >Make sure the username entered in AWS IAM Identity Center matches the user's Azure AD sign-in name. This will help you avoid any authentication problems.
5. Choose **Add user**.

6. Next, you will assign the user to your AWS account. To do so, in the left navigation pane of the
-AWS SSO console, choose **AWS accounts**.
+AWS IAM Identity Center console, choose **AWS accounts**.
7. On the AWS Accounts page, select the AWS organization tab, and check the box next to the AWS account you want to assign to the user. Then choose **Assign users**.

8. On the Assign Users page, find and check the box next to the user B.Simon. Then choose **Next: permission set**.
> [!NOTE]
> Permission sets define the level of access that users and groups have to an AWS account. To learn more
-about permission sets, see the AWS SSO **Permission Sets** page.
+about permission sets, see the **AWS IAM Identity Center Multi Account Permissions** page.
10. Choose **Finish**.

> [!NOTE]
-> AWS Single Sign-on also supports automatic user provisioning, you can find more details [here](./aws-single-sign-on-provisioning-tutorial.md) on how to configure automatic user provisioning.
+> AWS IAM Identity Center also supports automatic user provisioning. You can find more details [here](./aws-single-sign-on-provisioning-tutorial.md) on how to configure automatic user provisioning.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to AWS Single Sign-on Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect you to the AWS IAM Identity Center sign-in URL, where you can initiate the login flow.
-* Go to AWS Single Sign-on Sign-on URL directly and initiate the login flow from there.
+* Go to AWS IAM Identity Center sign-in URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the AWS Single Sign-on for which you set up the SSO.
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the AWS IAM Identity Center for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the AWS Single Sign-on tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the AWS Single Sign-on for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the AWS IAM Identity Center tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the AWS IAM Identity Center instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure AWS Single Sign-on you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure AWS IAM Identity Center you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
Title: Azure Active Directory architecture overview (preview) description: Learn foundational information to plan and design your solution documentationCenter: ''-+ Last updated 06/02/2022-+ # Azure AD Verifiable Credentials architecture overview (preview)
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
Title: Plan your Azure Active Directory Verifiable Credentials issuance solution(preview) description: Learn to plan your end-to-end issuance solution. documentationCenter: ''-+ Last updated 06/03/2022-+
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
Title: Plan your Azure Active Directory Verifiable Credentials verification solution (preview) description: Learn foundational information to plan and design your verification solution documentationCenter: ''-+ Last updated 06/02/2022-+
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Ser
description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks. Previously updated : 1/9/2022 Last updated : 07/18/2022 # Bring your own keys (BYOK) with Azure disks in Azure Kubernetes Service (AKS)
-Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can supply customer-managed keys to use for encryption at rest for both the OS and data disks for your AKS clusters. Learn more about customer-managed keys on [Linux][customer-managed-keys-linux] and [Windows][customer-managed-keys-windows].
+Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For more control over encryption keys, you can supply customer-managed keys to use for encryption at rest for both the OS and data disks for your AKS clusters.
+
+Learn more about customer-managed keys on [Linux][customer-managed-keys-linux] and [Windows][customer-managed-keys-windows].
## Limitations
+
* Data disk encryption support is limited to AKS clusters running Kubernetes version 1.17 and above.
* Encryption of OS disk with customer-managed keys can only be enabled when creating an AKS cluster.

## Prerequisites
+
* You must enable soft delete and purge protection for *Azure Key Vault* when using Key Vault to encrypt managed disks.
* You need the Azure CLI version 2.11.1 or later.
+* Customer-managed keys are only supported in Kubernetes versions 1.17 and higher.
+* If you choose to rotate (change) your keys periodically, see [Customer-managed keys and encryption of Azure managed disk](../virtual-machines/disk-encryption.md) for more information.
## Create an Azure Key Vault instance
az keyvault create -n myKeyVaultName -g myResourceGroup -l myAzureRegionName --
## Create an instance of a DiskEncryptionSet Replace *myKeyVaultName* with the name of your key vault. You will also need a *key* stored in Azure Key Vault to complete the following steps. Either store your existing Key in the Key Vault you created on the previous steps, or [generate a new key][key-vault-generate] and replace *myKeyName* below with the name of your key.
-
+ ```azurecli-interactive # Retrieve the Key Vault Id and store it in a variable $keyVaultId=az keyvault show --name myKeyVaultName --query "[id]" -o tsv
az disk-encryption-set create -n myDiskEncryptionSetName -l myAzureRegionName
``` > [!IMPORTANT]
-> Ensure your AKS cluster identity has read permission of DiskEncryptionSet
+> Ensure your AKS cluster identity has **read** permission on the DiskEncryptionSet
## Grant the DiskEncryptionSet access to key vault
az keyvault set-policy -n myKeyVaultName -g myResourceGroup --object-id $desIden
## Create a new AKS cluster and encrypt the OS disk
-Create a **new resource group** and AKS cluster, then use your key to encrypt the OS disk. Customer-managed keys are only supported in Kubernetes versions greater than 1.17.
+Create a **new resource group** and AKS cluster, then use your key to encrypt the OS disk.
> [!IMPORTANT] > Ensure you create a new resource group for your AKS cluster
az group create -n myResourceGroup -l myAzureRegionName
az aks create -n myAKSCluster -g myResourceGroup --node-osdisk-diskencryptionset-id $diskEncryptionSetId --kubernetes-version KUBERNETES_VERSION --generate-ssh-keys ```
-When new node pools are added to the cluster created above, the customer-managed key provided during the create is used to encrypt the OS disk.
+When new node pools are added to the cluster created above, the customer-managed key provided during the create process is used to encrypt the OS disk.
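For example, a node pool added later inherits the same encryption without any extra flags; a minimal sketch (the pool name `encrypted` is illustrative, not from this article):

```azurecli-interactive
# Add a node pool; its OS disks are encrypted with the customer-managed key
# configured on the cluster at creation time.
az aks nodepool add \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name encrypted \
    --node-count 1
```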
## Encrypt your AKS cluster data disk (optional)
-OS disk encryption key will be used to encrypt data disk if key is not provided for data disk from v1.17.2, and you can also encrypt AKS data disks with your other keys.
+
+Starting with AKS version 1.17.2, the OS disk encryption key is used to encrypt the data disk if no key is provided for the data disk. You can also encrypt AKS data disks with your own separate keys.
> [!IMPORTANT]
-> Ensure you have the proper AKS credentials. The managed identity will need to have contributor access to the resource group where the diskencryptionset is deployed. Otherwise, you will get an error suggesting that the managed identity does not have permissions.
+> Ensure you have the proper AKS credentials. The managed identity needs to have contributor access to the resource group where the diskencryptionset is deployed. Otherwise, you'll get an error suggesting that the managed identity does not have permissions.
```azurecli-interactive # Retrieve your Azure Subscription Id from id property as shown below az account list ```
-```
+The following example resembles output from the command:
+
+```output
someuser@Azure:~$ az account list [ {
parameters:
kind: managed diskEncryptionSetID: "/subscriptions/{myAzureSubscriptionId}/resourceGroups/{myResourceGroup}/providers/Microsoft.Compute/diskEncryptionSets/{myDiskEncryptionSetName}" ```
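The fragment above is flattened from a StorageClass manifest; a fuller sketch of what *byok-azure-disk.yaml* might contain (the class name and SKU are illustrative assumptions, not from this article):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: byok                    # illustrative name
provisioner: disk.csi.azure.com
parameters:
  skuname: StandardSSD_LRS      # illustrative SKU
  kind: managed
  diskEncryptionSetID: "/subscriptions/{myAzureSubscriptionId}/resourceGroups/{myResourceGroup}/providers/Microsoft.Compute/diskEncryptionSets/{myDiskEncryptionSetName}"
```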
-Next, run this deployment in your AKS cluster:
+
+Next, run the following commands to update your AKS cluster:
+ ```azurecli-interactive # Get credentials az aks get-credentials --name myAksCluster --resource-group myResourceGroup --output table
kubectl apply -f byok-azure-disk.yaml
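After the manifest is applied, workloads can request an encrypted disk through a PersistentVolumeClaim; a minimal sketch, assuming the StorageClass in *byok-azure-disk.yaml* is named `byok`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: byok-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: byok   # must match the StorageClass name in byok-azure-disk.yaml
  resources:
    requests:
      storage: 5Gi
```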
## Using Azure tags
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
+For more information on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
## Next steps
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
following command:
metadata: name: mypod spec:
+ nodeSelector:
+ kubernetes.io/os: linux
containers: - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine name: mypod
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-volume.md
kind: Pod
metadata: name: mypod spec:
+ nodeSelector:
+ kubernetes.io/os: linux
containers: - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine name: mypod
spec:
- name: azure csi: driver: file.csi.azure.com
+ readOnly: false
volumeAttributes: secretName: azure-secret # required shareName: aksshare # required
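Assembled from the fragments above, a complete pod manifest using an inline Azure Files CSI volume might look like the following (the mount path is an illustrative assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  nodeSelector:
    kubernetes.io/os: linux
  containers:
    - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      name: mypod
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure   # illustrative mount path
  volumes:
    - name: azure
      csi:
        driver: file.csi.azure.com
        readOnly: false
        volumeAttributes:
          secretName: azure-secret  # required
          shareName: aksshare       # required
```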
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
Only specific SKUs and sizes support Gen2 VMs. Check the [list of supported size
Additionally, not all VM images support Gen2; on AKS, Gen2 VMs will use the new [AKS Ubuntu 18.04 image](#os-configuration). This image supports all Gen2 SKUs and sizes.
+## Default OS disk sizing
+
+By default, when creating a new cluster or adding a new node pool to an existing cluster, the OS disk size is determined by the number of vCPUs, which is based on the VM SKU. The default values are shown in the following table:
+
+|VM SKU Cores (vCPUs)| Default OS Disk Tier | Provisioned IOPS | Provisioned Throughput (Mbps) |
+|--|--|--|--|
+| 1 - 7 | P10/128G | 500 | 100 |
+| 8 - 15 | P15/256G | 1100 | 125 |
+| 16 - 63 | P20/512G | 2300 | 150 |
+| 64+ | P30/1024G | 5000 | 200 |
+
+> [!IMPORTANT]
+> Default OS disk sizing is only used on new clusters or node pools when Ephemeral OS disks are not supported and a default OS disk size is not specified. The default OS disk size may impact the performance or cost of your cluster, but you can change the sizing of the OS disk at any time after cluster or node pool creation. This default disk sizing affects clusters or node pools created in July 2022 or later.
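To override the default sizing, the OS disk size can be set explicitly when the node pool is created; a sketch using the `--node-osdisk-size` parameter (the pool name and size are illustrative):

```azurecli-interactive
# Create a node pool with an explicit 256 GiB OS disk instead of the default
az aks nodepool add \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name largedisk \
    --node-osdisk-size 256
```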
+ ## Ephemeral OS By default, Azure automatically replicates the operating system disk for a virtual machine to Azure storage to avoid data loss if the VM needs to be relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks, including slower node provisioning and higher read/write latency.
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
When ready, refresh the registration of the *Microsoft.ContainerService* resourc
```azurecli-interactive az provider register --namespace Microsoft.ContainerService ```
+> [!NOTE]
+> Windows Server 2022 requires Kubernetes version 1.23.0 or higher.
Use `az aks nodepool add` command to add a Windows Server 2022 node pool:
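A sketch of such a command, assuming a cluster named `myAKSCluster` in resource group `myResourceGroup` and an illustrative pool name `npwin`:

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --os-sku Windows2022 \
    --name npwin \
    --node-count 1
```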
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
Title: Monitoring AKS data reference
description: Important reference material needed when you monitor AKS Previously updated : 07/29/2021 Last updated : 07/18/2022
For reference, see a list of [all resource logs category types supported in Azur
| Category | Description | |:|:|
-| cluster-autoscaler | Understand why the AKS cluster is scaling up or down, which may not be expected. This information is also useful to correlate time intervals where something interesting may have happened in the cluster. |
-| guard | Managed Azure Active Directory and Azure RBAC audits. For managed Azure AD, this includes token in and user info out. For Azure RBAC, this includes access reviews in and out. |
| kube-apiserver | Logs from the API server. | | kube-audit | Audit log data for every audit event including get, list, create, update, delete, patch, and post. | | kube-audit-admin | Subset of the kube-audit log category. Significantly reduces the number of logs by excluding the get and list audit events from the log. | | kube-controller-manager | Gain deeper visibility of issues that may arise between Kubernetes and the Azure control plane. A typical example is the AKS cluster having a lack of permissions to interact with Azure. | | kube-scheduler | Logs from the scheduler. |
+| cluster-autoscaler | Understand why the AKS cluster is scaling up or down, which may not be expected. This information is also useful to correlate time intervals where something interesting may have happened in the cluster. |
+| cloud-controller-manager | Logs from the cloud-node-manager component of the Kubernetes cloud controller manager.|
+| guard | Managed Azure Active Directory and Azure RBAC audits. For managed Azure AD, this includes token in and user info out. For Azure RBAC, this includes access reviews in and out. |
+| csi-azuredisk-controller | Logs from the Azure Disk CSI storage driver. |
+| csi-azurefile-controller | Logs from the Azure Files CSI storage driver. |
+| csi-snapshot-controller | Logs from the Azure CSI driver snapshot controller. |
| AllMetrics | Includes all platform metrics. Sends these values to Log Analytics workspace where it can be evaluated with other data using log queries. | ## Azure Monitor Logs tables
-This section refers to all of the Azure Monitor Logs tables relevant to AKS and available for query by Log Analytics.
--
+This section refers to all of the Azure Monitor Logs tables relevant to AKS and available for query by Log Analytics.
|Resource Type | Notes | |-|--| | [Kubernetes services](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services) | Follow this link for a list of all tables used by AKS and a description of their structure. | - For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype). - ## Activity log The following table lists a few example operations related to AKS that may be created in the [Activity log](../azure-monitor/essentials/activity-log.md). Use the Activity log to track information such as when a cluster is created or had its configuration change. You can either view this information in the portal or create an Activity log alert to be proactively notified when an event occurs.
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
The OSM AKS add-on has the following limitations:
* [Iptables redirection][ip-tables-redirection] for port IP address and port range exclusion must be enabled using `kubectl patch` after installation. For more details, see [iptables redirection][ip-tables-redirection]. * Pods that are onboarded to the mesh that need access to IMDS, Azure DNS, or the Kubernetes API server must have their IP addresses to the global list of excluded outbound IP ranges using [Global outbound IP range exclusions][global-exclusion].
+* At this time, OSM does not support Windows Server containers.
## Next steps
api-management Api Management Howto Deploy Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md
Azure API Management supports multi-region deployment, which enables API publish
A new Azure API Management service initially contains only one [unit][unit] in a single Azure region, the Primary region. Additional units can be added to the Primary or Secondary regions. An API Management gateway component is deployed to every selected Primary and Secondary region. Incoming API requests are automatically directed to the closest region. If a region goes offline, the API requests will be automatically routed around the failed region to the next closest gateway. > [!NOTE]
-> Only the gateway component of API Management is deployed to all regions. The service management component and developer portal are hosted in the Primary region only. Therefore, in case of the Primary region outage, access to the developer portal and ability to change configuration (e.g. adding APIs, applying policies) will be impaired until the Primary region comes back online. While the Primary region is offline, available Secondary regions will continue to serve the API traffic using the latest configuration available to them. Optionally enable [zone redundancy](zone-redundancy.md) to improve the availability and resiliency of the Primary or Secondary regions.
+> Only the gateway component of API Management is deployed to all regions. The service management component and developer portal are hosted in the Primary region only. Therefore, in case of the Primary region outage, access to the developer portal and ability to change configuration (e.g. adding APIs, applying policies) will be impaired until the Primary region comes back online. While the Primary region is offline, available Secondary regions will continue to serve the API traffic using the latest configuration available to them. Optionally enable [zone redundancy](../availability-zones/migrate-api-mgt.md) to improve the availability and resiliency of the Primary or Secondary regions.
>[!IMPORTANT] > The feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo. For all other regions, customer data is stored in Geo.
A new Azure API Management service initially contains only one [unit][unit] in a
1. Select **+ Add** in the top bar. 1. Select the location from the drop-down list. 1. Select the number of scale **[Units](upgrade-and-scale.md)** in the location.
-1. Optionally enable [**Availability zones**](zone-redundancy.md).
+1. Optionally enable [**Availability zones**](../availability-zones/migrate-api-mgt.md).
1. If the API Management instance is deployed in a [virtual network](api-management-using-with-vnet.md), configure virtual network settings in the location. Select an existing virtual network, subnet, and public IP address that are available in the location. 1. Select **Add** to confirm. 1. Repeat this process until you configure all locations.
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
Check out the following related resources for the backup/restore process:
- [Automating API Management Backup and Restore with Logic Apps](https://github.com/Azure/api-management-samples/tree/master/tutorials/automating-apim-backup-restore-with-logic-apps) - [How to move Azure API Management across regions](api-management-howto-migrate.md)
-API Management **Premium** tier also supports [zone redundancy](zone-redundancy.md), which provides resiliency and high availability to a service instance in a specific Azure region (location).
+API Management **Premium** tier also supports [zone redundancy](../availability-zones/migrate-api-mgt.md), which provides resiliency and high availability to a service instance in a specific Azure region (location).
[backup an api management service]: #step1 [restore an api management service]: #step2
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
In the Developer, Basic, Standard, and Premium tiers of API Management, the publ
* The service subscription is [suspended](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) or [warned](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) (for example, for nonpayment) and then reinstated. * Azure Virtual Network is added to or removed from the service. * API Management service is switched between External and Internal VNet deployment mode.
-* [Availability zones](zone-redundancy.md) are enabled, added, or removed.
+* [Availability zones](../availability-zones/migrate-api-mgt.md) are enabled, added, or removed.
In [multi-regional deployments](api-management-howto-deploy-multi-region.md), the regional IP address changes if a region is vacated and then reinstated.
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
The following table summarizes migration options for instances in the different
|Tier |Migration options | |||
-|Premium | 1. Enable [zone redundancy](zone-redundancy.md)<br/> -or-<br/> 2. Create new [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet connection<sup>1</sup><br/> -or-<br/> 3. Update existing [VNet configuration](#update-vnet-configuration) |
+|Premium | 1. Enable [zone redundancy](../availability-zones/migrate-api-mgt.md)<br/> -or-<br/> 2. Create new [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet connection<sup>1</sup><br/> -or-<br/> 3. Update existing [VNet configuration](#update-vnet-configuration) |
|Developer | 1. Create new [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet connection<sup>1</sup><br/>-or-<br/> 2. Update existing [VNet configuration](#update-vnet-configuration) | | Standard | 1. [Change your service tier](upgrade-and-scale.md#change-your-api-management-service-tier) (downgrade to Developer or upgrade to Premium). Follow migration options in new tier.<br/>-or-<br/>2. Deploy new instance in existing tier and migrate configurations<sup>2</sup> | | Basic | 1. [Change your service tier](upgrade-and-scale.md#change-your-api-management-service-tier) (downgrade to Developer or upgrade to Premium). Follow migration options in new tier<br/>-or-<br/>2. Deploy new instance in existing tier and migrate configurations<sup>2</sup> |
The virtual network configuration is updated, and the instance is migrated to th
## Next steps * Learn more about using a [virtual network](virtual-network-concepts.md) with API Management.
-* Learn more about [zone redundancy](zone-redundancy.md).
+* Learn more about enabling [availability zones](../availability-zones/migrate-api-mgt.md).
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
With a private endpoint and Private Link, you can:
## Prerequisites - An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
- - The API Management instance must be hosted on the [`stv2` compute platform](compute-infrastructure.md). For example, create a new instance or, if you already have an instance in the Premium service tier, enable [zone redundancy](zone-redundancy.md).
+ - The API Management instance must be hosted on the [`stv2` compute platform](compute-infrastructure.md). For example, create a new instance or, if you already have an instance in the Premium service tier, enable [zone redundancy](../availability-zones/migrate-api-mgt.md).
- Do not deploy (inject) the instance into an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) virtual network. - A virtual network and subnet to host the private endpoint. The subnet may contain other Azure resources. - (Recommended) A virtual machine in the same or a different subnet in the virtual network, to test the private endpoint.
api-management Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/zone-redundancy.md
- Title: Availability zone support for Azure API Management
-description: Learn how to improve the resiliency of your Azure API Management service instance in a region by enabling zone redundancy.
---- Previously updated : 05/11/2022-----
-# Availability zone support for Azure API Management
-
-This article shows how to enable zone redundancy for your API Management instance by using the Azure portal. [Zone redundancy](../availability-zones/az-overview.md#availability-zones) provides resiliency and high availability to a service instance in a specific Azure region (location). With zone redundancy, the gateway and the control plane of your API Management instance (Management API, developer portal, Git configuration) are replicated across datacenters in physically separated zones, making it resilient to a zone failure.
-
-API Management also supports [multi-region deployments](api-management-howto-deploy-multi-region.md), which helps reduce request latency perceived by geographically distributed API consumers and improves availability of the gateway component if one region goes offline. The combination of availability zones for redundancy within a region, and multi-region deployments to improve the gateway availability if there is a regional outage, helps enhance both the reliability and performance of your API Management instance.
--
-## Supported regions
-
-Configuring API Management for zone redundancy is currently supported in the following Azure regions.
-
-* Australia East
-* Brazil South
-* Canada Central
-* Central India
-* Central US
-* East Asia
-* East US
-* East US 2
-* France Central
-* Germany West Central
-* Japan East
-* Korea Central (*)
-* North Europe
-* Norway East (*)
-* South Africa North (*)
-* South Central US
-* Southeast Asia
-* Switzerland North
-* UK South
-* West Europe
-* West US 2
-* West US 3
-
-> [!IMPORTANT]
-> The regions with * against them have restrictive access in an Azure subscription to enable availability zone support. Please work with your Microsoft sales or customer representative.
-
-## Prerequisites
-
-* If you haven't yet created an API Management service instance, see [Create an API Management service instance](get-started-create-service-instance.md). Select the Premium service tier.
-* If your API Management instance is deployed in a [virtual network](api-management-using-with-vnet.md), ensure that you set up a virtual network, subnet, and public IP address in any new location where you plan to enable zone redundancy.
-
-> [!NOTE]
-> If you've configured [autoscaling](api-management-howto-autoscale.md) for your API Management instance in the primary location, you might need to adjust your autoscale settings after enabling zone redundancy. The number of API Management units in autoscale rules and limits must be a multiple of the number of zones.
-
-## Enable zone redundancy - portal
-
-In the portal, optionally enable zone redundancy when you add a location to your API Management service, or update the configuration of an existing location.
-
-1. In the Azure portal, navigate to your API Management service and select **Locations** in the menu.
-1. Select an existing location, or select **+ Add** in the top bar. The location must [support availability zones](#supported-regions).
-1. Select the number of scale **[Units](upgrade-and-scale.md)** in the location.
-1. In **Availability zones**, select one or more zones. The number of units selected must distribute evenly across the availability zones. For example, if you selected 3 units, select 3 zones so that each zone hosts one unit.
-1. If the API Management instance is deployed in a [virtual network](api-management-using-with-vnet.md), select an existing virtual network, subnet, and public IP address that are available in the location. For an existing location, the virtual network and subnet must be configured from the Virtual Network blade.
-1. Select **Apply** and then select **Save**.
--
-> [!IMPORTANT]
-> The public IP address in the location changes when you enable, add, or remove availability zones. When updating availability zones in a region with network settings, you must configure a different public IP address resource than the one you set up previously.
-
-> [!NOTE]
-> It can take 15 to 45 minutes to apply the change to your API Management instance.
-
-## Next steps
-
-* Learn more about [deploying an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md).
-* You can also enable zone redundancy using an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-simple-zones).
-* Learn more about [Azure services that support availability zones](../availability-zones/az-region.md).
-* Learn more about building for [reliability](/azure/architecture/framework/resiliency/app-design) in Azure.
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
pip install -r requirements.txt
> [!NOTE] > If you are following along with this tutorial with your own app, look at the *requirements.txt* file description in each project's *README.md* file ([Flask](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/blob/main/README.md), [Django](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app/blob/main/README.md)) to see what packages you'll need.
-Set environment variables to specify how to connect to a local PostgreSQL instance.
+This sample application requires an *.env* file describing how to connect to your local PostgreSQL instance. Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. This tutorial assumes the database name is *restaurant*. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance.
-This sample application requires an *.env* file describing how to connect to your local PostgreSQL instance. Create an *.env* file using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. This tutorial assumes the database name is *restaurant*. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance.
+```
+DBNAME=<database-name>
+DBHOST=<database-hostname>
+DBUSER=<db-user-name>
+DBPASS=<db-password>
+```
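The sample's data-access code reads these variables to build its database connection string; a hypothetical sketch of that step (the helper function below is illustrative, not the sample's actual code):

```python
def database_uri(env: dict) -> str:
    """Build a PostgreSQL connection URI from DBNAME/DBHOST/DBUSER/DBPASS values."""
    return "postgresql://{DBUSER}:{DBPASS}@{DBHOST}/{DBNAME}".format(**env)

# Placeholder values for illustration only
uri = database_uri({"DBNAME": "restaurant", "DBHOST": "localhost",
                    "DBUSER": "admin", "DBPASS": "secret"})
print(uri)  # postgresql://admin:secret@localhost/restaurant
```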
For Django, you can use SQLite locally instead of PostgreSQL by following the instructions in the comments of the [*settings.py*](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app/blob/main/azureproject/settings.py) file.
python manage.py runserver
### [Flask](#tab/flask)
-In a web browser, go to the sample application at `http://localhost:5000` and add some restaurants and restaurant reviews to see how the app works.
+In a web browser, go to the sample application at `http://127.0.0.1:5000` and add some restaurants and restaurant reviews to see how the app works.
:::image type="content" source="./media/tutorial-python-postgresql-app/run-flask-postgresql-app-localhost.png" alt-text="A screenshot of the Flask web app with PostgreSQL running locally showing restaurants and restaurant reviews."::: ### [Django](#tab/django)
-In a web browser, go to the sample application at `http://localhost:8000` and add some restaurants and restaurant reviews to see how the app works.
+In a web browser, go to the sample application at `http://127.0.0.1:8000` and add some restaurants and restaurant reviews to see how the app works.
:::image type="content" source="./media/tutorial-python-postgresql-app/run-django-postgresql-app-localhost.png" alt-text="A screenshot of the Django web app with PostgreSQL running locally showing restaurants and restaurant reviews.":::
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
In this tutorial, you'll set up a CI/CD solution using GitOps with Flux v2 and A
> [!div class="checklist"] > * Create an Azure Arc-enabled Kubernetes or AKS cluster.
-> * Connect your application and GitOps repositories to Azure Repos or Git Hub.
+> * Connect your application and GitOps repositories to Azure Repos or GitHub.
> * Implement CI/CD flow with either Azure Pipelines or GitHub. > * Connect your Azure Container Registry to Azure DevOps and Kubernetes. > * Create environment variable groups or secrets.
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-csharp.md
In Azure Functions, a function project is a container for one or more individual
# [Isolated process](#tab/isolated-process) ```console
- func init LocalFunctionProj --worker-runtime dotnet-isolated
+ func init LocalFunctionProj --worker-runtime dotnet-isolated --target-framework net6.0
```
+
1. Navigate into the project folder:
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
description: Learn how to use a .NET isolated process to run your C# functions i
Previously updated : 05/24/2022 Last updated : 07/06/2022 recommendations: false #Customer intent: As a developer, I need to know how to create functions that run in an isolated process so that I can run my function code on current (not LTS) releases of .NET.
recommendations: false
# Guide for running C# Azure Functions in an isolated process
-This article is an introduction to using C# to develop .NET isolated process functions, which run out-of-process in Azure Functions. Running out-of-process lets you decouple your function code from the Azure Functions runtime. Isolated process C# functions run on .NET 5.0, .NET 6.0, and .NET Framework 4.8 (preview support). [In-process C# class library functions](functions-dotnet-class-library.md) aren't supported on .NET 5.0.
+This article is an introduction to using C# to develop .NET isolated process functions, which run out-of-process in Azure Functions. Running out-of-process lets you decouple your function code from the Azure Functions runtime. Isolated process C# functions run on .NET 6.0, .NET 7.0, and .NET Framework 4.8 (preview support). [In-process C# class library functions](functions-dotnet-class-library.md) aren't supported on .NET 7.0.
| Getting started | Concepts| Samples | |--|--|--|
This article is an introduction to using C# to develop .NET isolated process fun
## Why .NET isolated process?
-Previously Azure Functions has only supported a tightly integrated mode for .NET functions, which run [as a class library](functions-dotnet-class-library.md) in the same process as the host. This mode provides deep integration between the host process and the functions. For example, .NET class library functions can share binding APIs and types. However, this integration also requires a tighter coupling between the host process and the .NET function. For example, .NET functions running in-process are required to run on the same version of .NET as the Functions runtime. To enable you to run outside these constraints, you can now choose to run in an isolated process. This process isolation also lets you develop functions that use current .NET releases (such as .NET 5.0), not natively supported by the Functions runtime. Both isolated process and in-process C# class library functions run on .NET 6.0. To learn more, see [Supported versions](#supported-versions).
+Previously Azure Functions has only supported a tightly integrated mode for .NET functions, which run [as a class library](functions-dotnet-class-library.md) in the same process as the host. This mode provides deep integration between the host process and the functions. For example, .NET class library functions can share binding APIs and types. However, this integration also requires a tighter coupling between the host process and the .NET function. For example, .NET functions running in-process are required to run on the same version of .NET as the Functions runtime. To enable you to run outside these constraints, you can now choose to run in an isolated process. This process isolation also lets you develop functions that use current .NET releases (such as .NET 7.0), not natively supported by the Functions runtime. Both isolated process and in-process C# class library functions run on .NET 6.0. To learn more, see [Supported versions](#supported-versions).
Because these functions run in a separate process, there are some [feature and functionality differences](#differences-with-net-class-library-functions) between .NET isolated function apps and .NET class library function apps.
A .NET isolated function project is basically a .NET console app project that ta
For complete examples, see the [.NET 6 isolated sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/FunctionApp) and the [.NET Framework 4.8 isolated sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/NetFxWorker).

> [!NOTE]
-> To be able to publish your isolated function project to either a Windows or a Linux function app in Azure, you must set a value of `dotnet-isolated` in the remote [FUNCTIONS_WORKER_RUNTIME](functions-app-settings.md#functions_worker_runtime) application setting. To support [zip deployment](deployment-zip-push.md) and [running from the deployment package](run-functions-from-deployment-package.md) on Linux, you also need to update the `linuxFxVersion` site config setting to `DOTNET-ISOLATED|6.0`. To learn more, see [Manual version updates on Linux](set-runtime-version.md#manual-version-updates-on-linux).
+> To be able to publish your isolated function project to either a Windows or a Linux function app in Azure, you must set a value of `dotnet-isolated` in the remote [FUNCTIONS_WORKER_RUNTIME](functions-app-settings.md#functions_worker_runtime) application setting. To support [zip deployment](deployment-zip-push.md) and [running from the deployment package](run-functions-from-deployment-package.md) on Linux, you also need to update the `linuxFxVersion` site config setting to `DOTNET-ISOLATED|7.0`. To learn more, see [Manual version updates on Linux](set-runtime-version.md#manual-version-updates-on-linux).
## Package references
This section describes the current state of the functional and behavioral differ
| Feature/behavior | In-process | Out-of-process |
| - | - | - |
-| .NET versions | .NET Core 3.1<br/>.NET 6.0 | .NET 5.0<br/>.NET 6.0<br/>.NET Framework 4.8 (Preview) |
+| .NET versions | .NET Core 3.1<br/>.NET 6.0 | .NET 6.0<br/>.NET 7.0 (Preview)<br/>.NET Framework 4.8 (Preview) |
| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
| Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | Under [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) |
| Logging | [ILogger] passed to the function | [ILogger] obtained from [FunctionContext] |
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md
When you supply `<PROJECT_FOLDER>`, the project is created in a new folder with
| **`--managed-dependencies`** | Installs managed dependencies. Currently, only the PowerShell worker runtime supports this functionality. |
| **`--source-control`** | Controls whether a git repository is created. By default, a repository isn't created. When `true`, a repository is created. |
| **`--worker-runtime`** | Sets the language runtime for the project. Supported values are: `csharp`, `dotnet`, `dotnet-isolated`, `javascript`, `node` (JavaScript), `powershell`, `python`, and `typescript`. For Java, use [Maven](functions-reference-java.md#create-java-functions). To generate a language-agnostic project with just the project files, use `custom`. When not set, you're prompted to choose your runtime during initialization. |
+| **`--target-framework`** | Sets the target framework for the function app project. Valid only with `--worker-runtime dotnet-isolated`. Supported values are: `net6.0` (default), `net7.0`, and `net48`. |
> [!NOTE]
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
Title: Azure Functions runtime versions overview
description: Azure Functions supports multiple versions of the runtime. Learn the differences between them and how to choose the one that's right for you.
Previously updated : 06/24/2022
Last updated : 07/06/2022
zone_pivot_groups: programming-languages-set-functions
| Version | Support level | Description |
| --- | --- | --- |
-| 4.x | GA | **_Recommended runtime version for functions in all languages._** Use this version to [run C# functions on .NET 6.0 and .NET Framework 4.8](functions-dotnet-class-library.md#supported-versions). |
+| 4.x | GA | **_Recommended runtime version for functions in all languages._** Use this version to [run C# functions on .NET 6.0, .NET 7.0, and .NET Framework 4.8](functions-dotnet-class-library.md#supported-versions). |
| 3.x | GA | Supports all languages. Use this version to [run C# functions on .NET Core 3.1 and .NET 5.0](functions-dotnet-class-library.md#supported-versions). |
| 2.x | GA | Supported for [legacy version 2.x apps](#pinning-to-version-20). This version is in maintenance mode, with enhancements provided only in later versions. |
| 1.x | GA | Recommended only for C# apps that must use .NET Framework and only supports development in the Azure portal, Azure Stack Hub portal, or locally on Windows computers. This version is in maintenance mode, with enhancements provided only in later versions. |
In Visual Studio, you select the runtime version when you create a project. Azur
<AzureFunctionsVersion>v4</AzureFunctionsVersion>
```
-You can also choose `net6.0` or `net48` as the target framework if you're using [.NET isolated process functions](dotnet-isolated-process-guide.md). Support for `net48` is currently in preview.
+You can also choose `net6.0`, `net7.0`, or `net48` as the target framework if you are using [.NET isolated process functions](dotnet-isolated-process-guide.md). Support for `net7.0` and `net48` is currently in preview.
> [!NOTE]
> Azure Functions 4.x requires the `Microsoft.NET.Sdk.Functions` extension to be at least version `4.0.0`.
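For orientation, a hedged sketch of how the version choices above can appear together in an isolated-process project file. Only `AzureFunctionsVersion` (`v4`) and the `TargetFramework` values (`net6.0`, `net7.0`, or `net48`) come from the text; the surrounding properties are an illustrative fragment, not a complete project file:

```xml
<!-- Illustrative csproj fragment for a .NET isolated process function app.
     TargetFramework may be net6.0, net7.0 (preview), or net48 (preview). -->
<PropertyGroup>
  <TargetFramework>net7.0</TargetFramework>
  <AzureFunctionsVersion>v4</AzureFunctionsVersion>
  <OutputType>Exe</OutputType>
</PropertyGroup>
```

The `OutputType` of `Exe` reflects that an isolated function project is basically a console app, as described earlier.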
azure-maps How To Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-template.md
The Azure Maps account resource is defined in this template:
* **Subscription**: select an Azure subscription.
* **Resource group**: select **Create new**, enter a unique name for the resource group, and then click **OK**.
- * **Location**: select a location. For example, **West US 2**.
+ * **Location**: select a location.
* **Account Name**: enter a name for your Azure Maps account, which must be globally unique. * **Pricing Tier**: select the appropriate pricing tier, the default value for the template is S0.
azure-maps How To Manage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-account-keys.md
You can manage your Azure Maps account through the Azure portal. After you have
## Account location
-Picking a location for your Azure Maps account that aligns with other resources in your subscription, like managed identities, may help to improve the level of service for [control-plane](../azure-resource-manager/management/control-plane-and-data-plane.md) operations.
+Picking a location for your Azure Maps account that aligns with other resources in your subscription, like managed identities, may help to improve the level of service for [control-plane](../azure-resource-manager/management/control-plane-and-data-plane.md) operations.
As an example, the managed identity infrastructure will communicate and notify the Azure Maps management services for changes to the identity resource such as credential renewal or deletion. Sharing the same Azure location enables a consistent infrastructure provisioning for all resources.
-Any Azure Maps REST API on endpoint `atlas.microsoft.com`, `*.atlas.microsoft.com`, or other endpoints belonging to the Azure data-plane are not affected by the choice of the Azure Maps account location.
+Azure Maps REST APIs on the endpoint `atlas.microsoft.com`, `*.atlas.microsoft.com`, or other endpoints belonging to the Azure data plane are not affected by the choice of the Azure Maps account location.
Read more about data-plane service coverage for Azure Maps services on [geographic coverage](./geographic-coverage.md).
Read more about data-plane service coverage for Azure Maps services on [geograph
4. Enter the information for your new account.

## Delete an account
Set up authentication with Azure Maps and learn how to get an Azure Maps subscri
> [Manage authentication](how-to-manage-authentication.md)

Learn how to manage an Azure Maps account pricing tier:
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Manage a pricing tier](how-to-manage-pricing-tier.md)

Learn how to see the API usage metrics for your Azure Maps account:
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [View usage metrics](how-to-view-api-usage.md)
azure-maps Quick Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md
Create a new Azure Maps account using the following steps:
* Select the **Review + create** button.
* Once you have ensured that everything is correct in the **Review + create** page, select the **Create** button.
- :::image type="content" source="./media/quick-android-map/create-account.png" alt-text="A screenshot that shows the Create Maps account pane in the Azure portal.":::
+ :::image type="content" source="./media/shared/create-account.png" alt-text="A screenshot that shows the Create Maps account pane in the Azure portal.":::
## Get the primary key for your account
azure-maps Quick Demo Map App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-demo-map-app.md
Create a new Azure Maps account with the following steps:
1. Select **Create a resource** in the upper left-hand corner of the [Azure portal](https://portal.azure.com).
2. Type **Azure Maps** in the *Search services and Marketplace* box.
-3. Select **Azure Maps** in the drop down list that appears, then select the **Create** button.
+3. Select **Azure Maps** in the drop-down list that appears, then select the **Create** button.
4. On the **Create an Azure Maps Account resource** page, enter the following values then select the **Create** button:
   * The *Subscription* that you want to use for this account.
   * The *Resource group* name for this account. You may choose to *Create new* or *Select existing* resource group.
Create a new Azure Maps account with the following steps:
   * The *Pricing tier* for this account. Select **Gen2**.
   * Read the *License* and *Privacy Statement*, then select the checkbox to accept the terms.
- :::image type="content" source="./media/quick-demo-map-app/create-account.png" alt-text="A screen shot showing the Create an Azure Maps Account resource page in the Azure portal." lightbox="./media/quick-demo-map-app/create-account.png":::
+ :::image type="content" source="./media/shared/create-account.png" alt-text="Screenshot showing the Create an Azure Maps Account resource page in the Azure portal." lightbox="./media/shared/create-account.png":::
<a id="getkey"></a>
Once your Azure Maps account is successfully created, retrieve the primary key t
2. In the settings section, select **Authentication**.
3. Copy the **Primary Key** and save it locally to use later in this tutorial.

   >[!NOTE]
   > This quickstart uses the [Shared Key](azure-maps-authentication.md#shared-key-authentication) authentication approach for demonstration purposes, but the preferred approach for any production environment is to use [Azure Active Directory](azure-maps-authentication.md#azure-ad-authentication) authentication.
Once your Azure Maps account is successfully created, retrieve the primary key t
4. Try out the interactive search experience. In the search box on the upper-left corner of the demo web application, search for **restaurants**.
5. Move your mouse over the list of addresses and locations that appear below the search box. Notice how the corresponding pin on the map pops out information about that location. For privacy of private businesses, fictitious names and addresses are shown.
- :::image type="content" source="./media/quick-demo-map-app/interactive-search.png" alt-text="A screen shot showing the interactive map search web application." lightbox="./media/quick-demo-map-app/interactive-search.png":::
+ :::image type="content" source="./media/quick-demo-map-app/interactive-search.png" alt-text="Screenshot showing the interactive map search web application." lightbox="./media/quick-demo-map-app/interactive-search.png":::
## Clean up resources
azure-maps Quick Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-ios-app.md
This article shows you how to add Azure Maps to an iOS app. It walks you through these basic steps:
-* Setup your development environment.
+* Set up your development environment.
* Create your own Azure Maps account.
* Get your primary Azure Maps key to use in the app.
* Reference the Azure Maps libraries from the project.
Create a new Azure Maps account with the following steps:
* Read the _License_ and _Privacy Statement_, and check the checkbox to accept the terms.
* Select the **Create** button.
- ![Create an Azure maps account.](./media/ios-sdk/quick-ios-app/create-account.png)
+ ![Create an Azure maps account.](./media/shared/create-account.png)
## Get the primary key for your account
Once your Maps account is successfully created, retrieve the primary key that en
## Create a project in Xcode
-First, create a new iOS App project. Complete these steps to create a Xcode project:
+First, create a new iOS App project. Complete these steps to create an Xcode project:
1. Under **File**, select **New** -> **Project**.
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
See also: [Regions that require endpoint modification](./custom-endpoints.md#reg
- [Profiler](./profiler-overview.md): `profiler`
- [Snapshot](./snapshot-debugger.md): `snapshot`
+#### Is Connection string a secret?
+The connection string contains the iKey, a unique identifier used by the ingestion service to associate telemetry with a specific Application Insights resource. The iKey is not to be considered a security token or key. The ingestion endpoint provides Azure AD-based authenticated telemetry ingestion options if you want to protect your Application Insights resource from misuse.
+
+> [!NOTE]
+> Application Insights JavaScript SDK requires the connection string to be passed in during initialization and configuration. The connection string is viewable in plain text in client browsers, and there is no easy way to use Azure AD-based authentication for browser telemetry. If you need to secure the service telemetry, consider creating a separate Application Insights resource for browser telemetry.
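To illustrate the point above: a connection string is a set of semicolon-separated `key=value` fields, such as `InstrumentationKey` and `IngestionEndpoint`. The parsing helper below is a hypothetical sketch (it is not part of any SDK), and the sample values are placeholders:

```python
# Minimal sketch: split an Application Insights connection string into its fields.
# The helper name and the sample values are illustrative, not from any SDK.

def parse_connection_string(conn_str: str) -> dict:
    """Return the key=value pairs of a connection string as a dict."""
    pairs = (part.split("=", 1) for part in conn_str.split(";") if part)
    return {key.strip(): value.strip() for key, value in pairs}

sample = (
    "InstrumentationKey=00000000-0000-0000-0000-000000000000;"
    "IngestionEndpoint=https://eastus-0.in.applicationinsights.azure.com/"
)
fields = parse_connection_string(sample)
print(fields["InstrumentationKey"])  # the iKey discussed above
```

Because the string decomposes this trivially, anything that can read it (such as a client browser) can read the iKey, which is why it should not be treated as a secret.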
## Connection string examples
Get started at development time with:
* [ASP.NET Core](./asp-net-core.md)
* [Java](./java-in-process-agent.md)
* [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
+* [Python](./opencensus-python.md)
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
# Overview of autoscale in Microsoft Azure

This article describes what Microsoft Azure autoscale is, its benefits, and how to get started using it.
-Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), [API Management services](../../api-management/api-management-key-concepts.md), and [Azure Data Explorer Clusters](/azure/data-explorer/).
+Azure autoscale supports a growing list of resource types. See the list of [supported resources](#supported-services-for-autoscale) for more details.
> [!NOTE]
> Azure has two autoscale methods. An older version of autoscale applies to Virtual Machines (availability sets). This feature has limited support and we recommend migrating to virtual machine scale sets for faster and more reliable autoscale support. A link on how to use the older technology is included in this article.
You can set up autoscale via
| Spring Cloud |[Set up autoscale for microservice applications](../../spring-cloud/how-to-setup-autoscale.md)|
| Service Bus |[Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)|
| Azure SignalR Service | [Automatically scale units of an Azure SignalR service](../../azure-signalr/signalr-howto-scale-autoscale.md) |
+| Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
+| Logic Apps - Integration Service Environment (ISE) | [Add ISE Environment](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
+| Azure App Service Environment | [Autoscaling and App Service Environment v1](../../app-service/environment/app-service-environment-auto-scale.md) |
+| Service Fabric Managed Clusters | [Introduction to Autoscaling on Service Fabric managed clusters](../../service-fabric/how-to-managed-cluster-autoscale.md) |
+| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) |
+| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
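The services in the table are configured through autoscale profiles and rules. As a trimmed, illustrative sketch of that shape (the metric name, thresholds, and counts are example values, not a complete `Microsoft.Insights/autoscaleSettings` resource):

```json
{
  "profiles": [
    {
      "name": "defaultProfile",
      "capacity": { "minimum": "1", "maximum": "4", "default": "1" },
      "rules": [
        {
          "metricTrigger": {
            "metricName": "Percentage CPU",
            "timeGrain": "PT1M",
            "statistic": "Average",
            "timeWindow": "PT5M",
            "timeAggregation": "Average",
            "operator": "GreaterThan",
            "threshold": 70
          },
          "scaleAction": {
            "direction": "Increase",
            "type": "ChangeCount",
            "value": "1",
            "cooldown": "PT5M"
          }
        }
      ]
    }
  ]
}
```

A profile bounds the instance count, and each rule pairs a metric condition with a scale action and cooldown.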
+## Next steps
+
+To learn more about autoscale, use the Autoscale Walkthroughs listed previously or refer to the following resources:
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
+
+ Title: "Migrate diagnostic settings storage retention to Azure Storage lifecycle management"
+description: "How to migrate from diagnostic settings storage retention to Azure Storage lifecycle management"
+Last updated : 07/10/2022
+#Customer intent: As a dev-ops administrator, I want to migrate my retention setting from diagnostic setting retention storage to Azure Storage lifecycle management so that it continues to work after the feature has been deprecated.
++
+# Migrate from diagnostic settings storage retention to Azure Storage lifecycle management
+
+This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](/azure/storage/blobs/lifecycle-management-policy-configure?tabs=azure-portal) for retention.
+
+## Prerequisites
+
+An existing diagnostic setting logging to a storage account.
+
+## Migration procedures
+
+To migrate your diagnostic settings retention rules, follow the steps below:
+
+1. Go to the Diagnostic Settings page for your logging resource and locate the diagnostic setting you wish to migrate.
+1. Set the retention for your logged categories to *0*.
+1. Select **Save**.
+ :::image type="content" source="./media/retention-migration/diagnostics-setting.png" alt-text="A screenshot showing a diagnostics setting page.":::
+
+1. Navigate to the storage account you're logging to.
+1. Under **Data management**, select **Lifecycle Management** to view or change lifecycle management policies.
+1. Select **List View**, and select **Add a rule**.
+1. Enter a **Rule name**.
+1. Under **Rule Scope**, select **Limit blobs with filters**.
+1. Under **Blob Type**, select **Append Blobs**; under **Blob subtype**, select **Base blobs**.
+1. Select **Next**.
+
+1. Set your retention time, then select **Next**.
+
+1. On the **Filters** tab, under **Blob prefix**, set the path or prefix to the container or logs you want the retention rule to apply to.
+For example, for all Function App logs, you could use the container *insights-logs-functionapplogs* to set the retention for all Function App logs.
+To set the rule for a specific subscription, resource group, and function app name, use *insights-logs-functionapplogs/resourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your function app name\>*.
+
+1. Select **Add** to save the rule.
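The *Blob prefix* path above follows the resource-ID layout of diagnostic-log containers. As a sketch, it can be assembled like this (the helper name and sample values are hypothetical; the uppercase segments mirror the example path in the steps above):

```python
# Illustrative helper: build the blob-prefix filter for a Function App's
# diagnostic logs, following the layout shown in the step above.
# The function name and the sample values are hypothetical.

def function_app_log_prefix(subscription_id: str, resource_group: str, app_name: str) -> str:
    return (
        "insights-logs-functionapplogs/resourceId=/"
        f"SUBSCRIPTIONS/{subscription_id.upper()}/"
        f"RESOURCEGROUPS/{resource_group.upper()}/"
        "PROVIDERS/MICROSOFT.WEB/SITES/"
        f"{app_name.upper()}"
    )

prefix = function_app_log_prefix(
    "11111111-2222-3333-4444-555555555555", "my-resource-group", "my-function-app"
)
print(prefix)
```

Dropping path segments from the right widens the rule's scope, down to the bare container name for all Function App logs.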
+
+## Next steps
+
+[Configure a lifecycle management policy](/azure/storage/blobs/lifecycle-management-policy-configure?tabs=azure-portal).
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-vm.md
Title: Profile web apps on an Azure VM - Application Insights Profiler
-description: Profile web apps on an Azure VM by using Application Insights Profiler.
+ Title: Enable Profiler for web apps on an Azure virtual machine
+description: Profile web apps running on an Azure virtual machine or a virtual machine scale set by using Application Insights Profiler
Previously updated : 11/08/2019- Last updated : 07/18/2022+
-# Profile web apps running on an Azure virtual machine or a virtual machine scale set by using Application Insights Profiler
+# Enable Profiler for web apps on an Azure virtual machine
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-You can also deploy Azure Application Insights Profiler on these
-* [Azure App Service](./profiler.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
-* [Azure Cloud Services](profiler-cloudservice.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Service Fabric](?toc=%2fazure%2fazure-monitor%2ftoc.json)
+In this article, you learn how to run Application Insights Profiler on your Azure virtual machine (VM) or Azure virtual machine scale set via three different methods. Using any of these methods, you will:
-## Deploy Profiler on a virtual machine or a virtual machine scale set
-This article shows you how to get Application Insights Profiler running on your Azure virtual machine (VM) or Azure virtual machine scale set. Profiler is installed with the Azure Diagnostics extension for VMs. Configure the extension to run Profiler, and build the Application Insights SDK into your application.
+- Configure the Azure Diagnostics extension to run Profiler.
+- Install the Application Insights SDK onto a VM.
+- Deploy your application.
+- View Profiler traces via the Application Insights instance on Azure portal.
-1. Add the Application Insights SDK to your [ASP.NET application](../app/asp-net.md).
+## Prerequisites
- To view profiles for your requests, you must send request telemetry to Application Insights.
+- A functioning [ASP.NET Core application](https://docs.microsoft.com/aspnet/core/getting-started).
+- An [Application Insights resource](../app/create-workspace-resource.md).
+- Review the Azure Resource Manager templates for the Azure Diagnostics extension:
+ - [VM](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachine.json)
+ - [Virtual machine scale set](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachineScaleSet.json)
-1. Install Azure Diagnostics extension on your VM. For full Resource Manager template examples, see:
- * [Virtual machine](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachine.json)
- * [Virtual machine scale set](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachineScaleSet.json)
-
- The key part is the ApplicationInsightsProfilerSink in the WadCfg. To have Azure Diagnostics enable Profiler to send data to your iKey, add another sink to this section.
+## Add Application Insights SDK to your application
+
+1. Open your ASP.NET Core project in Visual Studio.
+
+1. Select **Project** > **Add Application Insights Telemetry**.
+
+1. Select **Azure Application Insights**, then click **Next**.
+
+1. Select the subscription where your Application Insights resource lives, then click **Next**.
+
+1. Select where to save the connection string, then click **Next**.
+
+1. Select **Finish**.
+
+> [!NOTE]
+> For full instructions, including enabling Application Insights on your ASP.NET Core application without Visual Studio, see [Application Insights for ASP.NET Core applications](../app/asp-net-core.md).
+
+## Confirm the latest stable release of the Application Insights SDK
+
+1. Go to **Project** > **Manage NuGet Packages**.
+
+1. Select **Microsoft.ApplicationInsights.AspNetCore**.
+
+1. In the side pane, select the latest version of the SDK from the dropdown.
+
+1. Select **Update**.
+
+ :::image type="content" source="../app/media/asp-net-core/update-nuget-package.png" alt-text="Screenshot of where to select the Application Insights package for update.":::
+
+## Enable Profiler
+
+You can enable Profiler in any of the following three ways:
+
+- Within your ASP.NET Core application using an Azure Resource Manager template and Visual Studio (recommended).
+- Using PowerShell commands.
+- Using Azure Resource Explorer.
+
+# [Visual Studio and ARM template](#tab/vs-arm)
+
+### Install the Azure Diagnostics extension
+
+1. Choose which Azure Resource Manager template to use:
+ - [VM](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachine.json)
+ - [Virtual machine scale set](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachineScaleSet.json).
+
+1. In the template, locate the resource of type `extension`.
+
+1. In Visual Studio, navigate to the `arm.json` file in your ASP.NET Core application that was added when you installed the Application Insights SDK.
+
+1. Add the resource type `extension` from the template to the `arm.json` file to set up a VM or virtual machine scale set with Azure Diagnostics.
+
+1. Within the `WadCfg` tag, add your Application Insights instrumentation key to the `MyApplicationInsightsProfilerSink`.
- ```json
+ ```json
+ "WadCfg": {
+ "SinksConfig": {
+ "Sink": [
+ {
+ "name": "MyApplicationInsightsProfilerSink",
+ "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY"
+ }
+ ]
+ }
+ }
+ ```
+
+1. Deploy your application.
+
+# [PowerShell](#tab/powershell)
+
+The following PowerShell commands are an approach for existing VMs that touches only the Azure Diagnostics extension.
+
+> [!NOTE]
+> If you deploy the VM again, the sink will be lost. You'll need to update the config you use when deploying the VM to preserve this setting.
+
+### Install Application Insights via Azure Diagnostics config
+
+1. Create a temporary file that will hold the exported Azure Diagnostics config:
+
+ ```powershell
+ $ConfigFilePath = [IO.Path]::GetTempFileName()
+ ```
+
+1. Export the currently deployed config to that file with the following command, then add the Application Insights Profiler sink to it:
+
+ ```powershell
+ (Get-AzVMDiagnosticsExtension -ResourceGroupName "YOUR_RESOURCE_GROUP" -VMName "YOUR_VM").PublicSettings | Out-File -Verbose $ConfigFilePath
+ ```
+
+ Application Insights Profiler `WadCfg`:
+
+ ```json
+ "WadCfg": {
    "SinksConfig": {
      "Sink": [
- {
- "name": "ApplicationInsightsSink",
- "ApplicationInsights": "85f73556-b1ba-46de-9534-606e08c6120f"
- },
        {
          "name": "MyApplicationInsightsProfilerSink",
- "ApplicationInsightsProfiler": "85f73556-b1ba-46de-9534-606e08c6120f"
+ "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY"
        }
      ]
- },
- ```
+ }
+ }
+ ```
+
+1. Run the following command to pass the updated config to the `Set-AzVMDiagnosticsExtension` command.
+
+ ```powershell
+ Set-AzVMDiagnosticsExtension -ResourceGroupName "YOUR_RESOURCE_GROUP" -VMName "YOUR_VM" -DiagnosticsConfigurationPath $ConfigFilePath
+ ```
+
+ > [!NOTE]
+ > `Set-AzVMDiagnosticsExtension` might require the `-StorageAccountName` argument. If your original diagnostics configuration had the `storageAccountName` property in the `protectedSettings` section (which isn't downloadable), be sure to pass the same original value you had in this cmdlet call.
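The manual edit described in the steps above, adding the Profiler sink to the exported public settings, amounts to a plain JSON transformation. A hypothetical sketch (the helper is not part of any tooling; only the `WadCfg`/`SinksConfig`/`Sink` shape and the sink's fields come from the example above):

```python
import json

# Illustrative sketch: add the Application Insights Profiler sink to an
# exported Azure Diagnostics public-settings document. The helper is
# hypothetical; only the WadCfg/SinksConfig/Sink shape comes from the docs.

def add_profiler_sink(public_settings: dict, ikey: str) -> dict:
    wad_cfg = public_settings.setdefault("WadCfg", {})
    sinks = wad_cfg.setdefault("SinksConfig", {}).setdefault("Sink", [])
    # Avoid adding the sink twice if the config already contains it.
    if not any(s.get("name") == "MyApplicationInsightsProfilerSink" for s in sinks):
        sinks.append({
            "name": "MyApplicationInsightsProfilerSink",
            "ApplicationInsightsProfiler": ikey,
        })
    return public_settings

config = {"WadCfg": {}}
config = add_profiler_sink(config, "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY")
print(json.dumps(config, indent=2))
```

Whichever way you perform the edit, the result passed to `Set-AzVMDiagnosticsExtension` must still be the full public-settings document, not just the sink fragment.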
+
+### IIS Http Tracing feature
-1. Deploy the modified environment deployment definition.
+If the intended application is running through [IIS](https://www.microsoft.com/web/downloads/platform.aspx), enable the `IIS Http Tracing` Windows feature:
- Applying the modifications usually involves a full template deployment or a cloud service-based publish through PowerShell cmdlets or Visual Studio.
+1. Establish remote access to the environment.
- The following PowerShell commands are an alternate approach for existing virtual machines that touch only the Azure Diagnostics extension. Add the previously mentioned ProfilerSink to the config that's returned by the Get-AzVMDiagnosticsExtension command. Then pass the updated config to the Set-AzVMDiagnosticsExtension command.
+1. Use the [Add Windows features](/iis/configuration/system.webserver/tracing/) window, or run the following command in PowerShell (as administrator):
```powershell
- $ConfigFilePath = [IO.Path]::GetTempFileName()
- # After you export the currently deployed Diagnostics config to a file, edit it to include the ApplicationInsightsProfiler sink.
- (Get-AzVMDiagnosticsExtension -ResourceGroupName "MyRG" -VMName "MyVM").PublicSettings | Out-File -Verbose $ConfigFilePath
- # Set-AzVMDiagnosticsExtension might require the -StorageAccountName argument
- # If your original diagnostics configuration had the storageAccountName property in the protectedSettings section (which is not downloadable), be sure to pass the same original value you had in this cmdlet call.
- Set-AzVMDiagnosticsExtension -ResourceGroupName "MyRG" -VMName "MyVM" -DiagnosticsConfigurationPath $ConfigFilePath
+ Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All
```
-1. If the intended application is running through [IIS](https://www.microsoft.com/web/downloads/platform.aspx), enable the `IIS Http Tracing` Windows feature.
+ If establishing remote access is a problem, you can use the [Azure CLI](/cli/azure/get-started-with-azure-cli) to run the following command:
- 1. Establish remote access to the environment, and then use the [Add Windows features](/iis/configuration/system.webserver/tracing/) window. Or run the following command in PowerShell (as administrator):
+   ```azurecli
+ az vm run-command invoke -g MyResourceGroupName -n MyVirtualMachineName --command-id RunPowerShellScript --scripts "Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All"
+ ```
- ```powershell
- Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All
- ```
-
- 1. If establishing remote access is a problem, you can use the [Azure CLI](/cli/azure/get-started-with-azure-cli) to run the following command:
+1. Deploy your application.
- ```azurecli
- az vm run-command invoke -g MyResourceGroupName -n MyVirtualMachineName --command-id RunPowerShellScript --scripts "Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All"
- ```
+# [Azure Resource Explorer](#tab/azure-resource-explorer)
-1. Deploy your application.
+### Set Profiler sink using Azure Resource Explorer
-## Set Profiler Sink using Azure Resource Explorer
+Since the Azure portal doesn't provide a way to set the Application Insights Profiler sink, you can use [Azure Resource Explorer](https://resources.azure.com) to set the sink.
-We don't yet have a way to set the Application Insights Profiler sink from the portal. Instead of using PowerShell as described above, you can use Azure Resource Explorer to set the sink. But note, if you deploy the VM again, the sink will be lost. You'll need to update the config you use when deploying the VM to preserve this setting.
+> [!NOTE]
+> If you deploy the VM again, the sink will be lost. You'll need to update the config you use when deploying the VM to preserve this setting.
-1. Check that the Windows Azure Diagnostics extension is installed by viewing the extensions installed for your virtual machine.
+1. Verify the Microsoft Azure Diagnostics extension is installed by viewing the extensions installed for your virtual machine.
- ![Check if WAD extension is installed][wadextension]
+ :::image type="content" source="./media/profiler-vm/wad-extension.png" alt-text="Screenshot of checking if WAD extension is installed.":::
-2. Find the VM Diagnostics extension for your VM. Go to [https://resources.azure.com](https://resources.azure.com). Expand your resource group, Microsoft.Compute virtualMachines, virtual machine name, and extensions.
+1. Find the VM Diagnostics extension for your VM:
+ 1. Go to [https://resources.azure.com](https://resources.azure.com).
+ 1. Expand **subscriptions** and find the subscription holding the resource group with your VM.
+ 1. Drill down to your VM extensions by selecting your resource group, followed by **Microsoft.Compute** > **virtualMachines** > **[your virtual machine]** > **extensions**.
- ![Navigate to WAD config in Azure Resource Explorer][azureresourceexplorer]
+ :::image type="content" source="./media/profiler-vm/azure-resource-explorer.png" alt-text="Screenshot of navigating to WAD config in Azure Resource Explorer.":::
-3. Add the Application Insights Profiler sink to the SinksConfig node under WadCfg. If you don't already have a SinksConfig section, you may need to add one. Be sure to specify the proper Application Insights iKey in your settings. You'll need to switch the explorers mode to Read/Write in the upper right corner and Press the blue 'Edit' button.
+1. Add the Application Insights Profiler sink to the `SinksConfig` node under `WadCfg`. If you don't already have a `SinksConfig` section, add one. To add the sink:
- ![Add Application Insights Profiler Sink][resourceexplorersinksconfig]
+ - Specify the proper Application Insights iKey in your settings.
+ - Switch the explorer's mode to **Read/Write** in the upper-right corner.
+ - Press the blue **Edit** button.
-4. When you're done editing the config, press 'Put'. If the put is successful, a green check will appear in the middle of the screen.
+ :::image type="content" source="./media/profiler-vm/resource-explorer-sinks-config.png" alt-text="Screenshot of adding Application Insights Profiler sink.":::
- ![Send put request to apply changes][resourceexplorerput]
+ ```json
+ "WadCfg": {
+ "SinksConfig": {
+ "Sink": [
+ {
+ "name": "MyApplicationInsightsProfilerSink",
+ "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY"
+ }
+ ]
+ }
+ }
+ ```
+1. When you're done editing the config, press **PUT**.
+1. If the `PUT` request is successful, a green check appears in the middle of the screen.
+ :::image type="content" source="./media/profiler-vm/resource-explorer-put.png" alt-text="Screenshot of sending the put request to apply changes.":::
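Before pasting the sink fragment into Resource Explorer, you can sanity-check that it parses as JSON by wrapping it in an outer object (a quick local sketch; the key shown is the article's placeholder, not a real instrumentation key):

```python
import json

# The sink fragment from the steps above, wrapped in an outer object so it
# parses on its own (in the real config, "WadCfg" sits inside the extension
# settings). The instrumentation key is the article's placeholder.
fragment = """{
  "WadCfg": {
    "SinksConfig": {
      "Sink": [
        {
          "name": "MyApplicationInsightsProfilerSink",
          "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY"
        }
      ]
    }
  }
}"""

config = json.loads(fragment)
sink = config["WadCfg"]["SinksConfig"]["Sink"][0]
print(sink["name"])  # → MyApplicationInsightsProfilerSink
```

If `json.loads` raises an error, fix the fragment locally before editing the live extension settings.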
+ ## Can Profiler run on on-premises servers?
-We have no plan to support Application Insights Profiler for on-premises servers.
-## Next steps
-
-- Generate traffic to your application (for example, launch an [availability test](../app/monitor-web-app-availability.md)). Then, wait 10 to 15 minutes for traces to start to be sent to the Application Insights instance.
-- See [Profiler traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json) in the Azure portal.
-- For help with troubleshooting Profiler issues, see [Profiler troubleshooting](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).
+Currently, Application Insights Profiler is not supported for on-premises servers.
-[azureresourceexplorer]: ./media/profiler-vm/azure-resource-explorer.png
-[resourceexplorerput]: ./media/profiler-vm/resource-explorer-put.png
-[resourceexplorersinksconfig]: ./media/profiler-vm/resource-explorer-sinks-config.png
-[wadextension]: ./media/profiler-vm/wad-extension.png
+## Next steps
+Learn how to...
+> [!div class="nextstepaction"]
+> [Generate load and view Profiler traces](./profiler-data.md)
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 06/30/2022 Last updated : 07/18/2022 # Bicep CLI commands
The `decompile` command converts ARM template JSON to a Bicep file.
az bicep decompile --file main.json ```
+The command creates a file named _main.bicep_ in the same directory as _main.json_. If _main.bicep_ exists in the same directory, use the **--force** switch to overwrite the existing Bicep file.
+ For more information about using this command, see [Decompiling ARM template JSON to Bicep](decompile.md). ## install
azure-resource-manager Decompile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/decompile.md
Title: Decompile ARM template JSON to Bicep description: Describes commands for decompiling Azure Resource Manager templates to Bicep files. Previously updated : 04/12/2022 Last updated : 07/18/2022
To decompile ARM template JSON to Bicep, use:
az bicep decompile --file main.json ```
-The command creates a file named _main.bicep_ in the same directory as the ARM template.
+The command creates a file named _main.bicep_ in the same directory as _main.json_. If _main.bicep_ exists in the same directory, use the **--force** switch to overwrite the existing Bicep file.
> [!CAUTION] > Decompilation attempts to convert the file, but there is no guaranteed mapping from ARM template JSON to Bicep. You may need to fix warnings and errors in the generated Bicep file. Or, decompilation can fail if an accurate conversion isn't possible. To report any issues or inaccurate conversions, [create an issue](https://github.com/Azure/bicep/issues).
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-github-actions.md
description: In this quickstart, you learn how to deploy Bicep files by using Gi
Previously updated : 05/19/2022 Last updated : 07/18/2022
The output is a JSON object with the role assignment credentials that provide ac
# [Open ID Connect](#tab/openid)
-Open ID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is more complex process that offers hardened security.
+Open ID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
-1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
```azurecli-interactive az ad app create --display-name myApp ```
- This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
-1. Create a service principal. Replace the `$appID` with the appId from your JSON output.
+1. Create a service principal. Replace the `$appID` with the appId from your JSON output.
- This command generates JSON output with a different `objectId` and will be used in the next step. The new `objectId` is the `assignee-object-id`.
-
- Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+ This command generates JSON output with a different `objectId` and will be used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
```azurecli-interactive az ad sp create --id $appId ```
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
```azurecli-interactive az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/
Open ID Connect is an authentication method that uses short-lived tokens. Settin
* Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >` * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`. * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
-
+ ```azurecli
- az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
```
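To see how the `subject` claim is assembled for those scopes, here's a small sketch that builds the same request body in Python (the organization, repository, environment, and credential names are placeholders):

```python
import json

# Placeholder values; substitute your own organization/repository/environment.
org_repo = "contoso/my-repo"
environment = "production"

body = {
    "name": "github-actions-prod",  # hypothetical credential name
    "issuer": "https://token.actions.githubusercontent.com",
    # Environment-scoped subject: repo:<Organization/Repository>:environment:<Name>
    "subject": f"repo:{org_repo}:environment:{environment}",
    "description": "Federated credential for the production environment",
    "audiences": ["api://AzureADTokenExchange"],
}

print(json.dumps(body, indent=2))
```

The resulting JSON is the shape passed as the `--body` argument of the `az rest` call above; only the `subject` changes between branch, tag, environment, and pull-request scopes.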
-
+ To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
-
+ ## Configure the GitHub secrets
To create a workflow, take the following steps:
subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION }} resourceGroupName: ${{ secrets.AZURE_RG }} template: ./main.bicep
- parameters: storagePrefix=mystore
+ parameters: 'storagePrefix=mystore storageSKU=Standard_LRS'
failOnStdErr: false ```
-
+ Replace `mystore` with your own storage account name prefix. > [!NOTE]
To create a workflow, take the following steps:
- **name**: The name of the workflow. - **on**: The name of the GitHub events that triggers the workflow. The workflow is triggered when there's a push event on the main branch.
-
+ # [OpenID Connect](#tab/openid)
-
+ ```yml on: [push] name: Azure ARM
To create a workflow, take the following steps:
subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION }} resourceGroupName: ${{ secrets.AZURE_RG }} template: ./main.bicep
- parameters: storagePrefix=mystore
+ parameters: 'storagePrefix=mystore storageSKU=Standard_LRS'
failOnStdErr: false ```
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
description: Create parameter file for passing in values during deployment of a
Previously updated : 06/16/2021 Last updated : 07/18/2022 # Create Bicep parameter file
Check the Bicep file for parameters with a default value. If a parameter has a d
} ```
+> [!NOTE]
+> For inline comments, you can use either `//` or `/* ... */`. In Visual Studio Code, save the parameter file with the **JSONC** file type; otherwise, you'll get the error message "Comments not permitted in JSON".
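To see why the JSONC file type matters, here's a quick sketch showing a standard JSON parser rejecting a commented parameter file (the file content is illustrative):

```python
import json

# A parameter-file snippet with an inline comment (illustrative).
jsonc_text = """{
  // storagePrefix must be 11 characters or fewer
  "storagePrefix": { "value": "mystore" }
}"""

try:
    json.loads(jsonc_text)
    parsed_as_plain_json = True
except json.JSONDecodeError:
    # Plain JSON has no comment syntax, so parsing fails --
    # which is why VS Code's JSON mode flags "Comments not permitted in JSON".
    parsed_as_plain_json = False

print(parsed_as_plain_json)  # → False
```

Saving the file as JSONC tells the editor to allow comments; tools that strictly require JSON will still reject it.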
+ Check the Bicep file's allowed values and any restrictions, such as maximum length. Those values specify the range of values you can provide for a parameter. In this example, `storagePrefix` can have a maximum of 11 characters and `storageAccountType` must specify an allowed value. ```json
azure-resource-manager Microsoft Compute Credentialscombo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-compute-credentialscombo.md
Title: CredentialsCombo UI element description: Describes the Microsoft.Compute.CredentialsCombo UI element for Azure portal. -- Previously updated : 09/29/2018 -+ Last updated : 07/18/2022 + # Microsoft.Compute.CredentialsCombo UI element
-A group of controls with built-in validation for Windows and Linux passwords and SSH public keys.
+A group of controls with built-in validation for Windows passwords, and Linux passwords or SSH public keys.
## UI sample For Windows, users see:
-![Microsoft.Compute.CredentialsCombo Windows](./media/managed-application-elements/microsoft-compute-credentialscombo-windows.png)
For Linux with password selected, users see:
-![Microsoft.Compute.CredentialsCombo Linux password](./media/managed-application-elements/microsoft-compute-credentialscombo-linux-password.png)
For Linux with SSH public key selected, users see:
-![Microsoft.Compute.CredentialsCombo Linux key](./media/managed-application-elements/microsoft-compute-credentialscombo-linux-key.png)
## Schema
If `osPlatform` is **Linux** and the user provided an SSH public key, the contro
- If `constraints.required` is set to **true**, then the password or SSH public key text boxes must have values to validate successfully. The default value is **true**. - If `options.hideConfirmation` is set to **true**, then the second text box for confirming the user's password is hidden. The default value is **false**. - If `options.hidePassword` is set to **true**, then the option to use password authentication is hidden. It can be used only when `osPlatform` is **Linux**. The default value is **false**.-- Additional constraints on the allowed passwords can be implemented by using the `customPasswordRegex` property. The string in `customValidationMessage` is displayed when a password fails custom validation. The default value for both properties is **null**.
+- More constraints on the allowed passwords can be implemented by using the `customPasswordRegex` property. The string in `customValidationMessage` is displayed when a password fails custom validation. The default value for both properties is **null**. The schema shows an example of each property.
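As an illustration of what a `customPasswordRegex` might look like, the pattern below (a hypothetical example, not a default of the element) requires at least 12 characters, including at least one letter and one digit:

```python
import re

# Hypothetical custom pattern: 12+ characters, at least one letter, one digit.
custom_password_regex = r"^(?=.*[A-Za-z])(?=.*\d)[a-zA-Z\d]{12,}$"

def passes_custom_validation(password: str) -> bool:
    """Return True when the password satisfies the custom regex."""
    return re.match(custom_password_regex, password) is not None

print(passes_custom_validation("contoso12345"))  # → True
print(passes_custom_validation("short1"))        # → False
```

In the element's JSON, the same pattern would be written as a string with escaped backslashes (for example, `\\d`), and `customValidationMessage` would describe these requirements to the user.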
## Next steps
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/syntax.md
Title: Template structure and syntax description: Describes the structure and properties of Azure Resource Manager templates (ARM templates) using declarative JSON syntax. Previously updated : 12/01/2021 Last updated : 07/18/2022 # Understand the structure and syntax of ARM templates
You have a few options for adding comments and metadata to your template.
### Comments
-For inline comments, you can use either `//` or `/* ... */`.
+For inline comments, you can use either `//` or `/* ... */`. In Visual Studio Code, save files that contain comments with the **JSON with comments (JSONC)** file type; otherwise, you'll get the error message "Comments not permitted in JSON".
> [!NOTE] >
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Azure Video Indexer analyzes the video and audio content by running 30+ AI model
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-overview/model-chart.png" alt-text="Diagram of Azure Video Indexer flow.":::
-To start extracting insights with Azure Video Indexer, you need to [create an account](connect-to-azure.md) and upload videos, see the [how can i get started](#how-can-i-get-started-with-azure-video-indexer) section below.
+To start extracting insights with Azure Video Indexer, you need to [create an account](connect-to-azure.md) and upload videos. See the [how can I get started](#how-can-i-get-started-with-azure-video-indexer) section below.
## Compliance, Privacy and Security
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-built-in-monitor.md
Title: Monitor Azure Backup protected workloads description: In this article, learn about the monitoring and notification capabilities for Azure Backup workloads using the Azure portal. Previously updated : 07/13/2022 Last updated : 07/19/2022 ms.assetid: 86ebeb03-f5fa-4794-8a5f-aa5cbbf68a81
Jobs from System Center Data Protection Manager (SC-DPM), Microsoft Azure Backup
>- Azure workloads such as SQL and SAP HANA backups within Azure VMs have a huge number of backup jobs. For example, log backups can run every 15 minutes. So for such DB workloads, only user-triggered operations are displayed. Scheduled backup operations aren't displayed.
>- In Backup center, you can view jobs for up to the last 14 days. If you want to view jobs for a longer duration, go to the individual Recovery Services vaults and select the **Backup Jobs** tab. For jobs older than 6 months, we recommend that you use Log Analytics and/or [Backup Reports](configure-reports.md) to reliably and efficiently query older jobs.
-## Backup Alerts in Recovery Services vault
-
-Alerts are primarily scenarios where users are notified so that they can take relevant action. The **Backup Alerts** section shows alerts generated by Azure Backup service. These alerts are defined by the service and user can't custom create any alerts.
-
-### Alert scenarios
-
-The following scenarios are defined by service as alertable scenarios.
-
-- Backup/Restore failures
-- Backup succeeded with warnings for Microsoft Azure Recovery Services (MARS) agent
-- Stop protection with retain data/Stop protection with delete data
-- Soft-delete functionality disabled for vault
-- [Unsupported backup type for database workloads](./backup-sql-server-azure-troubleshoot.md#backup-type-unsupported)
-
-### Alerts from the following Azure Backup solutions are shown here
-
-- Azure VM backups
-- Azure File backups
-- Azure workload backups such as SQL, SAP HANA
-- Microsoft Azure Recovery Services (MARS) agent
-
-> [!NOTE]
-> Alerts from System Center Data Protection Manager (SC-DPM), Microsoft Azure Backup Server (MABS) aren't displayed here.
-
-### Consolidated Alerts
-
-For Azure workload backup solutions such as SQL and SAP HANA, log backups can be generated very frequently (up to every 15 minutes according to the policy). So it's also possible that the log backup failures are also very frequent (up to every 15 minutes). In this scenario, the end user will be overwhelmed if an alert is raised for each failure occurrence. So an alert is sent for the first occurrence and if the later failures are because of the same root cause, then further alerts aren't generated. The first alert is updated with the failure count. But if the alert is inactivated by the user, the next occurrence will trigger another alert and this will be treated as the first alert for that occurrence. This is how Azure Backup performs alert consolidation for SQL and SAP HANA backups. On-demand backup jobs are not consolidated.
-
-### Exceptions when an alert is not raised
-
-There are few exceptions when an alert isn't raised on a failure. They are:
-
-- User explicitly canceled the running job
-- The job fails because another backup job is in progress (nothing to act on here since we just have to wait for the previous job to finish)
-- The VM backup job fails because the backed-up Azure VM no longer exists
-- [Consolidated Alerts](#consolidated-alerts)
-
-The exceptions above are designed from the understanding that the result of these operations (primarily user triggered) shows up immediately on portal/PS/CLI clients. So the user is immediately aware and doesn't need a notification.
-
-### Alert types
-
-Based on alert severity, alerts can be defined in three types:
-
-- **Critical**: In principle, any backup or recovery failure (scheduled or user triggered) would lead to generation of an alert and would be shown as a Critical alert and also destructive operations such as delete backup.
-- **Warning**: If the backup operation succeeds but with few warnings, they're listed as Warning alerts. Warning alerts are currently available only for Azure Backup Agent backups.
-- **Informational**: Currently, no informational alert is generated by Azure Backup service.
-
-## Notification for Backup Alerts
-
-> [!NOTE]
-> Configuration of notification can be done only through the Azure portal. PS/CLI/REST API/Azure Resource Manager Template support isn't supported.
-
-Once an alert is raised, users are notified. Azure Backup provides an inbuilt notification mechanism via e-mail. One can specify individual email addresses or distribution lists to be notified when an alert is generated. You can also choose whether to get notified for each individual alert or to group them in an hourly digest and then get notified.
-
-![RS vault inbuilt email notification screenshot](media/backup-azure-monitoring-laworkspace/rs-vault-inbuiltnotification.png)
-
-When notification is configured, you'll receive a welcome or introductory email. This confirms that Azure Backup can send emails to these addresses when an alert is raised.<br>
-
-If the frequency was set to an hourly digest and an alert was raised and resolved within an hour, it won't be a part of the upcoming hourly digest.
-
-> [!NOTE]
->
-> - If a destructive operation such as **stop protection with delete data** is performed, an alert is raised and an email is sent to subscription owners, admins, and co-admins even if notifications aren't configured for the Recovery Services vault.
-> - To configure notification for successful jobs use [Log Analytics](backup-azure-monitoring-use-azuremonitor.md#using-log-analytics-workspace).
-
-## Inactivating alerts
-
-To inactivate/resolve an active alert, you can select the list item corresponding to the alert you wish to inactivate. This opens up a screen that displays detailed information about the alert, with an **Inactivate** button on the top. Selecting this button will change the status of the alert to **Inactive**. You may also inactivate an alert by right-clicking on the list item corresponding to that alert and selecting **Inactivate**.
-
-![Screenshot for Backup center alert inactivation](media/backup-azure-monitoring-laworkspace/vault-alert-inactivate.png)
- ## Azure Monitor alerts for Azure Backup (preview)
-Azure Backup also provides alerts via Azure Monitor, to enable users to have a consistent experience for alert management across different Azure services, including backup. With Azure Monitor alerts, you can route alerts to any notification channel supported by Azure Monitor such as email, ITSM, Webhook, Logic App and so on.
+Azure Backup also provides alerts via Azure Monitor, which enables you to have a consistent experience for alert management across different Azure services, including Azure Backup. With Azure Monitor alerts, you can route alerts to any notification channel supported by Azure Monitor, such as email, ITSM, Webhook, Logic App, and so on.
-Currently, Azure Backup has made two main types of built-in alerts available:
+Currently, Azure Backup provides two main types of built-in alerts:
+
+* **Security Alerts**: For scenarios such as deletion of backup data or disabling of soft-delete functionality for a vault, security alerts (of severity Sev 0) are fired and displayed in the Azure portal, or consumed via other clients (PowerShell, CLI, and REST API). Security alerts are generated by default and can't be turned off. However, you can control the scenarios for which the notifications (for example, emails) should be fired. For more information on how to configure notifications, see [Action rules](../azure-monitor/alerts/alerts-action-rules.md).
+* **Job Failure Alerts**: For scenarios such as backup failure and restore failure, Azure Backup provides built-in alerts via Azure Monitor (of severity Sev 1). Unlike security alerts, you can choose to turn off Azure Monitor alerts for job failure scenarios. For example, if you've already configured custom alert rules for job failures via Log Analytics, you might not need built-in alerts to be fired for every job failure. By default, alerts for job failures are turned on. For more information, see the [section on turning on alerts for these scenarios](#turning-on-azure-monitor-alerts-for-job-failure-scenarios).
-* **Security Alerts**: For scenarios, such as deletion of backup data, or disabling of soft-delete functionality for a vault, security alerts (of severity Sev 0) are fired, and displayed in the Azure portal or consumed via other clients (PowerShell, CLI and REST API). Security alerts are generated by default and can't be turned off. However, you can control the scenarios for which the notifications (for example, emails) should be fired. For more information on how to configure notifications, see [Action rules](../azure-monitor/alerts/alerts-action-rules.md).
-* **Job Failure Alerts**: For scenarios, such as backup failure and restore failure, Azure Backup provides built-in alerts via Azure Monitor (of Severity Sev 1). Unlike security alerts, you can choose to turn off Azure Monitor alerts for job failure scenarios. For example, if you have already configured custom alert rules for job failures via Log Analytics, and don't need built-in alerts to be fired for every job failure. By default, alerts for job failures are turned off. Refer to the [section on turning on alerts for these scenarios](#turning-on-azure-monitor-alerts-for-job-failure-scenarios) for more details.
-
The following table summarizes the different backup alerts currently available (in preview) via Azure Monitor and the supported workload/vault types: | **Alert Category** | **Alert Name** | **Supported workload types / vault types** | **Description** | | | - | | -- |
-| Security | Delete Backup Data | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - DPM <br><br> - Azure Backup Server <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | Stop protection with delete data alert is only generated if soft-delete functionality is enabled for the vault, that is, if soft-delete feature is disabled for a vault, then a single alert is sent to notify the user that soft-delete has been disabled. Subsequent deletion of backup data of any item does not raise an alert. |
-| Security | Upcoming Purge | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM | For all workloads which support soft-delete, this alert is fired when the backup data for an item is 2 days away from being permanently purged by the Azure Backup service |
+| Security | Delete Backup Data | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - DPM <br><br> - Azure Backup Server <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | This alert is fired when you stop protection and delete backup data. <br><br> **Note** <br> If you disable the soft-delete feature for the vault, the Delete Backup Data alert isn't received. |
+| Security | Upcoming Purge | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM | For all workloads that support soft-delete, this alert is fired when the backup data for an item is 2 days away from being permanently purged by the Azure Backup service. |
| Security | Purge Complete | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM | This alert is fired when the soft-deleted backup data for an item has been permanently purged by the Azure Backup service. |
-| Security | Soft Delete Disabled for Vault | Recovery Services vaults | This alert is fired when the soft-deleted backup data for an item has been permanently deleted by the Azure Backup service |
-| Jobs | Backup Failure | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - Azure Files <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Managed Disks | This alert is fired when a backup job failure has occurred. By default, alerts for backup failures are turned off. Refer to the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios) for more details. |
-| Jobs | Restore Failure | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - Azure Files <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | This alert is fired when a restore job failure has occurred. By default, alerts for restore failures are turned off. Refer to the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios) for more details. |
-
+| Security | Soft Delete Disabled for Vault | Recovery Services vaults | This alert is fired when the soft-delete functionality has been disabled for a vault. |
+| Jobs | Backup Failure | - Azure Virtual Machine <br><br> - SQL in Azure VM <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - Azure Files <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Managed Disks | This alert is fired when a backup job failure has occurred. By default, alerts for backup failures are turned on. For more information, see the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios). |
+| Jobs | Restore Failure | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - Azure Files <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | This alert is fired when a restore job failure has occurred. By default, alerts for restore failures are turned on. For more information, see the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios). |
+| Jobs | Unsupported backup type | - SQL in Azure VM <br><br> - SAP HANA in Azure VM | This alert is fired when the current settings for a database don't support certain backup types present in the policy. By default, alerts for the unsupported backup type scenario are turned on. For more information, see the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios). |
+| Jobs | Workload extension unhealthy | - SQL in Azure VM <br><br> - SAP HANA in Azure VM | This alert is fired when the Azure Backup workload extension for database backups is in an unhealthy state that might prevent future backups from succeeding. By default, alerts for the workload extension unhealthy scenario are turned on. For more information, see the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios). |
+
### Turning on Azure Monitor alerts for job failure scenarios
-To opt in to Azure Monitor alerts for backup failure and restore failure scenarios, follow the below steps:
+To opt in to Azure Monitor alerts for backup failure and restore failure scenarios, follow these steps:
**Choose a vault type**:

# [Recovery Services vaults](#tab/recovery-services-vaults)
-1. Go to the Azure portal and search for **Preview Features**.
+Built-in Azure Monitor alerts are generated for job failures by default. If you want to turn off alerts for these scenarios, you can edit the monitoring settings property of the vault accordingly.
- :::image type="content" source="media/backup-azure-monitoring-laworkspace/portal-preview-features.png" alt-text="Screenshot for viewing preview features in portal.":::
+To manage monitoring settings for a Backup vault, follow these steps:
-1. You can view the list of all preview features that are available for you to opt in to.
+1. Go to the vault and select **Properties**.
- To receive job failure alerts for workloads backed up to Recovery Services vaults, select the flag named **EnableAzureBackupJobFailureAlertsToAzureMonitor** corresponding to *Microsoft.RecoveryServices* provider (column 3).
+1. Locate the **Monitoring Settings** vault property and select **Update**.
- :::image type="content" source="media/backup-azure-monitoring-laworkspace/alert-preview-feature-flags.png" alt-text="Screenshot for Alerts preview registration.":::
+ :::image type="content" source="./media/backup-azure-monitoring-laworkspace/recovery-services-vault-monitoring-settings.png" alt-text="Screenshot showing how to update monitoring settings in Recovery Services vault.":::
-1. Click **Register** to enable this feature for your subscription.
+1. In the context pane, select the appropriate options to enable/disable built-in Azure Monitor alerts for job failures depending on your requirement.
- > [!NOTE]
- > It may take up to 24 hours for the registration to take effect. To enable this feature for multiple subscriptions, repeat the above process by selecting the relevant subscription at the top of the screen. We also recommend to re-register the preview flag if a new resource has been created in the subscription after the initial registration to continue receiving alerts.
+ :::image type="content" source="./media/backup-azure-monitoring-laworkspace/recovery-services-vault-job-failure-alert-setting.png" alt-text="Screenshot showing options to enable or disable built-in Azure Monitoring alerts.":::
-1. As a best practice, we also recommend you to register the resource provider to ensure that the feature registration information gets synced with the Azure Backup service as expected.
+1. We also recommend that you select the **Use only Azure Monitor alerts** checkbox.
- To register the resource provider, run the following PowerShell command in the subscription for which you have registered the feature flag.
+ By selecting this option, you are consenting to receive backup alerts only via Azure Monitor and you will stop receiving alerts from the older classic alerts solution. [Review the key differences between classic alerts and built-in Azure Monitor alerts](./move-to-azure-monitor-alerts.md).
- ```powershell
- Register-AzResourceProvider -ProviderNamespace <ProviderNamespace>
- ```
+ :::image type="content" source="./media/backup-azure-monitoring-laworkspace/recovery-services-vault-opt-out-classic-alerts.png" alt-text="Screenshot showing the option to enable receiving backup alerts.":::
- To receive alerts for Recovery Services vaults, use the value *Microsoft.RecoveryServices* for the *ProviderNamespace* parameter.
+1. Select **Update** to save the setting for the vault.
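The setting toggled by these steps corresponds to the vault's `monitoringSettings` property. As a rough sketch (the helper function below is hypothetical and not part of any Azure SDK; the property shape mirrors the `monitoringSettings` JSON documented for Recovery Services vaults), the equivalent PATCH body could be assembled like this:

```python
import json

def build_monitoring_settings_patch(job_failure_alerts_enabled: bool,
                                    classic_alerts_enabled: bool) -> str:
    """Assemble a PATCH body for a vault's monitoringSettings property.

    Hypothetical helper for illustration only; verify the property names
    against the Recovery Services vault REST API reference before use.
    """
    body = {
        "properties": {
            "monitoringSettings": {
                # Built-in Azure Monitor alerts for all job failures.
                "azureMonitorAlertSettings": {
                    "alertsForAllJobFailures":
                        "Enabled" if job_failure_alerts_enabled else "Disabled"
                },
                # The older classic alerts solution.
                "classicAlertsForCriticalOperations":
                    "Enabled" if classic_alerts_enabled else "Disabled",
            }
        }
    }
    return json.dumps(body, indent=2)

# Turn on Azure Monitor job-failure alerts and opt out of classic alerts.
print(build_monitoring_settings_patch(True, False))
```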
# [Backup vaults](#tab/backup-vaults)
For Backup vaults, you no longer need to use a feature flag to opt in to alerts
To manage monitoring settings for a Backup vault, follow these steps:
-1. Go to the vault and click **Properties**.
+1. Go to the vault and select **Properties**.
-1. Locate the **Monitoring Settings** vault property and click **Update**.
+1. Locate the **Monitoring Settings** vault property and select **Update**.
:::image type="content" source="media/backup-azure-monitoring-laworkspace/monitoring-settings-backup-vault.png" alt-text="Screenshot for monitoring settings in backup vault.":::

1. In the context pane, select the appropriate options to enable/disable built-in Azure Monitor alerts for job failures depending on your requirement.
-1. Click **Update** to save the setting for the vault.
+1. Select **Update** to save the setting for the vault.
:::image type="content" source="media/backup-azure-monitoring-laworkspace/job-failure-alert-setting-inline.png" alt-text="Screenshot for updating Azure Monitor alert settings in backup vault." lightbox="media/backup-azure-monitoring-laworkspace/job-failure-alert-setting-expanded.png":::
### Viewing fired alerts in the Azure portal
-Once an alert is fired for a vault, you can go to Backup center to view the alert in the Azure portal. On the **Overview** tab, you can see a summary of active alerts split by severity. There're two kinds of alerts displayed:
+Once an alert is fired for a vault, you can go to Backup center to view the alert in the Azure portal. On the **Overview** tab, you can see a summary of active alerts split by severity. There are two types of alerts displayed:
-* **Datasource Alerts**: Alerts that are tied to a specific datasource being backed up (for example, back up or restore failure for a VM, deleting backup data for a database, and so on) appear under the **Datasource Alerts** section.
-* **Global Alerts**: Alerts that are not tied to a specific datasource (for example, disabling soft-delete functionality for a vault) appear under the **Global Alerts** section.
+* **Datasource Alerts**: Alerts that are tied to a specific datasource being backed up (for example, backup or restore failure for a VM, deleting backup data for a database, and so on) appear under the **Datasource Alerts** section.
+* **Global Alerts**: Alerts that aren't tied to a specific datasource (for example, disabling soft-delete functionality for a vault) appear under the **Global Alerts** section.
-Each of the above types of alerts is further split into **Security** and **Configured** alerts. Currently, Security alerts include the scenarios of deleting backup data, or disabling soft-delete for vault (for the applicable workloads as detailed in the above section). Configured alerts include backup failure and restore failure since these alerts are only fired after registering the feature in the preview portal.
+Each of the above types of alerts is further split into **Security** and **Configured** alerts. Currently, Security alerts include the scenarios of deleting backup data, or disabling soft-delete for vault (for the applicable workloads as detailed in the above section). Configured alerts include backup failure and restore failure because these alerts are only fired after registering the feature in the preview portal.
:::image type="content" source="media/backup-azure-monitoring-laworkspace/backup-center-azure-monitor-alerts.png" alt-text="Screenshot for viewing alerts in Backup center.":::
-Clicking any of the numbers (or on the **Alerts** menu item) opens up a list of all active alerts fired with the relevant filters applied. You can filter on a range of properties, such as subscription, resource group, vault, severity, state, and so on. You can click any of the alerts to get more details about the alert, such as the affected datasource, alert description and recommended action, and so on.
+Selecting any number (or the **Alerts** menu item) opens a list of all active alerts fired with the relevant filters applied. You can filter on a range of properties, such as subscription, resource group, vault, severity, state, and so on. You can select any alert to view more details about the alert, such as the affected datasource, alert description and recommended action, and so on.
:::image type="content" source="media/backup-azure-monitoring-laworkspace/backup-center-alert-details.png" alt-text="Screenshot for viewing details of the alert.":::
-You can change the state of an alert to **Acknowledged** or **Closed** by clicking on **Change Alert State**.
+You can change the state of an alert to **Acknowledged** or **Closed** by selecting **Change Alert State**.
:::image type="content" source="media/backup-azure-monitoring-laworkspace/backup-center-change-alert-state.png" alt-text="Screenshot for changing state of the alert.":::

> [!NOTE]
-> - In Backup center, only alerts for Azure-based workloads are displayed currently. To view alerts for on-premises resources, navigate to the Recovery Services vault and click the **Alerts** menu item.
-> - Only Azure Monitor alerts are displayed in Backup center. Alerts raised by the older alerting solution (accessed via the [Backup Alerts](#backup-alerts-in-recovery-services-vault) tab in Recovery Services vault) are not displayed in Backup center. For more information about Azure Monitor alerts, see [Overview of alerts in Azure](../azure-monitor/alerts/alerts-overview.md).
-> - Currently, in case of blob restore alerts, alerts appear under datasource alerts only if you select both the dimensions - *datasourceId* and *datasourceType* while creating the alert rule. If any dimensions aren't selected, the alerts appear under global alerts.
+> - In Backup center, only alerts for Azure-based workloads currently appear. To view alerts for on-premises resources, go to the Recovery Services vault and select the **Alerts** menu item.
+> - Only Azure Monitor alerts appear in Backup center. Alerts raised by the older alerting solution (accessed via the [Backup Alerts](#backup-alerts-in-recovery-services-vault) tab in Recovery Services vault) don't appear in Backup center. For more information about Azure Monitor alerts, see [Overview of alerts in Azure](../azure-monitor/alerts/alerts-overview.md).
+> - Currently, for blob restore alerts, alerts appear under datasource alerts only if you select both the dimensions - *datasourceId* and *datasourceType* while creating the alert rule. If any dimensions aren't selected, the alerts appear under global alerts.
### Configuring notifications for alerts
-To configure notifications for Azure Monitor alerts, create an [alert processing rule](../azure-monitor/alerts/alerts-action-rules.md). To create an alert processing rule (earlier called _action rule_) to send email notifications to a given email address, follow these steps. Also, follow these steps to routing these alerts to other notification channels, such as ITSM, webhook, logic app, and so on.
+To configure notifications for Azure Monitor alerts, create an [alert processing rule](../azure-monitor/alerts/alerts-action-rules.md). To create an alert processing rule (earlier called *action rule*) to send email notifications to a given email address, follow these steps. Also, follow these steps to route these alerts to other notification channels, such as ITSM, webhook, logic app, and so on.
1. Go to **Backup center** in the Azure portal.
-1. Click **Alerts (Preview)** from the menu and select **Alert processing rules (preview)**.
+1. Select **Alerts (Preview)** from the menu and select **Alert processing rules (preview)**.
:::image type="content" source="./media/backup-azure-monitoring-laworkspace/backup-center-manage-alert-processing-rules-inline.png" alt-text="Screenshot for Manage Actions in Backup center." lightbox="./media/backup-azure-monitoring-laworkspace/backup-center-manage-alert-processing-rules-expanded.png":::
-1. Click **Create**.
+1. Select **Create**.
:::image type="content" source="./media/backup-azure-monitoring-laworkspace/backup-center-create-alert-processing-rule.png" alt-text="Screenshot for creating a new action rule.":::
To configure notifications for Azure Monitor alerts, create an [alert processing
:::image type="content" source="media/backup-azure-monitoring-laworkspace/azure-monitor-email.png" alt-text="Screenshot for setting notification properties.":::
-1. Click **Review+Create** and then **Create** to deploy the action group.
+1. Select **Review+Create** -> **Create** to deploy the action group.
-8. Save the action rule.
+1. Save the action rule.
[Learn more](../azure-monitor/alerts/alerts-action-rules.md) about Action Rules in Azure Monitor.
+## Backup alerts in Recovery Services vault
+
+> [!IMPORTANT]
+> This section describes an older alerting solution (referred to as classic alerts). We recommend you to switch to using Azure Monitor based alerts as it offers multiple benefits. For more information on how to switch, see [Switch Azure Monitor Based alerts](./move-to-azure-monitor-alerts.md).
+
+Alerts are primarily scenarios where you're notified so that you can take relevant action. The **Backup Alerts** section shows alerts that the Azure Backup service generates. These alerts are defined by the service, and you can't create custom alerts.
+
+### Alert scenarios
+
+The service defines the following scenarios as alertable:
+
+- Backup/Restore failures
+- Backup succeeded with warnings for Microsoft Azure Recovery Services (MARS) agent
+- Stop protection with delete data
+- Soft-delete functionality disabled for vault
+- [Unsupported backup type for database workloads](./backup-sql-server-azure-troubleshoot.md#backup-type-unsupported)
+- Workload extension health issues for database backup
+
+### Alerts from the various Azure Backup solutions
+
+Alerts are available from the following Azure Backup solutions:
+
+- Azure VM backups
+- Azure File backups
+- Azure workload backups such as SQL, SAP HANA
+- Microsoft Azure Recovery Services (MARS) agent
+
+> [!NOTE]
+>- Alerts from System Center Data Protection Manager (SC-DPM), Microsoft Azure Backup Server (MABS) aren't displayed here.
+>- Stop protection with delete data alerts are currently not sent for Azure Files backup.
+>- The stop protection with delete data alert is only generated if soft-delete functionality is enabled for the vault. That is, if the soft-delete feature is disabled for a vault, a single alert is sent to notify you that soft-delete has been disabled; subsequent deletion of the backup data of any item doesn't raise an alert.
+
+### Consolidated alerts
+
+For Azure workload backup solutions, such as SQL and SAP HANA, log backups can be generated frequently (up to every 15 minutes, according to the policy), so log backup failures can also occur as often as every 15 minutes. In this scenario, you would be overwhelmed if an alert were raised for each failure occurrence.
+
+So, an alert is sent for the first occurrence, and if the later failures are because of the same root cause, then further alerts aren't generated. The first alert is updated with the failure count. But if you've inactivated the alert, the next occurrence will trigger another alert and this will be treated as the first alert for that occurrence. This is how Azure Backup performs alert consolidation for SQL and SAP HANA backups.
+
+On-demand backup jobs aren't consolidated.
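The consolidation behavior described above can be modeled roughly as follows (a simplified sketch of the described behavior, not the actual service logic; class and method names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class ConsolidatedAlert:
    root_cause: str
    occurrence_count: int = 1
    active: bool = True

class AlertConsolidator:
    """Simplified model of log-backup alert consolidation for SQL/SAP HANA."""

    def __init__(self):
        self._alerts = {}  # root cause -> ConsolidatedAlert

    def record_failure(self, root_cause: str) -> ConsolidatedAlert:
        alert = self._alerts.get(root_cause)
        if alert and alert.active:
            # Same root cause and the alert is still active:
            # no new alert; only the failure count is updated.
            alert.occurrence_count += 1
            return alert
        # First failure, or the prior alert was inactivated:
        # raise a fresh alert treated as the first for this occurrence.
        alert = ConsolidatedAlert(root_cause)
        self._alerts[root_cause] = alert
        return alert

    def inactivate(self, root_cause: str):
        if root_cause in self._alerts:
            self._alerts[root_cause].active = False

c = AlertConsolidator()
a1 = c.record_failure("log-backup-timeout")
a2 = c.record_failure("log-backup-timeout")   # consolidated into a1
c.inactivate("log-backup-timeout")
a3 = c.record_failure("log-backup-timeout")   # new first alert after inactivation
```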
+
+### Exceptions when an alert isn't raised
+
+There are a few exceptions when an alert isn't raised on a failure:
+
+- You've explicitly canceled the running job.
+- The job fails because another backup job is in progress (no action needed; wait for the previous job to finish).
+- The VM backup job fails because the backed-up Azure VM no longer exists.
+- [Consolidated Alerts](#consolidated-alerts)
+
+These exceptions reflect the fact that the results of these operations (which are primarily user triggered) show up immediately in portal, PowerShell, or CLI clients. So, you're immediately aware of the outcome and don't need a notification.
+
+### Alert types
+
+Based on alert severity, you can define three types of alerts:
+
+- **Critical**: In principle, any backup or recovery failure (scheduled or user triggered) would lead to generation of an alert and would be shown as a *Critical* alert. The alert is also generated for destructive operations, such as delete backup.
+- **Warning**: If the backup operation succeeds but with a few warnings, they're listed as *Warning* alerts. Warning alerts are currently available only for Azure Backup Agent backups.
+- **Informational**: Currently, no informational alerts are generated by the Azure Backup service.
+
+## Notification for backup alerts
+
+> [!NOTE]
+> Configuration of notifications can be done only through the Azure portal. PowerShell, CLI, REST API, and Azure Resource Manager template clients aren't currently supported.
+
+Once an alert is raised, you're notified. Azure Backup provides a built-in notification mechanism via email. You can specify individual email addresses or distribution lists to be notified when an alert is generated. You can also choose whether to be notified for each individual alert or to have alerts grouped in an hourly digest.
+
+When notification is configured, you'll receive a welcome or introductory email. This confirms that Azure Backup can send emails to these addresses when an alert is raised.
+
+If the frequency was set to an hourly digest, and an alert was raised and resolved within an hour, it won't be a part of the upcoming hourly digest.
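The hourly-digest exclusion described above can be illustrated with a small sketch (the function, timestamps, and tuple shape are invented for illustration): an alert that is both raised and resolved inside the digest window is left out of that digest.

```python
from datetime import datetime, timedelta

def alerts_for_digest(alerts, window_start, window_end):
    """Return the alerts belonging in an hourly digest: anything raised in
    the window that was NOT also resolved before the window closes.
    A sketch of the described behavior, not the actual service logic."""
    included = []
    for raised_at, resolved_at in alerts:
        raised_in_window = window_start <= raised_at < window_end
        resolved_before_close = resolved_at is not None and resolved_at < window_end
        if raised_in_window and not resolved_before_close:
            included.append((raised_at, resolved_at))
    return included

start = datetime(2022, 7, 19, 10, 0)
end = start + timedelta(hours=1)
alerts = [
    # Raised and resolved within the hour: excluded from the digest.
    (start + timedelta(minutes=5), start + timedelta(minutes=20)),
    # Still active when the digest is sent: included.
    (start + timedelta(minutes=30), None),
]
print(len(alerts_for_digest(alerts, start, end)))  # prints 1
```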
+
+> [!NOTE]
+>- If a destructive operation, such as **stop protection with delete data** is performed, an alert is raised and an email is sent to subscription owners, admins, and co-admins even if notifications aren't configured for the Recovery Services vault.
+>- To configure notification for successful jobs, use [Log Analytics](backup-azure-monitoring-use-azuremonitor.md#using-log-analytics-workspace).
+
+## Inactivating alerts
+
+To inactivate/resolve an active alert, you can select the list item corresponding to the alert you wish to inactivate. This opens up a screen that shows detailed information about the alert, with an **Inactivate** button at the top. Selecting this button will change the status of the alert to **Inactive**. You may also inactivate an alert by right-clicking the list item corresponding to that alert and selecting **Inactivate**.
+
## Next steps

[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
backup Move To Azure Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/move-to-azure-monitor-alerts.md
+
+ Title: Switch to Azure Monitor based alerts for Azure Backup
+description: This article describes the new and improved alerting capabilities via Azure Monitor and the process to configure Azure Monitor.
+ Last updated : 07/19/2022
+# Switch to Azure Monitor based alerts for Azure Backup
+
+Azure Backup now provides new and improved alerting capabilities via Azure Monitor. If you're using the older [classic alerts solution](backup-azure-monitoring-built-in-monitor.md?tabs=recovery-services-vaults#backup-alerts-in-recovery-services-vault) for Recovery Services vaults, we recommend you move to Azure Monitor alerts.
+
+## Key benefits of Azure Monitor alerts
+
+- **Configure notifications to a wide range of notification channels**: Azure Monitor supports a wide range of notification channels, such as email, ITSM, webhooks, logic apps, and so on. You can configure notifications for backup alerts to any of these channels without investing much time in creating custom integrations.
+
+- **Enable notifications for selective scenarios**: With Azure Monitor alerts, you can choose the scenarios to be notified about. Also, you can enable notifications for test subscriptions.
+
+- **Monitor alerts at-scale via Backup center**: In addition to enabling you to manage alerts from the Azure Monitor dashboard, Azure Backup also provides an alert management experience tailored to backups via Backup center. This allows you to filter alerts by backup-specific properties, such as workload type and vault location, and gives you quick visibility into the active backup security alerts that need attention.
+
+- **Manage alerts and notifications programmatically**: You can use Azure Monitor's REST APIs to manage alerts and notifications via non-portal clients as well.
+
+- **Consistent alert management for multiple Azure services, including backup**: Azure Monitor is the native service for monitoring resources across Azure. With the integration of Azure Backup with Azure Monitor, you can manage backup alerts in the same way as alerts for other Azure services, without requiring a separate learning curve.
+
+## Supported alerting solutions
+
+Azure Backup now supports different kinds of Azure Monitor based alerting solutions. You can use a combination of any of these based on your specific requirements. Some of these solutions are:
+
+- **Built-in Azure Monitor alerts**: Azure Backup automatically generates built-in alerts for certain default scenarios, such as deletion of backup data, disabling of soft-delete, backup failures, restore failures, and so on. You can view these alerts out of the box via Backup center. To configure notifications for these alerts (for example, emails), you can use Azure Monitor's *Alert Processing Rules* and Action groups to route alerts to a wide range of notification channels.
+- **Metric alerts**: You can write custom alert rules using Azure Monitor metrics to monitor the health of your backup items across different KPIs.
+- **Log Alerts**: If you have scenarios where an alert needs to be generated based on custom logic, you can use Log Analytics based alerts, provided you've configured your vaults to send diagnostics data to a Log Analytics (LA) workspace.
+
+Learn more about [monitoring solutions supported by Azure Backup](monitoring-and-alerts-overview.md).<br><br>
+
+> [!VIDEO https://www.youtube.com/embed/3zzWxWJGWPg]
+
+## Migrate from classic alerts to built-in Azure Monitor alerts
+
+Among the different Azure Monitor based alert solutions, built-in Azure Monitor alerts come closest to classic alerts in user experience and functionality. So, to quickly switch from classic alerts to Azure Monitor, you can use built-in Azure Monitor alerts.
+
+The following table lists the differences between classic backup alerts and built-in Azure Monitor alerts for backup:
+
+| Actions | Classic alerts | Built-in Azure Monitor alerts |
+| | | |
+| **Setting up notifications** | - You must enable the configure notifications feature for each Recovery Services vault, along with the email id(s) to which the notifications should be sent. <br><br> - For certain destructive operations, email notifications are sent to the subscription owner, admin and co-admin irrespective of the notification settings of the vault.| - Notifications are configured by creating an alert processing rule. <br><br> - While *alerts* are generated by default and can't be turned off for destructive operations, the notifications are in the control of the user, allowing you to clearly specify which set of email address (or other notification endpoints) you wish to route alerts to. |
+| **Notification suppression for database backup scenarios** | When there are multiple failures for the same database due to the same error code, a single alert is generated (with the occurrence count updated for each failure type) and a new alert is only generated when the original alert is inactivated. | The behavior is currently different. Here, a separate alert is generated for every backup failure. If there's a window of time when backups will fail for a certain known item (for example, during a maintenance window), you can create a suppression rule to suppress email noise for that backup item during the given period. |
+| **Pricing** | There are no additional charges for this solution. | Alerts for critical operations/failures are generated by default (you can view them in the Azure portal or via non-portal interfaces) at no additional charge. However, routing these alerts to a notification channel (such as email) incurs a minor charge for notifications beyond the *free tier* (of 1,000 emails per month). Learn more about [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). |
+
+Azure Backup now provides a guided experience via Backup center that allows you to switch to built-in Azure Monitor alerts and notifications with just a few selections.
+
+Follow these steps:
+
+1. On the Azure portal, go to **Backup center** -> **Overview**.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/backup-center-overview-inline.png" alt-text="Screenshot showing Overview tab in Backup center." lightbox="./media/move-to-azure-monitor-alerts/backup-center-overview-expanded.png":::
+
+1. On the **Alerts** tile, you can see the count of vaults that still have classic alerts enabled.
+
+ Select the link to take the required action.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/backup-center-alerts-link-inline.png" alt-text="Screenshot showing number of vaults which have classic alerts enabled." lightbox="./media/move-to-azure-monitor-alerts/backup-center-alerts-link-expanded.png":::
+
   On the next screen, there are two recommended actions:
+
   - **Create rule**: This action creates an alert processing rule attached to an action group so that you receive notifications for Azure Monitor alerts. Selecting it opens a template deployment experience.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/recommended-action-one.png" alt-text="Screenshot showing recommended alert migration action Create rule for Recovery Services vaults.":::
+
+ You can deploy two resources via this template:
+
+ - **Alert Processing Rule**: A rule that specifies alert types to be routed to each notification channel. This template deploys alert processing rules that span all Azure Monitor based alerts on all Recovery Services vaults in the subscription that the rule is created in.
+ - **Action Group**: The notification channel to which alerts should be sent. This template deploys an email action group so that alerts are routed to the email ID(s) specified while deploying the template.
+
+ To modify any of these parameters, for example, scope of alert processing rule, or choice of notification channels, you can edit these resources after creation, or you can [create the alert processing rule and action group from scratch](backup-azure-monitoring-built-in-monitor.md?tabs=recovery-services-vaults#configuring-notifications-for-alerts) via the Azure portal.
+
+1. Enter the subscription, resource group, and region in which the alert processing rule and action group should be created. Also specify the email ID(s) to which notifications should be sent. Other parameters are populated with default values and need to be edited only if you want to customize the names and descriptions of the resources.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-parameters.png" alt-text="Screenshot showing template parameters to setup notification rules for Azure Monitor alerts.":::
+
+1. Select **Review+Create** to initiate deployment.
+
+ Once deployed, notifications for Azure Monitor based alerts are enabled. If you have multiple subscriptions, repeat the above process to create an alert processing rule for each subscription.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-deploy.png" alt-text="Screenshot showing deployment of notification rules for Azure Monitor alerts.":::
+
+1. Next, you need to opt-out of classic alerts to avoid receiving duplicate alerts from two solutions.
+
+ Select **Manage Alerts** to view the vaults for which classic alerts are currently enabled.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/recommended-action-two.png" alt-text="Screenshot showing recommended alert migration action Manage Alerts for Recovery Services vaults.":::
+
+1. Select **Update** -> **Use only Azure Monitor alerts** checkbox.
+
+ By doing so, you agree to receive backup alerts only via Azure Monitor, and you'll stop receiving alerts from the older (classic alerts) solution.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/classic-alerts-vault.png" alt-text="Screenshot showing how to opt out of classic alerts for vault.":::
+
+1. To select multiple vaults on a page and update the settings for these vaults with a single action, select **Update** from the top menu.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/classic-alerts-multiple-vaults.png" alt-text="Screenshot showing how to opt out of classic alerts for multiple vaults.":::
+
+## Suppress notifications during a planned maintenance window
+
+For certain scenarios, you might want to suppress notifications for a particular window of time when backups are going to fail. This is especially important for database backups, where log backups could happen as frequently as every 15 minutes, and you don't want to receive a separate notification every 15 minutes for each failure occurrence. In such a scenario, you can create a second alert processing rule that exists alongside the main alert processing rule used for sending notifications. The second alert processing rule isn't linked to an action group; instead, it specifies the window of time during which notifications should be suppressed.
+
+By default, the suppression alert processing rule takes priority over the other alert processing rule. If a single fired alert is affected by different alert processing rules of both types, the action groups of that alert will be suppressed.
+
+To create a suppression alert processing rule, follow these steps:
+
+1. Go to **Backup center** -> **Alerts**, and select **Alert processing rules**.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-blade.png" alt-text="Screenshot showing alert processing rules blade in portal.":::
+
+1. Select **Create**.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-create.png" alt-text="Screenshot showing creation of new alert processing rule.":::
+
+1. Select **Scope**, for example, subscription or resource group, that the alert processing rule should span.
+
+ You can also select more granular filters if you want to suppress notifications only for a particular backup item. For example, if you want to suppress notifications for *testdb1* database within Virtual Machine *VM1*, you can specify filters "where Alert Context (payload) contains /subscriptions/00000000-0000-0000-0000-0000000000000/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachines/VM1/providers/Microsoft.RecoveryServices/backupProtectedItem/SQLDataBase;MSSQLSERVER;testdb1".
+
+ To get the required format for your backup item, see the *SourceId* field on the [Alert details page](backup-azure-monitoring-built-in-monitor.md?tabs=recovery-services-vaults#viewing-fired-alerts-in-the-azure-portal).
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-scope.png" alt-text="Screenshot showing specified scope of alert processing rule.":::
+
+1. Under **Rule Settings**, select **Suppress notifications**.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-settings.png" alt-text="Screenshot showing alert processing rule settings.":::
+
+1. Under **Scheduling**, select the window of time for which the alert processing rule will apply.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-schedule.png" alt-text="Screenshot showing alert processing rules scheduling.":::
+
+1. Under **Details**, specify the subscription, resource group, and name under which the alert processing rule should be created.
+
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-details.png" alt-text="Screenshot showing alert processing rules details.":::
+
+1. Select **Review + Create**.
+
+   If your suppression window is a one-off scenario rather than a recurring one, you can **Disable** the alert processing rule once you no longer need it, and enable it again when you have a new maintenance window.
+
+## Programmatic options
+
+You can also use programmatic methods to opt out of classic alerts and manage Azure Monitor notifications.
+
+- **Opting out of classic backup alerts**: The **monitoringSettings** vault property specifies whether classic alerts are disabled. You can create a custom ARM/Bicep template or Azure Policy to modify this setting for your vaults. The following example shows this property for a vault where classic alerts are disabled and built-in Azure Monitor alerts are enabled for all job failures.
+
+ ```json
+ {
+ "monitoringSettings": {
+ "classicAlertsForCriticalOperations": "Disabled",
+ "azureMonitorAlertSettings": {
+ "alertsForAllJobFailures": "Enabled"
+ }
+ }
+ }
+ ```
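    As an illustrative sketch only (the vault name, SKU, and API version here are assumptions — check the latest `Microsoft.RecoveryServices/vaults` template reference), the same property can be expressed in a Bicep template:

    ```bicep
    // Hypothetical vault with classic alerts disabled and
    // built-in Azure Monitor alerts enabled for all job failures.
    resource vault 'Microsoft.RecoveryServices/vaults@2022-04-01' = {
      name: 'myVault'
      location: resourceGroup().location
      sku: {
        name: 'RS0'
        tier: 'Standard'
      }
      properties: {
        monitoringSettings: {
          classicAlertsForCriticalOperations: 'Disabled'
          azureMonitorAlertSettings: {
            alertsForAllJobFailures: 'Enabled'
          }
        }
      }
    }
    ```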
+
+- **Setting up notifications for Azure Monitor alerts**: You can use the following standard programmatic interfaces supported by Azure Monitor to manage action groups and alert processing rules:
+
+- [Azure Monitor REST API reference](/rest/api/monitor/)
+- [Azure Monitor PowerShell reference](/powershell/module/az.monitor/?view=azps-8.0.0&preserve-view=true)
+- [Azure Monitor CLI reference](/cli/azure/monitor?view=azure-cli-latest)
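For example, an Azure CLI sketch along these lines creates an email action group and a suppression-style alert processing rule scoped to a resource group. The names are placeholders, and `az monitor alert-processing-rule` may require a recent CLI version — verify the commands and parameters against the current CLI reference:

```azurecli
# Create an action group that emails the backup admins (names are hypothetical).
az monitor action-group create \
  --resource-group testRG \
  --name BackupActionGroup \
  --action email admin admin@contoso.com

# Create an alert processing rule that suppresses notifications for the scope.
az monitor alert-processing-rule create \
  --resource-group testRG \
  --name SuppressBackupAlerts \
  --rule-type RemoveAllActionGroups \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/testRG"
```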
+
+## Next steps
+Learn more about [Azure Backup monitoring and reporting](monitoring-and-alerts-overview.md).
+
backup Restore Azure Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-encrypted-virtual-machines.md
Title: Restore encrypted Azure VMs description: Describes how to restore encrypted Azure VMs with the Azure Backup service. Previously updated : 08/20/2021 Last updated : 07/18/2022 # Restore encrypted Azure virtual machines
cloud-services-extended-support Enable Key Vault Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-key-vault-virtual-machine.md
Title: Apply the Key Vault VM extension in Azure Cloud Services (extended suppor
description: Learn about the Key Vault VM extension for Windows and how to enable it in Azure Cloud Services. --++ Last updated 05/12/2021
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
When you use SSML, keep in mind that special characters, such as quotation marks
Each SSML document is created with SSML elements (or tags). These elements are used to adjust pitch, prosody, volume, and more. The following sections detail how each element is used and when an element is required or optional. > [!IMPORTANT]
-> Don't forget to use double quotation marks around attribute values. Standards for well-formed, valid XML requires attribute values to be enclosed in double quotation marks. For example, `<prosody volume="90">` is a well-formed, valid element, but `<prosody volume=90>` is not. SSML might not recognize attribute values that aren't in double quotation marks.
+> Don't forget to use double quotation marks around attribute values. Standards for well-formed, valid XML require attribute values to be enclosed in double quotation marks. For example, `<prosody volume="90">` is a well-formed, valid element, but `<prosody volume=90>` is not. SSML might not recognize attribute values that aren't in double quotation marks.
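As a quick local check (a sketch using only Python's standard library, not part of the Speech SDK), you can verify that an SSML snippet is well-formed XML before sending it to the service:

```python
import xml.etree.ElementTree as ET

def is_well_formed(ssml: str) -> bool:
    """Return True if the SSML string parses as well-formed XML."""
    try:
        ET.fromstring(ssml)
        return True
    except ET.ParseError:
        return False

# Quoted attribute values parse; unquoted values are rejected.
print(is_well_formed('<prosody volume="90">Hello</prosody>'))  # True
print(is_well_formed('<prosody volume=90>Hello</prosody>'))    # False
```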
## Create an SSML document
The following table has descriptions of each supported role.
By default, all neural voices are fluent in their own language and English without using the `<lang xml:lang>` element. For example, if the input text in English is "I'm excited to try text to speech" and you use the `es-ES-ElviraNeural` voice, the text is spoken in English with a Spanish accent. With most neural voices, setting a specific speaking language with `<lang xml:lang>` element at the sentence or word level is currently not supported.
-You can adjust the speaking language for the `en-US-JennyMultilingualNeural` neural voice at the sentence level and word level by using the `<lang xml:lang>` element. The `en-US-JennyMultilingualNeural` neural voice is multilingual in 14 languages (For example: English, Spanish, and Chinese). The supported languages are provided in a table following the `<lang>` syntax and attribute definitions.
+You can adjust the speaking language for the `en-US-JennyMultilingualNeural` neural voice at the sentence and word levels by using the `<lang xml:lang>` element. The `en-US-JennyMultilingualNeural` neural voice is multilingual in 14 languages (for example, English, Spanish, and Chinese). The supported languages are provided in a table following the `<lang>` syntax and attribute definitions.
**Syntax**
The primary language for `en-US-JennyMultilingualNeural` is `en-US`. You must sp
</speak> ```
-Within the `speak` element, you can specify multiple languages including `en-US` for text-to-speech output. For each adjusted language, the text must match the language and be wrapped in a `voice` element. This SSML snippet shows how to use `<lang xml:lang>` to change the speaking languages to `es-MX`, `en-US`, and `fr-FR`.
+Within the `speak` element, you can specify multiple languages including `en-US` for text-to-speech output. For each adjusted language, the text must match the language and be wrapped in a `voice` element. This SSML snippet shows how to use `<lang xml:lang>` to change the speaking languages to `es-MX`, `en-US`, and `fr-FR`.
```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US"> <voice name="en-US-JennyMultilingualNeural"> <lang xml:lang="es-MX">
- ¡Esperamos trabajar con usted!
+ ¡Esperamos trabajar con usted!
</lang> <lang xml:lang="en-US"> We look forward to working with you!
The optional `emphasis` element is used to add or remove word-level stress for t
**Example**
-This SSML snippet demonstrates how the `emphasis` element is used to add moderate level emphasis for the word "meetings".
-
+This SSML snippet demonstrates how the `emphasis` element is used to add moderate level emphasis for the word "meetings".
+ ```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
- <voice name="en-US-GuyNeural">
- I can help you join your <emphasis level="moderate">meetings</emphasis> fast.
- </voice>
-</speak>
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
+ <voice name="en-US-GuyNeural">
+ I can help you join your <emphasis level="moderate">meetings</emphasis> fast.
+ </voice>
+</speak>
``` ## Add say-as element
Only one background audio file is allowed per SSML document. You can intersperse
> [!NOTE] > The `mstts:backgroundaudio` element is not supported by the Long Audio API.
+> [!NOTE]
+> The `mstts:backgroundaudio` element should precede all `voice` elements; that is, it must be the first child of the `speak` element.
+ **Syntax** ```xml
For more information, see [`addBookmarkReachedEventHandler`](/objectivec/cogniti
## Supported MathML elements
-The Mathematical Markup Language (MathML) is an XML-compliant markup language that lets developers specify how input text is converted into synthesized speech by using text-to-speech.
+The Mathematical Markup Language (MathML) is an XML-compliant markup language that lets developers specify how input text is converted into synthesized speech by using text-to-speech.
> [!NOTE]
-> The MathML elements (tags) are currently supported by all neural voices in the `en-US` and `en-AU` locales.
+> The MathML elements (tags) are currently supported by all neural voices in the `en-US` and `en-AU` locales.
**Example**
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 06/28/2022 Last updated : 07/18/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## July 2022
+
+* New AI models for [sentiment analysis](./sentiment-opinion-mining/overview.md) and [key phrase extraction](./key-phrase-extraction/overview.md) based on [z-code models](https://www.microsoft.com/research/project/project-zcode/), providing:
+ * Performance and quality improvements for the following 11 [languages](./sentiment-opinion-mining/language-support.md) supported by sentiment analysis: `ar`, `da`, `el`, `fi`, `hi`, `nl`, `no`, `pl`, `ru`, `sv`, `tr`
+ * Performance and quality improvements for the following 20 [languages](./key-phrase-extraction/language-support.md) supported by key phrase extraction: `af`, `bg`, `ca`, `hr`, `da`, `nl`, `et`, `fi`, `el`, `hu`, `id`, `lv`, `no`, `pl`, `ro`, `ru`, `sk`, `sl`, `sv`, `tr`
++ ## June 2022 * v1.0 client libraries for [conversational language understanding](./conversational-language-understanding/how-to/call-api.md?tabs=azure-sdk#send-a-conversational-language-understanding-request) and [orchestration workflow](./orchestration-workflow/how-to/call-api.md?tabs=azure-sdk#send-an-orchestration-workflow-request) are Generally Available for the following languages: * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
This article provides details on the REST API endpoints for the Azure OpenAI Se
The Azure OpenAI Service is deployed as a part of the Azure Cognitive Services. All Cognitive Services rely on the same set of management APIs for creation, update and delete operations. The management APIs are also used for deploying models within an OpenAI resource.
-[**Management APIs reference documentation**](/azure/rest/api/cognitiveservices/)
+[**Management APIs reference documentation**](/rest/api/cognitiveservices/)
## Authentication
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
Title: DCasv5 and ECasv5 series confidential VMs (preview)
+ Title: DCasv5 and ECasv5 series confidential VMs
description: Learn about Azure DCasv5, DCadsv5, ECasv5, and ECadsv5 series confidential virtual machines (confidential VMs). These series are for tenants with high security and confidentiality requirements. Previously updated : 3/27/2022 Last updated : 7/08/2022
-# DCasv5 and ECasv5 series confidential VMs (preview)
-
-> [!IMPORTANT]
-> Azure DCasv5/ECasv5-series confidential virtual machines are currently in Preview. Use is subject to your [Azure subscription](https://azure.microsoft.com/support/legal/) and terms applicable to "Previews" as detailed in the Universal License Terms for Online Services section of the [Microsoft Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage) and the [Microsoft Products and Services Data Protection Addendum](https://www.microsoft.com/licensing/docs/view/Microsoft-Products-and-Services-Data-Protection-Addendum-DPA) ("DPA").
-
+# DCasv5 and ECasv5 series confidential VMs
Azure confidential computing offers confidential VMs based on [AMD processors with SEV-SNP technology](virtual-machine-solutions-amd.md). Confidential VMs are for tenants with high security and confidentiality requirements. These VMs provide a strong, hardware-enforced boundary to help meet your security needs. You can use confidential VMs for migrations without making changes to your code, with the platform protecting your VM's state from being read or modified.
Some of the benefits of confidential VMs include:
## Full-disk encryption
-Confidential VMs offer a new and enhanced disk encryption scheme. This scheme protects all critical partitions of the disk. It also binds disk encryption keys to the virtual machine's TPM and makes the protected disk content accessible only to the VM. These encryption keys can securely bypass Azure components, including the hypervisor and host operating system. To minimize the attack potential, a dedicated and separate cloud service also encrypts the disk during the initial creation of the VM.
+Azure confidential VMs offer a new and enhanced disk encryption scheme. This scheme protects all critical partitions of the disk. It also binds disk encryption keys to the virtual machine's TPM and makes the protected disk content accessible only to the VM. These encryption keys can securely bypass Azure components, including the hypervisor and host operating system. To minimize the attack potential, a dedicated and separate cloud service also encrypts the disk during the initial creation of the VM.
-If the compute platform is missing critical settings for your VM's isolation then during boot [Azure Attestation](https://azure.microsoft.com/services/azure-attestation/) will not attest to the platform's health. This will prevent the VM from starting. For example, this scenario happens if you haven't enabled SEV-SNP.
+If the compute platform is missing critical settings for your VM's isolation, then during boot [Azure Attestation](https://azure.microsoft.com/services/azure-attestation/) won't attest to the platform's health. It will prevent the VM from starting. For example, this scenario happens if you haven't enabled SEV-SNP.
Full-disk encryption is optional, because this process can lengthen the initial VM creation time. You can choose between:
With Secure Boot, trusted publishers must sign OS boot components (including the
### Encryption pricing differences
-Confidential VMs use both the OS disk and a small encrypted virtual machine guest state (VMGS) disk of several megabytes. The VMGS disk contains the security state of the VM's components. Some components include the vTPM and UEFI bootloader. The small VMGS disk might incur a monthly storage cost.
+Azure confidential VMs use both the OS disk and a small encrypted virtual machine guest state (VMGS) disk of several megabytes. The VMGS disk contains the security state of the VM's components. Some components include the vTPM and UEFI bootloader. The small VMGS disk might incur a monthly storage cost.
From July 2022, encrypted OS disks will incur higher costs. This change is because encrypted OS disks use more space, and compression isn't possible. For more information, see [the pricing guide for managed disks](https://azure.microsoft.com/pricing/details/managed-disks/). ## Attestation and TPM
-Confidential VMs boot only after successful attestation of the platform's critical components and security settings. The attestation report includes:
+Azure confidential VMs boot only after successful attestation of the platform's critical components and security settings. The attestation report includes:
- A signed attestation report issued by AMD SEV-SNP - Platform boot settings - Platform firmware measurements - OS measurements
-Confidential VMs feature a virtual TPM (vTPM) for Azure VMs. The vTPM is a virtualized version of a hardware TPM, and complies with the TPM2.0 spec. You can use a vTPM as a dedicated, secure vault for keys and measurements. Confidential VMs have their own dedicated vTPM instance, which runs in a secure environment outside the reach of any VM.
+You can submit an attestation request inside a confidential VM to verify that it's running on a hardware instance with AMD SEV-SNP enabled processors. For more information, see [Azure confidential VM guest attestation](https://aka.ms/CVMattestation).
+
+Azure confidential VMs feature a virtual TPM (vTPM) for Azure VMs. The vTPM is a virtualized version of a hardware TPM, and complies with the TPM2.0 spec. You can use a vTPM as a dedicated, secure vault for keys and measurements. Confidential VMs have their own dedicated vTPM instance, which runs in a secure environment outside the reach of any VM.
## Limitations
Confidential VMs *don't support*:
- Azure Backup - Azure Site Recovery - Azure Dedicated Host -- Microsoft Azure Virtual Machine Scale Sets for encrypted OS disks-- Capturing an image of a VM-- Azure Compute Gallery-- Ephemeral OS disks
+- Microsoft Azure Virtual Machine Scale Sets with full OS disk encryption enabled
+- Limited Azure Compute Gallery support
- Shared disks - Ultra disks - Accelerated Networking-- User-attestable platform reports - Live migration
confidential-computing Create Confidential Vm From Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/create-confidential-vm-from-compute-gallery.md
+
+ Title: Create a confidential VM from an Azure Compute Gallery image
+description: Learn how to quickly create and deploy an AMD-based DCasv5 or ECasv5 series Azure confidential virtual machine (confidential VM) from an Azure Compute Gallery image
++++++ Last updated : 07/14/2022+
+ms.devlang: azurecli
++
+# Quickstart: Deploy a confidential VM from an Azure Compute Gallery image using the Azure portal
+
+[Azure confidential virtual machines](confidential-vm-overview.md) support the creation and sharing of custom images through Azure Compute Gallery. You can create two types of images, based on the security type of the image:
+
+- [Confidential VM (`ConfidentialVM`) images](#confidential-vm-images) are images where the source already has the [VM Guest state information](confidential-vm-faq-amd.yml). This image type might also have confidential disk encryption enabled.
+- [Confidential VM supported (`ConfidentialVMSupported`) images](#confidential-vm-supported-images) are images where the source doesn't have VM Guest state information and confidential disk encryption is not enabled.
+
+## Confidential VM images
+
+For the following image sources, the security type on the image definition should be set to `ConfidentialVM` as the image source already has [VM Guest State information](confidential-vm-faq-amd.yml#is-there-an-extra-cost-for-using-confidential-vms-) and may also have confidential disk encryption enabled:
+- Confidential VM capture
+- Managed OS disk
+- Managed OS disk snapshot
+
+The resulting image version can be used only to create confidential VMs.
+
+This image version can be replicated within the source region **but cannot be replicated to a different region** or across subscriptions currently.
+
+> [!NOTE]
+> If you want to create an image from a Windows confidential VM that has confidential compute disk encryption enabled with a platform-managed key or a customer-managed key, you can only create a specialized image. This limitation exists because the generalization tool (**sysprep**) might not be able to generalize the encrypted image source. It applies to the OS disk, which is implicitly created along with the Windows confidential VM, and to the snapshot created from this OS disk.
+
+### Create a Confidential VM type image using Confidential VM capture
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to the **Virtual machines** service.
+1. Open the confidential VM that you want to use as the image source.
+1. If you want to create a generalized image, [remove machine-specific information](../virtual-machines/generalize.md) before you create the image.
+1. Select **Capture**.
+1. In the **Create an image** page that opens, [create your image definition and version](../virtual-machines/image-version.md?tabs=portal#create-an-image).
+ 1. Allow the image to be shared to Azure Compute Gallery as a VM image version. Managed images aren't supported for confidential VMs.
+ 1. Either create a new gallery, or select an existing gallery.
+ 1. For the **Operating system state**, select either **Generalized** or **Specialized**, depending on your use case.
+ 1. Create an image definition by providing a name, publisher, offer, and SKU details. Make sure the security type is set to **Confidential**.
+ 1. Provide a version number for the image.
+ 1. For **Replication**, modify the replica count, if required.
+ 1. Select **Review + Create**.
+ 1. When the image validation succeeds, select **Create** to finish creating the image.
+1. Select the image version to go to the resource directly. Or, you can go to the image version through the image definition. The image definition also shows the encryption type, so you can check that the image and source VM match.
+1. On the image version page, select **Create VM**.
+
+Now, you can [create a Confidential VM from your custom image](#create-a-confidential-vm-from-gallery-image).
+
+### Create a Confidential VM type image from managed disk or snapshot
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you want to create a generalized image, [remove machine-specific information](../virtual-machines/generalize.md) for the disk or snapshot before you create the image.
+1. Search for and select **VM Image Versions** in the search bar.
+1. Select **Create**.
+1. On the **Create VM image version** page's **Basics** tab:
+ 1. Select an Azure subscription.
+ 1. Select an existing resource group, or create a new resource group.
+ 1. Select an Azure region.
+ 1. Enter a version number for the image.
+ 1. For **Source**, select **Disks and/or Snapshots**.
+ 1. For **OS disk**, select either a managed disk or managed disk snapshot.
+ 1. For **Target Azure compute gallery**, select or create a gallery to share the image in.
+ 1. For **Operating system state**, select either **Generalized** or **Specialized** depending on your use case.
+ 1. For **Target VM image definition**, select **Create new**.
+ 1. In the **Create a VM image definition** pane, enter a name for the definition. Make sure the **Security type** is **Confidential**. Enter the publisher, offer, and SKU information. Then, select **Ok**.
+1. On the **Encryption** tab, make sure the **Confidential compute encryption type** matches the source disk or snapshot's type.
+1. Select **Review + Create** to review your settings.
+1. After the settings are validated, select **Create** to finish creating the image version.
+1. After the image version is successfully created, select **Create VM**.
+
+Now, you can [create a Confidential VM from your custom image](#create-a-confidential-vm-from-gallery-image).
+
+## Confidential VM Supported images
+
+For the following image sources, the security type on the image definition should be set to `ConfidentialVMSupported`, because the image source doesn't have VM Guest state information and confidential disk encryption isn't enabled:
+- OS Disk VHD
+- Gen2 Managed Image
+
+The resulting image version can be used to create either Azure Gen2 VMs or confidential VMs.
+
+This image can be replicated within the source region and to different target regions.
+
+> [!NOTE]
+> The OS disk VHD or managed image should be created from an image that is compatible with confidential VMs. The size of the VHD or managed image should be less than 32 GB.
+
+### Create a Confidential VM Supported type image
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **VM image versions** in the search bar.
+1. On the **VM image versions** page, select **Create**.
+1. On the **Create VM image version** page, on the **Basics** tab:
+ 1. Select the Azure subscription.
+ 1. Select an existing resource group or create a new resource group.
+ 1. Select the Azure region.
+ 1. Enter an image version number.
+ 1. For **Source**, select either **Storage Blobs (VHD)** or **Managed Image**.
+ 1. If you selected **Storage Blobs (VHD)**, enter an OS disk VHD (without the VM Guest state). Make sure to use a Gen 2 VHD.
+ 1. If you selected **Managed Image**, select an existing managed image of a Gen 2 VM.
+ 1. For **Target Azure compute gallery**, select or create a gallery to share the image.
+ 1. For **Operating system state**, select either **Generalized** or **Specialized** depending on your use case. If you're using a managed image as the source, always select **Generalized**. If you're using a storage blob (VHD) and want to select **Generalized**, follow the steps to [generalize a Linux VHD](../virtual-machines/linux/create-upload-generic.md) or [generalize a Windows VHD](../virtual-machines/windows/upload-generalized-managed.md) before you continue.
+ 1. For **Target VM Image Definition**, select **Create new**.
+ 1. In the **Create a VM image definition** pane, enter a name for the definition. Make sure the security type is set to **Confidential supported**. Enter publisher, offer, and SKU information. Then, select **Ok**.
+1. On the **Replication** tab, enter the replica count and target regions for image replication, if required.
+1. On the **Encryption** tab, enter SSE encryption-related information, if required.
+1. Select **Review + Create**.
+1. After the configuration is successfully validated, select **Create** to finish creating the image.
+1. After the image version is created, select **Create VM**.
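The image definition from the steps above can also be sketched with the Azure CLI. This is illustrative only: the resource names are placeholders, and you should verify the `--features` value for your scenario against the current `az sig image-definition create` reference:

```azurecli
# Hypothetical image definition marked as Confidential VM supported.
az sig image-definition create \
  --resource-group myResourceGroup \
  --gallery-name myGallery \
  --gallery-image-definition myCvmSupportedImage \
  --publisher myPublisher --offer myOffer --sku mySku \
  --os-type Linux \
  --os-state Generalized \
  --hyper-v-generation V2 \
  --features SecurityType=ConfidentialVmSupported
```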
+
+## Create a Confidential VM from gallery image
+Now that you've created an image, you can use it to create a confidential VM.
+
+1. On the **Create a virtual machine** page, configure the **Basics** tab:
+ 1. Under **Project details**, for **Resource group**, create a new resource group or select an existing resource group.
+ 1. Under **Instance details**, enter a VM name and select a region that supports confidential VMs. For more information, find the confidential VM series in the table of [VM products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
+    1. If you're using a *Confidential* image, the security type is set to **Confidential virtual machines** and can't be modified. If you're using a *Confidential Supported* image, change the security type from **Standard** to **Confidential virtual machines**.
+ 1. vTPM is enabled by default and can't be modified.
+ 1. Secure Boot is enabled by default. To modify the setting, use ***Configure Security features***. Secure Boot is required to use confidential compute encryption.
+1. On the **Disks** tab, configure your encryption settings if necessary.
+ 1. If you're using a *Confidential* image, the confidential compute encryption and the confidential disk encryption set (if you're using customer-managed keys) are populated based on the selected image version and can't be modified.
+ 1. If you're using a *Confidential supported* image, you can select confidential compute encryption, if required. Then, provide a confidential disk encryption set, if you want to use customer-managed keys.
+1. Enter the administrator account information.
+1. Configure any inbound port rules.
+1. Select **Review + Create**.
+1. On the validation page, review the details of the VM.
+1. After the validation succeeds, select **Create** to finish creating the VM.
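The portal flow above can be approximated with the Azure CLI. This is a sketch under assumptions — the resource names and image version ID are placeholders, and the flags should be checked against the current `az vm create` reference:

```azurecli
# Hypothetical confidential VM created from a gallery image version.
az vm create \
  --resource-group myResourceGroup \
  --name myConfidentialVM \
  --size Standard_DC4as_v5 \
  --image "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0" \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type VMGuestStateOnly \
  --enable-secure-boot true \
  --enable-vtpm true \
  --admin-username azureuser \
  --generate-ssh-keys
```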
++
+## Next steps
+For more information on Confidential Computing, see the [Confidential Computing overview](overview.md) page.
confidential-computing Key Rotation Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/key-rotation-offline.md
+
+ Title: Rotate customer-managed keys for Azure confidential virtual machines
+description: Learn how to rotate customer-managed keys for confidential virtual machines (confidential VMs) in Azure.
++++ Last updated : 07/06/2022+++
+# Rotate customer-managed keys for confidential VMs
+
+Confidential virtual machines (confidential VMs) in Azure support customer-managed keys. Customer-managed keys help confidential VMs and associated artifacts work properly. You can manage these keys in Azure Key Vault or through a managed hardware security module (managed HSM). This article focuses on managing the keys through a managed HSM, unless stated otherwise.
+
+If you want to use a customer-managed key, you must supply a Disk Encryption Set resource when you create your confidential VM. The Disk Encryption Set must reference the customer-managed key. Typically, you might associate a single Disk Encryption Set with multiple confidential VMs.
+
+It's recommended that you periodically rotate a customer-managed key as a security best practice. The frequency of rotation is an organizational policy decision. Rotation is also necessary if a customer-managed key is compromised.
+
+## Change customer-managed key
+
+You can change the key that you're using for confidential VMs at any time. To rotate a customer-managed key:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to the **Virtual Machines** service.
+1. Stop all confidential VMs that use the same Disk Encryption Set. If any of these VMs isn't in the stopped state, none of them can receive the new key.
+1. Go to the **Disk Encryption Sets** service.
+1. Select the Disk Encryption Set resource that's associated with your confidential VM.
+1. On the resource's menu, under **Settings**, select **Key**.
+1. Select **Change key**.
+1. Select the appropriate key vault, key, and version.
+1. Save your changes. The save operation updates the key for all confidential VM artifacts.
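These steps can also be sketched programmatically with the Azure CLI. The names and key URL below are placeholders; confirm the parameters against the current `az disk-encryption-set update` reference:

```azurecli
# Stop every confidential VM that uses the disk encryption set first.
az vm deallocate --resource-group myRG --name myConfidentialVM

# Point the disk encryption set at the new key (or key version).
az disk-encryption-set update \
  --resource-group myRG \
  --name myDiskEncryptionSet \
  --key-url "https://myhsm.managedhsm.azure.net/keys/myKey/<new-key-version>"
```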
+
+## Retry key rotation
+
+In rare cases, the customer-managed key might not be rotated for all confidential VMs, even when all the VMs were stopped. If the customer-managed key isn't rotated, the Disk Encryption Set resource still contains a reference to the old key. In this state, some confidential VMs can have the new key and some can have the old key.
+
+To resolve this issue, repeat the steps to [update the Disk Encryption Set](#change-customer-managed-key).
+
+## Limitations
+
+- Automatic key rotation isn't currently supported for confidential VMs.
+- Key rotation isn't supported for ephemeral disks. It's recommended to have a separate Disk Encryption Set for confidential VMs with an ephemeral disk. If confidential VMs with ephemeral and non-ephemeral disks share the same Disk Encryption Set, you must delete the confidential VMs with ephemeral disks before you rotate the keys for the confidential VMs with non-ephemeral disks.
confidential-computing Quick Create Confidential Vm Arm Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md
Title: Create an Azure AMD-based confidential VM with ARM template (preview)
+ Title: Create an Azure AMD-based confidential VM with ARM template
description: Learn how to quickly create and deploy an AMD-based DCasv5 or ECasv5 series Azure confidential virtual machine (confidential VM) using an ARM template. Previously updated : 3/21/2022 Last updated : 7/14/2022 ms.devlang: azurecli
-# Quickstart: Deploy confidential VM with ARM template (preview)
-
-> [!IMPORTANT]
-> Confidential virtual machines (confidential VMs) in Azure Confidential Computing is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Quickstart: Deploy confidential VM with ARM template
You can use an Azure Resource Manager template (ARM template) to create an Azure [confidential VM](confidential-vm-overview.md) quickly. Confidential VMs run on AMD processors backed by AMD SEV-SNP to achieve VM memory encryption and isolation. For more information, see [Confidential VM Overview](confidential-vm-overview.md).
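Deploying such a template can be sketched with the Azure CLI. The template file name and parameter names here are hypothetical, not taken from this quickstart:

```azurecli
# Hypothetical template file and parameters; deploy into an existing resource group.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters vmName=myConfidentialVM adminUsername=azureuser
```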
confidential-computing Quick Create Confidential Vm Portal Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal-amd.md
Title: Create an Azure AMD-based confidential VM in the Azure portal (preview)
+ Title: Create an Azure AMD-based confidential VM in the Azure portal
description: Learn how to quickly create an AMD-based confidential virtual machine (confidential VM) in the Azure portal using Azure Marketplace images.
-# Quickstart: Create confidential VM on AMD in the Azure portal (preview)
-
-> [!IMPORTANT]
-> Confidential virtual machines (confidential VMs) in Azure Confidential Computing is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Quickstart: Create confidential VM on AMD in the Azure portal
You can use the Azure portal to create a [confidential VM](confidential-vm-overview.md) based on an Azure Marketplace image quickly. There are multiple [confidential VM options on AMD](virtual-machine-solutions-amd.md) with AMD SEV-SNP technology.
You can use the Azure portal to create a [confidential VM](confidential-vm-overv
- An Azure subscription. Free trial accounts don't have access to the VMs used in this tutorial. One option is to use a [pay as you go subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/). - If you're using a Linux-based confidential VM, use a BASH shell for SSH or install an SSH client, such as [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).-- If Confidential disk encryption with a customer-managed key is required, please run below command to opt-in service principal `Confidential VM Orchestrator` to your tenant.
+- If you require confidential disk encryption with a customer-managed key, run the following command to opt in the service principal `Confidential VM Orchestrator` to your tenant.
```azurecli
Connect-AzureAD -Tenant "your tenant ID"
```
To create a confidential VM in the Azure portal using an Azure Marketplace image
1. For **Security Type**, select **Confidential virtual machines**.
- 1. For **Image**, select the OS image to use for your VM. For this tutorial, select **Ubuntu Server 20.04 LTS (Confidential VM preview)**, **Windows Server 2019 [Small disk] Data Center**, or **Windows Server 2022 [Small disk] Data Center**.
+ 1. For **Image**, select the OS image to use for your VM. For this tutorial, select **Ubuntu Server 20.04 LTS (Confidential VM)**, **Windows Server 2019 [Small disk] Data Center**, or **Windows Server 2022 [Small disk] Data Center**.
> [!TIP] > Optionally, select **See all images** to open Azure Marketplace. Select the filter **Security Type** &gt; **Confidential** to show all available confidential VM images.
confidential-computing Virtual Machine Solutions Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-amd.md
Last updated 11/15/2021
-# Azure Confidential VM options on AMD (preview)
+# Azure Confidential VM options on AMD
> [!IMPORTANT]
> Confidential virtual machines (confidential VMs) in Azure Confidential Computing are currently in PREVIEW.
Consider the following settings and choices before deploying confidential VMs.
### Azure subscription
-To deploy a confidential VM instance, consider a pay-as-you-go subscription or other purchase option. If you're using an [Azure free account](https://azure.microsoft.com/free/), the quota doesn't allow the appropriate amount of Azure compute cores.
+To deploy a confidential VM instance, consider a pay-as-you-go subscription or other purchase option. If you're using an [Azure free account](https://azure.microsoft.com/free/), the quota doesn't allow the appropriate number of Azure compute cores.
You might need to increase the cores quota in your Azure subscription from the default value. Default limits vary depending on your subscription category. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the confidential VM sizes.
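Before requesting an increase, you can check how much of each size family's quota is already consumed in a region. The region below is only an example:

```azurecli
# List current vCPU usage against quota limits for the target region.
az vm list-usage --location eastus --output table
```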
For more information about supported and unsupported VM scenarios, see [support
### High availability and disaster recovery
-You're responsible for creating high availability and disaster recovery solutions for your confidential VMs. Planning for these scenarios helps minimize avoid prolonged downtime.
+You're responsible for creating high availability and disaster recovery solutions for your confidential VMs. Planning for these scenarios helps you minimize or avoid prolonged downtime.
### Deployment with ARM templates
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
spID=$(az identity show \
resourceID=$(az identity show \ --resource-group myResourceGroup \ --name myACIId \
- --query id --output tsv)
+ --query id --output tsv)
``` ### Grant user-assigned identity access to the key vault
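Granting that access can be sketched as follows, reusing the `$spID` value captured earlier. The vault name and the permission set shown are illustrative assumptions:

```azurecli
# Allow the user-assigned identity (by its service principal object ID) to read secrets.
az keyvault set-policy \
  --name mykeyvault \
  --resource-group myResourceGroup \
  --object-id $spID \
  --secret-permissions get
```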
cosmos-db Database Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-security.md
Previously updated : 11/03/2021 Last updated : 07/18/2022 # Security in Azure Cosmos DB - overview
Let's dig into each one in detail.
|Respond to attacks|Once you have contacted Azure support to report a potential attack, a 5-step incident response process is kicked off. The goal of the 5-step process is to restore normal service security and operations as quickly as possible after an issue is detected and an investigation is started.<br><br>Learn more in [Microsoft Azure Security Response in the Cloud](https://azure.microsoft.com/resources/shared-responsibilities-for-cloud-computing/).| |Geo-fencing|Azure Cosmos DB ensures data governance for sovereign regions (for example, Germany, China, US Gov).| |Protected facilities|Data in Azure Cosmos DB is stored on SSDs in Azure's protected data centers.<br><br>Learn more in [Microsoft global datacenters](https://www.microsoft.com/en-us/cloud-platform/global-datacenters)|
-|HTTPS/SSL/TLS encryption|All connections to Azure Cosmos DB support HTTPS. Azure Cosmos DB also supports TLS 1.2.<br>It is possible to enforce a minimum TLS version server-side. To do so, open an [Azure support ticket](https://azure.microsoft.com/support/options/).|
+|HTTPS/SSL/TLS encryption|All connections to Azure Cosmos DB support HTTPS. Azure Cosmos DB supports TLS protocol versions up to and including TLS 1.3.<br>It is possible to enforce a minimum TLS version server-side. To do so, open an [Azure support ticket](https://azure.microsoft.com/support/options/).|
|Encryption at rest|All data stored into Azure Cosmos DB is encrypted at rest. Learn more in [Azure Cosmos DB encryption at rest](./database-encryption-at-rest.md)| |Patched servers|As a managed database, Azure Cosmos DB eliminates the need to manage and patch servers, that's done for you, automatically.| |Administrative accounts with strong passwords|It's hard to believe we even need to mention this requirement, but unlike some of our competitors, it's impossible to have an administrative account with no password in Azure Cosmos DB.<br><br> Security via TLS and HMAC secret based authentication is baked in by default.|
cosmos-db Create Sql Api Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-go.md
>
-In this quickstart, you'll build a sample Go application that uses the Azure SDK for Go to manage a Cosmos DB SQL API account.
+In this quickstart, you'll build a sample Go application that uses the Azure SDK for Go to manage a Cosmos DB SQL API account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
To learn more about Azure Cosmos DB, go to [Azure Cosmos DB](../introduction.md)
* [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers) * [Azure Cosmos DB Free Tier](../optimize-dev-test.md#azure-cosmos-db-free-tier) * Without an Azure active subscription:
- * [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/), a tests environment that lasts for 30 days.
+ * [Try Azure Cosmos DB for free](https://aka.ms/trycosmosdb), a test environment that lasts for 30 days.
* [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) - [Go 1.16 or higher](https://golang.org/dl/) - [Azure CLI](/cli/azure/install-azure-cli)
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use i
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) > [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
+> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
cosmos-db Performance Tips Dotnet Sdk V3 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-dotnet-sdk-v3-sql.md
Title: Azure Cosmos DB performance tips for .NET SDK v3 description: Learn client configuration options to help improve Azure Cosmos DB .NET v3 SDK performance.-+ Last updated 03/31/2022-+ ms.devlang: csharp
cosmos-db Troubleshoot Dot Net Sdk Request Header Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-request-header-too-large.md
Title: Troubleshoot a "Request header too large" message or 400 bad request in Azure Cosmos DB description: Learn how to diagnose and fix the request header too large exception.-+ Last updated 09/29/2021-+
cosmos-db Troubleshoot Dot Net Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-request-timeout.md
Title: Troubleshoot Azure Cosmos DB HTTP 408 or request timeout issues with the .NET SDK description: Learn how to diagnose and fix .NET SDK request timeout exceptions.-+ Last updated 02/02/2022-+
cosmos-db Troubleshoot Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-not-found.md
Title: Troubleshoot Azure Cosmos DB not found exceptions description: Learn how to diagnose and fix not found exceptions.-+ Last updated 05/26/2021-+
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-rate-too-large.md
Title: Troubleshoot Azure Cosmos DB request rate too large exceptions description: Learn how to diagnose and fix request rate too large exceptions.-+ Last updated 03/03/2022-+
cosmos-db Troubleshoot Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-timeout.md
Title: Troubleshoot Azure Cosmos DB service request timeout exceptions description: Learn how to diagnose and fix Azure Cosmos DB service request timeout exceptions.-+ Last updated 07/13/2020-+
cosmos-db Troubleshoot Service Unavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-service-unavailable.md
Title: Troubleshoot Azure Cosmos DB service unavailable exceptions description: Learn how to diagnose and fix Azure Cosmos DB service unavailable exceptions.-+ Last updated 08/06/2020-+
cosmos-db Troubleshoot Unauthorized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-unauthorized.md
Title: Troubleshoot Azure Cosmos DB unauthorized exceptions description: Learn how to diagnose and fix unauthorized exceptions.-+ Last updated 07/13/2020-+
cost-management-billing Automation Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/automation-faq.md
No, the Cost Details API is free. Make sure to abide by the rate-limiting policie
<!-- For more information, see [Data latency and rate limits](api-latency-rate-limits.md). -->
-### What's the difference between the Invoice API, the Transaction API, and the Cost Details API?
+### What's the difference between the Invoices API, the Transactions API, and the Cost Details API?
These APIs provide a different view of the same data: -- The [Invoice API](/api/billing/2019-10-01-preview/invoices) provides an aggregated view of your monthly charges.
+- The [Invoices API](/api/billing/2019-10-01-preview/invoices) provides an aggregated view of your monthly charges.
- The [Transactions API](/rest/api/billing/2020-05-01/transactions/list-by-invoice) provides a view of your monthly charges aggregated at product/service family level.-- The [Cost Details](/rest/api/cost-management/generate-cost-details-report) report provides a granular view of the usage and cost records for each day. Both Enterprise and Microsoft Customer Agreement customers can use it. If you're a legacy pay-as-you-go customer, see [Get Cost Details as a legacy customer](get-usage-details-legacy-customer.md).
+- The [Cost Details](/rest/api/cost-management/generate-cost-details-report) report provides a granular view of the usage and cost records for each day. The Cost Details API is available for Enterprise Agreement and Microsoft Customer Agreement accounts. For pay-as-you-go subscriptions, use the Exports API. If Exports don't meet your needs and you need an on-demand solution, see [Get Cost Details for a pay-as-you-go subscription](get-usage-details-legacy-customer.md).
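As a sketch, generating a cost details report is an asynchronous POST against the billing scope. The request body shown here, and the date range in it, are illustrative assumptions about the report parameters:

```http
POST https://management.azure.com/{scope}/providers/Microsoft.CostManagement/generateCostDetailsReport?api-version=2022-05-01

{
  "metric": "ActualCost",
  "timePeriod": {
    "start": "2022-07-01",
    "end": "2022-07-18"
  }
}
```

The operation is long-running: poll the URL returned in the response's `Location` header until the report's download links are available.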
### I recently migrated from an EA to an MCA agreement. How do I migrate my API workloads?
cost-management-billing Get Small Usage Datasets On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/get-small-usage-datasets-on-demand.md
If you want to get large amounts of exported data regularly, see [Retrieve large
To learn more about the data in cost details (formerly referred to as *usage details*), see [Ingest cost details data](automation-ingest-usage-details-overview.md).
-The [Cost Details](/rest/api/cost-management/generate-cost-details-report) report is only available for customers with an Enterprise Agreement or Microsoft Customer Agreement. If you're an MSDN, Pay-As-You-Go or Visual Studio customer, see [Get cost details as a legacy customer](get-usage-details-legacy-customer.md).
+The [Cost Details](/rest/api/cost-management/generate-cost-details-report) report is only available for customers with an Enterprise Agreement or Microsoft Customer Agreement. If you're an MSDN, Pay-As-You-Go or Visual Studio customer, see [Get cost details for a pay-as-you-go subscription](get-usage-details-legacy-customer.md).
## Cost Details API best practices
cost-management-billing Get Usage Details Legacy Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/get-usage-details-legacy-customer.md
Title: Get Azure cost details as a legacy customer
+ Title: Get Azure cost details for a pay-as-you-go subscription
-description: This article explains how you get cost data if you're a legacy customer.
+description: This article explains how you get cost data if you have a MOSP pay-as-you-go subscription.
Last updated 07/15/2022
-# Get cost details as a legacy customer
+# Get cost details for a pay-as-you-go subscription
-If you have an MSDN, pay-as-you-go, or Visual Studio Azure subscription, we recommend that you use [Exports](../costs/tutorial-export-acm-data.md) or the [Exports API](../costs/ingest-azure-usage-at-scale.md) to get cost details data (formerly known as usage details). The [Cost Details](/rest/api/cost-management/generate-cost-details-report) API report isn't supported for your subscription type yet.
+If you have an MSDN, Microsoft Online Service Program (MOSP) pay-as-you-go, or Visual Studio Azure subscription, we recommend that you use [Exports](../costs/tutorial-export-acm-data.md) or the [Exports API](../costs/ingest-azure-usage-at-scale.md) to get cost details data (formerly known as usage details). The [Cost Details](/rest/api/cost-management/generate-cost-details-report) API report isn't supported for your subscription type yet.
If you need to download small datasets and you don't want to use Azure Storage, you can also use the Consumption Usage Details API. Instructions about how to use the API are below.
The following example requests are used by Microsoft customers to address common
The data that's returned by the request corresponds to the date when the data was received by the billing system. It might include costs from multiple invoices. The call to use varies by your subscription type.
-For legacy customers, use the following call.
+For pay-as-you-go subscriptions, use the following call.
```http
GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?$filter=properties%2FusageStart%20ge%20'2020-02-01'%20and%20properties%2FusageEnd%20le%20'2020-02-29'&$top=1000&api-version=2019-10-01
```
cost-management-billing Migrate Ea Usage Details Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-usage-details-api.md
description: This article has information to help you migrate from the EA Usage Details APIs. Previously updated : 07/15/2022 Last updated : 07/18/2022
The table below provides a summary of the old fields available in the solutions
| location | ResourceLocation | | | chargesBilledSeparately | isAzureCreditEligible | The properties are opposites. If isAzureCreditEnabled is true, ChargesBilledSeparately would be false. | | partNumber | PartNumber | |
-| resourceGuid | MeterId | |
+| resourceGuid | MeterId | Values vary. `resourceGuid` is a GUID value. `meterId` is a long number. |
| offerId | OfferId | | | cost | CostInBillingCurrency | | | accountId | AccountId | | | resourceLocationId | | Not available. |
-| consumedServiceId | ConsumedService | |
+| consumedServiceId | ConsumedService | `consumedServiceId` only provides a number value. `ConsumedService` provides the name of the service. |
| departmentId | InvoiceSectionId | | | accountOwnerEmail | AccountOwnerId | | | accountName | AccountName | |
cost-management-billing Usage Details Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/usage-details-best-practices.md
If the Cost Details API is your chosen solution, review the best practices to ca
To learn more about how to properly call the [Cost Details](/rest/api/cost-management/generate-cost-details-report) API, see [Get small usage data sets on demand](get-small-usage-datasets-on-demand.md).
-The Cost Details API is only available for customers with an Enterprise Agreement or Microsoft Customer Agreement. If you're an MSDN, pay-as-you-go or Visual Studio customer, see [Get usage details as a legacy customer](get-usage-details-legacy-customer.md).
+The Cost Details API is only available for customers with an Enterprise Agreement or Microsoft Customer Agreement. If you're an MSDN, pay-as-you-go or Visual Studio customer, see [Get usage details for pay-as-you-go subscriptions](get-usage-details-legacy-customer.md).
## Power BI
cost-management-billing Consumption Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/consumption-api-overview.md
# Azure Consumption API overview
-The Azure Consumption APIs give you programmatic access to cost and usage data for your Azure resources. These APIs currently only support Enterprise Enrollments, Web Direct Subscriptions (with a few exceptions), and CSP Azure plan subscriptions. The APIs are continually updated to support other types of Azure subscriptions.
+The Azure Consumption APIs give you programmatic access to cost and usage data for your Azure resources. The APIs are continually updated to support other types of Azure subscriptions.
-Azure Consumption APIs provide access to:
-- Enterprise and Web Direct Customers
+- Microsoft Customer Agreement customers can use:
+ - All APIs
+- CSP Azure plan customers (Azure plan customers that get Azure through a Microsoft partner) can use:
+ - All APIs
+- Enterprise and web direct customers can use:
- Usage Details - Marketplace Charges - Reservation Recommendations - Reservation Details - Reservation Summaries-- Enterprise Customers Only
+- Enterprise customers only can use:
- Price sheet - Budgets - Balances
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: Azure portal administration for direct Enterprise Agreements
description: This article explains the common tasks that a direct enterprise administrator accomplishes in the Azure portal. Previously updated : 11/16/2021 Last updated : 07/08/2022
If your enterprise administrator can't assist you, create an [Azure support re
- Email address to add, and authentication type (work, school, or Microsoft account) - Email approval from an existing enterprise administrator
-If the existing enterprise administrator isn't available, contact your partner or software advisor to request that they change the contact details through the Volume Licensing Service Center (VLSC) tool.
+>[!NOTE]
+> - We recommend that you have at least one active Enterprise Administrator at all times. If no active Enterprise Administrator is available, contact your partner to change the contact information on the Volume License agreement. Your partner can make changes to the customer contact information by using the Contact Information Change Request (CICR) process available in the eAgreements (VLCM) tool.
+> - Any new EA administrator account created using the CICR process is assigned read-only permissions to the enrollment in the EA portal and Azure portal. To elevate access, create an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
## Create an Azure enterprise department
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md
Title: Azure EA portal administration
description: This article explains the common tasks that an administrator accomplishes in the Azure EA portal. Previously updated : 10/13/2021 Last updated : 07/08/2022
If your enterprise administrator can't assist you, create an [Azure support requ
- Enrollment number - Email address to add, and authentication type (work, school, or Microsoft account) - Email approval from an existing enterprise administrator
- - If the existing enterprise administrator isn't available, contact your partner or software advisor to request that they change the contact details through the Volume Licensing Service Center (VLSC) tool.
+
+>[!NOTE]
+> - We recommend that you have at least one active Enterprise Administrator at all times. If no active Enterprise Administrator is available, contact your partner to change the contact information on the Volume License agreement. Your partner can make changes to the customer contact information by using the Contact Information Change Request (CICR) process available in the eAgreements (VLCM) tool.
+> - Any new EA administrator account created using the CICR process is assigned read-only permissions to the enrollment in the EA portal and Azure portal. To elevate access, create an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+ ## Create an Azure Enterprise department
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
Title: Azure subscription and reservation transfer hub description: This article helps you understand what's needed to transfer Azure subscriptions and provides links to other articles for more detailed information. -+ tags: billing Previously updated : 05/04/2022 Last updated : 07/18/2022 - # Azure subscription and reservation transfer hub
The following table describes product transfer support between the different agr
Currently transfer isn't supported for [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/) or [Azure in Open (AIO)](https://azure.microsoft.com/offers/ms-azr-0111p/) products. For a workaround, see [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
-Support plans can't be transferred. If you have a support plan, you should cancel it for the old agreement and then buy a new one for the new agreement.
+Support plans can't be transferred. If you have a support plan, cancel it for the old agreement, and then buy a new one for the new agreement.
Dev/Test products aren't shown in the following table. Transfers for Dev/Test products are handled in the same way as other product types. For example, an EA Dev/Test product transfer is handled in the same way as an EA product transfer.
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| Source (current) product agreement type | Destination (future) product agreement type | Notes | | | | |
-| EA | MOSP (PAYG) | <ul><li>Transfer from an EA enrollment to a MOSP subscription requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li> Reservations don't automatically transfer and transferring them isn't supported. |
-| EA | MCA - individual | <ul><li>For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
-| EA | EA | <ul><li>Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li> Self-service reservation transfers are supported. <li> Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
-| EA | MCA - Enterprise | <ul><li> Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md). <li> If you want to transfer specific products, not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
-| EA | MPA | <ul><li> Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program. <li> There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-subscriptions-to-a-csp-partner). |
-| MCA - individual | MOSP (PAYG) | <ul><li> For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md). <li> Reservations don't automatically transfer and transferring them isn't supported. |
-| MCA - individual | MCA - individual | <ul><li>For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
-| MCA - individual | EA | <ul><li> For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea). <li> Self-service reservation transfers are supported. |
-| MCA - individual | MCA - Enterprise | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<li> Self-service reservation transfers are supported. |
-| MCA - Enterprise | MOSP | <ul><li> Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li> Reservations don't automatically transfer and transferring them isn't supported. |
-| MCA - Enterprise | MCA - individual | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
-| MCA - Enterprise | MCA - Enterprise | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
-| MCA - Enterprise | MPA | <ul><li> Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program. <li> Self-service reservation transfers are supported. <li> There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-subscriptions-to-a-csp-partner). |
-| Previous Azure offer in CSP | Previous Azure offer in CSP | <ul><li> Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li> Reservations don't automatically transfer and transferring them isn't supported. |
+| EA | MOSP (PAYG) | • Transfer from an EA enrollment to a MOSP subscription requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. |
+| EA | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
+| EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Self-service reservation transfers are supported.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
+| EA | MCA - Enterprise | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products, not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
+| EA | MPA | • Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br> • Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to MPA isn't supported.<br><br> • There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-subscriptions-to-a-csp-partner). |
+| MCA - individual | MOSP (PAYG) | • For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. |
+| MCA - individual | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
+| MCA - individual | EA | • For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • Self-service reservation transfers are supported. |
+| MCA - individual | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
+| MCA - Enterprise | MOSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. |
+| MCA - Enterprise | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
+| MCA - Enterprise | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
+| MCA - Enterprise | MPA | • Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Self-service reservation transfers are supported.<br><br> • There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-subscriptions-to-a-csp-partner). |
+| Previous Azure offer in CSP | Previous Azure offer in CSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. |
| Previous Azure offer in CSP | MPA | For details, see [Transfer a customer's Azure subscriptions to a different CSP (under an Azure plan)](/partner-center/transfer-azure-subscriptions-under-azure-plan). |
-| MPA | EA | <ul><li> Automatic transfer isn't supported. Any transfer requires resources to move from the existing MPA product manually to a newly created or an existing EA product. <li> Use the information in the [Perform resource transfers](#perform-resource-transfers) section. <li> Reservations don't automatically transfer and transferring them isn't supported. |
-| MPA | MPA | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
-| MOSP (PAYG) | MOSP (PAYG) | <ul><li> If you're changing the billing owner of the subscription, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md). <li> Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. |
-| MOSP (PAYG) | MCA - individual | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
-| MOSP (PAYG) | EA | <ul><li>If you're transferring the subscription to the EA enrollment, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea). <li> If you're changing billing ownership, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
-| MOSP (PAYG) | MCA - Enterprise | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
+| MPA | EA | • Automatic transfer isn't supported. Any transfer requires resources to move from the existing MPA product manually to a newly created or an existing EA product.<br><br> • Use the information in the [Perform resource transfers](#perform-resource-transfers) section.<br><br> • Reservations don't automatically transfer and transferring them isn't supported. |
+| MPA | MPA | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
+| MOSP (PAYG) | MOSP (PAYG) | • If you're changing the billing owner of the subscription, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations don't automatically transfer, so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. |
+| MOSP (PAYG) | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
+| MOSP (PAYG) | EA | • If you're transferring the subscription to the EA enrollment, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • If you're changing billing ownership, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
+| MOSP (PAYG) | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
## Perform resource transfers
data-factory Concepts Data Flow Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-udf.md
Last updated 06/10/2022
-# User defined functions (Preview) in mapping data flow
+# User defined functions in mapping data flow
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
A user defined function is a customized expression you can define to be able to
Whenever you find yourself building the same logic in an expression across multiple mapping data flows, that's a good opportunity to turn it into a user defined function.
-> [!IMPORTANT]
-> User defined functions and mapping data flow libraries are currently in public preview.
- > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4Zkek] >
data-factory Connector Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-spark.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Spark connector is supported for the following activities:
+This Spark connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|---|---|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Spark to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The service provides a built-in driver to enable connectivity; therefore, you don't need to manually install any driver when using this connector.
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
This article outlines how to use the copy activity in Azure Data Factory and Azu
## Supported capabilities
-This SQL Server connector is supported for the following activities:
+This SQL Server connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Mapping data flow](concepts-data-flow-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)-- [GetMetadata activity](control-flow-get-metadata-activity.md)
+| Supported capabilities|IR |
+|---|---|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|
+|[Script activity](transform-data-using-script.md)|&#9312; &#9313;|
+|[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|
-You can copy data from a SQL Server database to any supported sink data store. Or, you can copy data from any supported source data store to a SQL Server database. For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this SQL Server connector supports:
data-factory Connector Sybase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sybase.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Sybase connector is supported for the following activities:
+This Sybase connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|---|---|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-You can copy data from Sybase database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this Sybase connector supports:
data-factory Connector Teradata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teradata.md
This article outlines how to use the copy activity in Azure Data Factory and Syn
## Supported capabilities
-This Teradata connector is supported for the following activities:
+This Teradata connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|---|---|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Teradata Vantage to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this Teradata connector supports:
data-factory Connector Vertica https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-vertica.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Vertica connector is supported for the following activities:
+This Vertica connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|---|---|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Vertica to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The service provides a built-in driver to enable connectivity; therefore, you don't need to manually install any driver when using this connector.
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 07/07/2022 Last updated : 07/18/2022 # What's new in Microsoft Defender for IoT?
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates | ||| |**Enterprise IoT networks** | - [Enterprise IoT purchase experience and Defender for Endpoint integration in GA](#enterprise-iot-purchase-experience-and-defender-for-endpoint-integration-in-ga) |
-|**OT networks** |**Sensor software version 22.2.3**:<br><br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br><br>To update to version 22.2.3:<br>- From version 22.1.x, update directly to version 22.2.3<br>- From version 10.x, first update to version 21.1.6, and then update again to 22.2.3<br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
+|**OT networks** |**Sensor software version 22.2.3**:<br><br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>To update to version 22.2.3:<br>- From version 22.1.x, update directly to version 22.2.3<br>- From version 10.x, first update to version 21.1.6, and then update again to 22.2.3<br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) | ### Enterprise IoT purchase experience and Defender for Endpoint integration in GA
For more information, see:
- [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support) - [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview) -- ### Improved security for uploading protocol plugins This version of the sensor provides improved security for uploading proprietary plugins you've created using the Horizon SDK.
This version of the sensor provides an improved security for uploading proprieta
For more information, see [Manage proprietary protocols with Horizon plugins](resources-manage-proprietary-protocols.md).
+### Sensor names shown in browser tabs
+
+Starting in sensor version 22.2.3, your sensor's name is displayed in the browser tab, making it easier for you to identify the sensors you're working with.
+
+For example:
++ ### Microsoft Sentinel incident synch with Defender for IoT alerts The **IoT OT Threat Monitoring with Defender for IoT** solution now ensures that alerts in Defender for IoT are updated with any related incident **Status** changes from Microsoft Sentinel.
digital-twins Concepts Route Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-route-events.md
# Mandatory fields. Title: Endpoints and event routes
-description: Learn how to route events within Azure Digital Twins and to other Azure Services.
+description: Learn how to route Azure Digital Twins events, both within the service and externally to other Azure services.
Previously updated : 03/01/2022 Last updated : 07/18/2022 + # Optional fields. Don't forget to remove # if you need a field. #
#
-# Route events within and outside of Azure Digital Twins
+# Route Azure Digital Twins events
-This article covers *event routes* and how Azure Digital Twins uses them to send data internally and to consumers outside the service.
+Azure Digital Twins uses *event routes* to send event data, both for routing events internally within Azure Digital Twins, and for sending event data externally to consumers outside the service.
-There are two major cases for sending Azure Digital Twins data:
-* Sending data from one twin in the Azure Digital Twins graph to another. For instance, when a property on one digital twin changes, you may want to notify and update another digital twin based on the updated data.
-* Sending data to downstream data services for more storage or processing (also known as *data egress*). For instance, a business that is already using [Azure Maps](../azure-maps/about-azure-maps.md) may want to use Azure Digital Twins to enhance their solution. They can quickly enable an Azure Map after setting up Azure Digital Twins, bring Azure Map entities into Azure Digital Twins as [digital twins](concepts-twins-graph.md) in the twin graph, or run powerful queries using their Azure Maps and Azure Digital Twins data together.
-
-Event routes are used for both of these scenarios.
+This article describes how event routes work, including the process of setting up *endpoints*, and then setting up event routes connected to the endpoints. It also explains what happens when an endpoint fails to deliver an event in time (a process known as *dead lettering*).
## About event routes
-An event route lets you send event data from digital twins in Azure Digital Twins to custom-defined endpoints in your subscriptions. Three Azure services are currently supported for endpoints: [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md). Each of these Azure services can be connected to other services and acts as the middleman, sending data along to final destinations such as Time Series Insights or Azure Maps for whatever processing you need.
+There are two main scenarios for sending Azure Digital Twins data, and event routes are used to accomplish both:
+* Sending event data from one twin in the Azure Digital Twins graph to another. For instance, when a property on one digital twin changes, you may want to notify and update another digital twin based on the updated data.
+* Sending data outside Azure Digital Twins to downstream data services for more storage or processing. For instance, if you're already using [Azure Maps](../azure-maps/about-azure-maps.md), you might want to contribute Azure Digital Twins data to enhance your solution with integrated modeling or queries.
-Azure Digital Twins implements *at least once* delivery for data emitted to egress services.
+For any event destination, an event route works by sending event data from Azure Digital Twins to custom-defined *endpoints* in your subscriptions. Three Azure services are currently supported for endpoints: [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md). Each of these Azure services can be connected to other services and acts as the middleman, sending data along to final destinations such as Azure Maps, or back into Azure Digital Twins for dependent graph updates.
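+As a rough illustration of the endpoint-plus-filter shape described above, the sketch below builds the definition an event route carries: the name of a target endpoint and a filter expression ("true" routes every event). The endpoint name and filter string here are illustrative assumptions, not values from this article:

```python
import json

def build_event_route(endpoint_name, filter_expression="true"):
    """Build an event-route definition: the target endpoint plus a filter.
    The default filter "true" matches every event; a filter expression can
    narrow the route to specific event types."""
    return {"endpointName": endpoint_name, "filter": filter_expression}

# Hypothetical endpoint and filter, for illustration only
route = build_event_route(
    "my-eventhub-endpoint",
    "type = 'Microsoft.DigitalTwins.Twin.Update'",
)
print(json.dumps(route))
```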
-The following diagram illustrates the flow of event data through a larger IoT solution with an Azure Digital Twins aspect:
+The following diagram illustrates the flow of event data through a larger IoT solution, which includes sending Azure Digital Twins data through endpoints to other Azure Services, as well as back into Azure Digital Twins:
:::image type="content" source="media/concepts-route-events/routing-workflow.png" alt-text="Diagram of Azure Digital Twins routing data through endpoints to several downstream services." border="false":::
-Typical downstream targets for event routes are resources like Time Series Insights, Azure Maps, storage, and analytics solutions.
+For egress of data outside Azure Digital Twins, typical downstream targets for event routes are Time Series Insights, Azure Maps, storage, and analytics solutions. Azure Digital Twins implements *at least once* delivery for data emitted to egress services.
-### Event routes for internal digital twin events
+For routing of internal digital twin events within the same Azure Digital Twins solution, continue to the next section.
-Event routes are also used to handle events within the twin graph and send data from digital twin to digital twin. This sort of event handling is done by connecting event routes through Event Grid to compute resources, such as [Azure Functions](../azure-functions/functions-overview.md). These functions then define how twins should receive and respond to events.
+### Route internal digital twin events
-When a compute resource wants to modify the twin graph based on an event that it received via event route, it's helpful for it to know which twin it wants to modify ahead of time.
+Event routes are the mechanism that's used for handling events within the twin graph, sending data from digital twin to digital twin. This sort of event handling is done by connecting event routes through Event Grid to compute resources, such as [Azure Functions](../azure-functions/functions-overview.md). These functions then define how twins should receive and respond to events.
-The event message also contains the ID of the source twin that sent the message, so the compute resource can use queries or traverse relationships to find a target twin for the desired operation.
+When a compute resource wants to modify the twin graph based on an event that it received via event route, it's helpful for it to know ahead of time which twin it should modify. The event message also contains the ID of the source twin that sent the message, so the compute resource can use queries or traverse relationships to find a target twin for the desired operation.
The compute resource also needs to establish security and access permissions independently.
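A minimal sketch of the twin-to-twin pattern just described, assuming a change-notification payload that carries the source twin ID and a JSON Patch of the changed properties (the payload shape and names here are illustrative; real code would use the Azure Digital Twins SDK to query for the target twin and apply the patch):

```python
def mirror_patch(event):
    """Given a twin change notification, return the source twin ID and the
    JSON Patch a compute resource could re-apply to a dependent target twin."""
    source_twin = event["subject"]  # ID of the twin that changed
    patch = [
        {"op": "replace", "path": p["path"], "value": p["value"]}
        for p in event["data"]["patch"]
        if p["op"] == "replace"
    ]
    return source_twin, patch

# Illustrative event payload, not a real notification
event = {
    "subject": "thermostat67",
    "data": {"patch": [{"op": "replace", "path": "/Temperature", "value": 70}]},
}
src, patch = mirror_patch(event)
print(src, patch)
```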
Different types of events in IoT Hub and Azure Digital Twins produce different t
## Next steps
-See how to set up and manage an event route:
+Continue to the step-by-step instructions for setting up endpoints and event routes:
* [Manage endpoints and routes](how-to-manage-routes.md)
-Or, see how to use Azure Functions to route events within Azure Digital Twins:
+Or, follow this walkthrough to set up an Azure Function for twin-to-twin event handling within Azure Digital Twins:
* [Set up twin-to-twin event handling](how-to-send-twin-to-twin-events.md).
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/system-topics.md
You can create a system topic in two ways:
When you use the Azure portal, you're always using this method. When you create an event subscription using the [**Events** page of an Azure resource](blob-event-quickstart-portal.md#subscribe-to-the-blob-storage), the system topic is created first and then the subscription for the topic is created. You can explicitly create a system topic first by using the [**Event Grid System Topics** page](create-view-manage-system-topics.md#create-a-system-topic) and then create a subscription for that topic.
-When you use [CLI](create-view-manage-system-topics-cli.md), [REST](/rest/api/eventgrid/controlplane-version2021-12-01/event-subscriptions/create-or-update), or [Azure Resource Manager template](create-view-manage-system-topics-arm.md), you can choose either of the above methods. We recommend that you create a system topic first and then create a subscription on the topic, as it's the latest way of creating system topics.
+When you use [CLI](create-view-manage-system-topics-cli.md), [REST](/rest/api/eventgrid/controlplane-version2022-06-15/event-subscriptions/create-or-update), or [Azure Resource Manager template](create-view-manage-system-topics-arm.md), you can choose either of the above methods. We recommend that you create a system topic first and then create a subscription on the topic, as it's the latest way of creating system topics.
### Failure to create system topics The system topic creation fails if you have set up Azure policies in such a way that the Event Grid service can't create it. For example, you may have a policy that allows creation of only certain types of resources (for example: Azure Storage, Azure Event Hubs, and so on.) in the subscription.
event-hubs Resource Governance With App Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-with-app-groups.md
az eventhubs namespace application-group create --namespace-name mynamespace \
To learn more about the CLI command, see [`az eventhubs namespace application-group create`](/cli/azure/eventhubs/namespace/application-group#az-eventhubs-namespace-application-group-create). ### [Azure PowerShell](#tab/powershell)
-Use the PowerShell command: [`New-AzEventHubApplicationGroup`](//powershell/module/az.eventhub/new-azeventhubapplicationgroup) to create an application group in an Event Hubs namespace.
+Use the PowerShell command: [`New-AzEventHubApplicationGroup`](/powershell/module/az.eventhub/new-azeventhubapplicationgroup) to create an application group in an Event Hubs namespace.
The following example uses the [`New-AzEventHubThrottlingPolicyConfig`](/powershell/module/az.eventhub/new-azeventhubthrottlingpolicyconfig) to create two policies that will be associated with the application.
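As context for the commands above, a throttling policy pairs a metric with a rate limit. The sketch below builds that shape as plain data; the field names follow my understanding of the Event Hubs application-group schema and should be treated as an assumption, not a definitive contract:

```python
def throttling_policy(name, metric_id, threshold):
    """Illustrative application-group throttling policy: cap the named
    metric (for example, IncomingMessages) at the given threshold."""
    return {
        "name": name,
        "type": "ThrottlingPolicy",
        "metricId": metric_id,        # assumed field name
        "rateLimitThreshold": threshold,
    }

# Two hypothetical policies for one application group
policies = [
    throttling_policy("cap-incoming", "IncomingMessages", 10000),
    throttling_policy("cap-bytes", "IncomingBytes", 1048576),
]
print(policies[0]["metricId"])
```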
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
Title: 'Tutorial: Link a VNet to a ExpressRoute circuit - Azure PowerShell'
+ Title: 'Tutorial: Link a VNet to an ExpressRoute circuit - Azure PowerShell'
description: This tutorial provides an overview of how to link virtual networks (VNets) to ExpressRoute circuits by using the Resource Manager deployment model and Azure PowerShell. Previously updated : 12/03/2021 Last updated : 07/18/2022 --+
-# Tutorial: Connect a virtual network to an ExpressRoute circuit
+# Tutorial: Connect a virtual network to an ExpressRoute circuit using Azure PowerShell
+ > [!div class="op_single_selector"] > * [Azure portal](expressroute-howto-linkvnet-portal-resource-manager.md) > * [PowerShell](expressroute-howto-linkvnet-arm.md) > * [Azure CLI](howto-linkvnet-cli.md)
-> * [Video - Azure portal](https://azure.microsoft.com/documentation/videos/azure-expressroute-how-to-create-a-connection-between-your-vpn-gateway-and-expressroute-circuit)
> * [PowerShell (classic)](expressroute-howto-linkvnet-classic.md) >
-This article helps you link virtual networks (VNets) to Azure ExpressRoute circuits by using the Resource Manager deployment model and PowerShell. Virtual networks can either be in the same subscription or part of another subscription. This article also shows you how to update a virtual network link.
-
-* You can link up to 10 virtual networks to a standard ExpressRoute circuit. All virtual networks must be in the same geopolitical region when using a standard ExpressRoute circuit.
-
-* A single VNet can be linked to up to 16 ExpressRoute circuits. Use the steps in this article to create a new connection object for each ExpressRoute circuit you're connecting to. The ExpressRoute circuits can be in the same subscription, different subscriptions, or a mix of both.
-
-* If you enable the ExpressRoute premium add-on, you can link virtual networks outside of the geopolitical region of the ExpressRoute circuit. The premium add-on will also allow you to connect more than 10 virtual networks to your ExpressRoute circuit depending on the bandwidth chosen. Check the [FAQ](expressroute-faqs.md) for more details on the premium add-on.
-
-* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add additional address spaces, up to 1,000, to the local or peered virtual networks.
+This tutorial helps you link virtual networks (VNets) to Azure ExpressRoute circuits by using the Resource Manager deployment model and PowerShell. Virtual networks can either be in the same subscription or part of another subscription. This tutorial also shows you how to update a virtual network link.
In this tutorial, you learn how to: > [!div class="checklist"]
In this tutorial, you learn how to:
* Ensure that Azure private peering gets configured and establishes BGP peering between your network and Microsoft for end-to-end connectivity. * Ensure that you have a virtual network and a virtual network gateway created and fully provisioned. Follow the instructions to [create a virtual network gateway for ExpressRoute](expressroute-howto-add-gateway-resource-manager.md). A virtual network gateway for ExpressRoute uses the GatewayType 'ExpressRoute', not VPN.
+* You can link up to 10 virtual networks to a standard ExpressRoute circuit. All virtual networks must be in the same geopolitical region when using a standard ExpressRoute circuit.
+
+* A single VNet can be linked to up to 16 ExpressRoute circuits. Use the steps in this article to create a new connection object for each ExpressRoute circuit you're connecting to. The ExpressRoute circuits can be in the same subscription, different subscriptions, or a mix of both.
+
+* If you enable the ExpressRoute premium add-on, you can link virtual networks outside of the geopolitical region of the ExpressRoute circuit. The premium add-on will also allow you to connect more than 10 virtual networks to your ExpressRoute circuit depending on the bandwidth chosen. Check the [FAQ](expressroute-faqs.md) for more details on the premium add-on.
+
+* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add additional address spaces, up to 1,000, to the local or peered virtual networks.
+
+* Review guidance for [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
+ ### Working with Azure PowerShell [!INCLUDE [updated-for-az](../../includes/hybrid-az-ps.md)]
$connection.ExpressRouteGatewayBypass = $True
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection ```
-### FastPath and Private Link for 100Gbps ExpressRoute Direct
+### FastPath and Private Link for 100 Gbps ExpressRoute Direct
-With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypassess the ExpressRoute virtual network gateway in the data path. This is Generally Available for connections associated to 100Gb ExpressRoute Direct circuits. To enable this, follow the below guidance:
+With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This capability is generally available for connections associated with 100 Gb ExpressRoute Direct circuits. To enable it, follow this guidance:
1. Send an email to **ERFastPathPL@microsoft.com**, providing the following information: * Azure Subscription ID
-* Virtual Network (Vnet) Resource ID
+* Virtual Network (VNet) Resource ID
* Azure Region where the Private Endpoint/Private Link service is deployed 2. Once you receive a confirmation from Step 1, run the following Azure PowerShell command in the target Azure subscription. ```azurepowershell-interactive Register-AzProviderFeature -FeatureName ExpressRoutePrivateEndpointGatewayBypass -ProviderNamespace Microsoft.Network ```
-3. Disable and Enable FastPath on the target connection(s) to enables the changes. Once this step is complete. 100Gb Private Link traffic over ExpressRoute will bypass the ExpressRoute Virtual Network Gateway in the data path.
+3. Disable and enable FastPath on the target connection(s) to apply the changes. Once this step is complete, 100 Gb Private Link traffic over ExpressRoute will bypass the ExpressRoute Virtual Network Gateway in the data path.
> [!NOTE]
Register-AzProviderFeature -FeatureName ExpressRoutePrivateEndpointGatewayBypass
## Enroll in ExpressRoute FastPath features (preview)
-FastPath support for virtual network peering is now in Public preview, both IPv4 and IPv6 scenarios are supported. IPv4 FastPath and Vnet peering can be enabled on connections associated to both ExpressRoute Direct and ExpressRoute Partner circuits. IPv6 FastPath and Vnet peering support is limited to connections associated to ExpressRoute Direct.
+FastPath support for virtual network peering is now in public preview; both IPv4 and IPv6 scenarios are supported. IPv4 FastPath and VNet peering can be enabled on connections associated with both ExpressRoute Direct and ExpressRoute Partner circuits. IPv6 FastPath support for VNet peering is limited to connections associated with ExpressRoute Direct.
### FastPath virtual network peering and user defined routes (UDRs)
With FastPath and UDR, you can configure a UDR on the GatewaySubnet to direct Ex
> The previews for virtual network peering and user defined routes (UDRs) are offered together. You cannot enable only one scenario. >
-To enroll in these previews, run the follow Azure PowerShell command in the target Azure subscription:
+To enroll in these previews, run the following Azure PowerShell command in the target Azure subscription:
```azurepowershell-interactive Register-AzProviderFeature -FeatureName ExpressRouteVnetPeeringGatewayBypass -ProviderNamespace Microsoft.Network ```
-### FastPath and Private Link for 10Gbps ExpressRoute Direct
+### FastPath and Private Link for 10 Gbps ExpressRoute Direct
-With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypassess the ExpressRoute virtual network gateway in the data path. This preview supports connections associated to 10Gbps ExpressRoute Direct circuits. This preview doesn't support ExpressRoute circuits managed by an ExpressRoute partner.
+With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This preview supports connections associated with 10 Gbps ExpressRoute Direct circuits. This preview doesn't support ExpressRoute circuits managed by an ExpressRoute partner.
To enroll in this preview, run the following Azure PowerShell command in the target Azure subscription:
Remove-AzVirtualNetworkGatewayConnection "MyConnection" -ResourceGroupName "MyRG
``` ## Next steps
-For more information about ExpressRoute, see the ExpressRoute FAQ.
+
+In this tutorial, you learned how to connect a virtual network to a circuit in the same subscription and in a different subscription. For more information about ExpressRoute gateways, see: [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md).
+
+To learn how to configure route filters for Microsoft peering using PowerShell, advance to the next tutorial.
> [!div class="nextstepaction"]
-> [ExpressRoute FAQ](expressroute-faqs.md)
+> [Configure route filters for Microsoft peering](how-to-routefilter-powershell.md)
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
Previously updated : 07/15/2022 Last updated : 07/18/2022
-# Tutorial: Connect a virtual network to an ExpressRoute circuit using the portal
+# Tutorial: Connect a virtual network to an ExpressRoute circuit using the Azure portal
> [!div class="op_single_selector"] > * [Azure portal](expressroute-howto-linkvnet-portal-resource-manager.md) > * [PowerShell](expressroute-howto-linkvnet-arm.md) > * [Azure CLI](howto-linkvnet-cli.md)
-> * [Video - Azure portal](https://azure.microsoft.com/documentation/videos/azure-expressroute-how-to-create-a-connection-between-your-vpn-gateway-and-expressroute-circuit)
> * [PowerShell (classic)](expressroute-howto-linkvnet-classic.md) >
-This tutorial helps you create a connection to link a virtual network to an Azure ExpressRoute circuit using the Azure portal. The virtual networks that you connect to your Azure ExpressRoute circuit can either be in the same subscription or be part of another subscription.
+This tutorial helps you create a connection to link a virtual network (VNet) to an Azure ExpressRoute circuit using the Azure portal. The virtual networks that you connect to your Azure ExpressRoute circuit can either be in the same subscription or part of another subscription.
In this tutorial, you learn how to: > [!div class="checklist"] > - Connect a virtual network to a circuit in the same subscription. > - Connect a virtual network to a circuit in a different subscription.
+> - Configure ExpressRoute FastPath.
> - Delete the link between the virtual network and ExpressRoute circuit. ## Prerequisites
You can delete a connection and unlink your VNet to an ExpressRoute circuit by s
## Next steps
-In this tutorial, you learned how to connect a virtual network to a circuit in the same subscription and a different subscription. For more information about ExpressRoute gateways, see: [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md).
+In this tutorial, you learned how to connect a virtual network to a circuit in the same subscription and in a different subscription. For more information about ExpressRoute gateways, see: [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md).
To learn how to configure route filters for Microsoft peering using the Azure portal, advance to the next tutorial.
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
Previously updated : 07/08/2022 Last updated : 07/15/2022
By default, the new resource specific tables are disabled. Open a support ticket
In addition, when setting up your log analytics workspace, you must select whether you want to work with the AzureDiagnostics table (default) or with Resource Specific Tables.
-Additional KQL log queries were added (as seen in the following screenshot) to query structured firewall logs.
-
+Additional KQL log queries were added to query structured firewall logs.
> [!NOTE] > Existing Workbooks and any Sentinel integration will be adjusted to support the new structured logs when **Resource Specific** mode is selected.
+### Policy Analytics (preview)
+
+Policy Analytics provides insights, centralized visibility, and control over Azure Firewall. IT teams today are challenged to keep Firewall rules up to date, manage existing rules, and remove unused rules. Any accidental rule update can lead to significant downtime for IT teams.
+
+For large, geographically dispersed organizations, manually managing Firewall rules and policies is a complex and sometimes error-prone process. The new Policy Analytics feature is the answer to this common challenge faced by IT teams.
+
+You can now refine and update Firewall rules and policies with confidence in just a few steps in the Azure portal. You have granular control to define your own custom rules for an enhanced security and compliance posture. You can automate rule and policy management to reduce the risks associated with a manual process.
+
+#### Pricing
+
+Enabling Policy Analytics on a Firewall Policy associated with a single firewall is billed per policy as described on the [Azure Firewall Manager pricing](https://azure.microsoft.com/pricing/details/firewall-manager/) page. Enabling Policy Analytics on a Firewall Policy associated with more than one firewall is offered at no additional cost.
+
+#### Key Policy Analytics features
+
+- **Policy insight panel**: Aggregates insights and highlights relevant policy information.
+- **Rule analytics**: Analyzes existing DNAT, Network, and Application rules to identify rules with low utilization in a specific time window.
+- **Traffic flow analysis**: Maps traffic flow to rules by identifying top traffic flows and enabling an integrated experience.
+- **Single Rule analysis**: Analyzes a single rule to learn what traffic hits it, so you can refine the access it provides and improve your overall security posture.
+
+### Prerequisites
+
+- An Azure Firewall Standard or Premium
+- An Azure Firewall Standard or Premium policy attached to the Firewall
+- The [network rule name logging preview feature](#network-rule-name-logging-preview) must be enabled to view network rules analysis
+- The [structured firewall logs feature](#structured-firewall-logs-preview) must be enabled on Firewall Standard or Premium
++
+### Enable Policy Analytics
+
+#### Firewall with no Azure Diagnostics settings configured
++
+1. Once all prerequisites are met, select **Policy analytics (preview)** in the table of contents.
+2. Next, select **Configure Workspaces**.
+3. In the pane that opens, select the **Enable Policy Analytics** checkbox.
+4. Next, choose a log analytics workspace. The log analytics workspace should be the same as the Firewall attached to the policy.
+5. Select **Save** after you choose the log analytics workspace.
+6. Go to the Firewall attached to the policy and enter the **Diagnostic settings** page. You'll see the **FirewallPolicySetting** added there as part of the policy analytics feature.
+7. Select **Edit Setting**, and ensure that the **Resource specific** toggle is selected and that the resource-specific tables are checked. In this configuration, all logs are written to the log analytics workspace.
+
+#### Firewall with Azure Diagnostics settings already configured
+
+1. Ensure that the Firewall attached to the policy is connected to **Resource Specific** tables, and that the following three tables are enabled:
+ - AZFWApplicationRuleAggregation
+ - AZFWNetworkRuleAggregation
+ - AZFWNatRuleAggregation
+2. Next, select **Policy Analytics (preview)** in the table of contents. Once inside the feature, select **Configure Workspaces**.
+3. Now, select **Enable Policy Analytics**.
+4. Next, choose a log analytics workspace. The log analytics workspace should be the same as the Firewall attached to the policy.
+5. Select **Save** after you choose the log analytics workspace.
+
+ During the save process, you might see the following error message: **Failed to update Diagnostic Settings**
+
+ You can disregard this error message if the policy was successfully updated.
+ ## Next steps To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md).
frontdoor Front Door Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-diagnostics.md
If the value is false, then it means the request is responded from origin shield
| Routing rule with caching enabled. Cache misses at edge POP but cache hit at parent cache POP | 2 | 1. Edge POP code</br>2. Parent cache POP code | 1. Parent cache POP hostname</br>2. Empty | 1. True</br>2. False | 1. MISS</br>2. HIT | | Routing rule with caching enabled. Caches miss at edge POP but PARTIAL cache hit at parent cache POP | 2 | 1. Edge POP code</br>2. Parent cache POP code | 1. Parent cache POP hostname</br>2. Backend that helps populate cache | 1. True</br>2. False | 1. MISS</br>2. PARTIAL_HIT | | Routing rule with caching enabled. Cache PARTIAL_HIT at edge POP but cache hit at parent cache POP | 2 | 1. Edge POP code</br>2. Parent cache POP code | 1. Edge POP code</br>2. Parent cache POP code | 1. True</br>2. False | 1. PARTIAL_HIT</br>2. HIT |
-| Routing rule with caching enabled. Cache misses at both edge and parent cache POPP | 2 | 1. Edge POP code</br>2. Parent cache POP code | 1. Edge POP code</br>2. Parent cache POP code | 1. True</br>2. False | 1. MISS</br>2. MISS |
+| Routing rule with caching enabled. Cache misses at both edge and parent cache POP | 2 | 1. Edge POP code</br>2. Parent cache POP code | 1. Edge POP code</br>2. Parent cache POP code | 1. True</br>2. False | 1. MISS</br>2. MISS |
+| Error processing the request | | | | | N/A |
> [!NOTE] > For caching scenarios, the value for Cache Status will be partial_hit when some of the bytes for a request get served from the Azure Front Door edge or origin shield cache while some of the bytes get served from the origin for large objects.
hdinsight Find Host Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/find-host-name.md
Title: How to get the host names of Azure HDInsight cluster nodes
description: Learn about how to get host names and FQDN name of Azure HDInsight cluster nodes. -- Previously updated : 03/23/2021++ Last updated : 07/18/2022 # Find the host names of cluster nodes
$resp = Invoke-WebRequest -Uri "https://$clusterName.azurehdinsight.net/api/v1/c
-Credential $creds -UseBasicParsing $respObj = ConvertFrom-Json $resp.Content $respObj.items.Hosts.host_name
-```
+```
hdinsight Apache Hadoop Linux Tutorial Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started-bicep.md
Title: 'Quickstart: Create Apache Hadoop cluster in Azure HDInsight using Bicep' description: In this quickstart, you create Apache Hadoop cluster in Azure HDInsight using Bicep--++ Previously updated : 04/14/2022 Last updated : 07/18/2022 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Bicep
hdinsight Apache Hadoop On Premises Migration Best Practices Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-architecture.md
Previously updated : 05/27/2019 Last updated : 07/18/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - architecture best practices
hdinsight Apache Hadoop Use Sqoop Mac Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-sqoop-mac-linux.md
description: Learn how to use Apache Sqoop to import and export between Apache H
Previously updated : 11/28/2019 Last updated : 07/18/2022 # Use Apache Sqoop to import and export data between Apache Hadoop on HDInsight and Azure SQL Database
Now you've learned how to use Sqoop. To learn more, see:
* [Use Apache Oozie with HDInsight](../hdinsight-use-oozie-linux-mac.md): Use Sqoop action in an Oozie workflow. * [Analyze flight delay data using HDInsight](../interactive-query/interactive-query-tutorial-analyze-flight-data.md): Use Interactive Query to analyze flight delay data, and then use Sqoop to export data to a database in Azure.
-* [Upload data to HDInsight](../hdinsight-upload-data.md): Find other methods for uploading data to HDInsight/Azure Blob storage.
+* [Upload data to HDInsight](../hdinsight-upload-data.md): Find other methods for uploading data to HDInsight/Azure Blob storage.
hdinsight Apache Hbase Accelerated Writes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-accelerated-writes.md
Title: Azure HDInsight Accelerated Writes for Apache HBase
description: Gives an overview of the Azure HDInsight Accelerated Writes feature, which uses premium managed disks to improve performance of the Apache HBase Write Ahead Log. Previously updated : 01/24/2020 Last updated : 07/18/2022 # Azure HDInsight Accelerated Writes for Apache HBase
hdinsight Hdinsight Apache Spark With Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apache-spark-with-kafka.md
description: Learn how to use Apache Spark to stream data into or out of Apache
Previously updated : 11/21/2019 Last updated : 07/18/2022 # Apache Spark streaming (DStream) example with Apache Kafka on HDInsight
hdinsight Hdinsight Retired Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-retired-versions.md
Title: Azure HDInsight retired versions
description: Learn about retired versions in Azure HDInsight. Previously updated : 02/08/2021 Last updated : 07/18/2022 # Retired HDInsight versions
hdinsight Hdinsight Ubuntu 1804 Qa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-ubuntu-1804-qa.md
Title: Azure HDInsight Ubuntu 18.04 update
description: Learn about Azure HDInsight Ubuntu 18.04 OS changes. -- Previously updated : 05/25/2021++ Last updated : 07/18/2022 # HDInsight Ubuntu 18.04 OS update
hdinsight Apache Hive Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-migrate-workloads.md
Title: Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0 description: Learn how to migrate Apache Hive workloads on HDInsight 3.6 to HDInsight 4.0.--++ Previously updated : 11/4/2020 Last updated : 07/18/2022 # Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0
hdinsight Apache Hive Warehouse Connector Zeppelin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-zeppelin.md
Title: Hive Warehouse Connector - Apache Zeppelin using Livy - Azure HDInsight description: Learn how to integrate Hive Warehouse Connector with Apache Zeppelin on Azure HDInsight.--++ Previously updated : 05/26/2022 Last updated : 07/18/2022 # Integrate Apache Zeppelin with Hive Warehouse Connector in Azure HDInsight
hdinsight Apache Hive Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector.md
Title: Apache Spark & Hive - Hive Warehouse Connector - Azure HDInsight description: Learn how to integrate Apache Spark and Apache Hive with the Hive Warehouse Connector on Azure HDInsight.--++ Previously updated : 04/01/2022 Last updated : 07/18/2022 # Integrate Apache Spark and Apache Hive with Hive Warehouse Connector in Azure HDInsight
hdinsight Hive Default Metastore Export Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-default-metastore-export-import.md
Title: Migrate default Hive metastore to external metastore on Azure HDInsight description: Migrate default Hive metastore to external metastore on Azure HDInsight---++ Previously updated : 11/4/2020 Last updated : 07/18/2022 # Migrate default Hive metastore DB to external metastore DB
sudo python "$SCRIPT" --query "$QUERY" > $OUTPUT_FILE
* [Migrate workloads from HDInsight 3.6 to 4.0](./apache-hive-migrate-workloads.md) * [Hive Workload Migration across Storage Accounts](./hive-migration-across-storage-accounts.md) * [Connect to Beeline on HDInsight](../hadoop/connect-install-beeline.md)
-* [Troubleshoot Permission Error Create Table](./interactive-query-troubleshoot-permission-error-create-table.md)
+* [Troubleshoot Permission Error Create Table](./interactive-query-troubleshoot-permission-error-create-table.md)
hpc-cache Hpc Cache Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-support-ticket.md
Title: Open a support ticket for Azure HPC Cache
-description: How to open a help request for Azure HPC Cache, including how to create a quota increase request
+description: How to open a help request for Azure HPC Cache, including how to request a quota increase
Previously updated : 07/13/2022 Last updated : 07/18/2022
Navigate to your cache instance, then click the **New support request** link tha
To open a ticket when you do not have an active cache, use the main Help + support page from the Azure portal. Open the portal menu from the control at the top left of the screen, then scroll to the bottom and click **Help + support**.
-> [!TIP]
-> If you need a quota increase, most requests can be handled automatically. Follow the instructions below in [Request a quota increase](#request-a-quota-increase).
+Support requests are also used to request a quota increase. If you want to host more HPC Caches than your subscription currently allows, follow the instructions below in [Request a quota increase](#request-a-quota-increase).
+
+## Request technical support
Choose **Create a support request**. On the support request form, write a summary of the issue, and select **Technical** as the **Issue type**.
iot-central Howto Integrate With Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-integrate-with-devops.md
You need the following prerequisites to complete the steps in this guide:
To get started, fork the IoT Central CI/CD GitHub repository and then clone your fork to your local machine:
-1. To fork the Git Hub repository, open the [IoT Central CI/CD GitHub repository](https://github.com/Azure/iot-central-CICD-sample) and select **Fork**.
+1. To fork the GitHub repository, open the [IoT Central CI/CD GitHub repository](https://github.com/Azure/iot-central-CICD-sample) and select **Fork**.
1. Clone your fork of the repository to your local machine by opening a console or bash window and running the following command.
Now configure the pipeline to push configuration changes to your IoT Central app
``` 1. Select **Save and run**.
-1. The YAML file is saved to your Git Hub repository, so you need to provide a commit message and then select **Save and run** again.
+1. The YAML file is saved to your GitHub repository, so you need to provide a commit message and then select **Save and run** again.
Your pipeline is queued. It may take a few minutes before it runs.
iot-central Howto Map Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-map-data.md
To verify that IoT Central is mapping the telemetry, navigate to **Raw data** vi
If you don't see your mapped data after refreshing the **Raw data** several times, check that the JSONPath expression you're using matches the structure of the telemetry message.
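A quick way to sanity-check an expression offline is to resolve it against a sample payload. The following Python sketch handles only simple bracket-style root paths such as `$["workingSet"]`; the function name and sample message are illustrative assumptions, not part of IoT Central:

```python
import json
import re

def resolve_simple_jsonpath(message: dict, expression: str):
    """Resolve a simple bracket-style JSONPath like $["workingSet"]
    against a telemetry payload. Returns None when the path is absent."""
    keys = re.findall(r'\["([^"]+)"\]', expression)
    node = message
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

telemetry = json.loads('{"workingSet": 74}')
print(resolve_simple_jsonpath(telemetry, '$["workingSet"]'))   # 74
print(resolve_simple_jsonpath(telemetry, '$["temperature"]'))  # None
```

If the lookup returns `None` for a field you expect, the expression doesn't match the shape of the telemetry message.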
-For IoT Edge devices, the data mapping applies to the telemetry from all the IoT Edge modules and hub. You can't apply mappings to a specific IoT edge module.
+For IoT Edge devices, the data mapping applies to the telemetry from all the IoT Edge modules and hub. You can't apply mappings to a specific Azure IoT Edge module.
-For devices assigned to a device template, you can't map data for components or inherited interfaces. However, you can map any data from your device before you assign it to a device template.
+For devices assigned to a device template, you can't map data for components or inherited interfaces. However, you can [map any data from your device before you assign it to a device template](#map-unmodeled-telemetry).
## Manage mappings
To view, edit, or delete mappings, navigate to the **Mapped aliases** page. Sele
By default, data exports from IoT Central include mapped data. To exclude mapped data, use a [data transformation](howto-transform-data-internally.md) in your data export.
+## Map unmodeled telemetry
+
+You can map unmodeled telemetry, including telemetry from unmodeled components. For example, given the `workingSet` telemetry defined in the root component and the `temperature` telemetry defined in a thermostat component shown in the following example:
+
+```json
+{
+ "_unmodeleddata": {
+ "workingSet": 74
+ },
+ "_eventtype": "Telemetry",
+ "_timestamp": "2022-07-18T09:22:40.257Z"
+}
+
+{
+ "_unmodeleddata": {
+ "thermostat2": {
+ "__t": "c",
+ "temperature": 44
+ }
+ },
+ "_eventtype": "Telemetry",
+ "_timestamp": "2022-07-18T09:21:48.69Z"
+}
+```
+
+You can map this telemetry using the following mapping definitions:
+
+* `$["workingSet"] ws`
+* `$["temperature"] temp`
+
+> [!NOTE]
+> Don't include the component name in the mapping definition.
+
+The results of these mapping rules look like the following examples:
+
+```json
+{
+ "telemetries": {
+ "workingSet": 84,
+ "_mappeddata": {
+ "ws": 84
+ }
+ }
+}
+
+{
+ "_unmodeleddata": {
+ "thermostat2": {
+ "__t": "c",
+ "temperature": 12
+ },
+ "_mappeddata": {
+ "thermostat2": {
+ "__t": "c",
+ "temp": 12
+ }
+ }
+ },
+ "_eventtype": "Telemetry",
+ "_timestamp": "2022-07-18T09:31:21.088Z"
+}
+```
+
+Now you can use the mapped aliases to display telemetry on a chart or dashboard. You can also use the mapped aliases when you export telemetry.
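To illustrate how a set of alias definitions produces the `_mappeddata` section shown above, here is a minimal Python sketch for flat (root-level) fields. The `apply_mappings` helper is a hypothetical illustration, not the IoT Central implementation, and it doesn't handle nested components:

```python
def apply_mappings(payload: dict, mappings: dict) -> dict:
    """Return a copy of an unmodeled telemetry payload with a
    _mappeddata section holding the aliased field names.
    `mappings` maps original names to aliases, e.g. {"workingSet": "ws"}."""
    mapped = {alias: payload[name]
              for name, alias in mappings.items()
              if name in payload}
    result = dict(payload)
    if mapped:
        result["_mappeddata"] = mapped
    return result

message = {"workingSet": 84}
print(apply_mappings(message, {"workingSet": "ws", "temperature": "temp"}))
# {'workingSet': 84, '_mappeddata': {'ws': 84}}
```

Fields without a mapping pass through untouched, and an alias whose source field is absent from the payload is simply skipped.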
+ ## Next steps Now that you've learned how to map data for your device, a suggested next step is to learn [How to use data explorer to analyze device data](howto-create-analytics.md).
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
The following table lists the components included in each release starting with
| Release | aziot-edge | edgeHub<br>edgeAgent | aziot-identity-service | | - | - | -- | - |
+| **1.3** | 1.3.0 | 1.3.0 | 1.3.0 |
| **1.2** | 1.2.0<br>1.2.1<br>1.2.3<br>1.2.4<br>1.2.5<br><br>1.2.7 | 1.2.0<br>1.2.1<br>1.2.3<br>1.2.4<br>1.2.5<br>1.2.6<br>1.2.7 | 1.2.0<br>1.2.1<br>1.2.3<br>1.2.4<br>1.2.5<br> | The following table lists the components included in each release up to the 1.1 LTS release. The components listed in this table can be installed or updated individually, and are backwards compatible with older versions.
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
description: This tutorial walks through setting up your development machine and
Previously updated : 07/30/2020 Last updated : 07/18/2022
Once your new solution loads in the Visual Studio Code window, take a moment to
### Set IoT Edge runtime version
-The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets. Currently, the latest stable version is version 1.2. If you're developing modules for devices running the 1.1 long-term support version or the earlier 1.0 version, update the IoT Edge runtime version in Visual Studio Code to match.
+The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets. Currently, the latest stable version is 1.3. If you're developing modules for devices running the 1.1 long-term support version or the earlier 1.0 version, update the IoT Edge runtime version in Visual Studio Code to match.
1. Select **View** > **Command Palette**.
iot-hub-device-update Device Update Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-control-access.md
A combination of roles can be used to provide the right level of access. For exa
## Configuring access for Azure Device Update service principal in the IoT Hub
-Device Update for IoT Hub uses [Automatic Device Management](../iot-hub/iot-hub-automatic-device-management.md) for deployments and uses ADM configs to perform device management operations like updates at scale. In order to enable Device Update to do this, users need to set Contributor access for Azure Device Update Service Principal in the IoT Hub permissions.
+Device Update for IoT Hub communicates with the IoT hub to perform deployments and manage updates at scale. To enable this, users need to grant Contributor access to the Azure Device Update service principal in the IoT Hub permissions.
-Below actions will be blocked (after 9/28/22) if these permissions are not set:
+The following actions will be blocked after 9/28/22 if these permissions are not set:
* Create Deployment * Cancel Deployment * Retry Deployment
key-vault How To Integrate Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/how-to-integrate-certificate-authority.md
Title: Integrating Key Vault with DigiCert certificate authority description: This article describes how to integrate Key Vault with DigiCert certificate authority so you can provision, manage, and deploy certificates for your network. -+ tags: azure-resource-manager Last updated 01/24/2022-+
lighthouse Manage Sentinel Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/manage-sentinel-workspaces.md
If you are managing Microsoft Sentinel resources for multiple customers, you can
## Create cross-tenant workbooks
-[Azure Monitor Workbooks in Microsoft Sentinel](../../sentinel/overview.md#workbooks) help you visualize and monitor data from your connected data sources to gain insights. You can use the built-in workbook templates in Microsoft Sentinel, or create custom workbooks for your scenarios.
+[Azure Monitor workbooks in Microsoft Sentinel](../../sentinel/monitor-your-data.md) help you visualize and monitor data from your connected data sources to gain insights. You can use the built-in workbook templates in Microsoft Sentinel, or create custom workbooks for your scenarios.
You can deploy workbooks in your managing tenant and create at-scale dashboards to monitor and query data across customer tenants. For more information, see [Cross-workspace workbooks](../../sentinel/extend-sentinel-across-workspaces-tenants.md#using-cross-workspace-workbooks).
machine-learning Concept Prebuilt Docker Images Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-prebuilt-docker-images-inference.md
Previously updated : 10/21/2021 Last updated : 07/14/2022 -+ # Prebuilt Docker images for inference Prebuilt Docker container images for inference are used when deploying a model with Azure Machine Learning. The images are prebuilt with popular machine learning frameworks and Python packages. You can also extend the packages to add other packages by using one of the following methods:
-* [Add Python packages](how-to-prebuilt-docker-images-inference-python-extensibility.md).
-* [Use prebuilt inference image as base for a new Dockerfile](how-to-extend-prebuilt-docker-image-inference.md). Using this method, you can install both **Python packages and apt packages**.
-
## Why should I use prebuilt images?

* Reduces model deployment latency.
Prebuilt Docker container images for inference are used when deploying a model w
## List of prebuilt Docker images for inference
+> [!IMPORTANT]
+> The list below includes only the inference Docker images that Azure Machine Learning **currently supports**.
+ [!INCLUDE [list-of-inference-prebuilt-docker-images](../../includes/aml-inference-list-prebuilt-docker-images.md)]
+## How to use the prebuilt inference Docker images
+
+See the [examples in the Azure Machine Learning GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container).
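As a sketch of the "extend with a Dockerfile" approach mentioned at the start of this article, a derived image might look like the following (the base image tag and the extra package are illustrative placeholders, not actual image or package names from the supported list):

```dockerfile
# Placeholder tag -- substitute a real prebuilt inference image from the list above.
FROM mcr.microsoft.com/azureml/example-inference-image:latest

# Layer additional Python packages on top of the prebuilt image.
RUN pip install --no-cache-dir scikit-learn
```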
+ ## Next steps
-* [Add Python packages to prebuilt images](how-to-prebuilt-docker-images-inference-python-extensibility.md).
-* [Use a prebuilt package as a base for a new Dockerfile](how-to-extend-prebuilt-docker-image-inference.md).
+* [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-managed-online-endpoints.md)
+* [Learn more about custom containers](how-to-deploy-custom-container.md)
+* [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online)
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
To access the terminal:
In addition to the steps above, you can also access the terminal from:
-* RStudio (See [Add RStudio]([Create and manage an Azure Machine Learning compute instance]): Select the **Terminal** tab on top left.
+* RStudio (See [Add RStudio](how-to-create-manage-compute-instance.md?tabs=python#setup-rstudio-workbench)): Select the **Terminal** tab on top left.
* Jupyter Lab: Select the **Terminal** tile under the **Other** heading in the Launcher tab. * Jupyter: Select **New>Terminal** on top right in the Files tab. * SSH to the machine, if you enabled SSH access when the compute instance was created.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
To use RStudio open source, set up a custom application as follows:
1. Set up the application to run on **Target port** `8787` - the docker image for RStudio open source listed below needs to run on this Target port. 1. Set up the application to be accessed on **Published port** `8787` - you can configure the application to be accessed on a different Published port if you wish. 1. Point the **Docker image** to `ghcr.io/azure/rocker-rstudio-ml-verse:latest`.
-1. Select **Create** to set up RStudio as a custom application on your compute instance.
+1. Use **Bind mounts** to add access to the files in your default storage account:
+ * Specify **/home/azureuser/cloudfiles** for **Host path**.
+ * Specify **/home/azureuser/cloudfiles** for the **Container path**.
+ * Select **Add** to add this mount. Because the files are mounted, changes you make to them will be available in other compute instances and applications.
+1. Select **Create** to set up RStudio as a custom application on your compute instance.
-
:::image type="content" source="media/how-to-create-manage-compute-instance/rstudio-open-source.png" alt-text="Screenshot shows form to set up RStudio as a custom application" lightbox="media/how-to-create-manage-compute-instance/rstudio-open-source.png"::: ### Setup other custom applications Set up other custom applications on your compute instance by providing the application on a Docker image.
-1. Follow the steps listed above to **Add application** when creating your compute instance.
-1. Select **Custom Application** on the **Application** dropdown.
+1. Follow the steps listed above to **Add application** when creating your compute instance.
+1. Select **Custom Application** on the **Application** dropdown.
1. Configure the **Application name**, the **Target port** you wish to run the application on, the **Published port** you wish to access the application on, and the **Docker image** that contains your application.
1. Optionally, add **Environment variables** and **Bind mounts** you wish to use for your application.
1. Select **Create** to set up the custom application on your compute instance.
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
Previously updated : 05/14/2021 Last updated : 07/14/2022 # Azure Machine Learning inference HTTP server (preview)
There are two ways to use Visual Studio Code (VSCode) and [Python Extension](htt
1. The user starts the AzureML Inference Server in a command line and uses VSCode + Python Extension to attach to the process.
1. The user sets up `launch.json` in VSCode and starts the AzureML Inference Server within VSCode.
+**launch.json**
+```json
+{
+ "name": "Debug score.py",
+ "type": "python",
+ "request": "launch",
+ "module": "azureml_inference_server_http.amlserver",
+ "args": [
+ "--entry_script",
+ "score.py"
+ ]
+}
+```
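For the first approach, attaching to a server already started from the command line, a minimal `launch.json` attach configuration might look like the following sketch (it relies on VSCode's standard process picker; the configuration name is arbitrary):

```json
{
    "name": "Attach to azmlinfsrv",
    "type": "python",
    "request": "attach",
    "processId": "${command:pickProcess}",
    "justMyCode": false
}
```

With `justMyCode` set to `false`, the debugger can step into the server's own code as well as `score.py`.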
+ In both ways, the user can set breakpoints and debug step by step.

## Frequently asked questions
-### Do I need to reload the server when changing the score script?
+### 1. I encountered the following error during server startup:
+
+```bash
+TypeError: register() takes 3 positional arguments but 4 were given
+  File "/var/azureml-server/aml_blueprint.py", line 251, in register
+    super(AMLBlueprint, self).register(app, options, first_registration)
+TypeError: register() takes 3 positional arguments but 4 were given
+```
+
+You have **Flask 2** installed in your Python environment, but you're running a version of the server (< 0.7.0) that doesn't support Flask 2. To resolve this, upgrade to the latest version of the server.
+
+### 2. I encountered an ``ImportError`` or ``ModuleNotFoundError`` on modules ``opencensus``, ``jinja2``, ``MarkupSafe``, or ``click`` during startup like the following:
+
+```bash
+ImportError: cannot import name 'Markup' from 'jinja2'
+```
+
+Older versions (<= 0.4.10) of the server did not pin Flask's dependency to compatible versions. This is fixed in the latest version of the server.
+
+### 3. Do I need to reload the server when changing the score script?
After changing your scoring script (`score.py`), stop the server with `ctrl + c`. Then restart it with `azmlinfsrv --entry_script score.py`.
-### Which OS is supported?
+### 4. Which OS is supported?
The Azure Machine Learning inference server runs on Windows and Linux based operating systems.

## Next steps
-* For more information on creating an entry script and deploying models, see [How to deploy a model using Azure Machine Learning](how-to-deploy-and-where.md).
+* For more information on creating an entry script and deploying models, see [How to deploy a model using Azure Machine Learning](how-to-deploy-managed-online-endpoints.md).
* Learn about [Prebuilt docker images for inference](concept-prebuilt-docker-images-inference.md)
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
Title: Set up authentication
description: Learn how to set up and configure authentication for various resources and workflows in Azure Machine Learning. --+++ Previously updated : 02/02/2022 Last updated : 07/18/2022
The easiest way to create an SP and grant access to your workspace is by using t
## Configure a managed identity > [!IMPORTANT]
-> Managed identity is only supported when using the Azure Machine Learning SDK from an Azure Virtual Machine or with an Azure Machine Learning compute cluster. Using a managed identity with a compute cluster is currently in preview.
+> Managed identity is only supported when using the Azure Machine Learning SDK from an Azure Virtual Machine or with an Azure Machine Learning compute cluster.
### Managed identity with a VM
ws.get_details()
You can use a service principal for Azure CLI commands. For more information, see [Sign in using a service principal](/cli/azure/create-an-azure-service-principal-azure-cli#sign-in-using-a-service-principal).
-### Use a service principal with the REST API (preview)
+### Use a service principal with the REST API
-The service principal can also be used to authenticate to the Azure Machine Learning [REST API](/rest/api/azureml/) (preview). You use the Azure Active Directory [client credentials grant flow](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md), which allow service-to-service calls for headless authentication in automated workflows.
+The service principal can also be used to authenticate to the Azure Machine Learning [REST API](/rest/api/azureml/). You use the Azure Active Directory [client credentials grant flow](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md), which allow service-to-service calls for headless authentication in automated workflows.
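The client credentials grant exchanges the service principal's ID and secret for a bearer token at the Azure AD v2.0 token endpoint. The helper below is an illustrative sketch that only builds that request (the tenant, client, and secret values are placeholders); it is not the documented AzureML SDK flow:

```python
from urllib.parse import urlencode

def build_client_credentials_request(tenant_id, client_id, client_secret,
                                     scope="https://management.azure.com/.default"):
    """Build the Azure AD v2.0 token endpoint URL and form body for the
    OAuth2 client credentials grant. POSTing the body to the URL returns
    a JSON payload containing the bearer token for REST API calls."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
    return url, body

# Placeholder values for illustration only.
url, body = build_client_credentials_request(
    "00000000-0000-0000-0000-000000000000",
    "11111111-1111-1111-1111-111111111111",
    "placeholder-secret")
```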
> [!IMPORTANT] > If you are currently using Azure Active Directory Authentication Library (ADAL) to get credentials, we recommend that you [Migrate to the Microsoft Authentication Library (MSAL)](../active-directory/develop/msal-migration.md). ADAL support is scheduled to end on June 30, 2022.
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
with mlflow.start_run():
If you prefer to manage your tracked experiments in a centralized location, you can set MLflow tracking to **only** track in your Azure Machine Learning workspace. This configuration has the advantage of enabling easier path to deployment using Azure Machine Learning deployment options. > [!WARNING]
-> For [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md), you have to [deploy Azure Databricks in your own network (VNet injection)](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject.md) to ensure proper connectivity.
+> For [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md), you have to [deploy Azure Databricks in your own network (VNet injection)](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject) to ensure proper connectivity.
You have to configure the MLflow tracking URI to point exclusively to Azure Machine Learning, as it is demonstrated in the following example:
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
This article lists the curated environments with latest framework versions in Az
>[!IMPORTANT] > To view more information about curated environment packages and versions, visit the Environments tab in the Azure Machine Learning [studio](./how-to-manage-environments-in-studio.md).
-## Training curated environments
+## Curated environments
### PyTorch
Azure ML pipeline training workflows that use AutoML automatically selects a cur
For more information on AutoML and Azure ML pipelines, see [use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md).
-## Inference curated environments and prebuilt docker images
-
## Support

Version updates for supported environments, including the base images they reference, are released every two weeks to address vulnerabilities no older than 30 days. Based on usage, some environments may be deprecated (hidden from the product but usable) to support more common machine learning scenarios.
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
Previously updated : 06/29/2022 Last updated : 07/18/2022 # Create a Dynamics 365 apps on Dataverse and Power Apps offer
Enter a descriptive name that we'll use to refer to this offer solely within Par
If you choose this option, the **Enable app license management through Microsoft** check box is enabled and cannot be changed.
- > [!NOTE]
- > This capability is currently in Public Preview.
- - Select **No**, if you prefer to only list your offer through the marketplace and process transactions independently. If you choose this option, you can use the **Enable app license management through Microsoft** check box to choose whether or not to enable app license management through Microsoft. For more information, see [ISV app license management](isv-app-license.md).
marketplace Power Bi Visual Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-availability.md
Previously updated : 09/21/2021 Last updated : 07/18/2022 # Define the availability of a Power BI visual offer
-This page lets you define where and how to make your offer available, including markets and release date.
+The _Availability_ page lets you define where and how to make your offer available, including markets and release date.
## Markets
-To specify the markets in which your offer should be available, select **Edit markets**.
+1. To specify the markets in which your offer should be available, select **Edit markets**.
+ :::image type="content" source="media/power-bi-visual/markets.png" alt-text="Screenshot of the Availability page in Partner Center, including selection of Markets.":::
-Your selections here apply only to new acquisitions; if someone already has your app in a market you later remove, they can continue using it, but no new customers in that market will be able to get your offer.
+ Your selections here apply only to new acquisitions; if someone already has your app in a market you later remove, they can continue using it, but no new customers in that market will be able to get your offer.
->[!IMPORTANT]
->It is your responsibility to meet any local legal requirements, even if those requirements aren't listed here or in Partner Center. Even if you select all markets, local laws, restrictions, or other factors may prevent certain offers from being listed in some countries and regions.
+ >[!IMPORTANT]
+ >It is your responsibility to meet any local legal requirements, even if those requirements aren't listed here or in Partner Center. Even if you select all markets, local laws, restrictions, or other factors may prevent certain offers from being listed in some countries and regions.
-Select **Save draft** before continuing to the next tab in the left-nav menu, **Technical configuration**.
+1. Select the markets you want and then select **Save**.
+1. Select **Save draft** before continuing to the next tab in the left-nav menu: **Technical configuration**.
## Next steps
marketplace Power Bi Visual Manage Names https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-manage-names.md
Previously updated : 09/21/2021 Last updated : 07/18/2022 # Manage Power BI visual offer names This page lets you reserve additional names for your offer. These names can be for use in a different language or if you decide to change the product's name.
-In the left-nav menu, select **Offer Management**, then **Manage offer names**.
+## Reserve a name
+1. In the left-nav menu, select **Offer Management**, then **Manage offer names**.
-## Reserve a name
+ :::image type="content" source="media/power-bi-visual/manage-product-names.png" alt-text="Shows the tab for managing product names in a Power BI visual offer.":::
1. Enter a **Name** and select **Check availability** to see if that name is available.
1. If the name is available, select **Reserve product name** to add it to the list of names.
1. Repeat for each name you want to reserve.
-To delete a name, select **Delete**.
+> [!TIP]
+> To delete a name, select **Delete**.
To finish submitting your offer, return to any prior tab (such as **Offer setup**) and select **Review and publish** at the top-right or bottom of the page. ## Next steps -- [Review and publish your offer](review-publish-offer.md)
+- [Review and publish your offer](review-publish-offer.md)
marketplace Power Bi Visual Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-offer-listing.md
Previously updated : 09/21/2021 Last updated : 07/18/2022 # Configure Power BI visual offer listing details
This page lets you define the offer details such as offer name, description, lin
## Languages
-Provide listing details in any one or multiple supported languages. Select **Manage additional languages** to add a language. Select each language to add its listing details.
+Provide listing details in any one or multiple supported languages.
+1. On the **Offer listing** page, select **Manage additional languages** to add a language.
+1. Select each language for which you want to add listing details.
+1. Select **Update**. The languages you selected appear in the **Language** column.
+
+ :::image type="content" source="media/power-bi-visual/listing-languages.png" alt-text="Shows the selection of languages for the offer listing.":::
## Marketplace details
+In the **Language** column, select the language you want to configure.
+ - The **[Name](/office/dev/store/reserve-solution-name)** you enter here is shown to customers as the title of the offer. This field is pre-populated with the name you entered when you created the offer, but you can change it. If you want to reserve more names (for example, in another language) select [Reserve more names](power-bi-visual-manage-names.md). - Enter a **Summary** of your offer for the Search results summary. This description may be used in marketplace search results. - Enter a thorough **Description** of your offer, up to 3,000 characters. Customers will see this in the Marketplace listing overview.
For additional marketplace listing resources, see [Best practices for marketplac
Select **Save draft**.
-If you selected additional languages, select each from the dropdown list at the top of the page and repeat the above steps for each one. When finished, continue to the next tab in the left-nav menu, **Availability**.
+If you selected additional languages, select each from the dropdown list at the top of the page and repeat the above steps for each one. When finished, continue to the next tab in the left-nav menu: **Availability**.
## Next steps -- [**Availability**](power-bi-visual-availability.md)
+- [Define the availability of a Power BI visual offer](power-bi-visual-availability.md)
marketplace Power Bi Visual Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-offer-setup.md
Previously updated : 03/28/2022 Last updated : 07/18/2022 # Create a Power BI visual offer
Review [Plan a Power BI visual offer](marketplace-power-bi-visual.md). It will e
## Setup details
-For **Additional purchases**, select whether or not your offer requires purchases of a service or additional in-app purchases.
+1. On the **Offer setup** page, under **Setup details**, select one of the radio buttons:
-For **Power BI certification** (optional), read the description carefully and if you want to request Power BI certification, select the check box. [Certified](/power-bi/developer/visuals/power-bi-custom-visuals-certified) Power BI visuals meet certain specified code requirements that the Microsoft Power BI team has tested and approved. We recommend that you submit and publish your Power BI visual *before* you request certification, because the certification process takes extra time that could delay publishing of your offer.
+ - **Managing license and selling with Microsoft** to enable your offer to be transactable in [Microsoft AppSource](https://appsource.microsoft.com/) and get license management. This is a one-time setting, and you can't change it after your offer is published.
+ > [!NOTE]
+ > This capability is currently in Public Preview.
+ - **My offer requires purchase of a service or offers additional in-app purchase** to manage licenses and transactions independently.
+ > [!NOTE]
+ > This capability is currently in Public Preview.
+ - **My offer does not require purchase of a service and does not offer in app purchases** to provide a free offer.
+
+1. Under **Power BI certification** (optional), read the description carefully and if you want to request [Power BI certification](/power-bi/developer/visuals/power-bi-custom-visuals-certified), select the check box. [Certified](/power-bi/developer/visuals/power-bi-custom-visuals-certified) Power BI visuals meet certain specified code requirements that the Microsoft Power BI team has tested and approved. We recommend that you submit and publish your Power BI visual *before* you request certification, because the certification process takes extra time that could delay publishing of your offer.
## Customer leads
You can also connect the product to your customer relationship management (CRM)
> [!NOTE] > Connecting to a CRM system is optional.
-To configure the lead management in Partner Center:
+To connect your offer to your CRM:
1. In Partner Center, go to the **Offer setup** tab. 1. Under **Customer leads**, select the **Connect** link.
-1. In the **Connection details** dialog box, select a lead destination from the list.
+1. In the **Connection details** dialog box, select a lead destination.
1. Complete the fields that appear. For detailed steps, see the following articles: - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
To configure the lead management in Partner Center:
- [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce) 1. To validate the configuration you provided, select the **Validate link**.
-1. When you've configured the connection details, select **Connect**.
+1. After you've configured and validated the connection details, select **Connect**.
For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
-1. Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
-
-Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
+1. Select **Save draft** before continuing to the next tab in the left-nav menu: **Properties**.
## Next steps
marketplace Power Bi Visual Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-plans.md
+
+ Title: Create Power BI visual plans in Partner Center for Microsoft AppSource
+description: Learn how to create plans for a Power BI visual offer in Partner Center for Microsoft AppSource.
++++++ Last updated : 07/18/2022++
+# Create Power BI visual plans
+
+> [!NOTE]
+> If you enabled the "Managing license and selling with Microsoft" option on the [Offer setup](power-bi-visual-offer-setup.md#setup-details) page, the **Plan overview** tab appears in the left-nav as shown in the following screenshot. Otherwise, go to [Manage Power BI visual offer names](power-bi-visual-manage-names.md).
++
+You need to define at least one plan if your offer has app license management enabled. You can create a variety of plans with different options for the same offer. These plans (sometimes referred to as SKUs) can differ in terms of monetization or tiers of service.
+
+## Create a plan
+
+1. In the left-nav, select **Plan overview**.
+1. Near the top of the page, select **+ Create new plan**.
+1. In the dialog box that appears, in the **Plan ID** box, enter a unique plan ID. Use up to 50 lowercase alphanumeric characters, dashes, or underscores. You cannot modify the plan ID after you select **Create**.
+1. In the **Plan name** box, enter a unique name for this plan. Use a maximum of 200 characters.
+ > [!NOTE]
+ > This is the plan that customers will see in [Microsoft AppSource](https://appsource.microsoft.com/) and [Microsoft 365 admin center](https://admin.microsoft.com/).
+1. Select **Create**.
+
+## Define the plan listing
+
+On the **Plan listing** tab, you can define the plan name and description as you want them to appear in the commercial marketplace. This information will be shown on the Microsoft AppSource listing page.
+
+1. The **Plan name** box shows the name you provided earlier for this plan. You can change it at any time. This name will appear in the commercial marketplace as the title of your offer.
+1. In the **Plan description** box, explain what makes this plan unique and any differences from other plans within your offer. This description may contain up to 3,000 characters.
+1. Select **Save draft**.
+
+## Define pricing and availability
+
+1. In the left-nav, select **Pricing and availability**.
+1. In the **Markets** section, select **Edit markets**.
+1. On the side panel that appears, select at least one market. To make your offer available in every possible market, choose **Select all** or select only the specific markets you want. When you're finished, select **Save**.
+
+ Your selections here apply only to new acquisitions; if someone already has your app in a certain market, and you later remove that market, the people who already have the offer in that market can continue to use it, but no new customers in that market will be able to get your offer.
+
+ > [!IMPORTANT]
+ > It is your responsibility to meet any local legal requirements, even if those requirements aren't listed here or in Partner Center. Even if you select all markets, local laws, restrictions, or other factors may prevent certain offers from being listed in some countries and regions.
+
+### Configure per user pricing
+
+1. On the **Pricing and availability** tab, under **User limits**, optionally specify the minimum and maximum number of users for this plan.
+
+ > [!NOTE]
+ > If you choose not to define the user limits, the default value of one to one million users will be used.
+
+1. Under **Billing term**, specify a monthly price, annual price, or both.
+
+ > [!NOTE]
+ > You must specify a price for your offer, even if the price is zero.
+
+ :::image type="content" source="./media/power-bi-visual/pricing-and-availability.png" alt-text="Screenshot of the pricing and availability tab.":::
+
+### Enable a free trial
+
+You can optionally configure a free trial for each plan in your offer. To enable a free trial, select the **Allow a one-month free trial** check box.
+
+> [!IMPORTANT]
+> After your transactable offer has been published with a free trial, it cannot be disabled for that plan. Make sure this setting is correct before you publish the offer to avoid having to re-create the plan.
+
+If you select this option, customers are not charged for the first month of use. At the end of the free month, one of the following occurs:
+
+- If the customer chooses recurring billing, they will automatically be upgraded to a paid plan and the selected payment method is charged.
+- If the customer didn't choose recurring billing, the plan will expire at the end of the free trial.
+
+### Choose who can see your plan
+
+You can configure each plan to be visible to everyone (public) or to only a specific audience (private).
+
+Private plans restrict the discovery and deployment of your solution to a specific set of customers you choose, with customized software and pricing. You can offer these customers specialized pricing, support for custom scenarios, and early access to limited-release software.
+
+If you only configure private plans for a visual:
+
+- They'll be hidden from everyone else. Therefore, if a visual is already available to the public, you can't change it to private plan only.
+- They won't be auto-updated, won't appear in the store, and can't be marked with a certification badge.
+
+> [!NOTE]
+> If you publish a private plan, you can change its visibility to public later. However, once you publish a public plan, you cannot change its visibility to private. If you upgrade an offer from list only to transactable, and add private plans only, the offer will be hidden from AppSource.
+
+You grant access to a private plan using tenant IDs with the option to include a description of each tenant ID you assign. You can add a maximum of 10 tenant IDs manually or up to 20,000 tenant IDs using a .CSV file.
+
+#### Make your plan public
+
+1. Under **Plan visibility**, select **Public**.
+1. Select **Save draft**, and then go to [View your plans](#view-your-plans).
+
+#### Manually add tenant IDs for a private plan
+
+1. Under **Plan visibility**, select **Private**.
+1. In the **Tenant ID** box that appears, enter the Azure AD tenant ID of the audience you want to grant access to this private plan. A minimum of one tenant ID is required.
+1. (Optional) Enter a description of this audience in the **Description** box.
+1. To add another tenant ID, select **Add ID**, and then repeat steps 2 and 3.
+1. When you're done adding tenant IDs, select **Save draft**, and then go to [View your plans](#view-your-plans).
+
+#### Use a .CSV file for a private plan
+
+1. Under **Plan visibility**, select **Private**.
+1. Select the **Export Audience (csv)** link.
+1. Open the .CSV file. In the **ID** column, add the Azure IDs you want to grant access to the private offer.
+1. (Optional) Enter a description for each audience in the **Description** column.
+1. Add "TenantID" in the **Type** column for each row with an Azure ID.
+1. Save the .CSV file.
+1. On the **Pricing and availability** tab, under **Plan visibility**, select the **Import Audience (csv)** link.
+1. In the dialog box that appears, select **Yes**.
+1. Select the .CSV file and then select **Open**.
+1. Select **Save draft**, and then go to the next section: [View your plans](#view-your-plans).
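As a sketch of the audience file described in the steps above, the following Python snippet writes a .CSV with the **ID**, **Description**, and **Type** columns mentioned in the procedure. The tenant IDs and descriptions are placeholders, and the exact column layout of the real template may differ, so export the template from Partner Center (**Export Audience (csv)**) before editing:

```python
import csv

# Placeholder audience rows: (tenant ID, description, type).
# These GUIDs are illustrative only, not real tenants.
audience = [
    ("00000000-0000-0000-0000-000000000001", "Contoso pilot group", "TenantID"),
    ("00000000-0000-0000-0000-000000000002", "Fabrikam finance team", "TenantID"),
]

with open("audience.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Description", "Type"])  # header row, as named in the steps
    writer.writerows(audience)
```

The resulting file can then be uploaded with the **Import Audience (csv)** link on the **Pricing and availability** tab.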
+
+### View your plans
+
+1. Select **Save draft** before leaving the _Pricing and availability_ page.
+1. In the breadcrumb at the top of the page, select **Plan overview**.
+1. To create another plan for this offer, at the top of the **Plan overview** page, repeat the steps in the [Create a plan](#create-a-plan) section. Otherwise, if you're done creating plans, go to the next section: Integrate your Visual with the Power BI License APIs.
+
+## Integrate your Visual with the Power BI License APIs
+
+You need to update the license enforcement in your visual.
+
+For information about how to create a visual package, see [Package a Power BI visual](/power-bi/developer/visuals/package-visual).
+For instructions on how to update license enforcement in your visual, see [Licensing API](/power-bi/developer/visuals/licensing-api).
+
+## Next steps
+
+- [Manage Power BI visual offer names](power-bi-visual-manage-names.md)
marketplace Power Bi Visual Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-properties.md
Previously updated : 02/10/2022 Last updated : 07/18/2022 # Configure Power BI visual offer properties
-This page lets you define the [categories](./categories.md) used to group your offer on Microsoft AppSource, the legal contracts that support your offer, and support documentation.
+The _Properties_ page lets you define the [categories](./categories.md) used to group your offer on Microsoft AppSource, the legal contracts that support your offer, and support documentation.
## Categories
-Select up to three **[Categories](./categories.md)** for grouping your offer into the appropriate marketplace search areas. This table shows the categories that are available for Power BI visuals.
+Select up to two **[Categories](./categories.md)** for grouping your offer into the appropriate marketplace search areas. This table shows the categories that are available for Power BI visuals.
| Category | Description | | | - |
-| All | All the different types of visuals that are certified for use within your organization. |
| Change over time | These visuals are used to display the changing trend of measures over time. | | Comparison | These visuals are used to compare categories by their measures. | | Correlation | These visuals show the degree to which two or more variables are correlated. |
To simplify the procurement process for customers and reduce legal complexity fo
> [!NOTE] > After you publish an offer using the Standard Contract for the commercial marketplace, you can't use your own custom terms and conditions. Either offer your solution under the standard contract with optional amendments or under your own terms and conditions.
+1. Go to [Privacy policy link](#privacy-policy-link).
+ ### Use your own terms and conditions You may provide your own terms and conditions instead of using the standard contract, or use our EULA specific for Power BI visual offers. Customers must accept these terms before they can try your offer.
-1. Clear the **Use the Standard Contract for Microsoft's commercial marketplace** check box.
-1. In the **EULA** filed (see image above), enter a single web address for your terms and conditions. Or, point to the Power BI visuals contract at `https://visuals.azureedge.net/app-store/Power%20BI%20-%20Default%20Custom%20Visual%20EULA.pdf` (PDF). Either will display as an active link in AppSource.
+1. Clear the **Use the Standard Contract...** check box.
+1. In the **EULA** field, enter a single web address for your terms and conditions. Or, point to the Power BI visuals contract at `https://visuals.azureedge.net/app-store/Power%20BI%20-%20Default%20Custom%20Visual%20EULA.pdf` (PDF). Either will display as an active link in AppSource.
### Privacy policy link
marketplace Power Bi Visual Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-technical-configuration.md
Previously updated : 09/21/2021 Last updated : 07/18/2022 # Set up Power BI visual offer technical configuration
On the **Technical configuration** tab, provide the files needed for the Power B
:::image type="content" source="media/power-bi-visual/technical-configuration.png" alt-text="Shows the Technical Configuration page in Partner Center.":::
+## Sample PBIX report file
+
+To showcase your visual offer, help users get familiar with the visual. Highlight the value the visual brings to the user and give examples of usage and formatting options. Add a "hints" page at the end with tips, tricks, and things to avoid. The sample PBIX report file must work offline, without any external connections.
+
+> [!NOTE]
+> - The PBIX report must use the same version of the visual as the PBIVIZ.
+> - The PBIX report file must work offline, without any external connections.
+ ## PBIVIZ package [Pack your Power BI visual](/power-bi/developer/visuals/package-visual) into a PBIVIZ package containing all the required metadata:
On the **Technical configuration** tab, provide the files needed for the Power B
> [!NOTE] > If you are updating or resubmitting a visual: > - The GUID must remain the same.
-> - The version number should be incremented between package updates.
-
-## Sample PBIX report file
-
-To showcase your visual offer, help users get familiar with the visual. Highlight the value the visual brings to the user and give examples of usage and formatting options. Add a "hints" page at the end with tips, tricks, and things to avoid. The sample PBIX report file must work offline, without any external connections.
-
-> [!NOTE]
-> - The PBIX report must use the same version of the visual as the PBIVIZ.
-> - The PBIX report file must work offline, without any external connections.
+> - The version number should be increased between package updates.
-Select **Save draft** before skipping in the left-nav menu to the **Offer management** tab.
+Select **Save draft** before continuing to the next tab in the left-nav menu.
## Next steps -- [Offer management](power-bi-visual-manage-names.md)
+- If you enabled the "Managing license and selling with Microsoft" option on the [Offer setup](power-bi-visual-offer-setup.md) page, go to [Create Power BI visual plans](power-bi-visual-plans.md)
+- Otherwise, go to [Manage Power BI visual offer names](power-bi-visual-manage-names.md)
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Media Services (Microsoft.Media) / keydelivery, liveevent, streamingendpoint | privatelink.media.azure.net | media.azure.net | | Azure Data Explorer (Microsoft.Kusto) | privatelink.{region}.kusto.windows.net | {region}.kusto.windows.net | | Azure Static Web Apps (Microsoft.Web/staticSites) / staticSites | privatelink.azurestaticapps.net </br> privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net </br> {partitionId}.azurestaticapps.net |
+| Azure Migrate (Microsoft.Migrate) / migrate projects, assessment project and discovery site | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
remote-rendering Render Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/quickstarts/render-model.md
You can now explore the scene graph by selecting the new node and clicking **Sho
![Unity Hierarchy](./media/unity-hierarchy.png)
-There is a [cut plane](../overview/features/cut-planes.md) object in the scene. Try enabling it in its properties and moving it around:
+There is a [cut plane](../overview/features/cut-planes.md) object in the scene. Try enabling it by selecting the check box in front of **CutPlane** in the Inspector pane, and then moving it around:
![Changing the cut plane](media/arr-sample-unity-cutplane.png)
route-server Next Hop Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/next-hop-ip.md
+
+ Title: 'Next Hop IP Support'
+description: Learn about Next Hop IP support for Azure Route Server.
++++ Last updated : 07/18/2022+++
+# Next Hop IP support
+
+Azure Route Server simplifies the exchange of routing information between any Network Virtual Appliance (NVA) that supports the Border Gateway Protocol (BGP) routing protocol and the Azure Software Defined Network (SDN) in the Azure Virtual Network (VNet) without the need to manually configure or maintain route tables. With the support for Next Hop IP in Azure Route Server, you can peer with NVAs deployed behind an Azure Internal Load Balancer (ILB). The internal load balancer lets you set up active-passive connectivity scenarios and leverage load balancing to improve connectivity performance.
++
+## Active-passive NVA connectivity
+
+You can deploy a set of active-passive NVAs behind an internal load balancer to ensure symmetrical routing to and from the NVA. With the support for Next Hop IP, you can define the next hop for both the active and passive NVAs as the IP address of the internal load balancer and set up the load balancer to direct traffic towards the Active NVA instance.
+
+## Active-active NVA connectivity
+
+You can deploy a set of active-active NVAs behind an internal load balancer to optimize connectivity performance. With the support for Next Hop IP, you can define the next hop for both NVA instances as the IP address of the internal load balancer. Traffic that reaches the load balancer will be sent to both NVA instances.
+
+## Next hop IP configuration
+
+Next Hop IPs are set up in the BGP configuration of the target NVAs. The Next Hop IP isn't part of the Azure Route Server configuration.
+
+## Next steps
+
+- Learn how to [configure Azure Route Server](quickstart-configure-route-server-portal.md).
+- Learn how to [monitor Azure Route Server](monitor-route-server.md).
route-server Route Injection In Spokes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-injection-in-spokes.md
If multiple NVA instances are used in an active/active fashion for better re
Multiple NVA instances can be deployed in an active/passive setup as well, for example if one of them advertises worse routes (with a longer AS path) than the other. In this case, Azure Route Server will only inject the preferred route in the VNet virtual machines, and the less preferred route will only be used when the primary NVA instance stops advertising over BGP.
+## Different Route Servers to advertise routes to Virtual Network Gateways and to VNets
+
+As the previous sections have shown, Azure Route Server has a double role:
+
+- It learns and advertises routes to/from Virtual Network Gateways (VPN and ExpressRoute)
+- It configures learnt routes on its VNet, and on directly peered VNets
+
+This dual functionality is often desirable, but at times it can conflict with specific requirements. For example, if the Route Server is deployed in a VNet with an NVA advertising a 0.0.0.0/0 route and an ExpressRoute gateway advertising prefixes from on-premises, it will configure all routes (both the 0.0.0.0/0 from the NVA and the on-premises prefixes) on the virtual machines in its VNet and directly peered VNets. As a consequence, since the on-premises prefixes are more specific than 0.0.0.0/0, traffic between on-premises and Azure will bypass the NVA. If this is not desired, the previous sections in this article have shown how to disable BGP propagation in the VM subnets and configure UDRs.
+
+However, there is an alternative, more dynamic approach: use different Azure Route Servers for different functions. One of them is responsible for interacting with the Virtual Network Gateways, and the other one for interacting with the Virtual Network routing. The following diagram shows a possible design for this:
++
+In the figure above, Azure Route Server 1 in the hub is used to inject the prefixes from the SDWAN into ExpressRoute. Since the spokes are peered with the hub VNet without the "Use Remote Gateways" and "Allow Gateway Transit" VNet peering options, the spokes will not learn these routes (neither the SDWAN prefixes nor the ExpressRoute prefixes).
+
+To propagate routes to the spokes, the NVA leverages a second Route Server (Azure Route Server 2), deployed in a new auxiliary VNet. The NVA will only propagate a single `0.0.0.0/0` route to Azure Route Server 2. Since the spokes are peered with this auxiliary VNet with the "Use Remote Gateways" and "Allow Gateway Transit" VNet peering options, this `0.0.0.0/0` route will be learnt by all the Virtual Machines in the spokes.
+
+Note that the next hop for this `0.0.0.0/0` route will be the NVA, so the spokes still need to be peered to the hub VNet. Another important aspect to notice is that the hub VNet needs to be peered to the VNet where the new Azure Route Server 2 is deployed, otherwise it will not be able to create the BGP adjacency.
+
+This design allows automatic injection of routes in spoke VNets without interference from other routes learnt from ExpressRoute, VPN, or an SDWAN environment.
+ ## Next steps * [Learn how Azure Route Server works with ExpressRoute](expressroute-vpn-support.md)
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
When you're configuring an automation rule and adding a **run playbook** action,
#### Permissions in a multi-tenant architecture
-Automation rules fully support cross-workspace and [multi-tenant deployments](extend-sentinel-across-workspaces-tenants.md#managing-workspaces-across-tenants-using-azure-lighthouse) (in the case of multi-tenant, using [Azure Lighthouse](../lighthouse/index.yml)).
+Automation rules fully support cross-workspace and [multi-tenant deployments](extend-sentinel-across-workspaces-tenants.md#manage-workspaces-across-tenants-using-azure-lighthouse) (in the case of multi-tenant, using [Azure Lighthouse](../lighthouse/index.yml)).
Therefore, if your Microsoft Sentinel deployment uses a multi-tenant architecture, you can have an automation rule in one tenant run a playbook that lives in a different tenant, but permissions for Sentinel to run the playbooks must be defined in the tenant where the playbooks reside, not in the tenant where the automation rules are defined.
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md
For playbooks that are triggered by incident creation and receive incidents as t
> > When you add the **run playbook** action to an automation rule, a drop-down list of playbooks will appear for your selection. Playbooks to which Microsoft Sentinel does not have permissions will show as unavailable ("grayed out"). You can grant permission to Microsoft Sentinel on the spot by selecting the **Manage playbook permissions** link. >
- > In a multi-tenant ([Lighthouse](extend-sentinel-across-workspaces-tenants.md#managing-workspaces-across-tenants-using-azure-lighthouse)) scenario, you must define the permissions on the tenant where the playbook lives, even if the automation rule calling the playbook is in a different tenant. To do that, you must have **Owner** permissions on the playbook's resource group.
+ > In a multi-tenant ([Lighthouse](extend-sentinel-across-workspaces-tenants.md#manage-workspaces-across-tenants-using-azure-lighthouse)) scenario, you must define the permissions on the tenant where the playbook lives, even if the automation rule calling the playbook is in a different tenant. To do that, you must have **Owner** permissions on the playbook's resource group.
> > There's a unique scenario facing a **Managed Security Service Provider (MSSP)**, where a service provider, while signed into its own tenant, creates an automation rule on a customer's workspace using [Azure Lighthouse](../lighthouse/index.yml). This automation rule then calls a playbook belonging to the customer's tenant. In this case, Microsoft Sentinel must be granted permissions on ***both tenants***. In the customer tenant, you grant them in the **Manage playbook permissions** panel, just like in the regular multi-tenant scenario. To grant the relevant permissions in the service provider tenant, you need to add an additional Azure Lighthouse delegation that grants access rights to the **Azure Security Insights** app, with the **Microsoft Sentinel Automation Contributor** role, on the resource group where the playbook resides. [Learn how to add this delegation](tutorial-respond-threats-playbook.md#permissions-to-run-playbooks).
sentinel Extend Sentinel Across Workspaces Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/extend-sentinel-across-workspaces-tenants.md
Title: Extend Microsoft Sentinel across workspaces and tenants | Microsoft Docs
+ Title: Extend Microsoft Sentinel across workspaces and tenants
description: How to use Microsoft Sentinel to query and analyze data across workspaces and tenants. Previously updated : 05/03/2022 Last updated : 07/14/2022 -
+#Customer intent: As a security operator, I want to extend my workspace so I can query and analyze data across workspaces and tenants.
# Extend Microsoft Sentinel across workspaces and tenants
## The need to use multiple Microsoft Sentinel workspaces
-Microsoft Sentinel is built on top of a Log Analytics workspace. You'll notice that the first step in onboarding Microsoft Sentinel is to select the Log Analytics workspace you wish to use for that purpose.
+When you onboard Microsoft Sentinel, your first step is to select your Log Analytics workspace. While you can get the full benefit of the Microsoft Sentinel experience with a single workspace, in some cases, you might want to extend your workspace to query and analyze your data across workspaces and tenants.
-You can get the full benefit of the Microsoft Sentinel experience when using a single workspace. Even so, there are some circumstances that may require you to have multiple workspaces. The following table lists some of these situations and, when possible, suggests how the requirement may be satisfied with a single workspace:
+This table lists some of these scenarios and, when possible, suggests how you may use a single workspace for the scenario.
| Requirement | Description | Ways to reduce workspace count | |-|-|--|
-| Sovereignty and regulatory compliance | A workspace is tied to a specific region. If data needs to be kept in different [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/) to satisfy regulatory requirements, it must be split into separate workspaces. | |
+| Sovereignty and regulatory compliance | A workspace is tied to a specific region. To keep data in different [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/) to satisfy regulatory requirements, split up the data into separate workspaces. | |
| Data ownership | The boundaries of data ownership, for example by subsidiaries or affiliated companies, are better delineated using separate workspaces. | | | Multiple Azure tenants | Microsoft Sentinel supports data collection from Microsoft and Azure SaaS resources only within its own Azure Active Directory (Azure AD) tenant boundary. Therefore, each Azure AD tenant requires a separate workspace. | | | Granular data access control | An organization may need to allow different groups, within or outside the organization, to access some of the data collected by Microsoft Sentinel. For example:<br><ul><li>Resource owners' access to data pertaining to their resources</li><li>Regional or subsidiary SOCs' access to data relevant to their parts of the organization</li></ul> | Use [resource Azure RBAC](resource-context-rbac.md) or [table level Azure RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/table-level-rbac-in-azure-sentinel/ba-p/965043) |
-| Granular retention settings | Historically, multiple workspaces were the only way to set different retention periods for different data types. This is no longer needed in many cases, thanks to the introduction of table level retention settings. | Use [table level retention settings](https://techcommunity.microsoft.com/t5/azure-sentinel/new-per-data-type-retention-is-now-available-for-azure-sentinel/ba-p/917316) or automate [data deletion](../azure-monitor/logs/personal-data-mgmt.md#exporting-and-deleting-personal-data) |
+| Granular retention settings | Historically, multiple workspaces were the only way to set different retention periods for different data types. This is no longer needed in many cases, thanks to the introduction of table level retention settings. | Use [table level retention settings](https://techcommunity.microsoft.com/t5/azure-sentinel/new-per-data-type-retention-is-now-available-for-azure-sentinel/ba-p/917316) or automate [data deletion](../azure-monitor/logs/personal-data-mgmt.md#exporting-and-deleting-personal-data) |
| Split billing | By placing workspaces in separate subscriptions, they can be billed to different parties. | Usage reporting and cross-charging |
-| Legacy architecture | The use of multiple workspaces may stem from a historical design that took into consideration limitations or best practices which do not hold true anymore. It might also be an arbitrary design choice that can be modified to better accommodate Microsoft Sentinel.<br><br>Examples include:<br><ul><li>Using a per-subscription default workspace when deploying Microsoft Defender for Cloud</li><li>The need for granular access control or retention settings, the solutions for which are relatively new</li></ul> | Re-architect workspaces |
+| Legacy architecture | The use of multiple workspaces may stem from a historical design that took into consideration limitations or best practices which don't hold true anymore. It might also be an arbitrary design choice that can be modified to better accommodate Microsoft Sentinel.<br><br>Examples include:<br><ul><li>Using a per-subscription default workspace when deploying Microsoft Defender for Cloud</li><li>The need for granular access control or retention settings, the solutions for which are relatively new</li></ul> | Re-architect workspaces |
### Managed Security Service Provider (MSSP)
-A particular use case that mandates multiple workspaces is an MSSP Microsoft Sentinel service. In this case, many if not all of the above requirements apply, making multiple workspaces, across tenants, the best practice. The MSSP can use [Azure Lighthouse](../lighthouse/overview.md) to extend Microsoft Sentinel cross-workspace capabilities across tenants.
+In case of an MSSP, many if not all of the above requirements apply, making multiple workspaces, across tenants, the best practice. The MSSP can use [Azure Lighthouse](../lighthouse/overview.md) to extend Microsoft Sentinel cross-workspace capabilities across tenants.
## Microsoft Sentinel multiple workspace architecture
-As implied by the requirements above, there are cases where multiple Microsoft Sentinel workspaces, potentially across Azure Active Directory (Azure AD) tenants, need to be centrally monitored and managed by a single SOC.
+As implied by the requirements above, there are cases where a single SOC needs to centrally manage and monitor multiple Microsoft Sentinel workspaces, potentially across Azure Active Directory (Azure AD) tenants.
- An MSSP Microsoft Sentinel Service.
As implied by the requirements above, there are cases where multiple Microsoft S
- A SOC monitoring multiple Azure AD tenants within an organization.
-To address this requirement, Microsoft Sentinel offers multiple-workspace capabilities that enable central monitoring, configuration, and management, providing a single pane of glass across everything covered by the SOC, as presented in the diagram below.
+To address these cases, Microsoft Sentinel offers multiple-workspace capabilities that enable central monitoring, configuration, and management, providing a single pane of glass across everything covered by the SOC. This diagram shows an example architecture for such use cases.
This model offers significant advantages over a fully centralized model in which all data is copied to a single workspace: -- Flexible role assignment to the global and local SOCs, or to the MSSP and its customers.
+- Flexible role assignment to the global and local SOCs, or to the MSSP and its customers.
-- Fewer challenges regarding data ownership, data privacy and regulatory compliance.
+- Fewer challenges regarding data ownership, data privacy, and regulatory compliance.
- Minimal network latency and charges. - Easy onboarding and offboarding of new subsidiaries or customers.
-In the following sections, we will explain how to operate this model, and particularly how to:
+In the following sections, we'll explain how to operate this model, and particularly how to:
- Centrally monitor multiple workspaces, potentially across tenants, providing the SOC with a single pane of glass.
In the following sections, we will explain how to operate this model, and partic
### Manage incidents on multiple workspaces
-Microsoft Sentinel supports a [multiple workspace incident view](./multiple-workspace-view.md) facilitating central incident monitoring and management across multiple workspaces. The centralized incident view lets you manage incidents directly or drill down transparently to the incident details in the context of the originating workspace.
+Microsoft Sentinel supports a [multiple workspace incident view](./multiple-workspace-view.md) where you can centrally manage and monitor incidents across multiple workspaces. The centralized incident view lets you manage incidents directly or drill down transparently to the incident details in the context of the originating workspace.
### Cross-workspace querying
-Microsoft Sentinel supports querying [multiple workspaces in a single query](../azure-monitor/logs/cross-workspace-query.md), allowing you to search and correlate data from multiple workspaces in a single query.
+You can query [multiple workspaces](../azure-monitor/logs/cross-workspace-query.md) in a single query, letting you search and correlate data across workspaces.
- Use the [workspace() expression](../azure-monitor/logs/workspace-expression.md) to refer to a table in a different workspace.
You can then write a query across both workspaces by beginning with `unionSecuri
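The `union` plus `workspace()` pattern described above lends itself to scripting when many workspaces are involved. The following sketch is a hypothetical Python helper that composes such a KQL query string; the workspace and table names are placeholders, and the 20-workspace cap reflects the per-query limit on referenced workspaces:

```python
def cross_workspace_union(table: str, workspaces: list[str]) -> str:
    """Build 'union <table>, workspace("ws").<table>, ...' across workspaces."""
    # The local workspace counts toward the 20-workspace per-query limit.
    if len(workspaces) > 19:
        raise ValueError("a single query can reference at most 20 workspaces")
    remote = [f'workspace("{ws}").{table}' for ws in workspaces]
    return "union " + ", ".join([table] + remote)

# Placeholder workspace names for illustration only.
query = cross_workspace_union("SecurityEvent", ["Contoso-WS", "Fabrikam-WS"])
```

The generated string can then be pasted into Log Analytics or embedded in an analytics rule definition.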
#### Cross-workspace analytics rules<a name="scheduled-alerts"></a> <!-- Bookmark added for backward compatibility with old heading -->
-Cross-workspace queries can now be included in scheduled analytics rules. You can use cross-workspace analytics rules in a central SOC, and across tenants (using Azure Lighthouse) as in the case of an MSSP, subject to the following limitations:
+You can now include cross-workspace queries in scheduled analytics rules. You can use cross-workspace analytics rules in a central SOC, and across tenants (using Azure Lighthouse), suitable for MSSPs. Note these limitations:
-- **Up to 20 workspaces** can be included in a single query.-- Microsoft Sentinel must be **deployed on every workspace** referenced in the query.-- Alerts generated by a cross-workspace analytics rule, and the incidents created from them, exist **only in the workspace where the rule was defined**. They will not be displayed in any of the other workspaces referenced in the query.
+- You can include **up to 20 workspaces** in a single query.
+- You must deploy Microsoft Sentinel **on every workspace** referenced in the query.
+- Alerts generated by a cross-workspace analytics rule, and the incidents created from them, exist **only in the workspace where the rule was defined**. The alerts won't be displayed in any of the other workspaces referenced in the query.
-Alerts and incidents created by cross-workspace analytics rules will contain all the related entities, including those from all the referenced workspaces as well as the "home" workspace (where the rule was defined). This will allow analysts to have a full picture of alerts and incidents.
+Alerts and incidents created by cross-workspace analytics rules contain all the related entities, including those from all the referenced workspaces and the "home" workspace (where the rule was defined). This way, analysts get a full picture of alerts and incidents.
> [!NOTE] > Querying multiple workspaces in the same query might affect performance, and therefore is recommended only when the logic requires this functionality. #### Cross-workspace workbooks<a name="using-cross-workspace-workbooks"></a> <!-- Bookmark added for backward compatibility with old heading -->
-[Workbooks](./overview.md#workbooks) provide dashboards and apps to Microsoft Sentinel. When working with multiple workspaces, they provide monitoring and actions across workspaces.
-Workbooks can provide cross-workspace queries in one of three methods, each of which cater to different levels of end-user expertise:
+Workbooks provide dashboards and apps to Microsoft Sentinel. When working with multiple workspaces, workbooks provide monitoring and actions across workspaces.
+
+Workbooks can provide cross-workspace queries in one of three methods, suitable for different levels of end-user expertise:
| Method | Description | When should I use? | ||-|--|
-| Write cross-workspace queries | The workbook creator can write cross-workspace queries (described above) in the workbook. | This option enables workbook creators to shield the user entirely from the workspace structure. |
-| Add a workspace selector to the workbook | The workbook creator can implement a workspace selector as part of the workbook, as described [here](https://techcommunity.microsoft.com/t5/azure-sentinel/making-your-azure-sentinel-workbooks-multi-tenant-or-multi/ba-p/1402357). | This option provides the user with control over the workspaces shown by the workbook, by means of an easy-to-use dropdown box. |
-| Edit the workbook interactively | An advanced user modifying an existing workbook can edit the queries in it, selecting the target workspaces using the workspace selector in the editor. | This option enables a power user to easily modify existing workbooks to work with multiple workspaces. |
-|
+| Write cross-workspace queries | The workbook creator can write cross-workspace queries (described above) in the workbook. | I want the workbook creator to hide the workspace structure from the user entirely. |
+| Add a workspace selector to the workbook | The workbook creator can [implement a workspace selector as part of the workbook](https://techcommunity.microsoft.com/t5/azure-sentinel/making-your-azure-sentinel-workbooks-multi-tenant-or-multi/ba-p/1402357). | I want to allow the user to control the workspaces shown by the workbook, with an easy-to-use dropdown box. |
+| Edit the workbook interactively | An advanced user modifying an existing workbook can edit the queries in it, selecting the target workspaces using the workspace selector in the editor. | I want to allow a power user to easily modify existing workbooks to work with multiple workspaces. |
#### Cross-workspace hunting
-Microsoft Sentinel provides preloaded query samples designed to get you started and get you familiar with the tables and the query language. These built-in hunting queries are developed by Microsoft security researchers on a continuous basis, both adding new queries and fine-tuning existing queries, to provide you with an entry point to look for new detections and identify signs of intrusion that may have gone undetected by your security tools.
+Microsoft Sentinel provides preloaded query samples designed to get you started and get you familiar with the tables and the query language. Microsoft security researchers constantly add new built-in queries and fine-tune existing queries. You can use these queries to look for new detections and identify signs of intrusion that your security tools may have missed.
-Cross-workspace hunting capabilities enable your threat hunters to create new hunting queries, or adapt existing ones, to cover multiple workspaces, by using the union operator and the workspace() expression as shown above.
+Cross-workspace hunting capabilities enable your threat hunters to create new hunting queries, or adapt existing ones, to cover multiple workspaces, by using the union operator and the workspace() expression as shown [above](#cross-workspace-querying).
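For reference, a cross-workspace hunting query of this shape combines tables from several workspaces with the `union` operator and the `workspace()` expression; for example (the workspace names below are placeholders):

```kusto
// Count failed logons (event ID 4625) per computer across two workspaces
union
    workspace("workspace-A").SecurityEvent,
    workspace("workspace-B").SecurityEvent
| where EventID == 4625
| summarize FailedLogons = count() by Computer
```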
## Cross-workspace management using automation
-To configure and manage multiple Microsoft Sentinel workspaces, you will need to automate the use of the Microsoft Sentinel management API. For more information on how to automate the deployment of Microsoft Sentinel resources, including alert rules, hunting queries, workbooks and playbooks, see [Extending Microsoft Sentinel: APIs, Integration and management automation](https://techcommunity.microsoft.com/t5/azure-sentinel/extending-azure-sentinel-apis-integration-and-management/ba-p/1116885).
+To configure and manage multiple Microsoft Sentinel workspaces, you need to automate the use of the Microsoft Sentinel management API.
+
+- Learn how to [automate the deployment of Microsoft Sentinel resources](https://techcommunity.microsoft.com/t5/azure-sentinel/extending-azure-sentinel-apis-integration-and-management/ba-p/1116885), including alert rules, hunting queries, workbooks and playbooks.
+- Learn how to [deploy custom content from your repository](ci-cd.md). This resource provides a consolidated methodology for managing Microsoft Sentinel as code and for deploying and configuring resources from a private Azure DevOps or GitHub repository.
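As a minimal sketch of how such automation might address a Microsoft Sentinel (SecurityInsights) resource across several workspaces, the following builds the management API URL for listing alert rules in each workspace. The `api-version` value here is an assumption; check the REST API reference for the current one.

```python
# Sketch: build Microsoft Sentinel management API URLs for several workspaces.
# The api-version is an assumption; verify against the REST API reference.
API_VERSION = "2022-07-01-preview"  # assumption

def alert_rules_url(subscription_id: str, resource_group: str, workspace: str) -> str:
    """Return the ARM endpoint that lists alert rules for one workspace."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.OperationalInsights"
        f"/workspaces/{workspace}"
        "/providers/Microsoft.SecurityInsights/alertRules"
        f"?api-version={API_VERSION}"
    )

# Iterate the same call over every workspace you manage, authenticating
# each request with an Azure AD token (not shown here).
workspaces = [("sub-1", "rg-soc", "ws-prod"), ("sub-2", "rg-soc", "ws-dev")]
urls = [alert_rules_url(*w) for w in workspaces]
```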
-See also [Deploy Custom Content from your Repository](ci-cd.md) for a consolidated methodology for managing Microsoft Sentinel as code and for deploying and configuring resources from a private Azure DevOps or GitHub repository.
+## Manage workspaces across tenants using Azure Lighthouse
-## Managing workspaces across tenants using Azure Lighthouse
+As mentioned above, in many scenarios, the different Microsoft Sentinel workspaces can be located in different Azure AD tenants. You can use [Azure Lighthouse](../lighthouse/overview.md) to extend all cross-workspace activities across tenant boundaries, allowing users in your managing tenant to work on Microsoft Sentinel workspaces across all tenants.
-As mentioned above, in many scenarios, the different Microsoft Sentinel workspaces can be located in different Azure AD tenants. You can use [Azure Lighthouse](../lighthouse/overview.md) to extend all cross-workspace activities across tenant boundaries, allowing users in your managing tenant to work on Microsoft Sentinel workspaces across all tenants. Once Azure Lighthouse is [onboarded](../lighthouse/how-to/onboard-customer.md), use the [directory + subscription selector](./multiple-tenants-service-providers.md#how-to-access-microsoft-sentinel-in-managed-tenants) on the Azure portal to select all the subscriptions containing workspaces you want to manage, in order to ensure that they will all be available in the different workspace selectors in the portal.
+Once Azure Lighthouse is [onboarded](../lighthouse/how-to/onboard-customer.md), use the [directory + subscription selector](./multiple-tenants-service-providers.md#how-to-access-microsoft-sentinel-in-managed-tenants) on the Azure portal to select all the subscriptions containing workspaces you want to manage, in order to ensure that they'll all be available in the different workspace selectors in the portal.
-When using Azure Lighthouse, it is recommended to create a group for each Microsoft Sentinel role and delegate permissions from each tenant to those groups.
+When using Azure Lighthouse, it's recommended to create a group for each Microsoft Sentinel role and delegate permissions from each tenant to those groups.
## Next steps
-In this document, you learned how Microsoft Sentinel's capabilities can be extended across multiple workspaces and tenants. For practical guidance on implementing Microsoft Sentinel's cross-workspace architecture, see the following articles:
+In this article, you learned how Microsoft Sentinel's capabilities can be extended across multiple workspaces and tenants. For practical guidance on implementing Microsoft Sentinel's cross-workspace architecture, see the following articles:
- Learn how to [work with multiple tenants](./multiple-tenants-service-providers.md) in Microsoft Sentinel, using Azure Lighthouse.
- Learn how to [view and manage incidents in multiple workspaces](./multiple-workspace-view.md) seamlessly.
sentinel Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/overview.md
Title: What is Microsoft Sentinel? | Microsoft Docs description: Learn about Microsoft Sentinel, a scalable, cloud-native security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. -- Previously updated : 11/09/2021 ++ Last updated : 07/14/2022 # What is Microsoft Sentinel?
+Microsoft Sentinel is a scalable, cloud-native solution that provides:
+
+- Security information and event management (SIEM)
+- Security orchestration, automation, and response (SOAR)
-Microsoft Sentinel is a scalable, cloud-native, **security information and event management (SIEM)** and **security orchestration, automation, and response (SOAR)** solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for attack detection, threat visibility, proactive hunting, and threat response.
+Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise. With Microsoft Sentinel, you get a single solution for attack detection, threat visibility, proactive hunting, and threat response.
Microsoft Sentinel is your bird's-eye view across the enterprise alleviating the stress of increasingly sophisticated attacks, increasing volumes of alerts, and long resolution time frames.
- **Respond to incidents rapidly** with built-in orchestration and automation of common tasks.
-![Microsoft Sentinel core capabilities](./media/overview/core-capabilities.png)
-Building on the full range of existing Azure services, Microsoft Sentinel natively incorporates proven foundations, like Log Analytics, and Logic Apps. Microsoft Sentinel enriches your investigation and detection with AI, and provides Microsoft's threat intelligence stream and enables you to bring your own threat intelligence.
+Microsoft Sentinel natively incorporates proven Azure services, like Log Analytics and Logic Apps. Microsoft Sentinel enriches your investigation and detection with AI. It provides Microsoft's threat intelligence stream and enables you to bring your own threat intelligence.
-## Connect to all your data
-To on-board Microsoft Sentinel, you first need to [connect to your security sources](connect-data-sources.md).
+## Collect data by using data connectors
-Microsoft Sentinel comes with a number of connectors for Microsoft solutions, available out of the box and providing real-time integration, including Microsoft 365 Defender (formerly Microsoft Threat Protection) solutions, and Microsoft 365 sources, including Office 365, Azure AD, Microsoft Defender for Identity (formerly Azure ATP), and Microsoft Defender for Cloud Apps, and more. In addition, there are built-in connectors to the broader security ecosystem for non-Microsoft solutions. You can also use common event format, Syslog or REST-API to connect your data sources with Microsoft Sentinel as well.
+To onboard Microsoft Sentinel, you first need to [connect to your data sources](connect-data-sources.md).
-For more information, see [Find your data connector](data-connectors-reference.md).
+Microsoft Sentinel comes with many connectors for Microsoft solutions that are available out of the box and provide real-time integration. Some of these connectors include:
-![Data collectors](./media/collect-data/collect-data-page.png)
+- Microsoft sources like Microsoft 365 Defender, Microsoft Defender for Cloud, Office 365, Microsoft Defender for IoT, and more.
+- Azure service sources like Azure Active Directory, Azure Activity, Azure Storage, Azure Key Vault, Azure Kubernetes service, and more.
+Microsoft Sentinel has built-in connectors to the broader security and applications ecosystems for non-Microsoft solutions. You can also use common event format, Syslog, or REST-API to connect your data sources with Microsoft Sentinel.
+
+For more information, see [Find your data connector](data-connectors-reference.md).
-## Workbooks
-After you [connected your data sources](quickstart-onboard.md) to Microsoft Sentinel, you can monitor the data using the Microsoft Sentinel integration with Azure Monitor Workbooks, which provides versatility in creating custom workbooks.
+## Create interactive reports by using workbooks
-While Workbooks are displayed differently in Microsoft Sentinel, it may be useful for you to see how to [Create interactive reports with Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md). Microsoft Sentinel allows you to create custom workbooks across your data, and also comes with built-in workbook templates to allow you to quickly gain insights across your data as soon as you connect a data source.
+After you [onboard to Microsoft Sentinel](quickstart-onboard.md), monitor your data by using the integration with Azure Monitor workbooks.
-![Dashboards](./media/tutorial-monitor-data/access-workbooks.png)
+Workbooks display differently in Microsoft Sentinel than in Azure Monitor. But it may be useful for you to see how to [create a workbook in Azure Monitor](../azure-monitor/visualize/workbooks-create-workbook.md). Microsoft Sentinel allows you to create custom workbooks across your data. Microsoft Sentinel also comes with built-in workbook templates to allow you to quickly gain insights across your data as soon as you connect a data source.
-- Workbooks are intended for SOC engineers and analysts of all tiers to visualize data. -- While Workbooks are best used for high-level views of Microsoft Sentinel data, and require no coding knowledge, you cannot integrate Workbooks with external data.
+Workbooks are intended for SOC engineers and analysts of all tiers to visualize data.
-## Analytics
+Workbooks are best used for high-level views of Microsoft Sentinel data, and don't require coding knowledge. But you can't integrate workbooks with external data.
-To help you reduce noise and minimize the number of alerts you have to review and investigate, Microsoft Sentinel uses [analytics to correlate alerts into incidents](detect-threats-built-in.md). **Incidents** are groups of related alerts that together create an actionable possible-threat that you can investigate and resolve. Use the built-in correlation rules as-is, or use them as a starting point to build your own. Microsoft Sentinel also provides machine learning rules to map your network behavior and then look for anomalies across your resources. These analytics connect the dots, by combining low fidelity alerts about different entities into potential high-fidelity security incidents.
+## Correlate alerts into incidents by using analytics rules
-![Incidents](./media/investigate-cases/incident-severity.png#lightbox)
+To help you reduce noise and minimize the number of alerts you have to review and investigate, Microsoft Sentinel uses [analytics to correlate alerts into incidents](detect-threats-built-in.md). Incidents are groups of related alerts that together indicate an actionable possible-threat that you can investigate and resolve. Use the built-in correlation rules as-is, or use them as a starting point to build your own. Microsoft Sentinel also provides machine learning rules to map your network behavior and then look for anomalies across your resources. These analytics connect the dots, by combining low fidelity alerts about different entities into potential high-fidelity security incidents.
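As a conceptual illustration only (Microsoft Sentinel's real analytics rules are far richer, using machine learning and fuzzy matching across entities), "connecting the dots" by grouping low-fidelity alerts that share an entity into candidate incidents might look like:

```python
from collections import defaultdict

def correlate_alerts(alerts):
    """Illustrative sketch: group alerts that reference the same entity
    (for example, the same host) into candidate incidents. This is NOT
    Microsoft Sentinel's actual correlation logic."""
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[alert["entity"]].append(alert["name"])
    # Only groups with more than one related alert become incidents here.
    return {entity: names for entity, names in incidents.items() if len(names) > 1}
```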
-## Security automation & orchestration
+## Automate and orchestrate common tasks by using playbooks
Automate your common tasks and [simplify security orchestration with playbooks](tutorial-respond-threats-playbook.md) that integrate with Azure services and your existing tools.
-Built on the foundation of Azure Logic Apps, Microsoft Sentinel's automation and orchestration solution provides a highly extensible architecture that enables scalable automation as new technologies and threats emerge. To build playbooks with Azure Logic Apps, you can choose from a growing gallery of built-in playbooks. These include [200+ connectors](../connectors/apis-list.md) for services such as Azure functions. The connectors allow you to apply any custom logic in code, ServiceNow, Jira, Zendesk, HTTP requests, Microsoft Teams, Slack, Windows Defender ATP, and Defender for Cloud Apps.
+Microsoft Sentinel's automation and orchestration solution provides a highly extensible architecture that enables scalable automation as new technologies and threats emerge. To build playbooks with Azure Logic Apps, you can choose from a growing gallery of built-in playbooks. These include [200+ connectors](../connectors/apis-list.md) for services such as Azure functions. The connectors allow you to apply any custom logic in code like:
-For example, if you use the ServiceNow ticketing system, you can use the tools provided to use Azure Logic Apps to automate your workflows and open a ticket in ServiceNow each time a particular event is detected.
+- ServiceNow
+- Jira
+- Zendesk
+- HTTP requests
+- Microsoft Teams
+- Slack
+- Windows Defender ATP
+- Defender for Cloud Apps
-![Playbooks](./media/tutorial-respond-threats-playbook/logic-app.png)
+For example, if you use the ServiceNow ticketing system, use Azure Logic Apps to automate your workflows and open a ticket in ServiceNow each time a particular alert or incident is generated.
-- Playbooks are intended for SOC engineers and analysts of all tiers, to automate and simplify tasks, including data ingestion, enrichment, investigation, and remediation. -- Playbooks work best with single, repeatable tasks, and require no coding knowledge. Playbooks are not suitable for ad-hoc or complex task chains, or for documenting and sharing evidence.
+Playbooks are intended for SOC engineers and analysts of all tiers, to automate and simplify tasks, including data ingestion, enrichment, investigation, and remediation.
+Playbooks work best with single, repeatable tasks, and don't require coding knowledge. Playbooks aren't suitable for ad-hoc or complex task chains, or for documenting and sharing evidence.
-## Investigation
+## Investigate the scope and root cause of security threats
-Currently in preview, Microsoft Sentinel [deep investigation](investigate-cases.md) tools help you to understand the scope and find the root cause, of a potential security threat. You can choose an entity on the interactive graph to ask interesting questions for a specific entity, and drill down into that entity and its connections to get to the root cause of the threat.
+Microsoft Sentinel [deep investigation](investigate-cases.md) tools help you to understand the scope and find the root cause of a potential security threat. You can choose an entity on the interactive graph to ask interesting questions for a specific entity, and drill down into that entity and its connections to get to the root cause of the threat.
-![Investigation](./media/investigate-cases/map-timeline.png)
+## Hunt for security threats by using built-in queries
-## Hunting
+Use Microsoft Sentinel's [powerful hunting search-and-query tools](hunting.md), based on the MITRE framework, which enable you to proactively hunt for security threats across your organization's data sources, before an alert is triggered. Create custom detection rules based on your hunting query. Then, surface those insights as alerts to your security incident responders.
-Use Microsoft Sentinel's [powerful hunting search-and-query tools](hunting.md), based on the MITRE framework, which enable you to proactively hunt for security threats across your organization's data sources, before an alert is triggered. After you discover which hunting query provides high-value insights into possible attacks, you can also create custom detection rules based on your query, and surface those insights as alerts to your security incident responders. While hunting, you can create bookmarks for interesting events, enabling you to return to them later, share them with others, and group them with other correlating events to create a compelling incident for investigation.
+While hunting, create bookmarks to return to interesting events later. Use a bookmark to share an event with others. Or, group events with other correlating events to create a compelling incident for investigation.
-![Overview of hunting feature](./media/overview/hunting.png)
-## Notebooks
+## Enhance your threat hunting with notebooks
Microsoft Sentinel supports Jupyter notebooks in Azure Machine Learning workspaces, including full libraries for machine learning, visualization, and data analysis.
-[Use notebooks in Microsoft Sentinel](notebooks.md) to extend the scope of what you can do with Microsoft Sentinel data. For example, perform analytics that aren't built in to Microsoft Sentinel, such as some Python machine learning features, create data visualizations that aren't built in to Microsoft Sentinel, such as custom timelines and process trees, or integrate data sources outside of Microsoft Sentinel, such as an on-premises data set.
+[Use notebooks in Microsoft Sentinel](notebooks.md) to extend the scope of what you can do with Microsoft Sentinel data. For example:
+
+- Perform analytics that aren't built in to Microsoft Sentinel, such as some Python machine learning features.
+- Create data visualizations that aren't built in to Microsoft Sentinel, such as custom timelines and process trees.
+- Integrate data sources outside of Microsoft Sentinel, such as an on-premises data set.
+
+Notebooks are intended for threat hunters or Tier 2-3 analysts, incident investigators, data scientists, and security researchers. They require a higher learning curve and coding knowledge. They have limited automation support.
-- Microsoft Sentinel notebooks are intended for threat hunters or Tier 2-3 analysts, incident investigators, data scientists, and security researchers.
+Notebooks in Microsoft Sentinel provide:
-- Notebooks provide queries to both Microsoft Sentinel and external data, features for data enrichment, investigation, visualization, hunting, machine learning, and big data analytics.
+- Queries to both Microsoft Sentinel and external data
+- Features for data enrichment, investigation, visualization, hunting, machine learning, and big data analytics
-- Notebooks are best for more complex chains of repeatable tasks, ad-hoc procedural controls, machine learning and custom analysis, support rich Python libraries for manipulating and visualizing data, and are useful in documenting and sharing analysis evidence.
+Notebooks are best for:
-- Notebooks require a higher learning curve and coding knowledge, and have limited automation support.
+- More complex chains of repeatable tasks
+- Ad-hoc procedural controls
+- Machine learning and custom analysis
+Notebooks support rich Python libraries for manipulating and visualizing data. They're useful to document and share analysis evidence.
-## Community
+## Download security content from the community
-The Microsoft Sentinel community is a powerful resource for threat detection and automation. Our Microsoft security analysts constantly create and add new workbooks, playbooks, hunting queries, and more, posting them to the community for you to use in your environment. You can download sample content from the private community GitHub [repository](https://aka.ms/asicommunity) to create custom workbooks, hunting queries, notebooks, and playbooks for Microsoft Sentinel.
+The Microsoft Sentinel community is a powerful resource for threat detection and automation. Our Microsoft security analysts create and add new workbooks, playbooks, hunting queries, and more. They post these content items to the community for you to use in your environment. Download sample content from the private community GitHub [repository](https://aka.ms/asicommunity) to create custom workbooks, hunting queries, notebooks, and playbooks for Microsoft Sentinel.
-![Explore the user community](./media/overview/community.png)
## Next steps
-- To get started with Microsoft Sentinel, you need a subscription to Microsoft Azure. If you do not have a subscription, you can sign up for a [free trial](https://azure.microsoft.com/free/).
+- To get started with Microsoft Sentinel, you need a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free trial](https://azure.microsoft.com/free/).
- Learn how to [onboard your data to Microsoft Sentinel](quickstart-onboard.md), and [get visibility into your data, and potential threats](get-visibility.md).
static-web-apps Apex Domain Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apex-domain-azure-dns.md
The following procedure requires you to copy settings from an Azure DNS zone you
1. Under *Settings*, select **Custom domains**.
-1. Select the **+ Add** button.
+1. Select the **+ Add** button, and select **Custom Domain on Azure DNS** from the drop-down.
1. In the *Enter domain* tab, enter your apex domain name.
storage Blob Containers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-portal.md
+
+ Title: Manage blob containers using the Azure portal
+
+description: Learn how to manage Azure storage containers using the Azure portal
+++++ Last updated : 07/18/2022++++
+# Manage blob containers using the Azure portal
+
+Azure Blob Storage allows you to store large amounts of unstructured object data. You can use Blob Storage to gather or expose media, content, or application data to users. Because all blob data is stored within containers, you must create a storage container before you can begin to upload data. To learn more about Blob Storage, read the [Introduction to Azure Blob storage](storage-blobs-introduction.md).
+
+In this how-to article, you learn how to work with container objects within the Azure portal.
+
+## Prerequisites
+
+To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+All access to Azure Storage takes place through a storage account. For this how-to article, create a storage account using the [Azure portal](https://portal.azure.com/), Azure PowerShell, or Azure CLI. For help with creating a storage account, see [Create a storage account](../common/storage-account-create.md).
+
+## Create a container
+
+A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs.
+
+To create a container in the [Azure portal](https://portal.azure.com), follow these steps:
+
+1. In the portal navigation pane on the left side of the screen, select **Storage accounts** and choose a storage account. If the navigation pane isn't visible, select the menu button to toggle its visibility.
+
+ :::image type="content" source="media/blob-containers-portal/menu-expand-sml.png" alt-text="Screenshot of the Azure Portal homepage showing the location of the Menu button in the browser." lightbox="media/blob-containers-portal/menu-expand-lrg.png":::
+
+1. In the navigation pane for the storage account, scroll to the **Data storage** section and select **Containers**.
+1. Within the **Containers** pane, select the **+ Container** button to open the **New container** pane.
+1. Within the **New Container** pane, provide a **Name** for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character. For more information about container and blob names, see [Naming and referencing containers, blobs, and metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
+1. Set the **Public access level** for the container. The default level is **Private (no anonymous access)**. For more information, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md?tabs=portal).
+1. Select **Create** to create the container.
+
+ :::image type="content" source="media/blob-containers-portal/create-container-sml.png" alt-text="Screenshot showing how to create a container within the Azure portal." lightbox="media/blob-containers-portal/create-container-lrg.png":::
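The naming rules described above can also be checked client-side before a container is created. This sketch encodes the stated rules, plus Azure's documented 3-63 character limit and its prohibition on consecutive dashes:

```python
import re

# One leading letter or digit, then letters/digits optionally preceded by a
# single dash -- this also rules out trailing and consecutive dashes.
_CONTAINER_NAME = re.compile(r"^[a-z0-9](?:-?[a-z0-9])*$")

def is_valid_container_name(name: str) -> bool:
    """Return True if `name` satisfies Azure's container naming rules."""
    return 3 <= len(name) <= 63 and bool(_CONTAINER_NAME.fullmatch(name))
```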
+
+## Read container properties and metadata
+
+A container exposes both system properties and user-defined metadata. System properties exist on each Blob Storage resource. Some properties are read-only, while others can be read or set.
+
+User-defined metadata consists of one or more name-value pairs that you specify for a Blob Storage resource. You can use metadata to store additional values with the resource. Metadata values are for your own purposes only, and don't affect how the resource behaves.
+
+### Container properties
+
+To display the properties of a container within the [Azure portal](https://portal.azure.com), follow these steps:
+
+1. Navigate to the list of containers within your storage account.
+1. Select the checkbox next to the name of the container whose properties you want to view.
+1. Select the container's **More** button (**...**), and select **Container properties** to display the container's **Properties** pane.
+
+ :::image type="content" source="media/blob-containers-portal/select-container-properties-sml.png" alt-text="Screenshot showing how to display container properties within the Azure portal." lightbox="media/blob-containers-portal/select-container-properties-lrg.png":::
+
+### Read and write container metadata
+
+Users that have large numbers of objects within their storage account can organize their data logically within containers using metadata.
+
+To manage a container's metadata within the [Azure portal](https://portal.azure.com), follow these steps:
+
+1. Navigate to the list of containers in your storage account.
+1. Select the checkbox next to the name of the container whose metadata you want to manage.
+1. Select the container's **More** button (**...**), and then select **Edit metadata** to display the **Container metadata** pane.
+
+ :::image type="content" source="media/blob-containers-portal/select-container-metadata-sml.png" alt-text="Screenshot showing how to access container metadata within the Azure portal." lightbox="media/blob-containers-portal/select-container-metadata-lrg.png":::
+
+1. The **Container metadata** pane will display existing metadata key-value pairs. Existing data can be edited by selecting an existing key or value and overwriting the data. You can add additional metadata by supplying data in the empty fields provided. Finally, select **Save** to commit your data.
+
+ :::image type="content" source="media/blob-containers-portal/add-container-metadata-sml.png" alt-text="Screenshot showing how to update container metadata within the Azure portal." lightbox="media/blob-containers-portal/add-container-metadata-lrg.png":::
+
+## Manage container and blob access
+
+Properly managing access to containers and their blobs is key to ensuring that your data remains safe. The following sections illustrate ways in which you can meet your access requirements.
+
+### Manage Azure RBAC role assignments for the container
+
+Azure Active Directory (Azure AD) offers optimum security for Blob Storage resources. Azure role-based access control (Azure RBAC) determines what permissions a security principal has to a given resource. To grant access to a container, you'll assign an RBAC role at the container scope or above to a user, group, service principal, or managed identity. You may also choose to add one or more conditions to the role assignment.
+
+You can read about the assignment of roles at [Assign Azure roles using the Azure portal](assign-azure-role-data-access.md?tabs=portal).
+
+### Enable anonymous public read access
+
+Although anonymous read access for containers is supported, it's disabled by default. All access requests require authorization until anonymous access is explicitly enabled. After anonymous access is enabled, any client will be able to read data within that container without authorizing the request.
+
+Read about enabling public access level in the [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md?tabs=portal) article.
+
+### Generate a shared access signature
+
+A shared access signature (SAS) provides temporary, secure, delegated access to a client who wouldn't normally have permissions. A SAS gives you granular control over how a client can access your data. For example, you can specify which resources are available to the client. You can also limit the types of operations that the client can perform, and specify the duration.
+
+Azure supports three types of SAS. A **service SAS** provides access to a resource in just one of the storage services: the Blob, Queue, Table, or File service. An **account SAS** is similar to a service SAS, but can permit access to resources in more than one storage service. A **user delegation SAS** is secured with Azure Active Directory credentials instead of the account key.
+
+When you create a SAS, you may set access limitations based on permission level, IP address or range, or start and expiry date and time. You can read more in [Grant limited access to Azure Storage resources using shared access signatures](../common/storage-sas-overview.md).
+
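The portal performs the signing for you, but the underlying mechanics can be sketched: a SAS signature is an HMAC-SHA256 digest, computed with the account key, over a string-to-sign assembled from fields such as the permissions, start and expiry times, and canonicalized resource. The exact string-to-sign layout is defined by the Storage REST specification and varies by api-version; the sketch below shows only the signing step, with placeholder inputs:

```python
import base64
import hashlib
import hmac

def sign_sas_payload(account_key_b64: str, string_to_sign: str) -> str:
    """Compute the HMAC-SHA256 signature carried in a SAS token's `sig`
    parameter. The string-to-sign layout here is the caller's concern;
    consult the Storage REST specification for the exact field order."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")
```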
+> [!CAUTION]
+> Any client that possesses a valid SAS can access data in your storage account as permitted by that SAS. It's important to protect a SAS from malicious or unintended use. Use discretion in distributing a SAS, and have a plan in place for revoking a compromised SAS.
+
+To generate a SAS token using the [Azure portal](https://portal.azure.com), follow these steps:
+
+1. In the Azure portal, navigate to the list of containers in your storage account.
+1. Select the checkbox next to the name of the container for which you'll generate a SAS token.
+1. Select the container's **More** button (**...**), and select **Generate SAS** to display the **Generate SAS** pane.
+
+ :::image type="content" source="media/blob-containers-portal/select-container-sas-sml.png" alt-text="Screenshot showing how to access container shared access signature settings within the Azure portal" lightbox="media/blob-containers-portal/select-container-sas-lrg.png":::
+
+1. Within the **Generate SAS** pane, select **Account key** for the **Signing method** field. Choosing the account key will result in the creation of a service SAS.
+1. In the **Signing key** field, select the desired key to be used to sign the SAS.
+1. In the **Stored access policy** field, select **None**.
+1. Select the **Permissions** field, then select the check boxes corresponding to the desired permissions.
+1. In the **Start and expiry date/time** section, specify the desired **Start** and **Expiry** date, time, and time zone values.
+1. Optionally, specify an IP address or a range of IP addresses from which to accept requests in the **Allowed IP addresses** field. If the request IP address doesn't match the IP address or address range specified on the SAS token, it won't be authorized.
+1. Optionally, specify the protocol permitted for requests made with the SAS in the **Allowed protocols** field. The default value is HTTPS.
+1. Review your settings for accuracy and then select **Generate SAS token and URL** to display the **Blob SAS token** and **Blob SAS URL** query strings.
+
+ :::image type="content" source="media/blob-containers-portal/generate-container-sas-sml.png" alt-text="Screenshot showing how to generate a SAS for a container within the Azure portal." lightbox="media/blob-containers-portal/generate-container-sas-lrg.png":::
+
+1. Copy and paste the blob SAS token and blob SAS URL values into a secure location. They're displayed only once and can't be retrieved after the window is closed.
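A service SAS of this kind is essentially an HMAC-SHA256 signature computed locally over a version-specific string-to-sign using one of the account keys; no call to the service is involved. The sketch below illustrates the general shape of that scheme. It is a simplified illustration only — the exact string-to-sign has more fields and varies by service version (see the service SAS REST reference) — and the account name, container name, and key are placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_container_sas(account_name, account_key_b64, container,
                       permissions, start, expiry, version="2021-06-08"):
    """Simplified sketch of service SAS signing. The real string-to-sign
    is version-specific and has more fields; consult the service SAS REST
    reference for the authoritative layout."""
    canonical_resource = f"/blob/{account_name}/{container}"
    string_to_sign = "\n".join([
        permissions,            # sp
        start,                  # st
        expiry,                 # se
        canonical_resource,
        "",                     # signed identifier (stored access policy)
        "",                     # signed IP range
        "https",                # signed protocol
        version,                # sv
    ])
    key = base64.b64decode(account_key_b64)
    signature = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode()
    return urllib.parse.urlencode({
        "sv": version, "sr": "c", "sp": permissions,
        "st": start, "se": expiry, "spr": "https", "sig": signature,
    })

demo_key = base64.b64encode(b"not-a-real-account-key").decode()
token = sign_container_sas("mystorageacct", demo_key, "mycontainer",
                           "rl", "2022-07-18T00:00:00Z",
                           "2022-07-19T00:00:00Z")
```

Because the signature is computed entirely from the account key, anyone holding the key can mint a new SAS without contacting the service — one reason the stored access policy mechanism exists for server-side control and revocation.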
+
+### Create a stored access or immutability policy
+
+A **stored access policy** gives you additional server-side control over one or more shared access signatures. When you associate a SAS with a stored access policy, the SAS inherits the restrictions defined in the policy. This extra level of control allows you to change the start time, expiry time, or permissions for a signature, or to revoke it after it has been issued.
+
+**Immutability policies** protect your data from overwrites and deletes. Immutability policies allow objects to be created and read, but prevent their modification or deletion for a specific duration. Blob Storage supports two types of immutability policies. A **time-based retention policy** prohibits write and delete operations for a defined period of time. A **legal hold** also prohibits write and delete operations, but must be explicitly cleared before those operations can resume.
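The two immutability policy types reduce to a simple rule: a write or delete is allowed only when any time-based retention period has elapsed and no legal hold is in place. A minimal sketch of that rule, with `write_allowed` as a hypothetical helper rather than an SDK call:

```python
from datetime import datetime, timedelta, timezone

def write_allowed(now, retention_until=None, legal_hold=False):
    """Hypothetical helper combining the two immutability policy types."""
    # A time-based retention policy blocks writes/deletes until it expires.
    if retention_until is not None and now < retention_until:
        return False
    # A legal hold blocks them until it is explicitly cleared.
    if legal_hold:
        return False
    return True

now = datetime(2022, 7, 18, tzinfo=timezone.utc)
assert not write_allowed(now, retention_until=now + timedelta(days=30))
assert not write_allowed(now, legal_hold=True)
assert write_allowed(now, retention_until=now - timedelta(days=1))
```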
+
+#### Create a stored access policy
+
+Configuring a stored access policy is a two-step process: you must first define the policy, and then apply it to the container. To configure a stored access policy, follow these steps:
+
+1. In the Azure portal, navigate to the list of containers in your storage account.
+1. Select the checkbox next to the name of the container for which you'll create a stored access policy.
+1. Select the container's **More** button (**...**), and select **Access policy** to display the **Access policy** pane.
+
+ :::image type="content" source="media/blob-containers-portal/select-container-policy-sml.png" alt-text="Screenshot showing how to access container stored access policy settings within the Azure portal." lightbox="media/blob-containers-portal/select-container-policy-lrg.png":::
+
+1. Within the **Access policy** pane, select **+ Add policy** in the **Stored access policies** section to display the **Add policy** pane. Any existing policies are displayed in the appropriate section.
+
+ :::image type="content" source="media/blob-containers-portal/select-add-policy-sml.png" alt-text="Screenshot showing how to add a stored access policy settings within the Azure portal." lightbox="media/blob-containers-portal/select-add-policy-lrg.png":::
+
+1. Within the **Add policy** pane, select the **Identifier** box and add a name for your new policy.
+1. Select the **Permissions** field, then select the check boxes corresponding to the permissions desired for your new policy.
+1. Optionally, provide date, time, and time zone values for **Start time** and **Expiry time** fields to set the policy's validity period.
+1. Review your settings for accuracy and then select **OK** to update the **Access policy** pane.
+
+ > [!CAUTION]
+ > Although your policy is now displayed in the **Stored access policy** table, it is still not applied to the container. If you navigate away from the **Access policy** pane at this point, the policy will *not* be saved or applied and you will lose your work.
+
+ :::image type="content" source="media/blob-containers-portal/select-save-policy-sml.png" alt-text="Screenshot showing how to define a stored access policy within the Azure portal." lightbox="media/blob-containers-portal/select-save-policy-lrg.png":::
+
+1. In the **Access policy** pane, select **+ Add policy** to define another policy, or select **Save** to apply your new policy to the container. After creating at least one stored access policy, you'll be able to associate other shared access signatures (SAS) with it.
+
+ :::image type="content" source="media/blob-containers-portal/apply-policy-sml.png" alt-text="Screenshot showing how to apply a stored access policy within the Azure portal." lightbox="media/blob-containers-portal/apply-policy-lrg.png":::
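The revocation behavior that stored access policies enable follows from the fact that a SAS associated with a policy carries only the policy's identifier; the service resolves permissions and expiry from the server-side policy at request time, so editing or deleting the policy immediately affects every SAS that references it. A toy model of that resolution (illustrative only, not the Azure SDK):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class StoredAccessPolicy:
    permissions: str          # e.g. "rl" for read + list
    expiry: datetime

@dataclass
class Container:
    # Policies live server-side, keyed by identifier; a SAS that
    # references a policy carries only the identifier.
    policies: dict = field(default_factory=dict)

def authorize(container, policy_id, requested, now):
    """Resolve a SAS against its stored access policy at request time."""
    policy = container.policies.get(policy_id)
    if policy is None:                 # policy deleted -> SAS revoked
        return False
    if now > policy.expiry:            # expiry enforced server-side
        return False
    return all(p in policy.permissions for p in requested)

now = datetime(2022, 7, 18, tzinfo=timezone.utc)
c = Container()
c.policies["read-only"] = StoredAccessPolicy("rl", now + timedelta(days=7))
assert authorize(c, "read-only", "r", now)
del c.policies["read-only"]            # server-side revocation
assert not authorize(c, "read-only", "r", now)
```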
+
+#### Create an immutability policy
+
+Read more about how to [Configure immutability policies for containers](immutable-storage-overview.md). For help with implementing immutability policies, follow the steps outlined in the [Configure a retention policy](immutable-policy-configure-container-scope.md?tabs=azure-portal#configure-a-retention-policy-on-a-container) or [Configure or clear a legal hold](immutable-policy-configure-container-scope.md?tabs=azure-portal#configure-or-clear-a-legal-hold) articles.
+
+## Manage leases
+
+A container lease is used to establish or manage a lock for delete operations. When a lease is acquired within the Azure portal, the lock can only be created with an infinite duration. When created programmatically, the lock duration can range from 15 to 60 seconds, or it can be infinite.
+
+There are five different lease operation modes, though only two are available within the Azure portal:
+
+| | Use case |<nobr>Available in Azure portal</nobr>|
+||-|:--:|
+|<nobr>**Acquire mode**</nobr> | Request a new lease. |&check; |
+|<nobr>**Renew mode**</nobr> | Renew an existing lease. | |
+|<nobr>**Change mode**</nobr> | Change the ID of an existing lease. | |
+|<nobr>**Release mode**</nobr> | End the current lease; allows other clients to acquire a new lease |&check; |
+|<nobr>**Break mode**</nobr> | End the current lease; prevents other clients from acquiring a new lease during the current lease period| |
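The five modes can be modeled as a small state machine. The following toy class illustrates the semantics in the table above; it is not the Azure SDK, and the break behavior is simplified to a flag that blocks acquisition for the remainder of the lease period:

```python
import uuid

class ContainerLease:
    """Toy model of the five container lease modes (not the Azure SDK)."""

    def __init__(self):
        self.lease_id = None
        self.broken = False

    def _check(self, lease_id):
        if self.lease_id is None or lease_id != self.lease_id:
            raise RuntimeError("no active lease with that ID")

    def acquire(self):
        # Portal-acquired leases always have an infinite duration.
        if self.lease_id is not None:
            raise RuntimeError("container is already leased")
        if self.broken:
            raise RuntimeError("lease period still running after a break")
        self.lease_id = str(uuid.uuid4())
        return self.lease_id

    def renew(self, lease_id):
        self._check(lease_id)

    def change(self, lease_id, new_id):
        self._check(lease_id)
        self.lease_id = new_id
        return new_id

    def release(self, lease_id):
        # Ends the lease; other clients may acquire a new one immediately.
        self._check(lease_id)
        self.lease_id = None

    def break_lease(self):
        # Ends the lease; blocks new acquisitions during the lease period.
        if self.lease_id is None:
            raise RuntimeError("no lease to break")
        self.lease_id = None
        self.broken = True

lease = ContainerLease()
lid = lease.acquire()
lease.renew(lid)
lid = lease.change(lid, "my-new-lease-id")
lease.release(lid)
lease.acquire()
lease.break_lease()
```

Note the difference between the last two calls: after `release`, `acquire` succeeds immediately; after `break_lease`, it would raise until the break period elapses.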
+
+### Acquire a lease
+
+To acquire a lease using the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the list of containers in your storage account.
+1. Select the checkbox next to the name of the container for which you'll acquire a lease.
+1. Select the container's **More** button (**...**), and select **Acquire lease** to request a new lease and display the details in the **Lease status** pane.
+
+ :::image type="content" source="media/blob-containers-portal/acquire-container-lease-sml.png" alt-text="Screenshot showing how to access container lease settings within the Azure portal." lightbox="media/blob-containers-portal/acquire-container-lease-lrg.png":::
+
+1. The **Container** and **Lease ID** property values of the newly requested lease are displayed within the **Lease status** pane. Copy and paste these values in a secure location. They'll only be displayed once and can't be retrieved after the pane is closed.
+
+ :::image type="content" source="media/blob-containers-portal/view-container-lease-sml.png" alt-text="Screenshot showing how to access container lease status pane within the Azure portal." lightbox="media/blob-containers-portal/view-container-lease-lrg.png":::
+
+### Break a lease
+
+To break a lease using the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the list of containers in your storage account.
+1. Select the checkbox next to the name of the container for which you'll break a lease.
+1. Select the container's **More** button (**...**), and select **Break lease** to break the lease.
+
+ :::image type="content" source="media/blob-containers-portal/break-container-lease-sml.png" alt-text="Screenshot showing how to break a container lease within the Azure portal." lightbox="media/blob-containers-portal/break-container-lease-lrg.png":::
+
+1. After the lease is broken, the selected container's **Lease state** value will update, and a status confirmation will appear.
+
+ :::image type="content" source="media/blob-containers-portal/broken-container-lease-sml.png" alt-text="Screenshot showing a container's broken lease within the Azure portal." lightbox="media/blob-containers-portal/broken-container-lease-lrg.png":::
+
+## Delete containers
+
+When you delete a container within the Azure portal, all blobs within the container will also be deleted.
+
+> [!WARNING]
+> Following the steps below may permanently delete containers and any blobs within them. Microsoft recommends enabling container soft delete to protect containers and blobs from accidental deletion. For more information, see [Soft delete for containers](soft-delete-container-overview.md).
+
+To delete a container within the [Azure portal](https://portal.azure.com), follow these steps:
+
+1. In the Azure portal, navigate to the list of containers in your storage account.
+1. Select the container to delete.
+1. Select the **More** button (**...**), and select **Delete**.
+
+ :::image type="content" source="media/blob-containers-portal/delete-container-sml.png" alt-text="Screenshot showing how to delete a container within the Azure portal." lightbox="media/blob-containers-portal/delete-container-lrg.png":::
+
+1. In the **Delete container(s)** dialog, confirm that you want to delete the container.
+
+In some cases, it's possible to retrieve containers that have been deleted. If the soft delete data protection option is enabled on your storage account, you can access containers deleted within the associated retention period. To learn more about soft delete, refer to the [Soft delete for containers](soft-delete-container-overview.md) article.
+
+## View soft-deleted containers
+
+When soft delete is enabled, you can view soft-deleted containers within the Azure portal. Soft-deleted containers are visible during the specified retention period. After the retention period expires, a soft-deleted container is permanently deleted and is no longer visible.
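The retention-window behavior above can be sketched as a small model: deleting a container stamps a deletion time, the container stays listable and restorable while the elapsed time is inside the retention window, and afterward it is gone for good. Illustrative Python only, not the Azure SDK:

```python
from datetime import datetime, timedelta, timezone

class ContainerStore:
    """Toy model of container soft delete (not the Azure SDK)."""

    def __init__(self, retention_days):
        self.retention = timedelta(days=retention_days)
        self.live = set()
        self.deleted = {}              # name -> deletion time

    def delete(self, name, now):
        self.live.discard(name)
        self.deleted[name] = now

    def list_deleted(self, now):
        # Soft-deleted containers are visible only inside the window.
        return sorted(n for n, t in self.deleted.items()
                      if now - t < self.retention)

    def undelete(self, name, now):
        when = self.deleted.get(name)
        if when is None or now - when >= self.retention:
            raise KeyError("container is permanently deleted")
        del self.deleted[name]
        self.live.add(name)

now = datetime(2022, 7, 18, tzinfo=timezone.utc)
store = ContainerStore(retention_days=7)
store.live.add("logs")
store.delete("logs", now)
assert store.list_deleted(now + timedelta(days=3)) == ["logs"]
store.undelete("logs", now + timedelta(days=3))
assert "logs" in store.live
```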
+
+To view soft-deleted containers within the [Azure portal](https://portal.azure.com), follow these steps:
+
+1. Navigate to your storage account within the Azure portal and view the list of your containers.
+1. Toggle the **Show deleted containers** switch to include deleted containers in the list.
+
+ :::image type="content" source="media/blob-containers-portal/soft-delete-container-portal-list.png" alt-text="Screenshot showing how to view soft deleted containers within the Azure portal.":::
+
+## Restore a soft-deleted container
+
+You can restore a soft-deleted container and its contents within the retention period. To restore a soft-deleted container within the [Azure portal](https://portal.azure.com), follow these steps:
+
+1. Navigate to your storage account within the Azure portal and view the list of your containers.
+1. Display the context menu for the container you wish to restore, and choose **Undelete** from the menu.
+
+ :::image type="content" source="media/blob-containers-portal/soft-delete-container-portal-restore.png" alt-text="Screenshot showing how to restore a soft-deleted container in Azure portal.":::
+
+## See also
+
+- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Manage blob containers using PowerShell](blob-containers-powershell.md)
+
+<!--Point-in-time restore: /azure/storage/blobs/point-in-time-restore-manage?tabs=portal-->
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
Previously updated : 06/13/2022 Last updated : 07/14/2022
The items that appear in these tables will change over time as support continues
| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) |
-| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | <sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
The items that appear in these tables will change over time as support continues
| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) |
-| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> <sup>3</sup> | ![No](../media/icons/no-icon.png)| ![No](../media/icons/no-icon.png) |
+| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> <sup>3</sup> | ![No](../media/icons/no-icon.png)| ![Yes](../media/icons/yes-icon.png) |
| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | <sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
Here's an example of what a parameter template definition looks like:
}, "Microsoft.Synapse/workspaces/pipelines": { "properties": {
- "activities": [
- {
- "typeProperties": {
- "waitTimeInSeconds": "-::int",
- "headers": "=::object"
- }
+ "activities": [{
+ "typeProperties": {
+ "waitTimeInSeconds": "-::int",
+ "headers": "=::object",
+ "activities": [
+ {
+ "typeProperties": {
+ "url": "-:-webUrl:string"
+ }
+ }
+ ]
}
- ]
+ }]
} }, "Microsoft.Synapse/workspaces/integrationRuntimes": {
Here's an example of what a parameter template definition looks like:
"*": { "properties": { "typeProperties": {
- "*": "="
+ "accountName": "=",
+ "username": "=",
+ "connectionString": "|:-connectionString:secureString",
+ "secretAccessKey": "|"
} } },
Here's an example of what a parameter template definition looks like:
"dataLakeStoreUri": "=" } }
+ },
+ "AzureKeyVault": {
+ "properties": {
+ "typeProperties": {
+ "baseUrl": "|:baseUrl:secureString"
+ },
+ "parameters": {
+ "KeyVaultURL": {
+ "type": "=",
+ "defaultValue": "|:defaultValue:secureString"
+ }
+ }
+ }
} }, "Microsoft.Synapse/workspaces/datasets": {
Here's an example of what a parameter template definition looks like:
"*": "=" } }
+ },
+ "Microsoft.Synapse/workspaces/credentials" : {
+ "properties": {
+ "typeProperties": {
+ "resourceId": "="
+ }
+ }
} } ```
virtual-desktop Data Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/data-locations.md
Storing service-generated data is currently supported in the following geographies:

- Europe (EU)
- United Kingdom (UK)
- Canada (CA)
-- Japan (JP) (preview)
-- Australia (AU) (preview)
+- Japan (JP)
+- Australia (AU)
In addition, service-generated data is aggregated from all locations where the service infrastructure is, and sent to the US geography. The data sent to the US includes scrubbed data, but not customer data.
virtual-machines Co Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/co-location.md
Title: Proximity placement groups description: Learn about using proximity placement groups in Azure.--++ Last updated 3/07/2021-+ # Proximity placement groups
virtual-machines Dcasv5 Dcadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcasv5-dcadsv5-series.md
Title: Azure DCasv5 and DCadsv5-series confidential virtual machines (preview)
+ Title: Azure DCasv5 and DCadsv5-series confidential virtual machines
description: Specifications for Azure Confidential Computing's DCasv5 and DCadsv5-series confidential virtual machines.
Last updated 11/15/2021
-# DCasv5 and DCadsv5-series confidential VMs (preview)
+# DCasv5 and DCadsv5-series confidential VMs
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
-> [!IMPORTANT]
-> Confidential virtual machines (confidential VMs) in Azure Confidential Computing is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- The DCasv5-series and DCadsv5-series are [confidential VMs](../confidential-computing/confidential-vm-overview.md) for use in Confidential Computing. These confidential VMs use AMD's third-Generation EPYC<sup>TM</sup> 7763v processor in a multi-threaded configuration with up to 256 MB L3 cache. These processors can achieve a boosted maximum frequency of 3.5 GHz. Both series offer Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP). SEV-SNP provides hardware-isolated VMs that protect data from other VMs, the hypervisor, and host management code. Confidential VMs offer hardware-based VM memory encryption. These series also offer OS disk pre-encryption before VM provisioning with different key management solutions.
virtual-machines Dedicated Host Compute Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-compute-optimized-skus.md
Title: Compute Optimized Azure Dedicated Host SKUs
description: Specifications for VM packing of Compute Optimized ADH SKUs. -+
virtual-machines Dedicated Host General Purpose Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-general-purpose-skus.md
Title: General Purpose Azure Dedicated Host SKUs
description: Specifications for VM packing of General Purpose ADH SKUs. -+
virtual-machines Dedicated Host Gpu Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-gpu-optimized-skus.md
Title: GPU Optimized Azure Dedicated Host SKUs
description: Specifications for VM packing of GPU optimized ADH SKUs. -+
virtual-machines Dedicated Host Memory Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-memory-optimized-skus.md
Title: Memory Optimized Azure Dedicated Host SKUs
description: Specifications for VM packing of Memory Optimized ADH SKUs. -+
virtual-machines Dedicated Host Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-migration-guide.md
Title: Azure Dedicated Host SKU Retirement Migration Guide
description: Walkthrough on how to migrate a retiring Dedicated Host SKU -+
virtual-machines Dedicated Host Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-retirement.md
Title: Azure Dedicated Host SKU Retirement
description: Azure Dedicated Host SKU Retirement landing page -+
virtual-machines Dedicated Host Storage Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-storage-optimized-skus.md
Title: Storage Optimized Azure Dedicated Host SKUs
description: Specifications for VM packing of Storage Optimized ADH SKUs. -+
virtual-machines Dedicated Hosts How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts-how-to.md
Last updated 09/01/2021-+ #Customer intent: As an IT administrator, I want to learn about more about using a dedicated host for my Azure virtual machines
virtual-machines Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts.md
Last updated 12/07/2020 -+ #Customer intent: As an IT administrator, I want to learn about more about using a dedicated host for my Azure virtual machines
virtual-machines Ecasv5 Ecadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ecasv5-ecadsv5-series.md
Title: Azure ECasv5 and ECadsv5-series (preview)
+ Title: Azure ECasv5 and ECadsv5-series
description: Specifications for Azure Confidential Computing's ECasv5 and ECadsv5-series confidential virtual machines.
Last updated 11/15/2021
-# ECasv5 and ECadsv5-series (preview)
+# ECasv5 and ECadsv5-series
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
-> [!IMPORTANT]
-> Confidential virtual machines (confidential VMs) in Azure Confidential Computing is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- The ECasv5-series and ECadsv5-series are [confidential VMs](../confidential-computing/confidential-vm-overview.md) for use in Confidential Computing. These confidential VMs use AMD's third-Generation EPYC<sup>TM</sup> 7763v processor in a multi-threaded configuration with up to 256 MB L3 cache. This processor can achieve a boosted maximum frequency of 3.5 GHz. Both series offer Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP). SEV-SNP provides hardware-isolated VMs that protect data from other VMs, the hypervisor, and host management code. Confidential VMs offer hardware-based VM memory encryption. These series also offer OS disk pre-encryption before VM provisioning with different key management solutions.
virtual-machines Dsc Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-linux.md
$publicConfig = '{
Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when you deploy one or more virtual machines that require post-deployment configuration, such as onboarding to Azure Automation.
-The sample Resource Manager template is [dsc-linux-azure-storage-on-ubuntu](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/dsc-linux-azure-storage-on-ubuntu) and [dsc-linux-public-storage-on-ubuntu](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/dsc-linux-public-storage-on-ubuntu).
+The sample Resource Manager template is [dsc-linux-azure-storage-on-ubuntu](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/dsc-linux-azure-storage-on-ubuntu) and [dsc-linux-public-storage-on-ubuntu](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/dsc-linux-azure-storage-on-ubuntu).
For more information about the Azure Resource Manager template, see [Authoring Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md).
virtual-machines Linux Vm Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux-vm-connect.md
Title: Connect to a Linux VM description: Learn how to connect to a Linux VM in Azure.-+ Last updated 04/25/2022-++ # Connect to a Linux VM
virtual-machines Cloud Init Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloud-init-deep-dive.md
Title: Understanding cloud-init description: Deep dive for understanding provisioning an Azure VM using cloud-init.-+ Last updated 07/06/2020-+
virtual-machines Cloud Init Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloud-init-troubleshooting.md
Title: Troubleshoot using cloud-init description: Troubleshoot provisioning an Azure VM using cloud-init.-+ Last updated 07/06/2020-+
virtual-machines Cloudinit Add User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-add-user.md
Title: Use cloud-init to add a user to a Linux VM on Azure description: How to use cloud-init to add a user to a Linux VM during creation with the Azure CLI-+ Last updated 05/11/2021-+ # Use cloud-init to add a user to a Linux VM in Azure
virtual-machines Cloudinit Bash Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-bash-script.md
Title: Use cloud-init to run a bash script in a Linux VM on Azure description: How to use cloud-init to run a bash script in a Linux VM during creation with the Azure CLI-+ Last updated 11/29/2017-+ # Use cloud-init to run a bash script in a Linux VM in Azure
virtual-machines Cloudinit Configure Swapfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-configure-swapfile.md
Title: Use cloud-init to configure a swap partition on a Linux VM description: How to use cloud-init to configure a swap partition in a Linux VM during creation with the Azure CLI-+ Last updated 11/29/2017-+ # Use cloud-init to configure a swap partition on a Linux VM
virtual-machines Cloudinit Update Vm Hostname https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-update-vm-hostname.md
Title: Use cloud-init to set hostname for a Linux VM description: How to use cloud-init to customize a Linux VM during creation with the Azure CLI-+ Last updated 11/29/2017-+
virtual-machines Create Ssh Keys Detailed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-ssh-keys-detailed.md
Title: Detailed steps to create an SSH key pair description: Learn detailed steps to create and manage an SSH public and private key pair for Linux VMs in Azure.-+ Last updated 02/17/2021--++ # Detailed steps: Create and manage SSH keys for authentication to a Linux VM in Azure
virtual-machines Create Ssh Secured Vm From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-ssh-secured-vm-from-template.md
Title: Create a Linux VM in Azure from a template description: How to use the Azure CLI to create a Linux VM from a Resource Manager template-+ Last updated 03/22/2019--++ # How to create a Linux virtual machine with Azure Resource Manager templates
virtual-machines Create Upload Centos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-centos.md
Last updated 11/10/2021 -+ # Prepare a CentOS-based virtual machine for Azure
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
Last updated 05/13/2022 -+ # Information for community supported and non-endorsed distributions
The mechanism for rebuilding the initrd or initramfs image may vary depending on
sudo cp initrd-`uname -r`.img initrd-`uname -r`.img.bak ```
-2. Rebuild the initrd with the hv_vmbus and hv_storvsc kernel modules:
+2. Rebuild the `initrd` with the `hv_vmbus` and `hv_storvsc` kernel modules:
``` sudo mkinitrd --preload=hv_storvsc --preload=hv_vmbus -v -f initrd-`uname -r`.img `uname -r`
virtual-machines Create Upload Openbsd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-openbsd.md
Last updated 05/24/2017 -+ # Create and Upload an OpenBSD disk image to Azure
virtual-machines Create Upload Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-ubuntu.md
Last updated 07/28/2021 + # Prepare an Ubuntu virtual machine for Azure
virtual-machines Debian Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/debian-create-upload-vhd.md
Last updated 11/10/2021 -+ # Prepare a Debian VHD for Azure
virtual-machines Endorsed Distros https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/endorsed-distros.md
Last updated 04/06/2021 + # Endorsed Linux distributions on Azure
virtual-machines Flatcar Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/flatcar-create-upload-vhd.md
Title: Create and upload a Flatcar Container Linux VHD for use in Azure description: Learn to create and upload a VHD that contains a Flatcar Container Linux operating system.--++
virtual-machines Mac Create Ssh Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/mac-create-ssh-keys.md
Title: Create and use an SSH key pair for Linux VMs in Azure description: How to create and use an SSH public-private key pair for Linux VMs in Azure to improve the security of the authentication process.-+ Last updated 09/10/2021--++ # Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure
virtual-machines Oracle Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/oracle-create-upload-vhd.md
Last updated 11/09/2021 -+ # Prepare an Oracle Linux virtual machine for Azure
virtual-machines Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/proximity-placement-groups.md
Title: Create a proximity placement group using the Azure CLI description: Learn about creating and using proximity placement groups for virtual machines in Azure. -+ Last updated 3/8/2021-+
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
vm-linux
Last updated 11/10/2021 -+ # Prepare a Red Hat-based virtual machine for Azure
virtual-machines Ssh From Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/ssh-from-windows.md
Title: Use SSH keys to connect to Linux VMs description: Learn how to generate and use SSH keys from a Windows computer to connect to a Linux virtual machine on Azure.-+ Last updated 12/13/2021 -+ ms.devlang: azurecli-+ # How to use SSH keys with Windows on Azure
virtual-machines Suse Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/suse-create-upload-vhd.md
Title: Create and upload a SUSE Linux VHD in Azure description: Learn to create and upload an Azure virtual hard disk (VHD) that contains a SUSE Linux operating system.-+ Last updated 12/01/2020-++ # Prepare a SLES or openSUSE Leap virtual machine for Azure
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
VM Generation Support: Generation 1<br>
Xilinx has created the following marketplace images to simplify the deployment of these VMs.
-[Xilinx Alveo U250 2021.1 Deployment VM – Ubuntu18.04](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/xilinx.xilinx_xrt2021_1_ubuntu1804_deployment_image)
+Xilinx Alveo U250 2021.1 Deployment VM – Ubuntu18.04
-[Xilinx Alveo U250 2021.1 Deployment VM – Ubuntu20.04](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/xilinx.xilinx_xrt2021_1_ubuntu2004_deployment_image)
+Xilinx Alveo U250 2021.1 Deployment VM – Ubuntu20.04
-[Xilinx Alveo U250 2021.1 Deployment VM – CentOS7.8](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/xilinx.xilinx_xrt2021_1_centos78_deployment_image)
+Xilinx Alveo U250 2021.1 Deployment VM – CentOS7.8
**Q:** Can I deploy my own Ubuntu / CentOS VMs and install XRT / Deployment Target Platform?
virtual-machines Prepay Dedicated Hosts Reserved Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/prepay-dedicated-hosts-reserved-instances.md
Last updated 02/28/2020 +
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
Title: Deploy a trusted launch VM
description: Deploy a VM that uses trusted launch. -+
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Last updated 05/31/2022-+
virtual-machines Proximity Placement Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/proximity-placement-groups-portal.md
Title: Create a proximity placement group using the portal description: Learn how to create a proximity placement group using the Azure portal. -+ Last updated 3/8/2021-+
virtual-machines Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/proximity-placement-groups.md
Last updated 3/8/2021--+++
virtual-machines Azure Monitor Sap Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/azure-monitor-sap-quickstart.md
Previously updated : 07/08/2021 Last updated : 07/18/2022 # Deploy Azure Monitor for SAP Solutions by using the Azure portal
Sign in to the [Azure portal](https://portal.azure.com).
### SAP NetWeaver provider
-The SAP start service provides a host of services, including monitoring the SAP system. We're using SAPControl, which is a SOAP web service interface that exposes these capabilities. The SAPControl web service interface differentiates between [protected and unprotected](https://wiki.scn.sap.com/wiki/display/SI/Protected+web+methods+of+sapstartsrv) web service methods.
+The SAP start service provides a host of services, including monitoring the SAP system. We're using SAPControl, which is a SOAP web service interface that exposes these capabilities. The SAPControl web service interface differentiates between [protected and unprotected](https://wiki.scn.sap.com/wiki/display/SI/Protected+web+methods+of+sapstartsrv) web service methods. For how to configure authorization for SAPControl, see [SAP Note 1563660](https://launchpad.support.sap.com/#/notes/0001563660).
To fetch specific metrics, you need to unprotect some methods for the current release. Follow these steps for each SAP system:
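Unprotecting web service methods is typically done through the `service/protectedwebmethods` parameter in the SAP instance profile; the following is a hedged sketch, and the exact method list is an illustrative assumption that depends on which metrics you want to collect:

```
# SAP instance profile (illustrative; adjust the method list to your needs)
service/protectedwebmethods = SDEFAULT -GetQueueStatistic -ABAPGetWPTable -EnqGetStatistic -GetProcessList
```

A restart of the `sapstartsrv` service for each instance is typically required for such a profile change to take effect.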
virtual-machines Dbms_Guide_Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_oracle.md
vm-linux Previously updated : 10/01/2021 Last updated : 07/18/2022
Windows and Oracle Linux are the only operating systems that are supported by Or
Exceptions, according to SAP Note [#2039619](https://launchpad.support.sap.com/#/notes/2039619), are SAP components that don't use the Oracle Database client. Such SAP components are SAP's stand-alone enqueue, message server, Enqueue replication services, WebDispatcher, and SAP Gateway.
-Even if you're running your Oracle DBMS and SAP application instances on Oracle Linux, you can run your SAP Central Services on SLES or RHEL and protect it with a Pacemaker-based cluster. Pacemaker as a high-availability framework isn't supported on Oracle Linux.
+Even if you're running your Oracle DBMS and SAP application instances on Oracle Linux, you can run your SAP Central Services on SLES or RHEL and protect it with a Pacemaker-based cluster. Pacemaker as a high-availability framework has not been approved for support on Oracle Linux by SAP and Oracle.
## Specifics for Oracle Database on Windows
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 06/29/2022 Last updated : 07/18/2022
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- July 18, 2022: Clarify statement around Pacemaker support on Oracle Linux in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms_guide_oracle.md)
- June 29, 2022: Add recommendation and links to Pacemaker usage for Db2 versions 11.5.6 and higher in the documents [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_ibm.md), [High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker](./dbms-guide-ha-ibm.md), and [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server](./high-availability-guide-rhel-ibm-db2-luw.md)
- June 08, 2022: Change in [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) to adjust timeouts when using NFSv4.1 (related to NFSv4.1 lease renewal) for more resilient Pacemaker configuration
- June 02, 2022: Change in the [SAP Deployment Guide](deployment-guide.md) to add a link to RHEL in-place upgrade documentation
virtual-machines Ha Setup With Stonith https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/ha-setup-with-stonith.md
Title: High availability setup with STONITH for SAP HANA on Azure (Large Instanc
description: Learn to establish high availability for SAP HANA on Azure (Large Instances) in SUSE by using the STONITH device. documentationcenter:-+ editor:
vm-linux Last updated 9/01/2021-+
virtual-machines Hana Additional Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-additional-network-requirements.md
Title: Other network requirements for SAP HANA on Azure (Large Instances) | Micr
description: Learn about added network requirements for SAP HANA on Azure (Large Instances) that you might have. documentationcenter: -+ editor:
vm-linux Last updated 6/3/2021-+
virtual-machines Hana Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-architecture.md
Title: Architecture of SAP HANA on Azure (Large Instances) | Microsoft Docs
description: Learn the architecture for deploying SAP HANA on Azure (Large Instances). documentationcenter: -+ editor: ''
vm-linux Last updated 07/21/2021-+
virtual-machines Hana Available Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-available-skus.md
Title: SKUs for SAP HANA on Azure (Large Instances) | Microsoft Docs
description: Learn about the SKUs available for SAP HANA on Azure (Large Instances). documentationcenter: -+ editor: '' keywords: 'HLI, HANA, SKUs, S896, S224, S448, S672, Optane, SAP'
vm-linux Last updated 02/11/2022-+
virtual-machines Hana Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-backup-restore.md
Title: HANA backup and restore on SAP HANA on Azure (Large Instances) | Microsof
description: Learn how to back up and restore SAP HANA on HANA Large Instances. documentationcenter:-+ editor:
vm-linux Last updated 7/02/2021-+
virtual-machines Hana Certification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-certification.md
Title: Certification of SAP HANA on Azure (Large Instances) | Microsoft Docs
description: Learn about certification of SAP HANA on Azure (Large Instances). documentationcenter: -+ editor: ''
vm-linux Last updated 02/11/2022-+
virtual-machines Hana Concept Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-concept-preparation.md
Title: Disaster recovery principles and preparation on SAP HANA on Azure (Large
description: Become familiar with disaster recovery principles and preparation on SAP HANA on Azure (Large Instances). documentationcenter:-+ editor:
vm-linux Last updated 7/01/2021-+
virtual-machines Hana Connect Azure Vm Large Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-connect-azure-vm-large-instances.md
Title: Connectivity setup from virtual machines to SAP HANA on Azure (Large Inst
description: Connectivity setup from virtual machines for using SAP HANA on Azure (Large Instances). documentationcenter: -+ editor: '' tags: azure-resource-manager
vm-linux Last updated 05/28/2021-+
virtual-machines Hana Connect Vnet Express Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-connect-vnet-express-route.md
Title: Connect a virtual network to SAP HANA on Azure (Large Instances) | Micros
description: Learn how to connect a virtual network to SAP HANA on Azure (Large Instances). documentationcenter: -+ editor:
vm-linux Last updated 6/1/2021-+
virtual-machines Hana Data Tiering Extension Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-data-tiering-extension-nodes.md
Title: Data tiering and extension nodes for SAP HANA on Azure (Large Instances)
description: Learn about data tiering and extension nodes for SAP HANA on Azure (Large Instances). documentationcenter: -+ editor: ''
vm-linux Last updated 05/17/2021-+
virtual-machines Hana Failover Procedure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-failover-procedure.md
Title: HANA failover procedure to a disaster site for SAP HANA on Azure (Large I
description: Learn how to fail over to a disaster recovery site for SAP HANA on Azure (Large Instances). documentationcenter:-+ editor:
vm-linux Last updated 6/16/2021-+
virtual-machines Hana Large Instance Enable Kdump https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-large-instance-enable-kdump.md
Title: Script to enable kdump in SAP HANA (Large Instances)| Microsoft Docs
description: Learn how to enable the kdump service on Azure HANA Large Instances Type I and Type II. documentationcenter:-+ editor:
vm-linux Last updated 06/22/2021-+
virtual-machines Hana Monitor Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-monitor-troubleshoot.md
Title: Monitoring and troubleshooting from HANA side on SAP HANA on Azure (Large
description: Learn how to monitor and troubleshoot your SAP HANA on Azure (Large Instances) using resources provided by SAP HANA. documentationcenter: -+
vm-linux Last updated 6/18/2021-+ # Monitoring and troubleshooting from HANA side
virtual-machines Hana Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-network-architecture.md
Title: Network architecture of SAP HANA on Azure (Large Instances) | Microsoft D
description: Learn about the network architecture for deploying SAP HANA on Azure (Large Instances). documentationcenter: -+ editor: ''
vm-linux Last updated 07/21/2021-+
virtual-machines Hana Onboarding Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-onboarding-requirements.md
Title: Onboarding requirements for SAP HANA on Azure (Large Instances) | Microso
description: Learn about onboarding requirements for SAP HANA on Azure (Large Instances). documentationcenter: -+ editor: ''
vm-linux Last updated 05/14/2021-+
This article lists the requirements for running SAP HANA on Azure Large Instance
## Operating system -- Licenses for SUSE Linux Enterprise Server 12 for SAP Applications.
+- Licenses for SUSE Linux Enterprise Server 12 and SUSE Linux Enterprise Server 15 for SAP Applications.
> [!NOTE]
> The operating system delivered by Microsoft isn't registered with SUSE. It isn't connected to a Subscription Management Tool instance.
- SUSE Linux Subscription Management Tool deployed in Azure on a VM. This tool provides the capability for SAP HANA on Azure (Large Instances) to be registered and respectively updated by SUSE. (There's no internet access within the HANA Large Instance data center.)
-- Licenses for Red Hat Enterprise Linux 6.7 or 7.x for SAP HANA.
+
+- Licenses for Red Hat Enterprise Linux 7.9 and 8.2 for SAP HANA.
> [!NOTE]
> The operating system delivered by Microsoft isn't registered with Red Hat. It isn't connected to a Red Hat Subscription Manager instance.
For the compatibility matrix of the operating system and HLI firmware/driver ver
> [!IMPORTANT]
-> For Type II units only the SLES 12 SP2 OS version is supported at this point.
+> For Type II units, the SLES 12 SP5, SLES 15 SP2, and SLES 15 SP3 OS versions are supported at this point.
## Database
virtual-machines Hana Operations Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-operations-model.md
Title: Operations model of SAP HANA on Azure (Large Instances) | Microsoft Docs
description: Learn about the SAP HANA (Large Instances) operations model and your responsibilities. documentationcenter: -+ editor: ''
vm-linux Last updated 05/17/2021-+
virtual-machines Hana Overview High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery.md
Title: High availability and disaster recovery of SAP HANA on Azure (Large Insta
description: Learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (Large Instances). documentationcenter:-+ editor:
vm-linux Last updated 03/01/2021-+
virtual-machines Hana Overview Infrastructure Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-overview-infrastructure-connectivity.md
Title: Infrastructure and connectivity to SAP HANA on Azure (large instances) |
description: Configure required connectivity infrastructure to use SAP HANA on Azure (large instances). documentationcenter: -+ editor:
vm-linux Last updated 6/1/2021-+
virtual-machines Hana Setup Smt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-setup-smt.md
Title: How to set up SMT server for SAP HANA on Azure (Large Instances) | Micros
description: Learn how to set up SMT server for SAP HANA on Azure (Large Instances). documentationcenter: -+ editor:
vm-linux Last updated 06/25/2021-+
virtual-machines Hana Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-sizing.md
Title: Sizing of SAP HANA on Azure (Large Instances) | Microsoft Docs
description: Learn about sizing of SAP HANA on Azure (Large Instances). documentationcenter: -+ editor: ''
vm-linux Last updated 07/16/2021-+
virtual-machines Hana Storage Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-storage-architecture.md
Title: Storage architecture of SAP HANA on Azure (Large Instances) | Microsoft D
description: Learn about the storage architecture for SAP HANA on Azure (Large Instances). documentationcenter: -+ editor: ''
vm-linux Last updated 07/22/2021-+
virtual-machines Hana Supported Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-supported-scenario.md
Title: Supported scenarios for SAP HANA on Azure (Large Instances)| Microsoft Do
description: Learn about scenarios supported for SAP HANA on Azure (Large Instances) and their architectural details. documentationcenter:-+ editor:
vm-linux Last updated 07/19/2021-+
virtual-machines Large Instance Os Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/large-instance-os-backup.md
Title: Operating system backup and restore of SAP HANA on Azure (Large Instances
description: Learn how to do operating system backup and restore for SAP HANA on Azure (Large Instances). documentationcenter:-+ editor:
vm-linux Last updated 06/22/2021-+
virtual-machines Monitor Sap On Azure Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure-reference.md
Title: Monitor SAP on Azure data reference description: Important reference material needed when you monitor SAP on Azure. -+ -+
virtual-machines Monitor Sap On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure.md
Title: Monitor SAP on Azure (preview) | Microsoft Docs description: Start here to learn how to monitor SAP on Azure.-+ Last updated 10/13/2021-+
virtual-machines Os Compatibility Matrix Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/os-compatibility-matrix-hana-large-instance.md
Title: Operating system compatibility matrix for SAP HANA (Large Instances)| Mic
description: The compatibility matrix represents the compatibility of different versions of operating system with different hardware types (Large Instances). documentationcenter:-+ editor:
vm-linux Last updated 05/18/2021-+
virtual-machines Os Upgrade Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/os-upgrade-hana-large-instance.md
Title: Operating system upgrade for the SAP HANA on Azure (Large Instances)| Mic
description: Learn to do an operating system upgrade for SAP HANA on Azure (Large Instances). documentationcenter:-+ editor:
vm-linux Last updated 06/24/2021-+
There are a couple of known issues with the upgrade:
The OS configuration can drift from the recommended settings over time. This drift can occur because of patching, system upgrades, and other changes you may make. Microsoft identifies updates needed to ensure HANA Large Instances are optimally configured for the best performance and resiliency. The following instructions outline recommendations that address network performance, system stability, and optimal HANA performance.

### Compatible eNIC/fNIC driver versions
- To have proper network performance and system stability, ensure the appropriate OS-specific version of eNIC and fNIC drivers are installed per the following compatibility table. Servers are delivered to customers with compatible versions. However, drivers can get rolled back to default versions during OS/kernel patching. Ensure the appropriate driver version is running post OS/kernel patching operations.
+ To have proper network performance and system stability, ensure the appropriate OS-specific version of eNIC and fNIC drivers is installed per the following compatibility table, which lists the latest compatible driver versions. Servers are delivered to customers with compatible versions. However, drivers can get rolled back to default versions during OS/kernel patching. Ensure the appropriate driver version is running after OS/kernel patching operations.
| OS Vendor | OS Package Version | Firmware Version | eNIC Driver | fNIC Driver |
|--|--|--|--|--|
- | SuSE | SLES 12 SP2 | 3.1.3h | 2.3.0.40 | 1.6.0.34 |
- | SuSE | SLES 12 SP3 | 3.1.3h | 2.3.0.44 | 1.6.0.36 |
| SuSE | SLES 12 SP2 | 3.2.3i | 2.3.0.45 | 1.6.0.37 |
| SuSE | SLES 12 SP3 | 3.2.3i | 2.3.0.43 | 1.6.0.36 |
- | SuSE | SLES 12 SP4 | 3.2.3i | 4.0.0.6 | 2.0.0.60 |
+ | SuSE | SLES 12 SP4 | 3.2.3i | 4.0.0.14 | 2.0.0.63 |
+ | SuSE | SLES 12 SP5 | 3.2.3i | 4.0.0.14 | 2.0.0.63 |
+ | Red Hat | RHEL 7.6 | 3.2.3i | 3.1.137.5 | 2.0.0.50 |
| SuSE | SLES 12 SP4 | 4.1.1b | 4.0.0.6 | 2.0.0.60 |
- | SuSE | SLES 12 SP5 | 3.2.3i | 4.0.0.8 | 2.0.0.60 |
| SuSE | SLES 12 SP5 | 4.1.1b | 4.0.0.6 | 2.0.0.59 |
| SuSE | SLES 15 SP1 | 4.1.1b | 4.0.0.8 | 2.0.0.60 |
- | Red Hat | RHEL 7.2 | 3.1.3h | 2.3.0.39 | 1.6.0.34 |
- | Red Hat | RHEL 7.6 | 3.2.3i | 3.1.137.5 | 2.0.0.50 |
+ | SuSE | SLES 15 SP2 | 4.1.1b | 4.0.0.8 | 2.0.0.60 |
| Red Hat | RHEL 7.6 | 4.1.1b | 4.0.0.8 | 2.0.0.60 |
+ | SuSE | SLES 12 SP4 | 4.1.3d | 4.0.0.13 | 2.0.0.69 |
+ | SuSE | SLES 12 SP5 | 4.1.3d | 4.0.0.13 | 2.0.0.69 |
+ | SuSE | SLES 15 SP1 | 4.1.3d | 4.0.0.13 | 2.0.0.69 |
+
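One way to verify that the running driver meets the table's minimum is a version-string comparison with `sort -V`; the sketch below is illustrative — the required version and the `modinfo` query are assumptions you would adapt to your OS row in the table:

```shell
#!/bin/sh
# Sketch: check whether the loaded eNIC driver meets a required minimum version.
# On a live HLI server you would read the installed version with:
#   installed=$(modinfo -F version enic)
required="4.0.0.13"
installed="4.0.0.6"   # example value for illustration

# sort -V orders version strings numerically; the first line is the lowest.
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)

if [ "$lowest" = "$required" ]; then
  echo "eNIC driver $installed meets the required minimum $required"
else
  echo "eNIC driver $installed is older than required $required; upgrade needed"
fi
```

The same check applies to the fNIC driver by swapping in `modinfo -F version fnic` and the corresponding column from the table.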
### Commands for driver upgrade and to clean old rpm packages
grub2-mkconfig -o /boot/grub2/grub.cfg
Learn to set up an SMT server for SUSE Linux.
> [!div class="nextstepaction"]
-> [Set up SMT server for SUSE Linux](hana-setup-smt.md)
+> [Set up SMT server for SUSE Linux](hana-setup-smt.md)
virtual-machines Troubleshooting Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/troubleshooting-monitoring.md
Title: Monitoring SAP HANA on Azure (Large Instances) | Microsoft Docs
description: Learn about monitoring SAP HANA on an Azure (Large Instances). documentationcenter: -+
vm-linux Last updated 06/23/2021-+