Updates from: 04/13/2022 01:11:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/authorization-code-flow.md
Previously updated : 03/31/2022 Last updated : 04/12/2022
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZn
```

## 4. Refresh the token
-Access tokens and ID tokens are short-lived. After they expire, you must refresh them to continue to access resources. To do this, submit another POST request to the `/token` endpoint. This time, provide the `refresh_token` instead of the `code`:
+
+Access tokens and ID tokens are short-lived. After they expire, you must refresh them to continue to access resources. When you refresh the access token, Azure AD B2C returns a new token. The refreshed access token will have updated `nbf` (not before), `iat` (issued at), and `exp` (expiration) claim values. All other claim values will be the same as the originally issued access token.
+
+To refresh the token, submit another POST request to the `/token` endpoint. This time, provide the `refresh_token` instead of the `code`:
```http
POST https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/token HTTP/1.1
```
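As a rough illustration, the refresh request can be assembled as follows. This is a minimal sketch, not the documented client library: the tenant, policy, client ID, and token values are placeholders, and confidential clients would also include a `client_secret`.

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your own tenant and user flow (policy).
TENANT = "contoso"
POLICY = "b2c_1_signupsignin1"

def build_refresh_request(client_id: str, refresh_token: str, scope: str) -> tuple[str, str]:
    """Return the token endpoint URL and a form-encoded body that trades
    a refresh token for new access/ID tokens."""
    url = (f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/"
           f"{POLICY}/oauth2/v2.0/token")
    body = urlencode({
        "grant_type": "refresh_token",   # instead of "authorization_code"
        "client_id": client_id,
        "scope": scope,
        "refresh_token": refresh_token,  # instead of the "code" parameter
    })
    return url, body

url, body = build_refresh_request(
    "00000000-0000-0000-0000-000000000000",  # placeholder app registration ID
    "placeholder-refresh-token",
    "openid offline_access")
```

The `offline_access` scope is what causes B2C to issue a refresh token in the first place, so a real client would have requested it during the initial authorization.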
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-mailjet.md
If you don't already have one, start by setting up a Mailjet account (Azure cust
1. Follow the setup instructions at [Create a Mailjet Account](https://www.mailjet.com/guides/azure-mailjet-developer-resource-user-guide/enabling-mailjet/).
1. To be able to send email, [register and validate](https://www.mailjet.com/guides/azure-mailjet-developer-resource-user-guide/enabling-mailjet/#how-to-configure-mailjet-for-use) your Sender email address or domain.
-2. Navigate to the [API Key Management page](https://app.mailjet.com/account/api_keys). Record the **API Key** and **Secret Key** for use in a later step. Both keys are generated automatically when your account is created.
+2. Navigate to the [API Key Management page](https://dev.mailjet.com/email/guides/senders-and-domains/#use-a-sender-on-all-api-keys-(metasender)). Record the **API Key** and **Secret Key** for use in a later step. Both keys are generated automatically when your account is created.
> [!IMPORTANT]
> Mailjet offers customers the ability to send emails from shared IP and [dedicated IP addresses](https://documentation.mailjet.com/hc/articles/360043101973-What-is-a-dedicated-IP). When using dedicated IP addresses, you need to build your own reputation properly with an IP address warm-up. For more information, see [How do I warm up my IP ?](https://documentation.mailjet.com/hc/articles/1260803352789-How-do-I-warm-up-my-IP-).
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
Previously updated : 03/10/2021 Last updated : 04/12/2022
The following are the IDs for a [Verification display control](display-control-v
</LocalizedResources> ```
+## TOTP MFA controls display control user interface elements
+
+The following are the IDs for a [time-based one-time password (TOTP) display control](display-control-time-based-one-time-password.md) with [page layout version](page-layout.md) 2.1.9 and later.
+
+| ID | Default value |
+| -- | ------------- |
+| title_text | Download the Microsoft Authenticator using the download links for iOS and Android or use any other authenticator app of your choice. |
+| DN | Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment. |
+| DisplayName | Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment. |
+| title_text | Scan the QR code |
+| info_msg | You can download the Microsoft Authenticator app or use any other authenticator app of your choice. |
+| link_text | Can't scan? Try this |
+| title_text | Enter the account details manually. |
+| account_name | Account Name: |
+| display_prefix | Secret |
+| collapse_text | Still having trouble? |
+| DisplayName | Enter the verification code from your authenticator app. |
+| DisplayName | Enter your code. |
+| button_continue | Verify |
+
+### TOTP MFA controls display control example
+
+```xml
+ <LocalizedResources Id="api.selfasserted.totp.en">
+ <LocalizedStrings>
+ <LocalizedString ElementType="DisplayControl" ElementId="authenticatorAppIconControl" StringId="title_text">Download the Microsoft Authenticator using the download links for iOS and Android or use any other authenticator app of your choice.</LocalizedString>
+ <LocalizedString ElementType="DisplayControl" ElementId="authenticatorAppIconControl" StringId="DN">Once you&#39;ve downloaded the Authenticator app, you can use any of the methods below to continue with enrollment.</LocalizedString>
+ <LocalizedString ElementType="ClaimType" ElementId="QrCodeScanInstruction" StringId="DisplayName">Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment.</LocalizedString>
+ <LocalizedString ElementType="DisplayControl" ElementId="totpQrCodeControl" StringId="title_text">Scan the QR code</LocalizedString>
+ <LocalizedString ElementType="DisplayControl" ElementId="totpQrCodeControl" StringId="info_msg">You can download the Microsoft Authenticator app or use any other authenticator app of your choice.</LocalizedString>
+ <LocalizedString ElementType="DisplayControl" ElementId="totpQrCodeControl" StringId="link_text">Can&#39;t scan? Try this</LocalizedString>
+ <LocalizedString ElementType="DisplayControl" ElementId="authenticatorInfoControl" StringId="title_text">Enter the account details manually</LocalizedString>
+ <LocalizedString ElementType="DisplayControl" ElementId="authenticatorInfoControl" StringId="account_name">Account Name:</LocalizedString>
+ <LocalizedString ElementType="DisplayControl" ElementId="authenticatorInfoControl" StringId="display_prefix">Secret</LocalizedString>
+ <LocalizedString ElementType="DisplayControl" ElementId="authenticatorInfoControl" StringId="collapse_text">Still having trouble?</LocalizedString>
        <LocalizedString ElementType="ClaimType" ElementId="QrCodeVerifyInstruction" StringId="DisplayName">Enter the verification code from your authenticator app.</LocalizedString>
+ <LocalizedString ElementType="ClaimType" ElementId="otpCode" StringId="DisplayName">Enter your code.</LocalizedString>
+ <LocalizedString ElementType="UxElement" StringId="button_continue">Verify</LocalizedString>
+ </LocalizedStrings>
+ </LocalizedResources>
+```
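For context on what the enrolled authenticator app computes behind these UI strings, the TOTP algorithm (RFC 6238, built on RFC 4226 HOTP) can be sketched as below. This is an illustration of the standard, not part of the B2C policy or page layout itself.

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP over the current 30-second time window.

    secret_b32 is the Base32 'Secret' the display control shows for manual entry.
    """
    secret = base64.b32decode(secret_b32, casefold=True)
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vector: counter 0 with this ASCII secret yields 755224.
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

The "Enter the account details manually" path above hands the user exactly this Base32 secret, which any standards-compliant authenticator app can consume.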
+
## Restful service error messages

The following are the IDs for [Restful service technical profile](restful-technical-profile.md) error messages:
active-directory-b2c Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect.md
Previously updated : 02/07/2022 Last updated : 04/12/2022
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZn
## Refresh the token
-ID tokens expire in a short period of time. Refresh the tokens after they expire to continue being able to access resources. You can refresh a token by submitting another `POST` request to the `/token` endpoint. This time, provide the `refresh_token` parameter instead of the `code` parameter:
+Access tokens and ID tokens are short-lived. After they expire, you must refresh them to continue to access resources. When you refresh the access token, Azure AD B2C returns a new token. The refreshed access token will have updated `nbf` (not before), `iat` (issued at), and `exp` (expiration) claim values. All other claim values will be the same as the originally issued access token.
+
+Refresh a token by submitting another `POST` request to the `/token` endpoint. This time, provide the `refresh_token` parameter instead of the `code` parameter:
```http
POST https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/token HTTP/1.1
```
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
Previously updated : 04/08/2022 Last updated : 04/12/2022
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## Self-asserted page (selfasserted)
+**2.1.10**
+
+- Correcting to the tab index
+- Fixing WCAG 2.1 accessibility and screen reader issues
+ **2.1.9** - TOTP multifactor authentication support. Adding links that allows users to download and install the Microsoft authenticator app to complete the enrollment of the TOTP on the authenticator.
active-directory Define Conditional Rules For Provisioning User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md
Previously updated : 12/10/2021 Last updated : 04/11/2022
Scoping filters can be used differently depending on the type of provisioning co
* **Outbound provisioning from Azure AD to SaaS applications**. When Azure AD is the source system, [user and group assignments](../manage-apps/assign-user-or-group-access-portal.md) are the most common method for determining which users are in scope for provisioning. These assignments also are used for enabling single sign-on and provide a single method to manage access and provisioning. Scoping filters can be used optionally, in addition to assignments or instead of them, to filter users based on attribute values.

>[!TIP]
- > You can disable provisioning based on assignments for an enterprise application by changing settings in the [Scope](../app-provisioning/user-provisioning.md#how-do-i-set-up-automatic-provisioning-to-an-application) menu under the provisioning settings to **Sync all users and groups**.
+ > The more users and groups in scope for provisioning, the longer the synchronization process can take. Setting the scope to sync assigned users and groups, limiting the number of groups assigned to the app, and limiting the size of the groups will reduce the time it takes to synchronize everyone that is in scope.
* **Inbound provisioning from HCM applications to Azure AD and Active Directory**. When an [HCM application such as Workday](../saas-apps/workday-tutorial.md) is the source system, scoping filters are the primary method for determining which users should be provisioned from the HCM application to Active Directory or Azure AD.
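A toy model of the attribute-based evaluation a scoping filter performs is sketched below. The attribute names, operator names, and clause shape are illustrative only, not the service's actual filter schema.

```python
from typing import Callable

# Hypothetical operators; the real provisioning service defines its own set.
OPERATORS: dict[str, Callable[[str, str], bool]] = {
    "EQUALS": lambda actual, expected: actual == expected,
    "NOT_EQUALS": lambda actual, expected: actual != expected,
    "STARTS_WITH": lambda actual, expected: actual.startswith(expected),
}

def in_scope(user: dict[str, str], clauses: list[tuple[str, str, str]]) -> bool:
    """Return True when the user satisfies every (attribute, operator, value) clause."""
    return all(OPERATORS[op](user.get(attr, ""), value)
               for attr, op, value in clauses)

users = [
    {"department": "Sales", "country": "US"},
    {"department": "Engineering", "country": "US"},
]
scope = [("department", "EQUALS", "Sales")]
provisioned = [u for u in users if in_scope(u, scope)]  # only the Sales user
```

The key design point mirrored here is that a scoping filter narrows the candidate set before any provisioning work happens, which is why it can be layered on top of (or used instead of) assignments.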
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
Previously updated : 04/04/2022 Last updated : 04/11/2022
You can also check whether all the required ports are open.
- Microsoft Azure AD Connect Agent Updater - Microsoft Azure AD Connect Provisioning Agent Package
+### Provisioning agent history
+This article lists the versions and features of Azure Active Directory Connect Provisioning Agent that have been released. The Azure AD team regularly updates the Provisioning Agent with new features and functionality. Please ensure that you do not use the same agent for on-prem provisioning and Cloud Sync / HR-driven provisioning.
+Microsoft provides direct support for the latest agent version and one version before.
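The "latest agent version and one version before" support rule can be sketched as a simple version-sort; the version numbers fed in below are sample inputs, not an official release list.

```python
# Sketch: given released agent versions, only the newest two are supported
# ("the latest agent version and one version before"). Version strings are
# parsed numerically so that 1.1.846.0 sorts above 1.1.100.0.
def parse(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

def supported_versions(released: list[str]) -> list[str]:
    newest_first = sorted(released, key=parse, reverse=True)
    return newest_first[:2]

print(supported_versions(["1.1.359.0", "1.1.587.0", "1.1.846.0"]))
# -> ['1.1.846.0', '1.1.587.0']
```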
+## Download link
+You can download the latest version of the agent using [this link](https://aka.ms/onpremprovisioningagent).
+
+## 1.1.846.0
+
+April 11th, 2022 - released for download
+
+### Fixed issues
+
+- We added support for ObjectGUID as an anchor for the generic LDAP connector when provisioning users into AD LDS.
## Next steps
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
Here are the least privileged roles required for this deployment:
| Azure AD Role| Description |
| - | -|
-| Global Administrator| To implement combined registration experience. |
+| User Administrator or Global Administrator| To implement combined registration experience. |
| Authentication Administrator| To implement and manage authentication methods. |
| User| To configure Authenticator app on device, or to enroll security key device for web or Windows 10 sign-in. |
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
An Azure AD Kerberos Server object is created in your on-premises Active Directo
Azure AD generates a Kerberos TGT for the user's on-premises Active Directory domain. The TGT includes the user's SID only, and no authorization data.
1. The TGT is returned to the client along with the user's Azure AD Primary Refresh Token (PRT).
-1. The client machine contacts an on-premises Azure AD DC and trades the partial TGT for a fully formed TGT.
+1. The client machine contacts an on-premises Active Directory Domain Controller and trades the partial TGT for a fully formed TGT.
1. The client machine now has an Azure AD PRT and a full Active Directory TGT and can access both cloud and on-premises resources.

## Prerequisites
You must also meet the following system requirements:
- [Windows Server 2016](https://support.microsoft.com/help/4534307/windows-10-update-kb4534307)
- [Windows Server 2019](https://support.microsoft.com/help/4534321/windows-10-update-kb4534321)
+- AES256_HMAC_SHA1 must be enabled when **Network security: Configure encryption types allowed for Kerberos** policy is [configured](https://docs.microsoft.com/windows/security/threat-protection/security-policy-settings/network-security-configure-encryption-types-allowed-for-kerberos) on domain controllers.
+
- Have the credentials required to complete the steps in the scenario:
  - An Active Directory user who is a member of the Domain Admins group for a domain and a member of the Enterprise Admins group for a forest. Referred to as **$domainCred**.
  - An Azure Active Directory user who is a member of the Global Administrators role. Referred to as **$cloudCred**.
For information about compliant security keys, see [FIDO2 security keys](concept
### What can I do if I lose my security key?
-To retrieve a security key, sign in to the Azure portal, and then go to the **Security info** page.
+To delete an enrolled security key, sign in to the Azure portal, and then go to the **Security info** page.
### What can I do if I'm unable to use the FIDO security key immediately after I create a hybrid Azure AD-joined machine?
active-directory Howto Authentication Passwordless Security Key Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-windows.md
Organizations may choose to use one or more of the following methods to enable t
To enable the use of security keys using Intune, complete the following steps:

1. Sign in to the [Microsoft Endpoint Manager admin center](https://endpoint.microsoft.com).
-1. Browse to **Microsoft Intune** > **Device enrollment** > **Windows enrollment** > **Windows Hello for Business** > **Properties**.
-1. Under **Settings**, set **Use security keys for sign-in** to **Enabled**.
+1. Browse to **Devices** > **Enroll Devices** > **Windows enrollment** > **Windows Hello for Business**.
+1. Set **Use security keys for sign-in** to **Enabled**.
Configuration of security keys for sign-in isn't dependent on configuring Windows Hello for Business.
Configuration of security keys for sign-in isn't dependent on configuring Window
To target specific device groups to enable the credential provider, use the following custom settings via Intune:

1. Sign in to the [Microsoft Endpoint Manager admin center](https://endpoint.microsoft.com).
-1. Browse to **Device** > **Windows** > **Configuration Profiles** > **Create profile**.
+1. Browse to **Devices** > **Windows** > **Configuration Profiles** > **Create profile**.
1. Configure the new profile with the following settings:
- - Name: Security Keys for Windows Sign-In
- - Description: Enables FIDO Security Keys to be used during Windows Sign In
 - Platform: Windows 10 and later
 - Profile type: Template > Custom
- - Custom OMA-URI Settings:
+ - Name: Security Keys for Windows Sign-In
+ - Description: Enables FIDO Security Keys to be used during Windows Sign In
+1. Click **Add**, and in **Add Row**, add the following Custom OMA-URI Settings:
- Name: Turn on FIDO Security Keys for Windows Sign-In
+ - Description: (Optional)
 - OMA-URI: ./Device/Vendor/MSFT/PassportForWork/SecurityKey/UseSecurityKeyForSignin
 - Data Type: Integer
 - Value: 1
-1. This policy can be assigned to specific users, devices, or groups. For more information, see [Assign user and device profiles in Microsoft Intune](/intune/device-profile-assign).
+1. The remaining policy settings cover assigning the profile to specific users, devices, or groups. For more information, see [Assign user and device profiles in Microsoft Intune](/intune/device-profile-assign).
![Intune custom device configuration policy creation](./media/howto-authentication-passwordless-security-key/intune-custom-profile.png)
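For automation, a script might express the same profile as a Microsoft Graph device-configuration payload. The sketch below is an assumption-laden illustration: the `@odata.type` values reflect the Graph `windows10CustomConfiguration` and `omaSettingInteger` resources as we understand them, so verify them against the current Graph reference before relying on this.

```python
import json

# Hypothetical Graph payload mirroring the custom profile described above.
profile = {
    "@odata.type": "#microsoft.graph.windows10CustomConfiguration",
    "displayName": "Security Keys for Windows Sign-In",
    "description": "Enables FIDO Security Keys to be used during Windows Sign In",
    "omaSettings": [{
        "@odata.type": "#microsoft.graph.omaSettingInteger",
        "displayName": "Turn on FIDO Security Keys for Windows Sign-In",
        "omaUri": "./Device/Vendor/MSFT/PassportForWork/SecurityKey/UseSecurityKeyForSignin",
        "value": 1,  # 1 = security keys enabled for sign-in
    }],
}

body = json.dumps(profile, indent=2)  # what a POST to the device configs API would carry
```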
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
There are two ways to get your AAGUID. You can either ask your security key prov
1. Click **Security Info**.
1. If the user already has at least one Azure AD Multi-Factor Authentication method registered, they can immediately register a FIDO2 security key.
1. If they don't have at least one Azure AD Multi-Factor Authentication method registered, they must add one.
+ 1. An Administrator can issue a [Temporary Access Pass](howto-authentication-temporary-access-pass.md) to allow the user to register a Passwordless authentication method.
1. Add a FIDO2 Security key by clicking **Add method** and choosing **Security key**.
1. Choose **USB device** or **NFC device**.
1. Have your key ready and choose **Next**.
If a user's UPN changes, you can no longer modify FIDO2 security keys to account
[Learn more about device registration](../devices/overview.md)
-[Learn more about Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
+[Learn more about Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Keep these limitations in mind:
- Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they have signed in with a Temporary Access Pass. Users in scope for these policies will get redirected to the [Interrupt mode of the combined registration](concept-registration-mfa-sspr-combined.md#combined-registration-modes). This experience does not currently support FIDO2 and Phone Sign-in registration.
- A Temporary Access Pass cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter, or during Windows Setup/Out-of-Box-Experience (OOBE), Autopilot, or to deploy Windows Hello for Business.
-- When Seamless SSO is enabled on the tenant, the users are prompted to enter a password. The **Use your Temporary Access Pass instead** link will be available for the user to sign-in with a Temporary Access Pass.
-
- ![Screenshot of Use a Temporary Access Pass instead](./media/how-to-authentication-temporary-access-pass/alternative.png)
## Troubleshooting
active-directory Howto Mfa App Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-app-passwords.md
In this scenario, you use the following credentials:
By default, users can't create app passwords. The app passwords feature must be enabled before users can use them. To give users the ability to create app passwords, an admin needs to complete the following steps:

1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Search for and select **Azure Active Directory**, then choose **Users**.
-3. Select **Multi-Factor Authentication** from the navigation bar across the top of the *Users* window.
-4. Under Multi-Factor Authentication, select **service settings**.
-5. On the **Service Settings** page, select the **Allow users to create app passwords to sign in to non-browser apps** option.
+2. Search for and select **Azure Active Directory**, then choose **Security**.
+3. Select **Conditional Access** from the left navigation blade.
+4. Select **Named locations** from the left navigation blade.
+5. Click **Configure MFA trusted IPs** in the bar across the top of the *Conditional Access | Named Locations* window.
+6. On the **multi-factor authentication** page, select the **Allow users to create app passwords to sign in to non-browser apps** option.
![Screenshot of the Azure portal that shows the service settings for multi-factor authentication to allow the user of app passwords](media/concept-authentication-methods/app-password-authentication-method.png)
Users can also create app passwords after registration. For more information and
## Next steps
-For more information on how to allow users to quickly register for Azure AD Multi-Factor Authentication, see [Combined security information registration overview](concept-registration-mfa-sspr-combined.md).
+For more information on how to allow users to quickly register for Azure AD Multi-Factor Authentication, see [Combined security information registration overview](concept-registration-mfa-sspr-combined.md).
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
Deploying the configuration change to enable SSPR from the login screen using In
1. Under *Configuration settings*, select **Add** and provide the following OMA-URI setting to enable the reset password link:
   - Provide a meaningful name to explain what the setting is doing, such as *Add SSPR link*.
   - Optionally provide a meaningful description of the setting.
- - **OMA-URI** set to `./Vendor/MSFT/Policy/Config/Authentication/AllowAadPasswordReset`
+ - **OMA-URI** set to `./Device/Vendor/MSFT/Policy/Config/Authentication/AllowAadPasswordReset`
 - **Data type** set to **Integer**
 - **Value** set to **1**
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md
In your cleanup policy, select accounts that have the required roles assigned.
### Timeframe
-Define a timeframe that is your indicator for a stale device. When defining your timeframe, factor the window noted for updating the activity timestamp into your value. For example, you shouldn't consider a timestamp that is younger than 21 days (includes variance) as an indicator for a stale device. There are scenarios that can make a device look like stale while it isn't. For example, the owner of the affected device can be on vacation or on a sick leave. that exceeds your timeframe for stale devices.
+Define a timeframe that is your indicator for a stale device. When defining your timeframe, factor the window noted for updating the activity timestamp into your value. For example, you shouldn't consider a timestamp that is younger than 21 days (includes variance) as an indicator for a stale device. There are scenarios that can make a device look like stale while it isn't. For example, the owner of the affected device can be on vacation or on a sick leave that exceeds your timeframe for stale devices.
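The staleness decision described above can be sketched as a timestamp comparison that refuses thresholds shorter than the ~21-day update window. Function and constant names are illustrative, not part of any Azure AD API.

```python
from datetime import datetime, timedelta, timezone

# The activity timestamp is only refreshed periodically, so anything younger
# than ~21 days (update window plus variance) should not count as stale.
MIN_STALE_DAYS = 21

def is_stale(last_activity: datetime, threshold_days: int) -> bool:
    """Flag a device as stale once its activity timestamp exceeds the threshold."""
    if threshold_days < MIN_STALE_DAYS:
        raise ValueError(f"threshold must be at least {MIN_STALE_DAYS} days")
    return datetime.now(timezone.utc) - last_activity > timedelta(days=threshold_days)

recent = datetime.now(timezone.utc) - timedelta(days=10)
old = datetime.now(timezone.utc) - timedelta(days=120)
print(is_stale(recent, 90), is_stale(old, 90))  # -> False True
```

A longer threshold (here 90 days) also absorbs the vacation/sick-leave cases, at the cost of keeping genuinely stale devices around longer.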
### Disable devices
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
Expired (30 days) | Data accessible to all| Users have normal access to Microsof
Disabled (30 days) | Data accessible to admin only | Users can't access Microsoft 365 files, or apps<br>Admins can access the Microsoft 365 admin center but can't assign licenses to or update users
Deprovisioned (30 days after Disabled) | Data deleted (automatically deleted if no other services are in use) | Users can't access Microsoft 365 files, or apps<br>Admins can access the Microsoft 365 admin center to purchase and manage other subscriptions
-## Delete a subscription
+## Delete an Office/Microsoft 365 subscription
You can put a subscription into the **Deprovisioned** state to be deleted in three days using the Microsoft 365 admin center.
You can put a subscription into the **Deprovisioned** state to be deleted in thr
1. Once you have deleted a subscription in your organization and 72 hours have elapsed, you can sign back into the Azure AD admin center and there should be no required action and no subscriptions blocking your organization deletion. You should be able to successfully delete your Azure AD organization.

![pass subscription check at deletion screen](./media/directory-delete-howto/delete-checks-passed.png)
+
+## Delete an Azure subscription
+
+If you have an active or canceled Azure subscription associated with your Azure AD tenant, you can't delete the tenant. After you cancel, billing stops immediately. However, Microsoft waits 30 to 90 days before permanently deleting your data in case you need to access it or you change your mind. We don't charge you for keeping the data.
+
+- If you have a free trial or pay-as-you-go subscription, you don't have to wait 90 days for the subscription to automatically delete. You can delete your subscription three days after you cancel it. The **Delete subscription** option isn't available until three days after you cancel your subscription. For more information, see [Delete free trial or pay-as-you-go subscriptions](https://docs.microsoft.com/azure/cost-management-billing/manage/cancel-azure-subscription#delete-free-trial-or-pay-as-you-go-subscriptions).
+- All other subscription types are deleted only through the [subscription cancellation](https://docs.microsoft.com/azure/cost-management-billing/manage/cancel-azure-subscription#cancel-subscription-in-the-azure-portal) process. In other words, you can't delete a subscription directly unless it's a free trial or pay-as-you-go subscription. However, after you cancel a subscription, you can create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) to ask to have the subscription deleted immediately.
+- Alternatively, you can move or transfer the Azure subscription to another Azure AD tenant account. When you transfer billing ownership of your subscription to an account in another Azure AD tenant, you can move the subscription to the new account's tenant. Note that performing **Switch Directory** on the subscription doesn't help, because billing remains aligned with the Azure AD tenant that was used to sign up for the subscription. For more information, see [Transfer a subscription to another Azure AD tenant account](https://docs.microsoft.com/azure/cost-management-billing/manage/billing-subscription-transfer#transfer-a-subscription-to-another-azure-ad-tenant-account).
+
+Once you have canceled and deleted all Azure and Office/Microsoft 365 subscriptions, you can proceed with cleaning up the remaining resources in the Azure AD tenant before actually deleting it.
## Enterprise apps with no way to delete
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
To apply published labels to groups, you must first enable the feature. These st
1. Save the changes and apply the settings: ```powershell
- Set-AzureADDirectorySetting -Id $grpUnifiedSetting.Id -DirectorySetting $setting
+ Set-AzureADDirectorySetting -Id $grpUnifiedSetting.Id -DirectorySetting $Setting
```

If you're receiving a Request_BadRequest error, it's because the settings already exist in the tenant, so when you try to create a new property:value pair, the result is an error. In this case, take the following steps:
If you must make a change, use an [Azure AD PowerShell script](https://github.co
- [Use sensitivity labels with Microsoft Teams, Microsoft 365 groups, and SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites)
- [Update groups after label policy change manually with Azure AD PowerShell script](https://github.com/microsoftgraph/powershell-aad-samples/blob/master/ReassignSensitivityLabelToO365Groups.ps1)
- [Edit your group settings](../fundamentals/active-directory-groups-settings-azure-portal.md)
-- [Manage groups using PowerShell commands](../enterprise-users/groups-settings-v2-cmdlets.md)
+- [Manage groups using PowerShell commands](../enterprise-users/groups-settings-v2-cmdlets.md)
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Previously updated : 03/01/2022 Last updated : 04/07/2022
# Security defaults in Azure AD
-Managing security can be difficult with common identity-related attacks like password spray, replay, and phishing becoming more popular. Security defaults make it easier to help protect your organization from these attacks with preconfigured security settings:
+Microsoft is making security defaults available to everyone, because managing security can be difficult. Identity-related attacks like password spray, replay, and phishing are common in today's environment. More than 99.9% of these identity-related attacks are stopped by using multi-factor authentication (MFA) and blocking legacy authentication. The goal is to ensure that all organizations have at least a basic level of security enabled at no extra cost.
-- Requiring all users to register for Azure AD Multi-Factor Authentication.
-- Requiring administrators to do multi-factor authentication.
-- Blocking legacy authentication protocols.
-- Requiring users to do multi-factor authentication when necessary.
-- Protecting privileged activities like access to the Azure portal.
+Security defaults make it easier to help protect your organization from these identity-related attacks with preconfigured security settings:
-## Why security defaults?
+- [Requiring all users to register for Azure AD Multi-Factor Authentication](#require-all-users-to-register-for-azure-ad-multi-factor-authentication).
+- [Requiring administrators to do multi-factor authentication](#require-administrators-to-do-multi-factor-authentication).
+- [Requiring users to do multi-factor authentication when necessary](#require-users-to-do-multi-factor-authentication-when-necessary).
+- [Blocking legacy authentication protocols](#block-legacy-authentication-protocols).
+- [Protecting privileged activities like access to the Azure portal](#protect-privileged-activities-like-access-to-the-azure-portal).
-Quoting Alex Weinert, Director of Identity Security at Microsoft:
-
-> ...our telemetry tells us that more than 99.9% of organization account compromise could be stopped by simply using MFA, and that disabling legacy authentication correlates to a 67% reduction in compromise risk (and completely stops password spray attacks, 100% of which come in via legacy authentication)...
-
-More details on why security defaults are being made available can be found in Alex Weinert's blog post, [Introducing security defaults](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/introducing-security-defaults/ba-p/1061414).
-
-Microsoft is making security defaults available to everyone. The goal is to ensure that all organizations have a basic level of security enabled at no extra cost. You turn on security defaults in the Azure portal. If your tenant was created on or after October 22, 2019, security defaults may be enabled in your tenant. To protect all of our users, security defaults are being rolled out to new tenants at creation.
-
-### Who's it for?
+## Who's it for?
- Organizations who want to increase their security posture, but don't know how or where to start.
- Organizations using the free tier of Azure Active Directory licensing.
Microsoft is making security defaults available to everyone. The goal is to ensu
- If you're an organization currently using Conditional Access policies, security defaults are probably not right for you.
- If you're an organization with Azure Active Directory Premium licenses, security defaults are probably not right for you.
-- If your organization has complex security requirements, you should consider Conditional Access.
+- If your organization has complex security requirements, you should consider [Conditional Access](#conditional-access).
+
+## Enabling security defaults
+
+If your tenant was created on or after October 22, 2019, security defaults may be enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants at creation.
+
+To enable security defaults in your directory:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as a security administrator, Conditional Access administrator, or global administrator.
+1. Browse to **Azure Active Directory** > **Properties**.
+1. Select **Manage security defaults**.
+1. Set the **Enable security defaults** toggle to **Yes**.
+1. Select **Save**.
+
+![Screenshot of the Azure portal with the toggle to enable security defaults](./media/concept-fundamentals-security-defaults/security-defaults-azure-ad-portal.png)
-## Policies enforced
+## Enforced security policies
-### Unified Multi-Factor Authentication registration
+### Require all users to register for Azure AD Multi-Factor Authentication
All users in your tenant must register for multi-factor authentication (MFA) in the form of the Azure AD Multi-Factor Authentication. Users have 14 days to register for Azure AD Multi-Factor Authentication by using the Microsoft Authenticator app. After the 14 days have passed, the user can't sign in until registration is completed. A user's 14-day period begins after their first successful interactive sign-in after enabling security defaults.
-### Protecting administrators
+### Require administrators to do multi-factor authentication
+
+Administrators have increased access to your environment. Because of the power these highly privileged accounts have, you should treat them with special care. One common method to improve the protection of privileged accounts is to require a stronger form of account verification for sign-in. In Azure AD, you can get a stronger account verification by requiring multi-factor authentication.
-Users with privileged access have increased access to your environment. Because of the power these accounts have, you should treat them with special care. One common method to improve the protection of privileged accounts is to require a stronger form of account verification for sign-in. In Azure AD, you can get a stronger account verification by requiring multi-factor authentication. We recommend having separate accounts for administration and standard productivity tasks to significantly reduce the number of times your admins are prompted for MFA.
+> [!TIP]
+> We recommend having separate accounts for administration and standard productivity tasks to significantly reduce the number of times your admins are prompted for MFA.
After registration with Azure AD Multi-Factor Authentication is finished, the following Azure AD administrator roles will be required to do extra authentication every time they sign in:
After registration with Azure AD Multi-Factor Authentication is finished, the fo
- SharePoint administrator
- User administrator
-### Protecting all users
+### Require users to do multi-factor authentication when necessary
We tend to think that administrator accounts are the only accounts that need extra layers of authentication. Administrators have broad access to sensitive information and can make changes to subscription-wide settings. But attackers frequently target end users.
After these attackers gain access, they can request access to privileged informa
One common method to improve protection for all users is to require a stronger form of account verification, such as Multi-Factor Authentication, for everyone. After users complete Multi-Factor Authentication registration, they'll be prompted for another authentication whenever necessary. Azure AD decides when a user will be prompted for Multi-Factor Authentication, based on factors such as location, device, role and task. This functionality protects all applications registered with Azure AD including SaaS applications.
-### Blocking legacy authentication
+### Block legacy authentication protocols
To give your users easy access to your cloud apps, Azure AD supports various authentication protocols, including legacy authentication. *Legacy authentication* is a term that refers to an authentication request made by:
After security defaults are enabled in your tenant, all authentication requests
- [How to set up a multifunction device or application to send email using Microsoft 365](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365)
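If your tenant streams sign-in logs to a Log Analytics workspace, a query along the following lines can help you find remaining legacy authentication traffic before or after enabling security defaults. This is a sketch, not part of the original guidance; the `ClientAppUsed` values are examples that may differ in your environment.

```kusto
// Sketch: surface sign-ins that used legacy authentication protocols.
// Assumes Azure AD SigninLogs are exported to a Log Analytics workspace.
SigninLogs
| where TimeGenerated > ago(7d)
| where ClientAppUsed in ("Exchange ActiveSync", "IMAP4", "POP3", "Authenticated SMTP", "Other clients")
| summarize Attempts = count() by UserPrincipalName, ClientAppUsed, ResultType
| order by Attempts desc
```

After security defaults block legacy authentication, these attempts typically show a non-zero `ResultType`, so the same query can confirm the policy is taking effect.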
-### Protecting privileged actions
+### Protect privileged activities like access to the Azure portal
Organizations use various Azure services managed through the Azure Resource Manager API, including:
This policy applies to all users who are accessing Azure Resource Manager servic
## Deployment considerations
-The following extra considerations are related to deployment of security defaults.
+### Authentication methods
+
+Security defaults allow registration and use of Azure AD Multi-Factor Authentication **using only the Microsoft Authenticator app with notifications**. Conditional Access allows the use of any authentication method the administrator chooses to enable.
+
+| Method | Security defaults | Conditional Access |
+| - | - | - |
+| Notification through mobile app | X | X |
+| Verification code from mobile app or hardware token | X** | X |
+| Text message to phone | | X |
+| Call to phone | | X |
+| App passwords | | X*** |
-### Emergency access accounts
+- ** Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option.
+- *** App passwords are only available in per-user MFA with legacy authentication scenarios, and only if enabled by administrators.
-Every organization should have at least two emergency access account configured.
+> [!WARNING]
+> Do not disable methods for your organization if you are using Security Defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa).
+
+### Backup administrator accounts
+
+Every organization should have at least two backup administrator accounts configured. We call these emergency access accounts.
These accounts may be used in scenarios where your normal administrator accounts can't be used. For example: The person with the most recent Global Administrator access has left the organization. Azure AD prevents the last Global Administrator account from being deleted, but it doesn't prevent the account from being deleted or disabled on-premises. Either situation might make the organization unable to recover the account.

Emergency access accounts are:

-- Assigned Global Administrator rights in Azure AD
-- Aren't used on a daily basis
-- Are protected with a long complex password
+- Assigned Global Administrator rights in Azure AD.
+- Aren't used on a daily basis.
+- Are protected with a long complex password.
The credentials for these emergency access accounts should be stored offline in a secure location such as a fireproof safe. Only authorized individuals should have access to these credentials.
To create an emergency access account:
1. Under **Usage location**, select the appropriate location.
1. Select **Create**.
-You may choose [disable password expiration](../authentication/concept-sspr-policy.md#set-a-password-to-never-expire) to for these accounts using Azure AD PowerShell.
+You may choose to [disable password expiration](../authentication/concept-sspr-policy.md#set-a-password-to-never-expire) for these accounts using Azure AD PowerShell.
For more detailed information about emergency access accounts, see the article [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
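Because emergency access accounts are rarely used, any sign-in by one is worth an immediate alert. If sign-in logs are streamed to a Log Analytics workspace, a query along these lines can serve as the basis for such an alert. This is a sketch, not part of the original guidance; the account names are hypothetical placeholders for your own break-glass accounts.

```kusto
// Sketch: alert on any sign-in by an emergency access (break-glass) account.
// The UPNs below are hypothetical placeholders - replace with your accounts.
let BreakGlassAccounts = dynamic(["breakglass1@contoso.com", "breakglass2@contoso.com"]);
SigninLogs
| where UserPrincipalName in~ (BreakGlassAccounts)
| project TimeGenerated, UserPrincipalName, IPAddress, Location, ResultType
```

Any result from this query should trigger an investigation to confirm the use was authorized.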
-### Authentication methods
-
-These free security defaults allow registration and use of Azure AD Multi-Factor Authentication **using only the Microsoft Authenticator app using notifications**. Conditional Access allows the use of any authentication method the administrator chooses to enable.
-
-| Method | Security defaults | Conditional Access |
-| | | |
-| Notification through mobile app | X | X |
-| Verification code from mobile app or hardware token | X** | X |
-| Text message to phone | | X |
-| Call to phone | | X |
-| App passwords | | X*** |
-
-- ** Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option.
-- *** App passwords are only available in per-user MFA with legacy authentication scenarios only if enabled by administrators.
-
-> [!WARNING]
-> Do not disable methods for your organization if you are using Security Defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa).
-
### Disabled MFA status

If your organization is a previous user of per-user based Azure AD Multi-Factor Authentication, don't be alarmed to not see users in an **Enabled** or **Enforced** status if you look at the Multi-Factor Auth status page. **Disabled** is the appropriate status for users who are using security defaults or Conditional Access based Azure AD Multi-Factor Authentication.

### Conditional Access
-You can use Conditional Access to configure policies similar to security defaults, but with more granularity including user exclusions, which aren't available in security defaults. If you're using Conditional Access and have Conditional Access policies enabled in your environment, security defaults won't be available to you. More information about Azure AD licensing can be found on the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+You can use Conditional Access to configure policies similar to security defaults, but with more granularity including user exclusions, which aren't available in security defaults. If you're using Conditional Access in your environment today, security defaults won't be available to you.
![Warning message that you can have security defaults or Conditional Access not both](./media/concept-fundamentals-security-defaults/security-defaults-conditional-access.png)
-Here are step-by-step guides for Conditional Access to configure a set of policies, which form a good starting point for protecting your identities:
+If you want to use Conditional Access to configure a set of policies that form a good starting point for protecting your identities, see these step-by-step guides:
- [Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md)
- [Require MFA for Azure management](../conditional-access/howto-conditional-access-policy-azure-management.md)
- [Block legacy authentication](../conditional-access/howto-conditional-access-policy-block-legacy.md)
- [Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)
-## Enabling security defaults
-
-To enable security defaults in your directory:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as a security administrator, Conditional Access administrator, or global administrator.
-1. Browse toΓÇ»**Azure Active Directory**ΓÇ»>ΓÇ»**Properties**.
-1. Select **Manage security defaults**.
-1. Set the **Enable security defaults** toggle to **Yes**.
-1. Select **Save**.
-
-![Screenshot of the Azure portal with the toggle to enable security defaults](./media/concept-fundamentals-security-defaults/security-defaults-azure-ad-portal.png)
-
-## Disabling security defaults
+### Disabling security defaults
Organizations that choose to implement Conditional Access policies that replace security defaults must disable security defaults.
To disable security defaults in your directory:
## Next steps
-[Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
+- [Blog: Introducing security defaults](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/introducing-security-defaults/ba-p/1061414)
+- [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
+- More information about Azure AD licensing can be found on the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-applications.md
From the Azure portal, you can view the Azure AD Audit logs and download as comm
* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+
Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, as well as the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.

The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
For more information on consent operations, see the following resources:
|-|-|-|-|-|
| End-user consent stopped due to risk-based consent| Medium| Azure AD Audit logs| Core Directory / ApplicationManagement / Consent to application<br> Failure status reason = Microsoft.online.Security.userConsent<br>BlockedForRiskyAppsExceptions| Monitor and analyze any time consent is stopped due to risk. Look for:<li>high profile or highly privileged accounts.<li> app requests high-risk permissions<li>apps with suspicious names, for example generic, misspelled, etc. |
+## Application Authentication Flows
+There are several flows defined in the OAuth 2.0 protocol, and the recommended flow depends on the type of application being built. When an application has a choice of flows, some are recommended over others. Specifically, avoid resource owner password credentials (ROPC) wherever possible, because it requires the user to expose their current password credentials to the application directly; the application then uses those credentials to authenticate the user against the identity provider. Most applications should use the authorization code flow, or the authorization code flow with Proof Key for Code Exchange (PKCE).
++
+The only scenario where ROPC is suggested is for automated testing of applications. See [Run automated integration tests](../develop/test-automate-integration-testing.md) for details.
+
+
+Device code flow is another OAuth 2.0 protocol flow designed specifically for input-constrained devices, and it is not used in all environments. If this type of flow is seen in the environment and is not being used in an input-constrained device scenario, further investigation is warranted. It can indicate a misconfigured application or potentially something malicious.
+
+Monitor application authentication using the following information:
+
+| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
+| - | - | - | - | - |
+| Applications that are using the ROPC authentication flow|Medium | Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-ROPC| A high level of trust is being placed in this application because the credentials can be cached or stored. If possible, move to a more secure authentication flow. ROPC should only be used for automated testing of applications, if at all. For more information, see [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md)|
+|Applications that are using the Device code flow |Low to medium|Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-Device Code|Device code flows are used for input-constrained devices, which may not be present in all environments. If successful device code flows appear without an environmental need for them, investigate their validity. For more information, see [Microsoft identity platform and the OAuth 2.0 device authorization grant flow](../develop/v2-oauth2-device-code.md)|
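If sign-in logs are exported to a Log Analytics workspace, the table above can be approximated with a query like the following sketch. The `AuthenticationProtocol` values shown are assumptions to verify against your own logs; the column may not be populated in every tenant.

```kusto
// Sketch: find successful ROPC and device code sign-ins for review.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == "0"          // "0" indicates a successful sign-in
| where AuthenticationProtocol in ("ropc", "deviceCode")
| summarize Count = count() by AppDisplayName, AuthenticationProtocol, UserPrincipalName
| order by Count desc
```

Any ROPC hits, and device code hits outside known input-constrained device scenarios, are candidates for follow-up.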
## Application configuration changes

Monitor changes to any application's configuration. Specifically, configuration changes to the uniform resource identifier (URI), ownership, and logout URL.
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md
From the Azure portal, you can view the Azure AD Audit logs and download as comm
* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+
Much of what you'll monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.

The rest of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
active-directory Security Operations Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-infrastructure.md
From the Azure portal you can view the Azure AD Audit logs and download as comma
* [Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hub integration.
-* [Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security) – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* [Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security) – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
The remainder of this article describes what you should monitor and alert on and is organized by the type of threat. Where there are specific pre-built solutions, you will find links to them following the table. Otherwise, you can build alerts using the preceding tools.
The DC agent Admin log is the primary source of information for how the software
Complete reference for Azure AD audit activities is available at [Azure Active Directory (Azure AD) audit activity reference](../reports-monitoring/reference-audit-activities.md).
+## Conditional Access
+In Azure AD, you can protect access to your resources by configuring Conditional Access policies. As an IT administrator, you want to ensure that your Conditional Access policies work as expected so that your resources are properly protected. Monitoring and alerting on changes to the Conditional Access service is critical to ensure that the policies defined by your organization for access to data are enforced correctly. Azure AD logs when changes are made to Conditional Access and also provides workbooks to ensure your policies are providing the expected coverage.
+
+**Workbook Links**
+
+* [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md)
+
+* [Conditional Access gap analysis workbook](../reports-monitoring/workbook-conditional-access-gap-analyzer.md)
+
+Monitor changes to Conditional Access policies using the following information:
+
+| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
+| - | - | - | - | - |
+| New Conditional Access Policy created by non-approved actors|Medium | Azure AD Audit logs|Activity: Add conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?|
+|Conditional Access Policy removed by non-approved actors|Medium|Azure AD Audit logs|Activity: Delete conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?|
+|Conditional Access Policy updated by non-approved actors|Medium|Azure AD Audit logs|Activity: Update conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br><br>Review Modified Properties and compare "old" vs "new" values|
+|Removal of a user from a group used to scope critical Conditional Access policies|Medium|Azure AD Audit logs|Activity: Remove member from group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert for groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been removed.|
+|Addition of a user to a group used to scope critical Conditional Access policies|Low|Azure AD Audit logs|Activity: Add member to group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert for groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been added.|
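One way to put the policy-change rows of the table above into practice in a Log Analytics workspace is a query along these lines. This sketch is not part of the original article, and the approved-actor list is a hypothetical placeholder to replace with your own change-management data.

```kusto
// Sketch: flag Conditional Access policy changes made by non-approved actors.
let ApprovedActors = dynamic(["ca-admin@contoso.com"]); // hypothetical placeholder
AuditLogs
| where Category == "Policy"
| where OperationName in ("Add conditional access policy", "Update conditional access policy", "Delete conditional access policy")
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| where Actor !in~ (ApprovedActors)
| project TimeGenerated, OperationName, Actor, TargetResources
```

For update operations, the `TargetResources` payload carries the modified properties, so the "old" vs "new" comparison called out in the table can be done on the query results.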
+
## Next steps
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-introduction.md
From the Azure portal you can view the Azure AD Audit logs and download as comma
* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check the compliance of your cloud apps.
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+
Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, as well as the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.

The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
From the Azure portal, you can view the Azure AD Audit logs and download as comm
* **Risky sign-ins**: Contains information about a sign-in that might indicate suspicious circumstances. For more information on investigating information from this report, see [Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md).
* **Risk detections**: Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+
Although we discourage the practice, privileged accounts can have standing administration rights. If you choose to use standing privileges, and the account is compromised, it can have a strongly negative effect. We recommend you prioritize monitoring privileged accounts and include the accounts in your Privileged Identity Management (PIM) configuration. For more information on PIM, see [Start using Privileged Identity Management](../privileged-identity-management/pim-getting-started.md). Also, we recommend you validate that admin accounts:

* Are required.
You can monitor privileged account sign-in events in the Azure AD Sign-in logs.
| Change in legacy authentication protocol | High | Azure AD Sign-ins log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack. |
| New device or location | High | Azure AD Sign-ins log | Device info = Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. For this reason, alert on new devices or locations.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
| Audit alert setting is changed | High | Azure AD Audit logs | Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity = Disable PIM alert<br>-and-<br>Status = Success | Changes to a core alert should be alerted if unexpected. |
-
+| Administrators authenticating to other Azure AD tenants| Medium| Azure AD Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to privileged users, this detects when an administrator has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant.<br><br>Alert if Resource TenantID is not equal to Home Tenant ID |
+|Admin User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this expected?
+|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.
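The cross-tenant row in the table above can be sketched as a Log Analytics query like the following. The privileged-user list is a hypothetical placeholder, and the `HomeTenantId` and `ResourceTenantId` columns are assumed to be populated in your `SigninLogs` export.

```kusto
// Sketch: administrators in your tenant successfully authenticating
// to other Azure AD tenants.
let PrivilegedUsers = dynamic(["admin1@contoso.com"]); // hypothetical placeholder
SigninLogs
| where ResultType == "0"          // successful sign-ins only
| where UserPrincipalName in~ (PrivilegedUsers)
| where ResourceTenantId != HomeTenantId
| project TimeGenerated, UserPrincipalName, ResourceTenantId, AppDisplayName
```

In practice, the user scoping is better maintained as a watchlist or group membership lookup than a static list.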
## Changes by privileged accounts

Monitor all completed and attempted changes by a privileged account. This data enables you to establish what's normal activity for each privileged account and alert on activity that deviates from the expected. The Azure AD Audit logs are used to record this type of event. For more information on Azure AD Audit logs, see [Audit logs in Azure Active Directory](../reports-monitoring/concept-audit-logs.md).
Investigate changes to privileged accounts' authentication rules and privileges,
| Alert on changes to privileged account permissions| High| Azure AD Audit logs| Category = Role management<br>-and-<br>Activity type = Add eligible member (permanent)<br>-and-<br>Activity type = Add eligible member (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| This alert is especially for accounts being assigned roles that aren't known or are outside of their normal responsibilities. |
| Unused privileged accounts| Medium| Azure AD Access Reviews| | Perform a monthly review for inactive privileged user accounts. |
| Accounts exempt from Conditional Access| High| Azure Monitor Logs<br>-or-<br>Access Reviews| Conditional Access = Insights and reporting| Any account exempt from Conditional Access is most likely bypassing security controls and is more vulnerable to compromise. Break-glass accounts are exempt. See information on how to monitor break-glass accounts in a subsequent section of this article.|
+| Addition of a Temporary Access Pass to a privileged account| High| Azure AD Audit logs| Activity: Admin registered security info<br><br>Status Reason: Admin registered temporary access pass method for user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name<br><br>Target:User Principal Name|Monitor and alert on a Temporary Access Pass being created for a privileged user.
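The Temporary Access Pass row above could be monitored with a query like this sketch. The exact `ResultReason` wording is an assumption to verify against your own audit logs before relying on the filter.

```kusto
// Sketch: detect a Temporary Access Pass registered for a user by an admin.
AuditLogs
| where Category == "UserManagement"
| where OperationName == "Admin registered security info"
| where ResultReason has "temporary access pass"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName),
         Target = tostring(TargetResources[0].userPrincipalName)
| project TimeGenerated, Actor, Target, ResultReason
```

Pair the result with your privileged-user list so that passes created for administrators raise a high-severity alert.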
For more information on how to monitor for exceptions to Conditional Access policies, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md).
active-directory Security Operations Privileged Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-identity-management.md
In the Azure portal you can view the Azure AD Audit logs and download them as co
* [**Microsoft Defender for Cloud Apps**](/cloud-app-security/what-is-cloud-app-security) – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+ The rest of this article provides recommendations for setting a baseline to monitor and alert on, organized using a tier model. Links to pre-built solutions are listed following the table. You can also build alerts using the preceding tools. The content is organized into the following topic areas of PIM:

* Baselines
The following are recommended baseline settings:
| Azure AD roles assignment| High| <li>Require justification for activation.<li>Require approval to activate.<li>Set two-level approver process.<li>On activation, require Azure Active Directory Multi-Factor Authentication (MFA).<li>Set maximum elevation duration to 8 hrs.| <li>Privileged Role Administration<li>Global Administrator| A privileged role administrator can customize PIM in their Azure AD organization, including changing the experience for users activating an eligible role assignment. |
| Azure Resource Role Configuration| High| <li>Require justification for activation.<li>Require approval to activate.<li>Set two-level approver process.<li>On activation, require Azure MFA.<li>Set maximum elevation duration to 8 hrs.| <li>Owner<li>Resource Administrator<li>User Access Administrator<li>Global Administrator<li>Security Administrator| Investigate immediately if not a planned change. This setting could enable an attacker access to Azure subscriptions in your environment. |

## Azure AD roles assignment

A privileged role administrator can customize PIM in their Azure AD organization. This includes changing the experience for a user who is activating an eligible role assignment as follows:
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-user-accounts.md
From the Azure portal you can view the Azure AD Audit logs and download as comma
* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+ Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, as well as the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.

The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
Configure Identity Protection to help ensure protection is in place that support
The following are listed in order of importance based on the impact and severity of the entries.
+### Monitoring external user sign ins
+
+| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
+| - |- |- |- |- |
+| Users authenticating to other Azure AD tenants.| Low| Azure AD Sign-ins log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID is not equal to Home Tenant ID |
+|User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. Was this expected?|
+|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.|
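The first row's "Resource tenantID != Home Tenant ID" comparison can be sketched in a few lines of Python. This is an illustration only: the field names (`status`, `resourceTenantId`) and the placeholder tenant ID are assumptions, not the actual sign-in log schema.

```python
# Illustrative sketch: flag successful sign-ins to a tenant other than the
# home tenant, per the "Resource tenantID != Home Tenant ID" filter above.
# Field names and the tenant ID below are placeholders.

HOME_TENANT_ID = "00000000-0000-0000-0000-000000000001"  # placeholder

def cross_tenant_sign_ins(sign_ins, home_tenant_id=HOME_TENANT_ID):
    """Return successful sign-ins whose resource tenant is not the home tenant."""
    return [
        s for s in sign_ins
        if s.get("status") == "success"
        and s.get("resourceTenantId") != home_tenant_id
    ]

sign_ins = [
    {"user": "alice", "status": "success", "resourceTenantId": HOME_TENANT_ID},
    {"user": "bob", "status": "success", "resourceTenantId": "11111111-aaaa-bbbb-cccc-222222222222"},
]
alerts = cross_tenant_sign_ins(sign_ins)
```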
### Monitoring for failed unusual sign ins

| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information, see [Automate user provisioning to SaaS applications with
In June 2021, we added the following 42 new applications to our App gallery with Federation support:
-[Taksel](https://help.ubuntu.com/community/Tasksel), [IDrive360](../saas-apps/idrive360-tutorial.md), [VIDA](../saas-apps/vida-tutorial.md), [ProProfs Classroom](../saas-apps/proprofs-classroom-tutorial.md), [WAN-Sign](../saas-apps/wan-sign-tutorial.md), [Citrix Cloud SAML SSO](../saas-apps/citrix-cloud-saml-sso-tutorial.md), [Fabric](../saas-apps/fabric-tutorial.md), [DssAD](https://cloudlicensing.deepseedsolutions.com/), [RICOH Creative Collaboration RICC](https://www.ricoh-europe.com/products/software-apps/collaboration-board-software/ricc/), [Styleflow](../saas-apps/styleflow-tutorial.md), [Chaos](https://accounts.chaosgroup.com/corporate_login), [Traced Connector](https://control.traced.app/signup), [Squarespace](https://account.squarespace.com/org/azure), [MX3 Diagnostics Connector](https://mx3www.playground.dynuddns.com/signin-oidc), [Ten Spot](https://tenspot.co/api/v1/sso/azure/login/), [Finvari](../saas-apps/finvari-tutorial.md), [Mobile4ERP](https://play.google.com/store/apps/details?id=com.negevsoft.mobile4erp), [WalkMe US OpenID Connect](https://www.walkme.com/), [Neustar UltraDNS](../saas-apps/neustar-ultradns-tutorial.md), [cloudtamer.io](../saas-apps/cloudtamer-io-tutorial.md), [A Cloud Guru](../saas-apps/a-cloud-guru-tutorial.md), [PetroVue](../saas-apps/petrovue-tutorial.md), [Postman](../saas-apps/postman-tutorial.md), [ReadCube Papers](../saas-apps/readcube-papers-tutorial.md), [Peklostroj](https://app.peklostroj.cz/), [SynCloud](https://onboard.syncloud.io/), [Polymerhq.io](https://www.polymerhq.io/), [Bonos](../saas-apps/bonos-tutorial.md), [Astra Schedule](../saas-apps/astra-schedule-tutorial.md), [Draup](../saas-apps/draup-inc-tutorial.md), [Inc](../saas-apps/draup-inc-tutorial.md), [Applied Mental Health](../saas-apps/applied-mental-health-tutorial.md), [iHASCO Training](../saas-apps/ihasco-training-tutorial.md), [Nexsure](../saas-apps/nexsure-tutorial.md), [XEOX](https://login.xeox.com/), 
[Plandisc](https://create.plandisc.com/account/logon), [foundU](../saas-apps/foundu-tutorial.md), [Standard for Success Accreditation](../saas-apps/standard-for-success-accreditation-tutorial.md), [Penji Teams](https://web.penjiapp.com/), [CheckPoint Infinity Portal](../saas-apps/checkpoint-infinity-portal-tutorial.md), [Teamgo](../saas-apps/teamgo-tutorial.md), [Hopsworks.ai](../saas-apps/hopsworks-ai-tutorial.md), [HoloMeeting 2](https://backend2.holomeeting.io/)
+[Taksel](https://help.ubuntu.com/community/Tasksel), [IDrive360](../saas-apps/idrive360-tutorial.md), [VIDA](../saas-apps/vida-tutorial.md), [ProProfs Classroom](../saas-apps/proprofs-classroom-tutorial.md), [WAN-Sign](../saas-apps/wan-sign-tutorial.md), [Citrix Cloud SAML SSO](../saas-apps/citrix-cloud-saml-sso-tutorial.md), [Fabric](../saas-apps/fabric-tutorial.md), [DssAD](https://cloudlicensing.deepseedsolutions.com/), [RICOH Creative Collaboration RICC](https://www.ricoh-europe.com/products/software-apps/collaboration-board-software/ricc/), [Styleflow](../saas-apps/styleflow-tutorial.md), [Chaos](https://accounts.chaosgroup.com/corporate_login), [Traced Connector](https://control.traced.app/signup), [Squarespace](https://account.squarespace.com/org/azure), [MX3 Diagnostics Connector](https://www.mx3diagnostics.com/), [Ten Spot](https://tenspot.co/api/v1/sso/azure/login/), [Finvari](../saas-apps/finvari-tutorial.md), [Mobile4ERP](https://play.google.com/store/apps/details?id=com.negevsoft.mobile4erp), [WalkMe US OpenID Connect](https://www.walkme.com/), [Neustar UltraDNS](../saas-apps/neustar-ultradns-tutorial.md), [cloudtamer.io](../saas-apps/cloudtamer-io-tutorial.md), [A Cloud Guru](../saas-apps/a-cloud-guru-tutorial.md), [PetroVue](../saas-apps/petrovue-tutorial.md), [Postman](../saas-apps/postman-tutorial.md), [ReadCube Papers](../saas-apps/readcube-papers-tutorial.md), [Peklostroj](https://app.peklostroj.cz/), [SynCloud](https://onboard.syncloud.io/), [Polymerhq.io](https://www.polymerhq.io/), [Bonos](../saas-apps/bonos-tutorial.md), [Astra Schedule](../saas-apps/astra-schedule-tutorial.md), [Draup](../saas-apps/draup-inc-tutorial.md), [Inc](../saas-apps/draup-inc-tutorial.md), [Applied Mental Health](../saas-apps/applied-mental-health-tutorial.md), [iHASCO Training](../saas-apps/ihasco-training-tutorial.md), [Nexsure](../saas-apps/nexsure-tutorial.md), [XEOX](https://login.xeox.com/), [Plandisc](https://create.plandisc.com/account/logon), 
[foundU](../saas-apps/foundu-tutorial.md), [Standard for Success Accreditation](../saas-apps/standard-for-success-accreditation-tutorial.md), [Penji Teams](https://web.penjiapp.com/), [CheckPoint Infinity Portal](../saas-apps/checkpoint-infinity-portal-tutorial.md), [Teamgo](../saas-apps/teamgo-tutorial.md), [Hopsworks.ai](../saas-apps/hopsworks-ai-tutorial.md), [HoloMeeting 2](https://backend2.holomeeting.io/)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
B2C Phone Sign-up and Sign-in using a built-in policy enable IT administrators a
In April 2021, we added the following 31 new applications to our App gallery with Federation support:
-[Zii Travel Azure AD Connect](https://azuremarketplace.microsoft.com/marketplace/apps/aad.ziitravelazureadconnect?tab=Overview), [Cerby](../saas-apps/cerby-tutorial.md), [Selflessly](https://app.selflessly.io/sign-in), [Apollo CX](https://apollo.cxlabs.de/sso/aad), [Pedagoo](https://account.pedagoo.com/), [Measureup](https://account.measureup.com/), [Wistec Education](https://wisteceducation.fi/login/index.php), [ProcessUnity](../saas-apps/processunity-tutorial.md), [Cisco Intersight](../saas-apps/cisco-intersight-tutorial.md), [Codility](../saas-apps/codility-tutorial.md), [H5mag](https://account.h5mag.com/auth/request-access/ms365), [Check Point Identity Awareness](../saas-apps/check-point-identity-awareness-tutorial.md), [Jarvis](https://jarvis.live/login), [desknet's NEO](../saas-apps/desknets-neo-tutorial.md), [SDS & Chemical Information Management](../saas-apps/sds-chemical-information-management-tutorial.md), [W├║ru App](../saas-apps/wuru-app-tutorial.md), [Holmes](../saas-apps/holmes-tutorial.md), [Tide Multi Tenant](https://gallery.tideapp.co.uk/), [Telenor](https://admin.smartansatt.telenor.no/), [Yooz US](https://us1.getyooz.com/?kc_idp_hint=microsoft), [Mooncamp](https://app.mooncamp.com/#/login), [inwise SSO](https://app.inwise.com/defaultsso.aspx), [Ecolab Digital Solutions](https://ecolabb2c.b2clogin.com/account.ecolab.com/oauth2/v2.0/authorize?p=B2C_1A_Connect_OIDC_SignIn&client_id=01281626-dbed-4405-a430-66457825d361&nonce=defaultNonce&redirect_uri=https://jwt.ms&scope=openid&response_type=id_token&prompt=login), [Taguchi Digital Marketing System](https://login.taguchi.com.au/), [XpressDox EU Cloud](https://test.xpressdox.com/Authentication/Login.aspx), [EZSSH](https://docs.keytos.io/getting-started/registering-a-new-tenant/registering_app_in_tenant/), [EZSSH Client](https://portal.ezssh.io/signup), [Verto 365](https://www.vertocloud.com/Login/), [KPN Grip](https://www.grip-on-it.com/), 
[AddressLook](https://portal.bbsonlineservices.net/Manage/AddressLook), [Cornerstone Single Sign-On](../saas-apps/cornerstone-ondemand-tutorial.md)
+[Zii Travel Azure AD Connect](https://azuremarketplace.microsoft.com/marketplace/apps/aad.ziitravelazureadconnect?tab=Overview), [Cerby](../saas-apps/cerby-tutorial.md), [Selflessly](https://app.selflessly.io/sign-in), [Apollo CX](https://apollo.cxlabs.de/sso/aad), [Pedagoo](https://account.pedagoo.com/), [Measureup](https://account.measureup.com/), [Wistec Education](https://wisteceducation.fi/login/index.php), [ProcessUnity](../saas-apps/processunity-tutorial.md), [Cisco Intersight](../saas-apps/cisco-intersight-tutorial.md), [Codility](../saas-apps/codility-tutorial.md), [H5mag](https://account.h5mag.com/auth/request-access/ms365), [Check Point Identity Awareness](../saas-apps/check-point-identity-awareness-tutorial.md), [Jarvis](https://jarvis.live/login), [desknet's NEO](../saas-apps/desknets-neo-tutorial.md), [SDS & Chemical Information Management](../saas-apps/sds-chemical-information-management-tutorial.md), [Wúru App](../saas-apps/wuru-app-tutorial.md), [Holmes](../saas-apps/holmes-tutorial.md), [Tide Multi Tenant](https://gallery.tideapp.co.uk/), [Telenor](https://www.telenor.no/kundeservice/internett/wifi/administrere-ruter/), [Yooz US](https://us1.getyooz.com/?kc_idp_hint=microsoft), [Mooncamp](https://app.mooncamp.com/#/login), [inwise SSO](https://app.inwise.com/defaultsso.aspx), [Ecolab Digital Solutions](https://ecolabb2c.b2clogin.com/account.ecolab.com/oauth2/v2.0/authorize?p=B2C_1A_Connect_OIDC_SignIn&client_id=01281626-dbed-4405-a430-66457825d361&nonce=defaultNonce&redirect_uri=https://jwt.ms&scope=openid&response_type=id_token&prompt=login), [Taguchi Digital Marketing System](https://login.taguchi.com.au/), [XpressDox EU Cloud](https://test.xpressdox.com/Authentication/Login.aspx), [EZSSH](https://docs.keytos.io/getting-started/registering-a-new-tenant/registering_app_in_tenant/), [EZSSH Client](https://portal.ezssh.io/signup), [Verto 365](https://www.vertocloud.com/Login/), [KPN Grip](https://www.grip-on-it.com/), 
[AddressLook](https://portal.bbsonlineservices.net/Manage/AddressLook), [Cornerstone Single Sign-On](../saas-apps/cornerstone-ondemand-tutorial.md)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
Organizations in the Microsoft Azure Government cloud can now enable their guest
In March 2021, we added the following 37 new applications to our App gallery with Federation support:
-[Bambuser Live Video Shopping](https://lcx.bambuser.com/), [DeepDyve Inc](https://www.deepdyve.com/azure-sso), [Moqups](../saas-apps/moqups-tutorial.md), [RICOH Spaces Mobile](https://ricohspaces.app/welcome), [Flipgrid](https://auth.flipgrid.com/), [hCaptcha Enterprise](../saas-apps/hcaptcha-enterprise-tutorial.md), [SchoolStream ASA](https://jsd.schoolstreamk12.com/AS)
+[Bambuser Live Video Shopping](https://lcx.bambuser.com/), [DeepDyve Inc](https://www.deepdyve.com/azure-sso), [Moqups](../saas-apps/moqups-tutorial.md), [RICOH Spaces Mobile](https://ricohspaces.app/welcome), [Flipgrid](https://auth.flipgrid.com/), [hCaptcha Enterprise](../saas-apps/hcaptcha-enterprise-tutorial.md), [SchoolStream ASA](https://www.ssk12.com/), [TransPerfect GlobalLink Dashboard](../saas-apps/transperfect-globallink-dashboard-tutorial.md), [SimplificaCI](https://app.simplificaci.com.br/), [Thrive LXP](../saas-apps/thrive-lxp-tutorial.md), [Lexonis TalentScape](../saas-apps/lexonis-talentscape-tutorial.md), [Exium](../saas-apps/exium-tutorial.md), [Sapient](../saas-apps/sapient-tutorial.md), [TrueChoice](../saas-apps/truechoice-tutorial.md), [RICOH Spaces](https://ricohspaces.app/welcome), [Saba Cloud](../saas-apps/learning-at-work-tutorial.md), [Acunetix 360](../saas-apps/acunetix-360-tutorial.md), [Exceed.ai](../saas-apps/exceed-ai-tutorial.md), [GitHub Enterprise Managed User](../saas-apps/github-enterprise-managed-user-tutorial.md), [Enterprise Vault.cloud for Outlook](https://login.microsoftonline.com/common/oauth2/v2.0/authorize?response_type=id_token&scope=openid%20profile%20User.Read&client_id=7176efe5-e954-4aed-b5c8-f5c85a980d3a&nonce=4b9e1981-1bcb-4938-a283-86f6931dc8cb), [Smartlook](../saas-apps/smartlook-tutorial.md), [Accenture Academy](../saas-apps/accenture-academy-tutorial.md), [Onshape](../saas-apps/onshape-tutorial.md), [Tradeshift](../saas-apps/tradeshift-tutorial.md), [JuriBlox](../saas-apps/juriblox-tutorial.md), [SecurityStudio](../saas-apps/securitystudio-tutorial.md), [ClicData](https://app.clicdata.com/), [Evergreen](../saas-apps/evergreen-tutorial.md), [Patchdeck](https://patchdeck.com/ad_auth/authenticate/), [FAX.PLUS](../saas-apps/fax-plus-tutorial.md), [ValidSign](../saas-apps/validsign-tutorial.md), [AWS Single Sign-on](../saas-apps/aws-single-sign-on-tutorial.md), [Nura Space](https://dashboard.nuraspace.com/login), 
[Broadcom DX SaaS](../saas-apps/broadcom-dx-saas-tutorial.md), [Interplay Learning](https://skilledtrades.interplaylearning.com/#login), [SendPro Enterprise](../saas-apps/sendpro-enterprise-tutorial.md), [FortiSASE SIA](../saas-apps/fortisase-sia-tutorial.md)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
Clients can now track changes to those resources efficiently and provides the be
In August 2020, we added the following 25 new applications to our App gallery with Federation support:
-[Backup365](https://portal.backup365.io/login), [Soapbox](https://app.soapboxhq.com/create?step=auth&provider=azure-ad2-oauth2), [Alma SIS](https://almau.getalma.com/), [Enlyft Dynamics 365 Connector](http://enlyft.com/), [Serraview Space Utilization Software Solutions](../saas-apps/serraview-space-utilization-software-solutions-tutorial.md), [Uniq](https://web.uniq.app/), [Visibly](../saas-apps/visibly-tutorial.md), [Zylo](../saas-apps/zylo-tutorial.md), [Edmentum - Courseware Assessments Exact Path](https://auth.edmentum.com/elf/login), [CyberLAB](https://cyberlab.evolvesecurity.com/#/welcome), [Altamira HRM](../saas-apps/altamira-hrm-tutorial.md), [WireWheel](../saas-apps/wirewheel-tutorial.md), [Zix Compliance and Capture](https://sminstall.zixcorp.com/teams/teams.php?install_request=true&tenant_id=common), [Greenlight Enterprise Business Controls Platform](../saas-apps/greenlight-enterprise-business-controls-platform-tutorial.md), [Genetec Clearance](https://www.clearance.network/), [iSAMS](../saas-apps/isams-tutorial.md), [VeraSMART](../saas-apps/verasmart-tutorial.md), [Amiko](https://amiko.web.rivero.app/), [Twingate](https://auth.twingate.com/signup), [Funnel Leasing](https://nestiolistings.com/sso/oidc/azure/authorize/), [Scalefusion](https://scalefusion.com/users/sign_in/), [Bpanda](https://goto.bpanda.com/login), [Vivun Calendar Connect](https://app.vivun.com/dashboard/calendar/connect), [FortiGate SSL VPN](../saas-apps/fortigate-ssl-vpn-tutorial.md), [Wandera End User](https://www.wandera.com/)
+[Backup365](https://portal.backup365.io/login), [Soapbox](https://app.soapboxhq.com/create?step=auth&provider=azure-ad2-oauth2), [Alma SIS](https://almau.getalma.com/), [Enlyft Dynamics 365 Connector](http://enlyft.com/), [Serraview Space Utilization Software Solutions](../saas-apps/serraview-space-utilization-software-solutions-tutorial.md), [Uniq](https://web.uniq.app/), [Visibly](../saas-apps/visibly-tutorial.md), [Zylo](../saas-apps/zylo-tutorial.md), [Edmentum - Courseware Assessments Exact Path](https://auth.edmentum.com/elf/login), [CyberLAB](https://cyberlab.evolvesecurity.com/#/welcome), [Altamira HRM](../saas-apps/altamira-hrm-tutorial.md), [WireWheel](../saas-apps/wirewheel-tutorial.md), [Zix Compliance and Capture](https://sminstall.zixcorp.com/teams/teams.php?install_request=true&tenant_id=common), [Greenlight Enterprise Business Controls Platform](../saas-apps/greenlight-enterprise-business-controls-platform-tutorial.md), [Genetec Clearance](https://www.clearance.network/), [iSAMS](../saas-apps/isams-tutorial.md), [VeraSMART](../saas-apps/verasmart-tutorial.md), [Amiko](https://amiko.io/), [Twingate](https://auth.twingate.com/signup), [Funnel Leasing](https://nestiolistings.com/sso/oidc/azure/authorize/), [Scalefusion](https://scalefusion.com/users/sign_in/), [Bpanda](https://goto.bpanda.com/login), [Vivun Calendar Connect](https://app.vivun.com/dashboard/calendar/connect), [FortiGate SSL VPN](../saas-apps/fortigate-ssl-vpn-tutorial.md), [Wandera End User](https://www.wandera.com/)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
For more information about users flows, see [User flow versions in Azure Active
In July 2020, we added the following 55 new applications to our App gallery with Federation support:
-[Clap Your Hands](http://www.rmit.com.ar/), [Appreiz](https://microsoftteams.appreiz.com/), [Inextor Vault](https://inexto.com/inexto-suite/inextor), [Beekast](https://my.beekast.com/), [Templafy OpenID Connect](https://app.templafy.com/), [PeterConnects receptionist](https://msteams.peterconnects.com/), [AlohaCloud](https://appfusions.alohacloud.com/auth), [Control Tower](https://bpm.tnxcorp.com/sso/microsoft), [Cocoom](https://start.cocoom.com/), [COINS Construction Cloud](https://sso.coinsconstructioncloud.com/#login/), [Medxnote MT](https://task.teamsmain.medx.im/authorization), [Reflekt](https://reflekt.konsolute.com/login), [Rever](https://app.reverscore.net/access), [MyCompanyArchive](https://login.mycompanyarchive.com/), [GReminders](https://app.greminders.com/o365-oauth), [Titanfile](../saas-apps/titanfile-tutorial.md), [Wootric](../saas-apps/wootric-tutorial.md), [SolarWinds Orion](https://support.solarwinds.com/SuccessCenter/s/orion-platform?language=en_US), [OpenText Directory Services](../saas-apps/opentext-directory-services-tutorial.md), [Datasite](../saas-apps/datasite-tutorial.md), [BlogIn](../saas-apps/blogin-tutorial.md), [IntSights](../saas-apps/intsights-tutorial.md), [kpifire](../saas-apps/kpifire-tutorial.md), [Textline](../saas-apps/textline-tutorial.md), [Cloud Academy - SSO](../saas-apps/cloud-academy-sso-tutorial.md), [Community Spark](../saas-apps/community-spark-tutorial.md), [Chatwork](../saas-apps/chatwork-tutorial.md), [CloudSign](../saas-apps/cloudsign-tutorial.md), [C3M Cloud Control](../saas-apps/c3m-cloud-control-tutorial.md), [SmartHR](https://smarthr.jp/), [NumlyEngageΓäó](../saas-apps/numlyengage-tutorial.md), [Michigan Data Hub Single Sign-On](../saas-apps/michigan-data-hub-single-sign-on-tutorial.md), [Egress](../saas-apps/egress-tutorial.md), [SendSafely](../saas-apps/sendsafely-tutorial.md), [Eletive](https://app.eletive.com/), [Right-Hand Cybersecurity ADI](https://right-hand.ai/), [Fyde Enterprise 
Authentication](https://enterprise.fyde.com/), [Verme](../saas-apps/verme-tutorial.md), [Lenses.io](../saas-apps/lensesio-tutorial.md), [Momenta](../saas-apps/momenta-tutorial.md), [Uprise](https://app.uprise.co/sign-in), [Q](https://q.moduleq.com/login), [CloudCords](../saas-apps/cloudcords-tutorial.md), [TellMe Bot](https://tellme365liteweb.azurewebsites.net/), [Inspire](https://app.inspiresoftware.com/), [Maverics Identity Orchestrator SAML Connector](https://www.strata.io/identity-fabric/), [Smartschool (School Management System)](https://smartschoolz.com/login), [Zepto - Intelligent timekeeping](https://user.zepto-ai.com/signin), [Studi.ly](https://studi.ly/), [Trackplan](http://www.trackplanfm.com/), [Skedda](../saas-apps/skedda-tutorial.md), [WhosOnLocation](../saas-apps/whos-on-location-tutorial.md), [Coggle](../saas-apps/coggle-tutorial.md), [Kemp LoadMaster](https://kemptechnologies.com/cloud-load-balancer/), [BrowserStack Single Sign-on](../saas-apps/browserstack-single-sign-on-tutorial.md)
+[Clap Your Hands](http://www.rmit.com.ar/), [Appreiz](https://microsoftteams.appreiz.com/), [Inextor Vault](https://inexto.com/inexto-suite/inextor), [Beekast](https://my.beekast.com/), [Templafy OpenID Connect](https://app.templafy.com/), [PeterConnects receptionist](https://msteams.peterconnects.com/), [AlohaCloud](https://appfusions.alohacloud.com/auth), Control Tower, [Cocoom](https://start.cocoom.com/), [COINS Construction Cloud](https://sso.coinsconstructioncloud.com/#login/), [Medxnote MT](https://task.teamsmain.medx.im/authorization), [Reflekt](https://reflekt.konsolute.com/login), [Rever](https://app.reverscore.net/access), [MyCompanyArchive](https://login.mycompanyarchive.com/), [GReminders](https://app.greminders.com/o365-oauth), [Titanfile](../saas-apps/titanfile-tutorial.md), [Wootric](../saas-apps/wootric-tutorial.md), [SolarWinds Orion](https://support.solarwinds.com/SuccessCenter/s/orion-platform?language=en_US), [OpenText Directory Services](../saas-apps/opentext-directory-services-tutorial.md), [Datasite](../saas-apps/datasite-tutorial.md), [BlogIn](../saas-apps/blogin-tutorial.md), [IntSights](../saas-apps/intsights-tutorial.md), [kpifire](../saas-apps/kpifire-tutorial.md), [Textline](../saas-apps/textline-tutorial.md), [Cloud Academy - SSO](../saas-apps/cloud-academy-sso-tutorial.md), [Community Spark](../saas-apps/community-spark-tutorial.md), [Chatwork](../saas-apps/chatwork-tutorial.md), [CloudSign](../saas-apps/cloudsign-tutorial.md), [C3M Cloud Control](../saas-apps/c3m-cloud-control-tutorial.md), [SmartHR](https://smarthr.jp/), [NumlyEngage™](../saas-apps/numlyengage-tutorial.md), [Michigan Data Hub Single Sign-On](../saas-apps/michigan-data-hub-single-sign-on-tutorial.md), [Egress](../saas-apps/egress-tutorial.md), [SendSafely](../saas-apps/sendsafely-tutorial.md), [Eletive](https://app.eletive.com/), [Right-Hand Cybersecurity ADI](https://right-hand.ai/), [Fyde Enterprise Authentication](https://enterprise.fyde.com/), 
[Verme](../saas-apps/verme-tutorial.md), [Lenses.io](../saas-apps/lensesio-tutorial.md), [Momenta](../saas-apps/momenta-tutorial.md), [Uprise](https://app.uprise.co/sign-in), [Q](https://q.moduleq.com/login), [CloudCords](../saas-apps/cloudcords-tutorial.md), [TellMe Bot](https://tellme365liteweb.azurewebsites.net/), [Inspire](https://app.inspiresoftware.com/), [Maverics Identity Orchestrator SAML Connector](https://www.strata.io/identity-fabric/), [Smartschool (School Management System)](https://smartschoolz.com/login), [Zepto - Intelligent timekeeping](https://user.zepto-ai.com/signin), [Studi.ly](https://studi.ly/), [Trackplan](http://www.trackplanfm.com/), [Skedda](../saas-apps/skedda-tutorial.md), [WhosOnLocation](../saas-apps/whos-on-location-tutorial.md), [Coggle](../saas-apps/coggle-tutorial.md), [Kemp LoadMaster](https://kemptechnologies.com/cloud-load-balancer/), [BrowserStack Single Sign-on](../saas-apps/browserstack-single-sign-on-tutorial.md)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
For more information, see the [Risk detection API reference documentation](/grap
### New Federated Apps available in Azure AD app gallery - June 2019
-**Type:** New feature
+**Type:** New feature
**Service category:** Enterprise Apps
**Product capability:** 3rd Party Integration

In June 2019, we've added these 22 new apps with Federation support to the app gallery:
-[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/docs.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), [Proptimise OS](https://proptimise.co.uk/), [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards
+[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/docs.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), [Proptimise OS](https://www.proptimise.com/), [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
For more information about the apps, see [SaaS application integration with Azur
### Automate user account provisioning for these newly supported SaaS apps
-**Type:** New feature
+**Type:** New feature
**Service category:** Enterprise Apps
**Product capability:** Monitoring & Reporting
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
#Customer intent: As a global administrator or access package manager, I want to configure that a user cannot request an access package if they already have incompatible access.
-# Configure separation of duties checks for an access package in Azure AD entitlement management (Preview)
+# Configure separation of duties checks for an access package in Azure AD entitlement management
In Azure AD entitlement management, you can configure multiple policies, with different settings for each user community that will need access through an access package. For example, employees might only need manager approval to get access to certain apps, but guests coming in from other organizations may require both a sponsor and a resource team departmental manager to approve. In a policy for users already in the directory, you can specify a particular group of users who can request access. However, you may have a requirement to avoid a user obtaining excessive access. To meet this requirement, you will want to further restrict who can request access, based on the access the requestor already has.
Follow these steps to change the list of incompatible groups or other access pac
1. In the left menu, click **Access packages** and then open the access package which users will request.
-1. In the left menu, click **Separation of duties (preview)**.
+1. In the left menu, click **Separation of duties**.
1. If you wish to prevent users who have another access package assignment already from requesting this access package, click on **Add access package** and select the access package that the user would already be assigned.
Follow these steps to view the list of other access packages that have indicated
1. In the left menu, click **Access packages** and then open the access package.
-1. In the left menu, click **Separation of duties (preview)**.
+1. In the left menu, click **Separation of duties**.
1. Click on **Incompatible With**.
active-directory Grant Consent Single User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-consent-single-user.md
In this article, you'll learn how to grant consent on behalf of a single user by using PowerShell.
-When a user grants consent on his or her own behalf, the following events occur:
+When a user grants consent for themselves, the following events occur:
1. A service principal for the client application is created, if it doesn't already exist. A service principal is the instance of an application or a service in your Azure Active Directory (Azure AD) tenant. Access that's granted to the app or service is associated with this service principal object.
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Log Analytics cluster | [Azure Monitor customer-managed key](../../azure-monitor/logs/customer-managed-keys.md) |
| Azure Machine Learning Services | [Use Managed identities with Azure Machine Learning](../../machine-learning/how-to-use-managed-identities.md?tabs=python) |
| Azure Managed Disk | [Use the Azure portal to enable server-side encryption with customer-managed keys for managed disks](../../virtual-machines/disks-enable-customer-managed-keys-portal.md) |
-| Azure Media services | [Managed identities](/media-services/latest/concept-managed-identities) |
+| Azure Media services | [Managed identities](/azure/media-services/latest/concept-managed-identities) |
| Azure Monitor | [Azure Monitor customer-managed key](../../azure-monitor/logs/customer-managed-keys.md?tabs=portal) |
| Azure Policy | [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md) |
| Azure Purview | [Credentials for source authentication in Azure Purview](../../purview/manage-credentials.md) |
active-directory Services Azure Active Directory Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md
The following services support Azure AD authentication. New services are added t
| Azure Kubernetes Service (AKS) | [Control access to cluster resources using Kubernetes role-based access control and Azure Active Directory identities in Azure Kubernetes Service](../../aks/azure-ad-rbac.md) |
| Azure Machine Learning Services | [Set up authentication for Azure Machine Learning resources and workflows](../../machine-learning/how-to-setup-authentication.md) |
| Azure Maps | [Manage authentication in Azure Maps](../../azure-maps/how-to-manage-authentication.md) |
-| Azure Media services | [Access the Azure Media Services API with Azure AD authentication](/media-services/previous/media-services-use-aad-auth-to-access-ams-api) |
+| Azure Media services | [Access the Azure Media Services API with Azure AD authentication](/azure/media-services/previous/media-services-use-aad-auth-to-access-ams-api) |
| Azure Monitor | [Azure AD authentication for Application Insights (Preview)](../../azure-monitor/app/azure-ad-authentication.md?tabs=net) |
| Azure Resource Manager | [Azure security baseline for Azure Resource Manager](/security/benchmark/azure/baselines/resource-manager-security-baseline?toc=/azure/azure-resource-manager/management/toc.json) |
| Azure Service Fabric | [Set up Azure Active Directory for client authentication](../../service-fabric/service-fabric-cluster-creation-setup-aad.md) |
active-directory Pim Create Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md
Based on your selections in **Upon completion settings**, auto-apply will be exe
> [!NOTE]
> It is possible for a security group to have other groups assigned to it. In this case, only the users assigned directly to the security group assigned to the role will appear in the review of the role.
+
## Update the access review
After one or more access reviews have been started, you may want to modify or update the settings of your existing access reviews. Here are some common scenarios that you might want to consider:
active-directory Smartsheet Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartsheet-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input the **SCIM 2.0 base URL** of https://scim.smartsheet.com/v2 and **Access Token** value retrieved earlier from Smartsheet in **Secret Token** respectively. Click **Test Connection** to ensure Azure AD can connect to Smartsheet. If the connection fails, ensure your Smartsheet account has SysAdmin permissions and try again.
+5. Under the **Admin Credentials** section, input the **SCIM 2.0 base URL** of `https://scim.smartsheet.com/v2` and **Access Token** value retrieved earlier from Smartsheet in **Secret Token** respectively. Click **Test Connection** to ensure Azure AD can connect to Smartsheet. If the connection fails, ensure your Smartsheet account has SysAdmin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
Once you've configured provisioning, use the following resources to monitor your
* 06/16/2020 - Added support for enterprise extension attributes "Cost Center", "Division", "Manager" and "Department" for users.
* 02/10/2021 - Added support for core attributes "emails[type eq "work"]" for users.
-* 02/12/2022 - Added SCIM base/tenant URL of https://scim.smartsheet.com/v2 for SmartSheet integration under Admin Credentials section.
+* 02/12/2022 - Added SCIM base/tenant URL of `https://scim.smartsheet.com/v2` for SmartSheet integration under Admin Credentials section.
## Additional resources
active-directory Memo 22 09 Enterprise Wide Identity Management System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-enterprise-wide-identity-management-system.md
Microsoft makes available several tools to help with your discovery of applicati
| Tool| Usage |
| - | - |
| [Usage Analytics for AD FS](../hybrid/how-to-connect-health-adfs.md)| Analyzes the authentication traffic of your federated servers. |
-| [Microsoft Defender for Cloud Apps](%20/defender-cloud-apps/what-is-defender-for-cloud-apps) (MDCA)| Previously known as Microsoft Cloud App Security (MCAS), Defender for Cloud Apps scans firewall logs to detect cloud apps, IaaS and PaaS services used by your organization. Integrating MDCA with Defender for Endpoint allows discovery to happen from data analyzed from window client devices. |
+| [Microsoft Defender for Cloud Apps](/defender-cloud-apps/what-is-defender-for-cloud-apps) (MDCA)| Previously known as Microsoft Cloud App Security (MCAS), Defender for Cloud Apps scans firewall logs to detect cloud apps, IaaS and PaaS services used by your organization. Integrating MDCA with Defender for Endpoint allows discovery to happen from data analyzed from Windows client devices. |
| [Application Documentation worksheet](https://download.microsoft.com/download/2/8/3/283F995C-5169-43A0-B81D-B0ED539FB3DD/Application%20Discovery%20worksheet.xlsx)| Helps you document the current states of your applications |
We recognize that your apps may be in systems other than Microsoft, and that our tools may not discover those apps. Ensure you do a complete inventory. All providers should have mechanisms for discovering applications using their services.
active-directory Memo 22 09 Other Areas Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-other-areas-zero-trust.md
This article addresses the following cross-cutting themes:
* Governance

## Visibility
-It's important to monitor your Azure AD tenant. You must adopt an "assume breach" mindset and meet compliance standards set forth in [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf) and [Memorandum M-21-31](https://www.whitehouse.gov/wp-content/uploads/2021/M-21-31). There are three primary log types used for security analysis and ingestion:
+It's important to monitor your Azure AD tenant. You must adopt an "assume breach" mindset and meet compliance standards set forth in [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf) and [Memorandum M-21-31](https://www.whitehouse.gov/wp-content/uploads/2021/08/M-21-31-Improving-the-Federal-Governments-Investigative-and-Remediation-Capabilities-Related-to-Cybersecurity-Incidents.pdf). There are three primary log types used for security analysis and ingestion:
* [Azure Audit Log.](../reports-monitoring/concept-audit-logs.md) Used to monitor operational activities of the directory itself such as creating, deleting, updating objects like users or groups, as well as making changes to configurations of Azure AD like modifications to a conditional access policy.
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade
Please be advised that your media account is about to hit its quota limits. Please review current usage of Assets, Content Key Policies and Stream Policies for the media account. To avoid any disruption of service, you should request quota limits to be increased for the entities that are closer to hitting quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Please don't create additional Azure Media accounts in an attempt to obtain higher limits.
-Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](/media-services/latest/limits-quotas-constraints-reference).
+Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](/azure/media-services/latest/limits-quotas-constraints-reference).
## Networking
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
For agent nodes, which are expected to handle very large numbers of concurrent s
| `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. |
| `net.ipv4.neigh.default.gc_thresh2`| 512 - 90000 | 8192 | Soft maximum number of entries that may be in the ARP cache. This setting is arguably the most important, as ARP garbage collection will be triggered about 5 seconds after reaching this soft maximum. |
| `net.ipv4.neigh.default.gc_thresh3`| 1024 - 100000 | 16384 | Hard maximum number of entries in the ARP cache. |
-| `net.netfilter.nf_conntrack_max` | 131072 - 589824 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. |
+| `net.netfilter.nf_conntrack_max` | 131072 - 1048576 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. |
| `net.netfilter.nf_conntrack_buckets` | 65536 - 147456 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of hash table. |

#### Worker limits
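For reference, sysctl values such as the conntrack settings above are supplied to AKS through a custom node configuration file. A minimal sketch, with illustrative values (the camelCase key names follow the convention used by the custom node configuration workflow):

```json
{
  "sysctls": {
    "netNetfilterNfConntrackMax": 1048576,
    "netNetfilterNfConntrackBuckets": 147456
  }
}
```

You would then pass a file like this when creating a node pool, for example with `az aks nodepool add ... --linux-os-config ./linuxosconfig.json`.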
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
+
+ Title: Use KMS etcd encryption in Azure Kubernetes Service (AKS) (Preview)
+description: Learn how to use kms etcd encryption with Azure Kubernetes Service (AKS)
++ Last updated : 04/11/2022+++
+# Add KMS etcd encryption to an Azure Kubernetes Service (AKS) cluster (Preview)
+
+This article shows you how to enable encryption at rest for your Kubernetes data in etcd using Azure Key Vault with the Key Management Service (KMS) plugin. The KMS plugin allows you to:
+
+* Use a key in Key Vault for etcd encryption
+* Bring your own keys
+* Provide encryption at rest for secrets stored in etcd
+
+For more details on using the KMS plugin, see [Encrypting Secret Data at Rest](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/).
++
+## Before you begin
+
+* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+* [Azure CLI installed](/cli/azure/install-azure-cli).
+
+### Install the `aks-preview` Azure CLI extension
+
+You also need the *aks-preview* Azure CLI extension version 0.5.58 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `AzureKeyVaultKmsPreview` preview feature
+
+To use the feature, you must also enable the `AzureKeyVaultKmsPreview` feature flag on your subscription.
+
+Register the `AzureKeyVaultKmsPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AzureKeyVaultKmsPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AzureKeyVaultKmsPreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Limitations
+
+The following limitations apply when you integrate KMS etcd encryption with AKS:
+
+* Disabling of the KMS etcd encryption feature.
+* Changing of key ID, including key name and key version.
+* Deletion of the key, Key Vault, or the associated identity.
+* KMS etcd encryption does not work with a system-assigned managed identity. The key vault access policy must be set before the feature is enabled, but a system-assigned managed identity is only available after cluster creation, which results in a circular dependency.
+* Using Azure Key Vault with PrivateLink enabled.
+* Using more than 2000 secrets in a cluster.
+* Managed HSM support.
+* Bring your own (BYO) Azure Key Vault from another tenant.
++
+## Create a KeyVault and key
+
+> [!WARNING]
+> Deleting the key or the Azure Key Vault is not supported and will cause your cluster to become unstable.
+>
+> If you need to recover your Key Vault or key, see the [Azure Key Vault recovery management with soft delete and purge protection](../key-vault/general/key-vault-recovery.md?tabs=azure-cli) documentation.
+
+Use `az keyvault create` to create a KeyVault.
+
+```azurecli
+az keyvault create --name MyKeyVault --resource-group MyResourceGroup
+```
+
+Use `az keyvault key create` to create a key.
+
+```azurecli
+az keyvault key create --name MyKeyName --vault-name MyKeyVault
+```
+
+Use `az keyvault key show` to export the Key ID.
+
+```azurecli
+export KEY_ID=$(az keyvault key show --name MyKeyName --vault-name MyKeyVault --query 'key.kid' -o tsv)
+echo $KEY_ID
+```
+
+The above example stores the Key ID in *KEY_ID*.
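The Key ID returned above follows the standard Key Vault URI format `https://{vault-name}.vault.azure.net/keys/{key-name}/{key-version}`. As a sanity check, a small hypothetical shell sketch (sample values, not from this article) that recovers the key name and version with parameter expansion:

```shell
# Hypothetical Key ID in the standard Key Vault format.
KEY_ID="https://mykeyvault.vault.azure.net/keys/MyKeyName/0123456789abcdef0123456789abcdef"

# The last path segment is the key version; the one before it is the key name.
KEY_VERSION="${KEY_ID##*/}"
WITHOUT_VERSION="${KEY_ID%/*}"
KEY_NAME="${WITHOUT_VERSION##*/}"

echo "name=$KEY_NAME version=$KEY_VERSION"
```

This is only a convenience for inspecting the value; the full Key ID, including the version, is what `--azure-keyvault-kms-key-id` expects later in this article.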
+
+## Create a user-assigned managed identity
+
+Use `az identity create` to create a User-assigned managed identity.
+
+```azurecli
+az identity create --name MyIdentity --resource-group MyResourceGroup
+```
+
+Use `az identity show` to get Identity Object Id.
+
+```azurecli
+IDENTITY_OBJECT_ID=$(az identity show --name MyIdentity --resource-group MyResourceGroup --query 'principalId' -o tsv)
+echo $IDENTITY_OBJECT_ID
+```
+
+The above example stores the value of the Identity Object Id in *IDENTITY_OBJECT_ID*.
+
+Use `az identity show` to get Identity Resource Id.
+
+```azurecli
+IDENTITY_RESOURCE_ID=$(az identity show --name MyIdentity --resource-group MyResourceGroup --query 'id' -o tsv)
+echo $IDENTITY_RESOURCE_ID
+```
+
+The above example stores the value of the Identity Resource Id in *IDENTITY_RESOURCE_ID*.
+
+## Assign permissions (decrypt and encrypt) to access key vault
+
+Use `az keyvault set-policy` to create an Azure KeyVault policy.
+
+```azurecli-interactive
+az keyvault set-policy -n MyKeyVault --key-permissions decrypt encrypt --object-id $IDENTITY_OBJECT_ID
+```
+
+## Create an AKS cluster with KMS etcd encryption enabled
+
+Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-azure-keyvault-kms` and `--azure-keyvault-kms-key-id` parameters to enable KMS etcd encryption.
+
+```azurecli-interactive
+az aks create --name myAKSCluster --resource-group MyResourceGroup --assign-identity $IDENTITY_RESOURCE_ID --enable-azure-keyvault-kms --azure-keyvault-kms-key-id $KEY_ID
+```
+
+## Update an existing AKS cluster to enable KMS etcd encryption
+
+Use [az aks update][az-aks-update] with the `--enable-azure-keyvault-kms` and `--azure-keyvault-kms-key-id` parameters to enable KMS etcd encryption on an existing cluster.
+
+```azurecli-interactive
+az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-id $KEY_ID
+```
+
+<!-- LINKS - Internal -->
+[aks-support-policies]: support-policies.md
+[aks-faq]: faq.md
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[azure-cli-install]: /cli/azure/install-azure-cli
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-aks-update]: /cli/azure/aks#az_aks_update
app-service App Service Web Restore Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-restore-snapshots.md
The following table shows which app configuration is restored:
az webapp config snapshot restore --name <target-app-name> --resource-group <target-group-name> --source-name <source-app-name> --source-resource-group <source-group-name> --time <source-snapshot-timestamp>
```
- To restore app content only and not the app configuration, use the `--restore-content-only` parameter. For more information, see [az webapp config snapshot restore](/cli/webapp/config/snapshot#az-webapp-config-snapshot-restore).
+ To restore app content only and not the app configuration, use the `--restore-content-only` parameter. For more information, see [az webapp config snapshot restore](/cli/azure/webapp/config/snapshot#az-webapp-config-snapshot-restore).
+--
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
App settings are always encrypted when stored (encrypted-at-rest).
![Application Settings](./media/configure-common/open-ui.png)
- By default, values for app settings are hidden in the portal for security. To see a hidden value of an app setting, click its **Value** field. To see the hidden values of all app settings, click the **Show value** button.
+ By default, values for app settings are hidden in the portal for security. To see a hidden value of an app setting, click its **Value** field. To see the hidden values of all app settings, click the **Show values** button.
1. To add a new app setting, click **New application setting**. To edit a setting, click the **Edit** button on the right side.
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
Title: Mount Azure Storage as a local share (container)
-description: Learn how to attach custom network share in a containerized app in Azure App Service. Share files between apps, manage static content remotely and access locally, etc.
+ Title: Mount Azure Storage as a local share
+description: Learn how to attach custom network share in Azure App Service. Share files between apps, manage static content remotely and access locally, etc.
Previously updated : 3/10/2022 Last updated : 4/12/2022 zone_pivot_groups: app-service-containers-code
-# Mount Azure Storage as a local share in a custom container in App Service
+# Mount Azure Storage as a local share in App Service
::: zone pivot="code-windows"
> [!NOTE]
To validate that the Azure Storage is mounted successfully for the app:
::: zone pivot="code-windows"
-- [Migrate custom software to Azure App Service using a custom container](tutorial-custom-container.md?pivots=container-windows).
+- [Migrate .NET apps to Azure App Service](app-service-asp-net-migration.md).
::: zone-end
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
Publish-AzWebApp -ResourceGroupName Default-Web-WestUS -Name MyApp -ArchivePath
The following example uses the cURL tool to deploy a ZIP package. Replace the placeholders `<username>`, `<zip-package-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md).

```bash
-curl -X POST -u <username> --data-binary @"<zip-package-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=zip
+curl -X POST -u <username:password> --data-binary "@<zip-package-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=zip
```
[!INCLUDE [deploying to network secured sites](../../includes/app-service-deploy-network-secured-sites.md)]
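Because deployment usernames often begin with `$` and the package path is passed with a leading `@`, shell quoting matters here. A hypothetical sketch with placeholder values (it echoes the assembled command rather than sending the request):

```shell
# Placeholder values; substitute your own deployment user, zip path, and app name.
USERNAME='$my-deploy-user'   # single quotes keep the leading '$' literal
ZIP_PATH="./site.zip"
APP_NAME="my-sample-app"

# Keep '@' directly next to the path so curl reads the file as the request body.
echo curl -X POST -u "${USERNAME}" --data-binary "@${ZIP_PATH}" \
  "https://${APP_NAME}.scm.azurewebsites.net/api/publish?type=zip"
```

Removing the `echo` would perform the actual deployment against the Kudu endpoint.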
curl -X POST -u <username> --data-binary @"<zip-package-path>" https://<app-name
The following example uses the `packageUri` parameter to specify the URL of an Azure Storage account that the web app should pull the ZIP from.

```bash
-curl -X POST -u <username> https://<app-name>.scm.azurewebsites.net/api/publish -d '{"packageUri": "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3"}'
+curl -X POST -u <username:password> https://<app-name>.scm.azurewebsites.net/api/publish -d '{"packageUri": "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3"}'
```
# [Kudu UI](#tab/kudu-ui)
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
target cross-platform with .NET 6.0.
In this quickstart, you'll learn how to create and deploy your first ASP.NET web app to [Azure App Service](overview.md). App Service supports various versions of .NET apps, and provides a highly scalable, self-patching web hosting service. ASP.NET web apps are cross-platform and can be hosted on Linux or Windows. When you're finished, you'll have an Azure resource group consisting of an App Service hosting plan and an App Service with a deployed web application.
-<!-- markdownlint-disable MD044 -->
-<!-- markdownlint-enable MD044 -->
-
-> [!NOTE]
-> Azure PowerShell is recommended for creating apps on the Windows hosting platform. To create apps on Linux, use a different tool, such as [Azure CLI](quickstart-dotnetcore.md?pivots=development-environment-cli)
-- ## Prerequisites :::zone target="docs" pivot="development-environment-vs"
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
To complete this quickstart:
In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.4`. To see all supported runtimes, run [`az webapp list-runtimes`](/cli/azure/webapp#az-webapp-list-runtimes).

```azurecli-interactive
- az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'PHP|7.4' --deployment-local-git
+ az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'PHP:8.0' --deployment-local-git
```
When the web app has been created, the Azure CLI shows output similar to the following example:
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
In the Azure portal:
Run the [az sql server firewall-rule create](/cli/azure/sql/server/firewall-rule#az-sql-server-firewall-rule-create) command to add a firewall rule to your SQL Server instance.

```azurecli-interactive
-az sql server firewall-rule create -resource-group msdocs-core-sql --server <yoursqlserver> --name LocalAccess --start-ip-address <your-ip> --end-ip-address <your-ip>
+az sql server firewall-rule create --resource-group msdocs-core-sql --server <yoursqlserver> --name LocalAccess --start-ip-address <your-ip> --end-ip-address <your-ip>
```
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
After the Azure Database for PostgreSQL server is created, configure access to the server from the web app by adding a firewall rule. This can be done through the Azure portal or the Azure CLI.
-If you are working in VS Code, right-click the database server and select **Open in Portal** to go to the Azure portal. Or, go to the [Azure Cloud Shell](https://shell.zure.com) and run the Azure CLI commands.
+If you are working in VS Code, right-click the database server and select **Open in Portal** to go to the Azure portal. Or, go to the [Azure Cloud Shell](https://shell.azure.com) and run the Azure CLI commands.
### [Azure portal](#tab/azure-portal-access)

| Instructions | Screenshot |
application-gateway Custom Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/custom-error.md
Previously updated : 04/04/2022 Last updated : 04/12/2022
Custom error pages can be defined at the global level and the listener level:
To create a custom error page, you must have:
- an HTTP response status code.
-- the corresponding location for the error page.
-- a publicly accessible Azure storage blob for the location.
-- an *.htm or *.html extension type.
+- the corresponding location for the error page.
+- an error page that is accessible from the internet.
+- an error page with a \*.htm or \*.html extension.
+- an error page that is smaller than 1 MB.
-The size of the error page must be less than 1 MB. You may reference either internal or external images/CSS for this HTML file. For externally referenced resources, use absolute URLs that are publicly accessible. Be aware of the HTML file size when using internal images (Base64-encoded inline image) or CSS. Relative links with files in the same blob location are currently not supported.
+You may reference either internal or external images/CSS for this HTML file. For externally referenced resources, use absolute URLs that are publicly accessible. Be aware of the HTML file size when using internal images (Base64-encoded inline image) or CSS. Relative links with files in the same location are currently not supported.
After you specify an error page, the application gateway downloads it from the storage blob location and saves it to the local application gateway cache. Then, that HTML page is served by the application gateway, whereas the externally referenced resources are fetched directly by the client. To modify an existing custom error page, you must point to a different blob location in the application gateway configuration. The application gateway doesn't periodically check the blob location to fetch new versions.
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Azure the Form Recognizer Layout API extracts text, tables, selection marks, and
1. In the **API key** field, paste the subscription key you obtained from your Form Recognizer resource.
-1. In the **Source: URL** field, paste paste the following URL `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg` and select the **Fetch** button.
+1. In the **Source: URL** field, paste the following URL `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg` and select the **Fetch** button.
1. Select **Run Layout**. The Form Recognizer Sample Labeling tool will call the Analyze Layout API and analyze the document.
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
To recover from this issue, follow these steps:
3. [Install a stable version](https://helm.sh/docs/intro/install/) of Helm 3 on your machine instead of the release candidate version. 4. Run the `az connectedk8s connect` command with the appropriate values to connect the cluster to Azure Arc.
+### CryptoHash module error
+
+When attempting to onboard Kubernetes clusters to the Azure Arc platform, the local environment (for example, your client console) may return the following error message:
+
+```output
+Cannot load native module 'Crypto.Hash._MD5'
+```
+
+Sometimes, dependent modules fail to download successfully when adding the extensions `connectedk8s` and `k8s-configuration` through the Azure CLI or Azure PowerShell. To fix this problem, manually remove and then re-add the extensions in the local environment.
+
+To remove the extensions, use:
+
+```azurecli
+az extension remove --name connectedk8s
+
+az extension remove --name k8s-configuration
+```
+
+To add the extensions, use:
+
+```azurecli
+az extension add --name connectedk8s
+
+az extension add --name k8s-configuration
+```
+
## GitOps management

### Flux v1 - General
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
description: "This tutorial shows how to use GitOps with Flux v2 to manage confi
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 03/09/2022 Last updated : 04/11/2022
-# Tutorial: Use GitOps with Flux v2 in Azure Arc-enabled Kubernetes or AKS clusters (public preview)
+# Tutorial: Use GitOps with Flux v2 in Azure Arc-enabled Kubernetes or AKS clusters (preview)
GitOps with Flux v2 can be enabled in Azure Kubernetes Service (AKS) managed clusters or Azure Arc-enabled Kubernetes connected clusters as a cluster extension. After the `microsoft.flux` cluster extension is installed, you can create one or more `fluxConfigurations` resources that sync your Git repository sources to the cluster and reconcile the cluster to the desired state. With GitOps, you can use your Git repository as the source of truth for cluster configuration and application deployment. This tutorial describes how to use GitOps in a Kubernetes cluster. Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md).
-General availability of Azure Arc-enabled Kubernetes includes GitOps with Flux v1. The public preview of GitOps with Flux v2, documented here, is available in both AKS and Azure Arc-enabled Kubernetes. Flux v2 is the way forward, and Flux v1 will eventually be deprecated.
+General availability of Azure Arc-enabled Kubernetes includes GitOps with Flux v1. The public preview of GitOps with Flux v2, documented here, is available in both AKS and Azure Arc-enabled Kubernetes. Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
>[!IMPORTANT]
->GitOps with Flux v2 is in public preview. In preparation for general availability, features are still being added to the preview. One important feature, multi-tenancy, could affect some users when it is released. To prepare yourself for the release of multi-tenancy, [please review these details](#multi-tenancy).
+>GitOps with Flux v2 is in public preview. In preparation for general availability, features are still being added to the preview. One recently released feature, multi-tenancy, could affect some users. To understand how to work with multi-tenancy, [please review these details](#multi-tenancy).
+>
+>The `microsoft.flux` extension released major version 1.0.0, which includes the multi-tenancy feature. If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension, you can upgrade to the latest extension manually using the Azure CLI: "az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>" (use "-t connectedClusters" for Arc clusters and "-t managedClusters" for AKS clusters).
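For example, the inline upgrade command above applied to an Azure Arc-enabled cluster looks like the following (the resource group and cluster name are placeholders for your own values):

```azurecli
az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t connectedClusters
```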
+ ## Prerequisites
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
### Supported regions
-GitOps is currently supported in the regions that Azure Arc-enabled Kubernetes supports. These regions are a subset of the regions that AKS supports. GitOps is currently not supported in all AKS regions. [See the supported regions](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=kubernetes-service,azure-arc). The GitOps service is adding new supported regions on a regular cadence.
+GitOps is currently supported in all regions that Azure Arc-enabled Kubernetes supports. [See the supported regions](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=kubernetes-service,azure-arc). GitOps (preview) is currently supported in a subset of the regions that AKS supports. The GitOps service is adding new supported regions on a regular cadence.
### Network requirements
-The GitOps agents require TCP on port 443 (`https://:443`) to function. The agents also require the following outbound URLs:
+The GitOps agents require outbound (egress) TCP to the repo source on either port 22 (SSH) or port 443 (HTTPS) to function. The agents also require the following outbound URLs:
| Endpoint (DNS) | Description | | | |
The GitOps agents require TCP on port 443 (`https://:443`) to function. The agen
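As a quick sanity check of the egress requirement above, you can probe outbound TCP reachability from a machine on the same network. This is a minimal Python sketch; `github.com` is only an illustrative stand-in for whatever Git host your configuration actually uses:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        # create_connection performs DNS resolution plus the TCP handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `can_reach("github.com", 443)` exercises the HTTPS path and `can_reach("github.com", 22)` the SSH path.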
## Enable CLI extensions >[!NOTE]
->The `k8s-configuration` CLI extension has been upgraded to manage either Flux v2 or Flux v1 configurations. Flux v2 is an important upgrade to Flux v1, and eventually Azure will stop supporting GitOps with Flux v1. Begin using Flux v2 as soon as possible.
+>The `k8s-configuration` CLI extension manages either Flux v2 or Flux v1 configurations. Eventually, Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
Install the latest `k8s-configuration` and `k8s-extension` CLI extension packages:
The Azure portal is useful for managing GitOps configurations and the Flux exten
The portal provides the overall compliance state of the cluster. The Flux objects that have been deployed to the cluster are also shown, along with their installation parameters, compliance state, and any errors.
-You can also use the portal to create and delete GitOps configurations.
+You can also use the portal to create, update, and delete GitOps configurations.
## Manage cluster configuration by using the Flux Kustomize controller
By using this annotation, the HelmRelease that is deployed will be patched with
## Multi-tenancy
-Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy) in [version 0.26](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). This capability will be integrated into Azure GitOps with Flux v2 prior to general availability.
+Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy) in [version 0.26](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). This capability has been integrated into Azure GitOps with Flux v2.
>[!NOTE]
->You need to prepare for the multi-tenancy feature release if you have any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects, or [if you use a Kubernetes version less than 1.20.6](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). To prepare, take these actions:
>For the multi-tenancy feature, you need to know whether your manifests contain any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects, or [if you use a Kubernetes version less than 1.20.6](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). To prepare, take these actions:
>
> * Upgrade to Kubernetes version 1.20.6 or greater.
> * In your Kubernetes manifests assure that all sourceRef are to objects within the same namespace as the GitOps configuration.
spec:
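To illustrate the same-namespace requirement, here's a hedged sketch of a Flux `Kustomization` whose `sourceRef` stays within its own namespace (the names `my-app`, `cluster-config`, and `my-repo` are hypothetical):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: my-app
  namespace: cluster-config
spec:
  interval: 10m
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-repo   # resolved in the same namespace (cluster-config)
    # Omitting a cross-namespace `namespace:` field here keeps the manifest
    # compatible with multi-tenancy enforcement.
```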
### Opt out of multi-tenancy
-Multi-tenancy will be enabled by default to assure security by default in your clusters. However, if you need to disable multi-tenancy, you can opt out by creating or updating the `microsoft.flux` extension in your clusters with "--configuration-settings multiTenancy.enforce=false".
+When the `microsoft.flux` extension is installed, multi-tenancy is enabled by default to assure security by default in your clusters. However, if you need to disable multi-tenancy, you can opt out by creating or updating the `microsoft.flux` extension in your clusters with "--configuration-settings multiTenancy.enforce=false".
```console az k8s-extension create --extension-type microsoft.flux --configuration-settings multiTenancy.enforce=false -c CLUSTER_NAME -g RESOURCE_GROUP -n flux -t <managedClusters or connectedClusters>
az k8s-extension update --configuration-settings multiTenancy.enforce=false -c C
## Migrate from Flux v1
-If you've been using Flux v1 in Azure Arc-enabled Kubernetes or AKS clusters and want to migrate to using Flux v2 in the same clusters, you first need to delete the Flux v1 `sourceControlConfigurations` from the clusters. The `microsoft.flux` cluster extension won't be installed if there are `sourceControlConfigurations` resources installed in the cluster.
+If you've been using Flux v1 in Azure Arc-enabled Kubernetes or AKS clusters and want to migrate to using Flux v2 in the same clusters, you first need to delete the Flux v1 `sourceControlConfigurations` from the clusters. The `microsoft.flux` cluster extension won't install if there are Flux v1 `sourceControlConfigurations` resources installed in the cluster.
Use these az CLI commands to find and then delete existing `sourceControlConfigurations` in a cluster:
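A hedged sketch of those commands with the `k8s-configuration` CLI extension (resource group, cluster name, and configuration name are placeholders; verify the syntax against your installed extension version):

```azurecli
# List existing Flux v1 configurations on the cluster
az k8s-configuration list -g <RESOURCE_GROUP> --cluster-name <CLUSTER_NAME> --cluster-type connectedClusters

# Delete a Flux v1 configuration by name
az k8s-configuration delete -g <RESOURCE_GROUP> --cluster-name <CLUSTER_NAME> --cluster-type connectedClusters --name <CONFIGURATION_NAME>
```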
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
Title: Azure SQL input binding for Functions
description: Learn to use the Azure SQL input binding in Azure Functions. Previously updated : 12/15/2021 Last updated : 4/1/2022 ms.devlang: csharp
This section contains the following examples:
* [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c)
* [HTTP trigger, get multiple docs from route data](#http-trigger-get-multiple-items-from-route-data-c)
-The examples refer to a `ToDoItem` type and a corresponding database table:
+The examples refer to a `ToDoItem` class and a corresponding database table:
-```cs
-namespace AzureSQLSamples
-{
- public class ToDoItem
- {
- public string Id { get; set; }
- public int Priority { get; set; }
- public string Description { get; set; }
- }
-}
-```
+
-```sql
-CREATE TABLE dbo.ToDo (
- [Id] int primary key,
- [Priority] int null,
- [Description] nvarchar(200) not null
-)
-```
<a id="http-trigger-look-up-id-from-query-string-c"></a>
namespace AzureSQLSamples
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")] HttpRequest req,
- [Sql("select * from dbo.ToDo where Id = @Id",
+ [Sql("select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
CommandType = System.Data.CommandType.Text, Parameters = "@Id={Query.id}", ConnectionStringSetting = "SqlConnectionString")]
namespace AzureSQLSamples
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")] HttpRequest req,
- [Sql("select * from dbo.ToDo where [Priority] > @Priority",
+ [Sql("select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority",
CommandType = System.Data.CommandType.Text, Parameters = "@Priority={priority}", ConnectionStringSetting = "SqlConnectionString")]
namespace AzureSQLSamples
} ```
+<a id="http-trigger-delete-one-or-multiple-rows-c"></a>
+### HTTP trigger, delete one or multiple rows
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `dbo.DeleteToDo` must be created on the SQL database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
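The exact procedure body isn't shown here; a minimal T-SQL sketch consistent with that description (the parameter name and the delete-all sentinel value are assumptions) might look like:

```sql
CREATE PROCEDURE [dbo].[DeleteToDo]
    @Id NVARCHAR(100)
AS
BEGIN
    -- A sentinel value of 'all' removes every record; otherwise delete one row.
    IF @Id = 'all'
        DELETE FROM dbo.ToDo;
    ELSE
        DELETE FROM dbo.ToDo WHERE Id = @Id;
END
```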
++++
# [JavaScript](#tab/javascript)
The Azure SQL binding for Azure Functions does not currently support JavaScript.
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
Title: Azure SQL output binding for Functions
description: Learn to use the Azure SQL output binding in Azure Functions. Previously updated : 12/15/2021 Last updated : 4/1/2022 ms.devlang: csharp
For information on setup and configuration details, see the [overview](./functio
This section contains the following examples:

* [Http trigger, write one record](#http-trigger-write-one-record-c)
+* [Http trigger, write to two tables](#http-trigger-write-to-two-tables-c)
* [Http trigger, write records using IAsyncCollector](#http-trigger-write-records-using-iasynccollector-c)
-The examples refer to a `ToDoItem` type and a corresponding database table:
+The examples refer to a `ToDoItem` class and a corresponding database table:
-```cs
-namespace AzureSQLSamples
-{
- public class ToDoItem
- {
- public string Id { get; set; }
- public int Priority { get; set; }
- public string Description { get; set; }
- }
-}
-```
+
-```sql
-CREATE TABLE dbo.ToDo (
- [Id] int primary key,
- [Priority] int null,
- [Description] nvarchar(200) not null
-)
-```
<a id="http-trigger-write-one-record-c"></a> ### Http trigger, write one record
-The following example shows a [C# function](functions-dotnet-class-library.md) that adds a document to a database, using data provided in message from Queue storage.
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body.
-```cs
-using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Host;
-using Microsoft.Extensions.Logging;
-using System;
-namespace AzureSQLSamples
+<a id="http-trigger-write-to-two-tables-c"></a>
+
+### Http trigger, write to two tables
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
++
+```cs
+namespace AzureSQL.ToDo
{
- public static class WriteOneRecord
+ public static class PostToDo
{
- [FunctionName("WriteOneRecord")]
- public static IActionResult Run(
- [HttpTrigger(AuthorizationLevel.Function, "get", Route = "addtodo")] HttpRequest req,
+ // create a new ToDoItem from body object
+ // uses output binding to insert new item into ToDo table
+ [FunctionName("PostToDo")]
+ public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req,
ILogger log,
- [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] out ToDoItem newItem)
+ [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
+ [Sql("dbo.RequestLog", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs)
{
- newItem = new ToDoItem
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
+
+ // generate a new id for the todo item
+ toDoItem.Id = Guid.NewGuid();
+
+ // set Url from env variable ToDoUri
+ toDoItem.url = Environment.GetEnvironmentVariable("ToDoUri")+"?id="+toDoItem.Id.ToString();
+
+ // if completed is not provided, default to false
+ if (toDoItem.completed == null)
{
- Id = req.Query["id"],
- Description =req.Query["desc"]
- };
+ toDoItem.completed = false;
+ }
+
+ await toDoItems.AddAsync(toDoItem);
+ await toDoItems.FlushAsync();
+ List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem };
- log.LogInformation($"C# HTTP trigger function inserted one row");
- return new CreatedResult($"/api/addtodo", newItem);
+ RequestLog requestLog = new RequestLog();
+ requestLog.RequestTimeStamp = DateTime.Now;
+ requestLog.ItemCount = 1;
+ await requestLogs.AddAsync(requestLog);
+ await requestLogs.FlushAsync();
+
+ return new OkObjectResult(toDoItemList);
} }+
+ public class RequestLog {
+ public DateTime RequestTimeStamp { get; set; }
+ public int ItemCount { get; set; }
+ }
} ```
namespace AzureSQLSamples
### HTTP trigger, write records using IAsyncCollector
-The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database, using data provided in an HTTP POST body JSON.
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database, using data provided in an HTTP POST body JSON array.
```cs using Microsoft.AspNetCore.Http;
namespace AzureSQLSamples
public static async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")] HttpRequest req,
- [Sql("dbo.Products", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems)
+ [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems)
{ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody);
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
Title: Azure SQL bindings for Functions
description: Understand how to use Azure SQL bindings in Azure Functions. Previously updated : 1/25/2022 Last updated : 4/1/2022 ms.devlang: csharp
The Azure SQL bindings for Azure Functions are open-source and available on the
- [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
- [Learn how to connect Azure Function to Azure SQL with managed identity](./functions-identity-access-azure-sql-with-managed-identity.md)
+- [Use SQL bindings in Azure Stream Analytics](/azure/stream-analytics/sql-database-upsert#option-1-update-by-key-with-the-azure-function-sql-binding)
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
The following example shows the part of the workflow that sets up the environmen
```yaml
- - name: Setup Node 12.x Environment
+ - name: Setup Node 14.x Environment
      uses: actions/setup-node@v2
      with:
        node-version: 14.x
on:
env:
  AZURE_FUNCTIONAPP_NAME: your-app-name # set this to your application's name
  AZURE_FUNCTIONAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
- NODE_VERSION: '12.x' # set this to the node version to use (supports 8.x, 10.x, 12.x)
+ NODE_VERSION: '14.x' # set this to the node version to use (supports 8.x, 10.x, 12.x, 14.x)
jobs:
  build-and-deploy:
on:
env:
  AZURE_FUNCTIONAPP_NAME: your-app-name # set this to your application's name
  AZURE_FUNCTIONAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
- NODE_VERSION: '10.x' # set this to the node version to use (supports 8.x, 10.x, 12.x)
+ NODE_VERSION: '14.x' # set this to the node version to use (supports 8.x, 10.x, 12.x, 14.x)
jobs:
  build-and-deploy:
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
Previously updated : 08/20/2021
+recommendations: false
Last updated : 04/08/2022 # Azure guidance for secure isolation- Microsoft Azure is a hyperscale public multi-tenant cloud services platform that provides you with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help you increase efficiency and unlock insights into your operations and performance. A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
-Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles: (1) user access controls with authentication and identity separation, (2) compute isolation for processing, (3) networking isolation including data encryption in transit, (4) storage isolation with data encryption at rest, and (5) security assurance processes embedded in service design to correctly develop logically isolated services.
+Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles:
+
+1. User access controls with authentication and identity separation
+2. Compute isolation for processing
+3. Networking isolation including data encryption in transit
+4. Storage isolation with data encryption at rest
+5. Security assurance processes embedded in service design to correctly develop logically isolated services
-Multi-tenancy in the public cloud improves efficiency by multiplexing resources among disparate customers at low costs; however, this approach introduces the perceived risk associated with resource sharing. Azure addresses this risk by providing a trustworthy foundation for isolated cloud services using a multi-layered approach depicted in Figure 1.
+Multi-tenancy in the public cloud improves efficiency by multiplexing resources among disparate customers at low cost; however, this approach introduces the perceived risk associated with resource sharing. Azure addresses this risk by providing a trustworthy foundation for isolated cloud services using a multi-layered approach depicted in Figure 1.
:::image type="content" source="./media/secure-isolation-fig1.png" alt-text="Azure isolation approaches" border="false"::: **Figure 1.** Azure isolation approaches
A brief summary of isolation approaches is provided below.
- **Compute isolation** – Azure provides you with both logical and physical compute isolation for processing. Logical isolation is implemented via:
  - *Hypervisor isolation* for services that provide cryptographically certain isolation by using separate virtual machines and using Azure Hypervisor isolation.
  - *Drawbridge isolation* inside a virtual machine (VM) for services that provide cryptographically certain isolation for workloads running on the same virtual machine by using isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code.
- - *User context-based isolation* for services that are composed solely of Microsoft-controlled code and customer code is not allowed to run. </br>
-In addition to robust logical compute isolation available by design to all Azure tenants, if you desire physical compute isolation, you can use Azure Dedicated Host or Isolated Virtual Machines, which are deployed on server hardware dedicated to a single customer.
+ - *User context-based isolation* for services that are composed solely of Microsoft-controlled code and customer code isn't allowed to run. </br>
+
+ In addition to robust logical compute isolation available by design to all Azure tenants, if you desire physical compute isolation, you can use Azure Dedicated Host or isolated Virtual Machines, which are deployed on server hardware dedicated to a single customer.
- **Networking isolation** – Azure Virtual Network (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Services can communicate using public IPs or private (VNet) IPs. Communication between your VMs remains private within a VNet. You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements. You can use [network security groups](../virtual-network/network-security-groups-overview.md) (NSGs) to achieve network isolation and protect your Azure resources from the Internet while accessing Azure services that have public endpoints. You can use Virtual Network [service tags](../virtual-network/service-tags-overview.md) to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules. Moreover, you can use [Private Link](../private-link/private-link-overview.md) to access Azure PaaS services over a private endpoint in your VNet, ensuring that traffic between your VNet and the service travels across the Microsoft global backbone network, which eliminates the need to expose the service to the public Internet.
Finally, Azure provides you with options to encrypt data in transit, including [Transport Layer Security (TLS) end-to-end encryption](../application-gateway/ssl-overview.md) of network traffic with [TLS termination using Key Vault certificates](../application-gateway/key-vault-certs.md), [VPN encryption](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) using IPsec, and Azure ExpressRoute encryption using [MACsec with customer-managed keys (CMK) support](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq).-- **Storage isolation** – To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure AD to ensure secure key access and centralized key management. Azure Storage service encryption ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is [encrypted through FIPS 140 validated 256-bit AES encryption](../storage/common/storage-service-encryption.md#about-azure-storage-encryption) and you can use Key Vault for customer-managed keys (CMK). Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Additionally, Azure Disk encryption may optionally be used to encrypt Azure Windows and Linux IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes managed disks.
+- **Storage isolation** – To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure AD to ensure secure key access and centralized key management. Azure Storage service encryption ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is [encrypted through FIPS 140 validated 256-bit AES encryption](../storage/common/storage-service-encryption.md#about-azure-storage-encryption) and you can use Key Vault for customer-managed keys (CMK). Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Moreover, Azure Disk encryption may optionally be used to encrypt Azure Windows and Linux IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes managed disks.
- **Security assurance processes and practices** – Azure isolation assurance is further enforced by Microsoft’s internal use of the [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provides high confidence in the Azure isolation guarantee.
-In line with the [shared responsibility](../security/fundamentals/shared-responsibility.md) model in cloud computing, as you migrate workloads from your on-premises datacenter to the cloud, the delineation of responsibility between you and cloud service provider varies depending on the cloud service model. For example, with the Infrastructure as a Service (IaaS) model, Microsoft’s responsibility ends at the Hypervisor layer, and you are responsible for all layers above the virtualization layer, including maintaining the base operating system in guest VMs. You can use Azure isolation technologies to achieve the desired level of isolation for your applications and data deployed in the cloud.
+In line with the [shared responsibility](../security/fundamentals/shared-responsibility.md) model in cloud computing, as you migrate workloads from your on-premises datacenter to the cloud, the delineation of responsibility between you and cloud service provider varies depending on the cloud service model. For example, with the Infrastructure as a Service (IaaS) model, Microsoft’s responsibility ends at the Hypervisor layer, and you're responsible for all layers above the virtualization layer, including maintaining the base operating system in guest VMs. You can use Azure isolation technologies to achieve the desired level of isolation for your applications and data deployed in the cloud.
Throughout this article, call-out boxes outline important considerations or actions considered to be part of your responsibility. For example, you can use Azure Key Vault to store your secrets, including encryption keys that remain under your control. > [!NOTE]
-> Use of Azure Key Vault for Customer Managed Keys (CMK) is optional and represents your responsibility.
+> Use of Azure Key Vault for customer managed keys (CMK) is optional and represents your responsibility.
>
-> *Additional resources:*
+> *Extra resources:*
> - How to **[get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md)** This article provides technical guidance to address common security and isolation concerns pertinent to cloud adoption. It also explores design principles and technologies available in Azure to help you achieve your secure isolation objectives. > [!TIP]
-> For recommendations on how to improve the security of applications and data deployed in Azure, you should review the **[Azure Security Benchmark](../security/benchmarks/index.yml)**.
+> For recommendations on how to improve the security of applications and data deployed on Azure, you should review the **[Azure Security Benchmark](/security/benchmark/azure/)** documentation.
## Identity-based isolation [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) is an identity repository and cloud service that provides authentication, authorization, and access control for your users, groups, and objects. Azure AD can be used as a standalone cloud directory or as an integrated solution with existing on-premises Active Directory to enable key enterprise features such as directory synchronization and single sign-on.
Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscr
All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure AD that your organization receives and owns when you sign up for a Microsoft cloud service. Authentication to the Azure portal is performed through Azure AD using an identity created either in Azure AD or federated with an on-premises Active Directory. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users. This access restriction is an overarching goal of the [Zero Trust model](https://aka.ms/Zero-Trust), which assumes that the network is compromised and requires a fundamental shift from the perimeter security model. When evaluating access requests, all requesting users, devices, and applications should be considered untrusted until their integrity can be validated in line with the Zero Trust [design principles](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/). Azure AD provides the strong, adaptive, standards-based identity verification required in a Zero Trust framework. > [!NOTE]
-> Additional resources:
+> Extra resources:
>
-> - To learn more about how to implement Zero Trust architecture on Azure, read the **[6-part blog series](https://devblogs.microsoft.com/azuregov/implementing-zero-trust-with-microsoft-azure-identity-and-access-management-1-of-6/)**.
+> - To learn how to implement Zero Trust architecture on Azure, see **[Zero Trust Guidance Center](/security/zero-trust/)**.
> - For definitions and general deployment models, see **[NIST SP 800-207](https://csrc.nist.gov/publications/detail/sp/800-207/final)** *Zero Trust Architecture*.

### Azure Active Directory
-The separation of the accounts used to administer cloud applications is critical to achieving logical isolation. Account isolation in Azure is achieved using [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) and its capabilities to support granular [Azure role-based access control](../role-based-access-control/overview.md) (Azure RBAC). Each Azure account is associated with one Azure AD tenant. Users, groups, and applications from that directory can manage resources in Azure. You can assign appropriate access rights using the Azure portal, Azure command-line tools, and Azure Management APIs. Each Azure AD tenant is distinct and separate from other Azure ADs. An Azure AD instance is logically isolated using security boundaries to prevent customer data and identity information from comingling, thereby ensuring that users and administrators of one Azure AD cannot access or compromise data in another Azure AD instance, either maliciously or accidentally. Azure AD runs physically isolated on dedicated servers that are logically isolated to a dedicated network segment and where host-level packet filtering and Windows Firewall services provide extra protections from untrusted traffic.
+The separation of the accounts used to administer cloud applications is critical to achieving logical isolation. Account isolation in Azure is achieved using [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) and its capabilities to support granular [Azure role-based access control](../role-based-access-control/overview.md) (Azure RBAC). Each Azure account is associated with one Azure AD tenant. Users, groups, and applications from that directory can manage resources in Azure. You can assign appropriate access rights using the Azure portal, Azure command-line tools, and Azure Management APIs. Each Azure AD tenant is distinct and separate from other Azure ADs. An Azure AD instance is logically isolated using security boundaries to prevent customer data and identity information from comingling, thereby ensuring that users and administrators of one Azure AD can't access or compromise data in another Azure AD instance, either maliciously or accidentally. Azure AD runs physically isolated on dedicated servers that are logically isolated to a dedicated network segment and where host-level packet filtering and Windows Firewall services provide extra protections from untrusted traffic.
Azure AD implements extensive **data protection features**, including tenant isolation and access control, data encryption in transit, secrets encryption and management, disk level encryption, advanced cryptographic algorithms used by various Azure AD components, data operational considerations for insider access, and more. Detailed information is available from a whitepaper [Active Directory Data Security Considerations](https://aka.ms/AADDataWhitePaper).

Tenant isolation in Azure AD involves two primary elements:

-- Preventing data leakage and access across tenants, which means that data belonging to Tenant A cannot in any way be obtained by users in Tenant B without explicit authorization by Tenant A.
-- Resource access isolation across tenants, which means that operations performed by Tenant A cannot in any way impact access to resources for Tenant B.
+- Preventing data leakage and access across tenants, which means that data belonging to Tenant A can't in any way be obtained by users in Tenant B without explicit authorization by Tenant A.
+- Resource access isolation across tenants, which means that operations performed by Tenant A can't in any way impact access to resources for Tenant B.
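The two tenant-isolation rules above can be sketched in a few lines. This is purely illustrative (the class, method, and tenant names are hypothetical, not an Azure API): data is scoped to an owning tenant, and cross-tenant reads fail unless the owning tenant granted access explicitly.

```python
# Hypothetical sketch of Azure AD's two tenant-isolation rules.
# Names and structures are illustrative only, not an Azure API.

class TenantStore:
    def __init__(self):
        self._data = {}       # tenant_id -> {name: value}
        self._grants = set()  # (owner_tenant, caller_tenant) explicit grants

    def write(self, tenant_id, name, value):
        self._data.setdefault(tenant_id, {})[name] = value

    def grant(self, owner_tenant, caller_tenant):
        # Explicit authorization by the owning tenant (Tenant A).
        self._grants.add((owner_tenant, caller_tenant))

    def read(self, caller_tenant, owner_tenant, name):
        # Rule 1: no cross-tenant data access without an explicit grant.
        if caller_tenant != owner_tenant and \
                (owner_tenant, caller_tenant) not in self._grants:
            raise PermissionError("cross-tenant access denied")
        return self._data[owner_tenant][name]
```

In this sketch, Tenant B reading Tenant A's data raises `PermissionError` until Tenant A calls `grant`, mirroring the "explicit authorization by Tenant A" requirement.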
As shown in Figure 2, access via Azure AD requires user authentication through a Security Token Service (STS). The authorization system uses information on the user's existence and enabled state (through the Directory Services API) and Azure RBAC to determine whether the requested access to the target Azure AD instance is authorized for the user in the session.

Aside from token-based authentication that is tied directly to the user, Azure AD further supports logical isolation in Azure through:

- Azure AD instances are discrete containers and there is no relationship between them.
- Azure AD data is stored in partitions and each partition has a pre-determined set of replicas that are considered the preferred primary replicas. Use of replicas provides high availability of Azure AD services to support identity separation and logical isolation.
-- Access is not permitted across Azure AD instances unless the Azure AD instance administrator grants it through federation or provisioning of user accounts from other Azure AD instances.
+- Access isn't permitted across Azure AD instances unless the Azure AD instance administrator grants it through federation or provisioning of user accounts from other Azure AD instances.
- Physical access to servers that comprise the Azure AD service and direct access to Azure AD's back-end systems is [restricted to properly authorized Microsoft operational roles](./documentation-government-plan-security.md#restrictions-on-insider-access) using Just-In-Time (JIT) privileged access management system.
-- Azure AD users have no access to physical assets or locations, and therefore it is not possible for them to bypass the logical Azure RBAC policy checks.
+- Azure AD users have no access to physical assets or locations, and therefore it isn't possible for them to bypass the logical Azure RBAC policy checks.
:::image type="content" source="./media/secure-isolation-fig2.png" alt-text="Azure Active Directory logical tenant isolation":::

**Figure 2.** Azure Active Directory logical tenant isolation
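The authorization flow in Figure 2 can be condensed into a short sketch. Field names (`oid`, `enabled`) and the lookup structures below are assumptions for illustration, not the actual STS or Directory Services API: access is granted only if the user exists, is enabled, and holds an Azure RBAC assignment for the target Azure AD instance.

```python
# Illustrative sketch of the Figure 2 authorization flow.
# Field names and data structures are assumptions, not an Azure API.

def authorize(token, directory, rbac_assignments, target_instance):
    user = directory.get(token["oid"])       # Directory Services lookup
    if user is None or not user["enabled"]:  # existence and enabled state
        return False
    # Azure RBAC: is the user assigned access to the target instance?
    return (token["oid"], target_instance) in rbac_assignments
```

A disabled or unknown user, or a user with no role assignment on the target instance, is denied regardless of holding a valid token.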
Azure has extensive support to safeguard your data using [data encryption](../se
Data encryption provides isolation assurances that are tied directly to encryption (cryptographic) key access. Since Azure uses strong ciphers for data encryption, only entities with access to cryptographic keys can have access to data. Deleting or revoking cryptographic keys renders the corresponding data inaccessible. More information about **data encryption in transit** is provided in the *[Networking isolation](#networking-isolation)* section, whereas **data encryption at rest** is covered in the *[Storage isolation](#storage-isolation)* section.
+Azure enables you to enforce [double encryption](../security/fundamentals/double-encryption.md) for both data at rest and data in transit. With this model, two or more layers of encryption are enabled to protect against compromises of any layer of encryption.
+
### Azure Key Vault

Proper protection and management of cryptographic keys is essential for data security. **[Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets.** The Key Vault service supports two resource types that are described in the rest of this section:
Proper protection and management of cryptographic keys is essential for data sec
The Key Vault service provides an abstraction over the underlying HSMs. It provides a REST API to enable service use from cloud applications and authentication through [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) to allow you to centralize and customize authentication, disaster recovery, high availability, and elasticity. Key Vault supports [cryptographic keys](../key-vault/keys/about-keys.md) of various types, sizes, and curves, including RSA and Elliptic Curve keys. With managed HSMs, support is also available for AES symmetric keys.
-With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios, as shown in Figure 3. **Keys generated inside the Key Vault HSMs are not exportable ΓÇô there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM. BYOK functionality is available with both [key vaults](../key-vault/keys/hsm-protected-keys.md) and [managed HSMs](../key-vault/managed-hsm/hsm-protected-keys-byok.md). Methods for transferring HSM-protected keys to Key Vault vary depending on the underlying HSM, as explained in online documentation.
+With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key (BYOK)* scenarios, as shown in Figure 3. **Keys generated inside the Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM. BYOK functionality is available with both [key vaults](../key-vault/keys/hsm-protected-keys.md) and [managed HSMs](../key-vault/managed-hsm/hsm-protected-keys-byok.md). Methods for transferring HSM-protected keys to Key Vault vary depending on the underlying HSM, as explained in online documentation.
:::image type="content" source="./media/secure-isolation-fig3.png" alt-text="Azure Key Vault support for bring your own key (BYOK)":::

**Figure 3.** Azure Key Vault support for bring your own key (BYOK)
-**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your cryptographic keys.**
+**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.**
Key Vault provides a robust solution for encryption key lifecycle management. Upon creation, every key vault or managed HSM is automatically associated with the Azure AD tenant that owns the subscription. Anyone trying to manage or retrieve content from a key vault or managed HSM must be properly authenticated and authorized:

- Authentication establishes the identity of the caller (user or application).
- Authorization determines which operations the caller can perform, based on a combination of [Azure role-based access control](../role-based-access-control/overview.md) (Azure RBAC) and key vault access policy or managed HSM local RBAC.
-Azure AD enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, as described previously in *[Azure Active Directory](#azure-active-directory)* section. Access to a key vault or managed HSM is controlled through two interfaces or planes - management plane and data plane - with both planes using Azure AD for authentication.
+Azure AD enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, as described previously in the *[Azure Active Directory](#azure-active-directory)* section. Access to a key vault or managed HSM is controlled through two interfaces or planes – management plane and data plane – with both planes using Azure AD for authentication.
- **Management plane** enables you to manage the key vault or managed HSM itself, for example, create and delete key vaults or managed HSMs, retrieve key vault or managed HSM properties, and update access policies. For authorization, the management plane uses Azure RBAC with both key vaults and managed HSMs.
- **Data plane** enables you to work with the data stored in your key vaults and managed HSMs, including adding, deleting, and modifying your data. For vaults, stored data can include keys, secrets, and certificates. For managed HSMs, stored data is limited to cryptographic keys only. For authorization, the data plane uses [Key Vault access policy](../key-vault/general/assign-access-policy-portal.md) and [Azure RBAC for data plane operations](../key-vault/general/rbac-guide.md) with key vaults, or [managed HSM local RBAC](../key-vault/managed-hsm/access-control.md) with managed HSMs.
When you create a key vault or managed HSM in an Azure subscription, it's automa
You control access permissions and can extract detailed activity logs from the Azure Key Vault service. Azure Key Vault logs the following information:

- All authenticated REST API requests, including failed requests
- - Operations on the key vault such as creation, deletion, setting access policies, etc.
- - Operations on keys and secrets in the key vault, including a) creating, modifying, or deleting keys or secrets, and b) signing, verifying, encrypting keys, etc.
-- Unauthenticated requests such as requests that do not have a bearer token, are malformed or expired, or have an invalid token.
+ - Operations on the key vault such as creation, deletion, setting access policies, and so on.
+ - Operations on keys and secrets in the key vault, including a) creating, modifying, or deleting keys or secrets, and b) signing, verifying, encrypting keys, and so on.
+- Unauthenticated requests such as requests that don't have a bearer token, are malformed or expired, or have an invalid token.
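The logged categories above can be post-processed to surface suspicious activity. The snippet below is illustrative only (the field names are assumptions, not the actual Key Vault log schema): it separates authenticated operations from unauthenticated requests, such as those with a missing, malformed, or expired bearer token.

```python
# Illustrative only: field names are assumed, not the real Key Vault
# diagnostic log schema. Unauthenticated requests show no caller identity
# and/or a 401 status.

entries = [
    {"operation": "SecretGet", "httpStatusCode": 200, "identity": "app-1"},
    {"operation": "VaultPut",  "httpStatusCode": 200, "identity": "admin"},
    {"operation": "SecretGet", "httpStatusCode": 401, "identity": None},
]

unauthenticated = [e for e in entries
                   if e["identity"] is None or e["httpStatusCode"] == 401]
authenticated = [e for e in entries if e not in unauthenticated]
```

In practice this kind of filtering is done with Log Analytics queries against the diagnostic logs rather than in application code.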
> [!NOTE]
> With Azure Key Vault, you can monitor how and when your key vaults and managed HSMs are accessed and by whom.
>
-> *Additional resources:*
+> *Extra resources:*
> - **[Configure monitoring and alerting for Azure Key Vault](../key-vault/general/alert.md)**
> - **[Enable logging for Azure Key Vault](../key-vault/general/logging.md)**
> - **[How to secure storage account for Azure Key Vault logs](../storage/blobs/security-recommendations.md)**
-You can also use the [Azure Key Vault solution in Azure Monitor](../azure-monitor/insights/key-vault-insights-overview.md) to review Key Vault logs. To use this solution, you need to enable logging of Key Vault diagnostics and direct the diagnostics to a Log Analytics workspace. With this solution, it is not necessary to write logs to Azure Blob storage.
+You can also use the [Azure Key Vault solution in Azure Monitor](../azure-monitor/insights/key-vault-insights-overview.md) to review Key Vault logs. To use this solution, you need to enable logging of Key Vault diagnostics and direct the diagnostics to a Log Analytics workspace. With this solution, it isn't necessary to write logs to Azure Blob storage.
> [!NOTE]
> For a comprehensive list of Azure Key Vault security recommendations, see the **[Security baseline for Azure Key Vault](../key-vault/general/security-baseline.md)**.

#### Vault
-**[Vaults](../key-vault/general/overview.md)** provide a multi-tenant, low-cost, easy to deploy, zone-resilient (where available), and highly available key management solution suitable for most common cloud application scenarios. Vaults can store and safeguard [secrets, keys, and certificates](../key-vault/general/about-keys-secrets-certificates.md). They can be either software-protected (standard tier) or HSM-protected (premium tier). To see a comparison between the standard and premium tiers, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). Software-protected secrets, keys, and certificates are safeguarded by Azure, using industry-standard algorithms and key lengths. If you require extra assurances, you can choose to safeguard your secrets, keys, and certificates in vaults protected by multi-tenant HSMs. The corresponding HSMs are validated according to the [FIPS 140 standard](/azure/compliance/offerings/offering-fips-140-2), and have an overall Security Level 2 rating, which includes requirements for physical tamper evidence and role-based authentication. These HSMs meet Security Level 3 rating for several areas, including physical security, electromagnetic interference / electromagnetic compatibility (EMI/EMC), design assurance, and roles, services, and authentication.
+**[Vaults](../key-vault/general/overview.md)** provide a multi-tenant, low-cost, easy to deploy, zone-resilient (where available), and highly available key management solution suitable for most common cloud application scenarios. Vaults can store and safeguard [secrets, keys, and certificates](../key-vault/general/about-keys-secrets-certificates.md). They can be either software-protected (standard tier) or HSM-protected (premium tier). For a comparison between the standard and premium tiers, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). Software-protected secrets, keys, and certificates are safeguarded by Azure, using industry-standard algorithms and key lengths. If you require extra assurances, you can choose to safeguard your secrets, keys, and certificates in vaults protected by multi-tenant HSMs. The corresponding HSMs are validated according to the [FIPS 140 standard](/azure/compliance/offerings/offering-fips-140-2), and have an overall Security Level 2 rating, which includes requirements for physical tamper evidence and role-based authentication.
-Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where you can control your own keys in HSMs and use them to encrypt data at rest for a wide range of Azure services. As mentioned previously, you can [import or generate encryption keys](../key-vault/keys/hsm-protected-keys.md) in HSMs ensuring that keys never leave the HSM boundary to support bring your own key (BYOK) scenarios.
+Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where you can control your own keys in HSMs, and use them to encrypt data at rest for many Azure services. As mentioned previously, you can [import or generate encryption keys](../key-vault/keys/hsm-protected-keys.md) in HSMs ensuring that keys never leave the HSM boundary to support *bring your own key (BYOK)* scenarios.
Key Vault can handle requesting and renewing certificates in vaults, including Transport Layer Security (TLS) certificates, enabling you to enroll and automatically renew certificates from supported public Certificate Authorities. Key Vault certificate support provides for the management of your X.509 certificates, which are built on top of keys and provide an automated renewal feature. The certificate owner can [create a certificate](../key-vault/certificates/create-certificate.md) through Azure Key Vault or by importing an existing certificate. Both self-signed and Certificate Authority generated certificates are supported. Moreover, the Key Vault certificate owner can implement secure storage and management of X.509 certificates without interaction with private keys.
When you create a key vault in a resource group, you can [manage access](../key-
> [!IMPORTANT]
> You should tightly control who has Contributor role access to your key vaults. If a user has Contributor permissions to a key vault management plane, the user can gain access to the data plane by setting a key vault access policy.
>
-> *Additional resources:*
+> *Extra resources:*
> - How to **[secure access to a key vault](../key-vault/general/security-features.md)**

#### Managed HSM
-**[Managed HSM](../key-vault/managed-hsm/overview.md)** provides a single-tenant, fully managed, highly available, zone-resilient (where available) HSM as a service to store and manage your cryptographic keys. It is most suitable for applications and usage scenarios that handle high value keys. It also helps you meet the most stringent security, compliance, and regulatory requirements. Managed HSM uses [FIPS 140 Level 3 validated HSMs](/azure/compliance/offerings/offering-fips-140-2) to protect your cryptographic keys. Each managed HSM pool is an isolated single-tenant instance with its own [security domain](../key-vault/managed-hsm/security-domain.md) controlled by you and isolated cryptographically from instances belonging to other customers. Cryptographic isolation relies on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology that provides encrypted code and data to help ensure your control.
+**[Managed HSM](../key-vault/managed-hsm/overview.md)** provides a single-tenant, fully managed, highly available, zone-resilient (where available) HSM as a service to store and manage your cryptographic keys. It is most suitable for applications and usage scenarios that handle high value keys. It also helps you meet the most stringent security, compliance, and regulatory requirements. Managed HSM uses [FIPS 140 Level 3 validated HSMs](/azure/compliance/offerings/offering-fips-140-2) to protect your cryptographic keys. Each managed HSM pool is an isolated single-tenant instance with its own [security domain](../key-vault/managed-hsm/security-domain.md) controlled by you and isolated cryptographically from instances belonging to other customers. Cryptographic isolation relies on [Intel Software Guard Extensions](https://software.intel.com/sgx) (SGX) technology that provides encrypted code and data to help ensure your control over cryptographic keys.
When a managed HSM is created, the requestor also provides a list of data plane administrators. Only these administrators are able to [access the managed HSM data plane](../key-vault/managed-hsm/access-control.md) to perform key operations and manage data plane role assignments (managed HSM local RBAC). The permission model for both the management and data planes uses the same syntax, but permissions are enforced at different levels, and role assignments use different scopes. Management plane Azure RBAC is enforced by Azure Resource Manager while data plane managed HSM local RBAC is enforced by the managed HSM itself.

> [!IMPORTANT]
-> Unlike with key vaults, granting your users management plane access to a managed HSM does not grant them any access to data plane to access keys or data plane role assignments managed HSM local RBAC. This isolation is by design to prevent inadvertent expansion of privileges affecting access to keys stored in managed HSMs.
+> Unlike with key vaults, granting your users management plane access to a managed HSM doesn't grant them any access to the data plane to access keys or to manage data plane role assignments (managed HSM local RBAC). This isolation is implemented by design to prevent inadvertent expansion of privileges affecting access to keys stored in managed HSMs.
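The plane separation for managed HSMs can be sketched as two independent permission checks. Role names and the set-based lookup below are illustrative assumptions: management-plane Azure RBAC and data-plane local RBAC are evaluated separately, so management rights grant no key access.

```python
# Illustrative sketch of managed HSM plane separation. Role names and
# structures are assumptions, not an Azure API: the two planes are
# authorized independently.

mgmt_roles = {("alice", "Contributor")}            # Azure RBAC (management plane)
local_rbac = {("bob", "Managed HSM Crypto User")}  # local RBAC (data plane)

def can_manage_hsm(user):
    # Management plane: enforced by Azure Resource Manager.
    return any(u == user for u, _ in mgmt_roles)

def can_use_keys(user):
    # Data plane: enforced by the managed HSM itself.
    return any(u == user for u, _ in local_rbac)
```

Here `alice` can manage the HSM resource but cannot touch keys, and `bob` can perform key operations but cannot manage the resource, which is the inadvertent-privilege-expansion protection the note describes.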
-As mentioned previously, managed HSM supports [importing keys generated](../key-vault/managed-hsm/hsm-protected-keys-byok.md) in your on-premises HSMs, ensuring the keys never leave the HSM protection boundary, also known as bring your own key (BYOK) scenario. Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](../azure-sql/database/transparent-data-encryption-byok-overview.md), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others.
+As mentioned previously, managed HSM supports [importing keys generated](../key-vault/managed-hsm/hsm-protected-keys-byok.md) in your on-premises HSMs, ensuring the keys never leave the HSM protection boundary, also known as *bring your own key (BYOK)* scenario. Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](../azure-sql/database/transparent-data-encryption-byok-overview.md), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others.
Managed HSM enables you to use the established Azure Key Vault API and management interfaces. You can use the same application development and deployment patterns for all your applications irrespective of the key management solution: multi-tenant vault or single-tenant managed HSM.
Microsoft Azure compute platform is based on [machine virtualization](../securit
:::image type="content" source="./media/secure-isolation-fig4.png" alt-text="Isolation of Hypervisor, Root VM, and Guest VMs":::

**Figure 4.** Isolation of Hypervisor, Root VM, and Guest VMs
-Physical servers hosting VMs are grouped into clusters, and they are independently managed by a scaled-out and redundant platform software component called the **[Fabric Controller](../security/fundamentals/isolation-choices.md#the-azure-fabric-controller)** (FC). Each FC manages the lifecycle of VMs running in its cluster, including provisioning and monitoring the health of the hardware under its control. For example, the FC is responsible for recreating VM instances on healthy servers when it determines that a server has failed. It also allocates infrastructure resources to tenant workloads and it manages unidirectional communication from the Host to virtual machines. Dividing the compute infrastructure into clusters isolates faults at the FC level and prevents certain classes of errors from affecting servers beyond the cluster in which they occur.
+Physical servers hosting VMs are grouped into clusters, and they're independently managed by a scaled-out and redundant platform software component called the **[Fabric Controller](../security/fundamentals/isolation-choices.md#the-azure-fabric-controller)** (FC). Each FC manages the lifecycle of VMs running in its cluster, including provisioning and monitoring the health of the hardware under its control. For example, the FC is responsible for recreating VM instances on healthy servers when it determines that a server has failed. It also allocates infrastructure resources to tenant workloads, and it manages unidirectional communication from the Host to virtual machines. Dividing the compute infrastructure into clusters isolates faults at the FC level and prevents certain classes of errors from affecting servers beyond the cluster in which they occur.
-The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines used by you and Azure cloud services. The Hypervisor/Host OS pairing applies decades of Microsoft's experience in operating system security, including security focused investments in [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploit mitigation, and strong security assurance processes.
+The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines used by you and Azure cloud services. The Hypervisor/Host OS pairing uses decades of Microsoft's experience in operating system security, including security focused investments in [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploit mitigation, and strong security assurance processes.
### Management network isolation

There are three Virtual Local Area Networks (VLANs) in each compute hardware cluster, as shown in Figure 5:
There are three Virtual Local Area Networks (VLANs) in each compute hardware clu
- Fabric Controller (FC) VLAN that contains trusted FCs and supporting systems, and
- Device VLAN that contains trusted network and other infrastructure devices.
-Communication is permitted from the FC VLAN to the main VLAN but cannot be initiated from the main VLAN to the FC VLAN. This bridge from the FC VLAN to the Main VLAN is used to reduce the overall complexity and improve reliability/resiliency of the network. The connection is secured in several ways to ensure that commands are trusted and successfully routed:
+Communication is permitted from the FC VLAN to the main VLAN but can't be initiated from the main VLAN to the FC VLAN. The bridge from the FC VLAN to the Main VLAN is used to reduce the overall complexity and improve reliability/resiliency of the network. The connection is secured in several ways to ensure that commands are trusted and successfully routed:
-- Communication from an FC to a Fabric Agent (FA) is unidirectional and requires mutual authentication via certificates. The FA implements a TLS-protected service that only responds to requests from the FC. It cannot initiate connections to the FC or other privileged internal nodes.
+- Communication from an FC to a Fabric Agent (FA) is unidirectional and requires mutual authentication via certificates. The FA implements a TLS-protected service that only responds to requests from the FC. It can't initiate connections to the FC or other privileged internal nodes.
- The FC treats responses from the agent service as if they were untrusted. Communication with the agent is further restricted to a set of authorized IP addresses using firewall rules on each physical node, and routing rules at the border gateways.
-- Throttling is used to ensure that customer VMs cannot saturate the network and management commands from being routed.
+- Throttling is used to ensure that customer VMs can't saturate the network or prevent management commands from being routed.
-Communication is also blocked from the main VLAN to the device VLAN. This way, even if a node running customer code is compromised, it cannot attack nodes on either the FC or device VLANs.
+Communication is also blocked from the main VLAN to the device VLAN. This way, even if a node running customer code is compromised, it can't attack nodes on either the FC or device VLANs.
These controls ensure that the management console's access to the Hypervisor is always valid and available.

:::image type="content" source="./media/secure-isolation-fig5.png" alt-text="VLAN isolation":::

**Figure 5.** VLAN isolation
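The unidirectional VLAN rules above amount to a small reachability policy, sketched below with illustrative names: only FC VLAN to main VLAN initiation is listed as permitted, and any flow not explicitly stated in the text (including main to FC and main to device) is treated as denied by default.

```python
# Sketch of the VLAN initiation rules described above. Names are
# illustrative; flows not explicitly permitted are assumed denied.

ALLOWED_INITIATIONS = {
    ("fc", "main"),  # FC VLAN may initiate to the main VLAN
}

def can_initiate(src_vlan, dst_vlan):
    # Default deny: only explicitly listed flows may be initiated.
    return (src_vlan, dst_vlan) in ALLOWED_INITIATIONS
```

This captures why a compromised node on the main VLAN cannot open connections toward the FC or device VLANs.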
-The Hypervisor and the Host OS provide network packet filters so untrusted VMs cannot generate spoofed traffic or receive traffic not addressed to them, direct traffic to protected infrastructure endpoints, or send/receive inappropriate broadcast traffic. By default, traffic is blocked when a VM is created, and then the FC agent configures the packet filter to add rules and exceptions to allow authorized traffic. More detailed information about network traffic isolation and separation of tenant traffic is provided in *[Networking isolation](#networking-isolation)* section.
+The Hypervisor and the Host OS provide network packet filters so untrusted VMs can't generate spoofed traffic or receive traffic not addressed to them, direct traffic to protected infrastructure endpoints, or send/receive inappropriate broadcast traffic. By default, traffic is blocked when a VM is created, and then the FC agent configures the packet filter to add rules and exceptions to allow authorized traffic. More detailed information about network traffic isolation and separation of tenant traffic is provided in *[Networking isolation](#networking-isolation)* section.
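The default-deny behavior described above (traffic blocked at VM creation, then explicit exceptions added by the FC agent) can be sketched as follows; class and method names are hypothetical, for illustration only.

```python
# Default-deny packet filter sketch (hypothetical names): a new VM starts
# with no allow rules, and the FC agent then adds explicit exceptions.

class PacketFilter:
    def __init__(self):
        self.rules = []  # no rules at VM creation: everything is blocked

    def add_allow_rule(self, predicate):
        # FC agent configures exceptions for authorized traffic.
        self.rules.append(predicate)

    def permits(self, packet):
        # A packet passes only if some explicit allow rule matches it.
        return any(rule(packet) for rule in self.rules)
```

A freshly created filter permits nothing; after an allow rule for, say, destination port 443 is added, only matching traffic passes, which is the spoofing and broadcast containment model the paragraph describes.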
### Management console and management plane

The Azure Management Console and Management Plane follow strict security architecture principles of least privilege to secure and isolate tenant processing:

-- **Management Console (MC)** – The MC in Azure Cloud is composed of the Azure portal GUI and the Azure Resource Manager API layers. They both utilize user credentials to authenticate and authorized all operations.
+- **Management Console (MC)** – The MC in Azure Cloud is composed of the Azure portal GUI and the Azure Resource Manager API layers. They both use user credentials to authenticate and authorize all operations.
- **Management Plane (MP)** – This layer performs the actual management actions and is composed of the Compute Resource Provider (CRP), Fabric Controller (FC), Fabric Agent (FA), and the underlying Hypervisor, which has its own Hypervisor Agent to service communication. These layers all use system contexts that are granted the least permissions needed to perform their operations.
-The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs constitute a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes - separate FCs exist to manage compute and storage clusters. If you update your applicationΓÇÖs configuration file while running in the MC, the MC communicates through CRP with the FC and the FC communicates with the FA.
+The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs constitute a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes ΓÇô separate FCs exist to manage compute and storage clusters. If you update your applicationΓÇÖs configuration file while running in the MC, the MC communicates through CRP with the FC and the FC communicates with the FA.
CRP is the front-end service for Azure Compute, exposing consistent compute APIs through Azure Resource Manager, thereby enabling you to create and manage virtual machine resources and extensions via simple templates.
-Communications among various components (for example, Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permissions sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent more actions. Separate communications channels ensure that communications cannot bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure cloud for Hypervisor interaction initiated by a user's [OAuth 2.0 authentication to Azure Active Directory](../active-directory/azuread-dev/v1-protocols-oauth-code.md).
+Communications among various components (for example, Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permissions sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent more actions. Separate communications channels ensure that communications can't bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure cloud for Hypervisor interaction initiated by a user's [OAuth 2.0 authentication to Azure Active Directory](../active-directory/azuread-dev/v1-protocols-oauth-code.md).
:::image type="content" source="./media/secure-isolation-fig6.png" alt-text="Management Console and Management Plane interaction for secure management flow" border="false":::

**Figure 6.** Management Console and Management Plane interaction for secure management flow
Commands generated through all steps of the process identified in this section a
Azure provides isolation of compute processing through a multi-layered approach, including:

- **Hypervisor isolation** for services that provide cryptographically certain isolation by using separate virtual machines and using Azure Hypervisor isolation. Examples: *App Service, Azure Container Instances, Azure Databricks, Azure Functions, Azure Kubernetes Service, Azure Machine Learning, Cloud Services, Data Factory, Service Fabric, Virtual Machines, Virtual Machine Scale Sets.*
- **Drawbridge isolation** inside a VM for services that provide cryptographically certain isolation to workloads running on the same virtual machine by using isolation provided by [Drawbridge](https://www.microsoft.com/research/project/drawbridge/). These services provide small units of processing using customer code. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (library OS) inside a *pico-process*. A pico-process is a secured process with no direct access to services or resources of the Host system. Examples: *Automation, Azure Database for MySQL, Azure Database for PostgreSQL, Azure SQL Database, Azure Stream Analytics.*
-- **User context-based isolation** for services that are composed solely of Microsoft-controlled code and customer code is not allowed to run. Examples: *API Management, Application Gateway, Azure Active Directory, Azure Backup, Azure Cache for Redis, Azure DNS, Azure Information Protection, Azure IoT Hub, Azure Key Vault, Azure portal, Azure Monitor (including Log Analytics), Microsoft Defender for Cloud, Azure Site Recovery, Container Registry, Content Delivery Network, Event Grid, Event Hubs, Load Balancer, Service Bus, Storage, Virtual Network, VPN Gateway, Traffic Manager.*
+- **User context-based isolation** for services that are composed solely of Microsoft-controlled code and customer code isn't allowed to run. Examples: *API Management, Application Gateway, Azure Active Directory, Azure Backup, Azure Cache for Redis, Azure DNS, Azure Information Protection, Azure IoT Hub, Azure Key Vault, Azure portal, Azure Monitor (including Log Analytics), Microsoft Defender for Cloud, Azure Site Recovery, Container Registry, Content Delivery Network, Event Grid, Event Hubs, Load Balancer, Service Bus, Storage, Virtual Network, VPN Gateway, Traffic Manager.*
These logical isolation options are discussed in the rest of this section.
Hypervisor isolation in Azure is based on [Microsoft Hyper-V](/windows-server/vi
The Target of Evaluation (TOE) was composed of Microsoft Windows Server, Microsoft Windows 10 version 1909 (November 2019 Update), and Microsoft Windows Server 2019 (version 1809) Hyper-V (“Windows”). TOE enforces the following security policies as described in the report:

-- **Security Audit** – Windows has the ability to collect audit data, review audit logs, protect audit logs from overflow, and restrict access to audit logs. Audit information generated by the system includes the date and time of the event, the user identity that caused the event to be generated, and other event-specific data. Authorized administrators can review, search, and sort audit records. Authorized administrators can also configure the audit system to include or exclude potentially auditable events to be audited based on a wide range of characteristics. In the context of this evaluation, the protection profile requirements cover generating audit events, authorized review of stored audit records, and providing secure storage for audit event entries.
+- **Security Audit** – Windows has the ability to collect audit data, review audit logs, protect audit logs from overflow, and restrict access to audit logs. Audit information generated by the system includes the date and time of the event, the user identity that caused the event to be generated, and other event-specific data. Authorized administrators can review, search, and sort audit records. Authorized administrators can also configure the audit system to include or exclude potentially auditable events to be audited based on many characteristics. In the context of this evaluation, the protection profile requirements cover generating audit events, authorized review of stored audit records, and providing secure storage for audit event entries.
- **Cryptographic Support** – Windows provides validated cryptographic functions that support encryption/decryption, cryptographic signatures, cryptographic hashing, and random number generation. Windows implements these functions in support of IPsec, TLS, and HTTPS protocol implementation. Windows also ensures that its Guest VMs have access to entropy data so that virtualized operating systems can ensure the implementation of strong cryptography.
-- **User Data Protection** – Windows makes certain computing services available to Guest VMs but implements measures to ensure that access to these services is granted on an appropriate basis and that these interfaces do not result in unauthorized data leakage between Guest VMs and Windows or between multiple Guest VMs.
+- **User Data Protection** – Windows makes certain computing services available to Guest VMs but implements measures to ensure that access to these services is granted on an appropriate basis and that these interfaces don't result in unauthorized data leakage between Guest VMs and Windows or between multiple Guest VMs.
- **Identification and Authentication** – Windows offers several methods of user authentication, which includes X.509 certificates needed for trusted protocols. Windows implements password strength mechanisms and ensures that excessive failed authentication attempts using methods subject to brute force guessing (password, PIN) results in lockout behavior.
-- **Security Management** - Windows includes several functions to manage security policies. Access to administrative functions is enforced through administrative roles. Windows also has the ability to support the separation of management and operational networks and to prohibit data sharing between Guest VMs.
-- **Protection of the TOE Security Functions (TSF)** – Windows implements various self-protection mechanisms to ensure that it cannot be used as a platform to gain unauthorized access to data stored on a Guest VM, that the integrity of both the TSF and its Guest VMs is maintained, and that Guest VMs are accessed solely through well-documented interfaces.
-- **TOE Access** - In the context of this evaluation, Windows allows an authorized administrator to configure the system to display a logon banner before the logon dialog.
-- **Trusted Path/Channels** - Windows implements IPsec, TLS, and HTTPS trusted channels and paths for the purpose of remote administration, transfer of audit data to the operational environment, and separation of management and operational networks.
+- **Security Management** – Windows includes several functions to manage security policies. Access to administrative functions is enforced through administrative roles. Windows also has the ability to support the separation of management and operational networks and to prohibit data sharing between Guest VMs.
+- **Protection of the TOE Security Functions (TSF)** – Windows implements various self-protection mechanisms to ensure that it can't be used as a platform to gain unauthorized access to data stored on a Guest VM, that the integrity of both the TSF and its Guest VMs is maintained, and that Guest VMs are accessed solely through well-documented interfaces.
+- **TOE Access** – In the context of this evaluation, Windows allows an authorized administrator to configure the system to display a logon banner before the logon dialog.
+- **Trusted Path/Channels** – Windows implements IPsec, TLS, and HTTPS trusted channels and paths for the purpose of remote administration, transfer of audit data to the operational environment, and separation of management and operational networks.
More information is available from the [third-party certification report](https://www.niap-ccevs.org/MMO/Product/st_vid11087-vr.pdf).
The Azure Hypervisor acts like a micro-kernel, passing all hardware access reque
- **Emulated devices** – The Host OS may expose a virtual device with an interface identical to what would be provided by a corresponding physical device. In this case, an operating system in a Guest partition would use the same device drivers as it does when running on a physical system. The Host OS would emulate the behavior of a physical device to the Guest partition.
- **Para-virtualized devices** – The Host OS may expose virtual devices with a virtualization-specific interface using the VMBus shared memory interface between the Host OS and the Guest. In this model, the Guest partition uses device drivers specifically designed to implement a virtualized interface. These para-virtualized devices are sometimes referred to as “synthetic” devices.
-- **Hardware-accelerated devices** – The Host OS may expose actual hardware peripherals directly to the Guest partition. This model allows for high I/O performance in a Guest partition, as the Guest partition can directly access hardware device resources without going through the Host OS. [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is an example of a hardware accelerated device. Isolation in this model is achieved using input-output memory management units (I/O MMUs) to provide address space and interrupt isolation between each partition.
+- **Hardware-accelerated devices** – The Host OS may expose actual hardware peripherals directly to the Guest partition. This model allows for high I/O performance in a Guest partition, as the Guest partition can directly access hardware device resources without going through the Host OS. [Azure Accelerated Networking](../virtual-network/accelerated-networking-overview.md) is an example of a hardware accelerated device. Isolation in this model is achieved using input-output memory management units (I/O MMUs) to provide address space and interrupt isolation between each partition.
Virtualization extensions in the Host CPU enable the Azure Hypervisor to enforce isolation between partitions. The following fundamental CPU capabilities provide the hardware building blocks for Hypervisor isolation:
Virtualization extensions in the Host CPU enable the Azure Hypervisor to enforce
- I/O devices that are being accessed directly by Guest partitions.
- **CPU context** – the Hypervisor uses virtualization extensions in the CPU to restrict privileges and CPU context that can be accessed while a Guest partition is running. The Hypervisor also uses these facilities to save and restore state when sharing CPUs between multiple partitions to ensure isolation of CPU state between the partitions.
-The Azure Hypervisor makes extensive use of these processor facilities to provide isolation between partitions. The emergence of speculative side channel attacks has identified potential weaknesses in some of these processor isolation capabilities. In a multi-tenant architecture, any cross-VM attack across different tenants involves two steps: placing an adversary-controlled VM on the same Host as one of the victim VMs, and then breaching the logical isolation boundary to perform a side-channel attack. Azure provides protection from both threat vectors by using an advanced VM placement algorithm enforcing memory and process separation for logical isolation, and secure network traffic routing with cryptographic certainty at the Hypervisor. As discussed in section titled *[Exploitation of vulnerabilities in virtualization technologies](#exploitation-of-vulnerabilities-in-virtualization-technologies)* later in the article, the Azure Hypervisor has been architected to provide robust isolation within the hypervisor itself that helps mitigate a wide range of sophisticated side channel attacks.
+The Azure Hypervisor makes extensive use of these processor facilities to provide isolation between partitions. The emergence of speculative side channel attacks has identified potential weaknesses in some of these processor isolation capabilities. In a multi-tenant architecture, any cross-VM attack across different tenants involves two steps: placing an adversary-controlled VM on the same Host as one of the victim VMs, and then breaching the logical isolation boundary to perform a side-channel attack. Azure provides protection from both threat vectors by using an advanced VM placement algorithm enforcing memory and process separation for logical isolation, and secure network traffic routing with cryptographic certainty at the Hypervisor. As discussed in the section titled *[Exploitation of vulnerabilities in virtualization technologies](#exploitation-of-vulnerabilities-in-virtualization-technologies)* later in the article, the Azure Hypervisor has been architected to provide robust isolation directly within the hypervisor that helps mitigate many sophisticated side channel attacks.
The Azure Hypervisor defined security boundaries provide the base level isolation primitives for strong segmentation of code, data, and resources between potentially hostile multi-tenants on shared hardware. These isolation primitives are used to create multi-tenant resource isolation scenarios including:
The Azure Hypervisor meets the security objectives shown in Table 2.
|||
|**Isolation**|The Azure Hypervisor security policy mandates no information transfer between VMs. This policy requires capabilities in the Virtual Machine Manager (VMM) and hardware for the isolation of memory, devices, networking, and managed resources such as persisted data.|
|**VMM integrity**|Integrity is a core security objective for virtualization systems. To achieve system integrity, the integrity of each Hypervisor component is established and maintained. This objective concerns only the integrity of the Hypervisor itself, not the integrity of the physical platform or software running inside VMs.|
-|**Platform integrity**|The integrity of the Hypervisor depends on the integrity of the hardware and software on which it relies. Although the Hypervisor does not have direct control over the integrity of the platform, Azure relies on hardware and firmware mechanisms such as the [Cerberus](https://azure.microsoft.com/blog/microsoft-creates-industry-standards-for-datacenter-hardware-storage-and-security/) security microcontroller to [protect the underlying platform integrity](https://www.youtube.com/watch?v=oUvKEw8OchI), thereby preventing the VMM and Guests from running should platform integrity be compromised.|
+|**Platform integrity**|The integrity of the Hypervisor depends on the integrity of the hardware and software on which it relies. Although the Hypervisor doesn't have direct control over the integrity of the platform, Azure relies on hardware and firmware mechanisms such as the [Cerberus](https://azure.microsoft.com/blog/microsoft-creates-industry-standards-for-datacenter-hardware-storage-and-security/) security microcontroller to [protect the underlying platform integrity](https://www.youtube.com/watch?v=oUvKEw8OchI), thereby preventing the VMM and Guests from running should platform integrity be compromised.|
|**Management access**|Management functions are exercised only by authorized administrators, connected over secure connections with a principle of least privilege enforced by a fine-grained role access control mechanism.|
|**Audit**|Azure provides audit capability to capture and protect system data so that it can later be inspected.|
Listed below are some key design principles adopted by Microsoft to secure Hyper
- All Hyper-V code is code reviewed and fuzzed. For more information on fuzzing, see the *[Security assurance processes and practices](#security-assurance-processes-and-practices)* section later in this article.
- Make exploitation of remaining vulnerabilities more difficult
  - The VM worker process has the following mitigations applied:
- - [Arbitrary Code Guard](https://blogs.windows.com/msedgedev/2017/02/23/mitigating-arbitrary-native-code-execution/) – Dynamically generated code cannot be loaded in the VM Worker process.
+ - [Arbitrary Code Guard](https://blogs.windows.com/msedgedev/2017/02/23/mitigating-arbitrary-native-code-execution/) – Dynamically generated code can't be loaded in the VM Worker process.
- [Code Integrity Guard](https://blogs.windows.com/msedgedev/2017/02/23/mitigating-arbitrary-native-code-execution/) – Only Microsoft signed code can be loaded in the VM Worker Process.
- [Control Flow Guard (CFG)](/windows/win32/secbp/control-flow-guard) – Provides coarse-grained control flow protection to indirect calls and jumps.
- - NoChildProcess – The worker process cannot create child processes (useful for bypassing CFG).
- - NoLowImages / NoRemoteImages – The worker process cannot load DLL's over the network or DLL's that were written to disk by a sandboxed process.
- - NoWin32k – The worker process cannot communicate with Win32k, which makes sandbox escapes more difficult.
+ - NoChildProcess – The worker process can't create child processes (useful for bypassing CFG).
+ - NoLowImages / NoRemoteImages – The worker process can't load DLLs over the network or DLLs that were written to disk by a sandboxed process.
+ - NoWin32k – The worker process can't communicate with Win32k, which makes sandbox escapes more difficult.
- Heap randomization – Windows ships with one of the most secure heap implementations of any operating system.
- [Address Space Layout Randomization (ASLR)](https://en.wikipedia.org/wiki/Address_space_layout_randomization) – Randomizes the layout of heaps, stacks, binaries, and other data structures in the address space to make exploitation less reliable.
- [Data Execution Prevention (DEP/NX)](/windows/win32/win7appqual/dep-nx-protection) – Only pages of memory intended to contain code are executable.
Microsoft investments in Hyper-V security benefit Azure Hypervisor directly. The
|Mitigation|Security Impact|Mitigation Details|
|-|-|-|
-|**Control flow integrity**|Increases cost to perform control flow integrity attacks (for example, return-oriented—programming exploits)|[Control Flow Guard](https://www.blackhat.com/docs/us-16/materials/us-16-Weston-Windows-10-Mitigation-Improvements.pdf) (CFG) ensures indirect control flow transfers are instrumented at compile time and enforced by the kernel (user-mode) or secure kernel (kernel-mode), mitigating stack return vulnerabilities.|
+|**Control flow integrity**|Increases cost to perform control flow integrity attacks (for example, return oriented programming exploits)|[Control Flow Guard](https://www.blackhat.com/docs/us-16/materials/us-16-Weston-Windows-10-Mitigation-Improvements.pdf) (CFG) ensures indirect control flow transfers are instrumented at compile time and enforced by the kernel (user-mode) or secure kernel (kernel-mode), mitigating stack return vulnerabilities.|
|**User-mode code integrity**|Protects against malicious and unwanted binary execution in user mode|Address Space Layout Randomization (ASLR) forced on all binaries in host partition, all code compiled with SDL security checks (for example, `strict_gs`), [arbitrary code generation restrictions](https://blogs.windows.com/msedgedev/2017/02/23/mitigating-arbitrary-native-code-execution/) in place on host processes prevent injection of runtime-generated code.|
|**Hypervisor enforced user and kernel mode code integrity**|No code loaded into code pages marked for execution until authenticity of code is verified|[Virtualization-based Security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) uses memory isolation to create a secure world to enforce policy and store sensitive code and secrets. With Hypervisor enforced Code Integrity (HVCI), the secure world is used to prevent unsigned code from being injected into the normal world kernel.|
|**Hardware root-of-trust with platform secure boot**|Ensures host only boots exact firmware and OS image required|Windows [secure boot](/windows-hardware/design/device-experiences/oem-secure-boot) validates that Azure Hypervisor infrastructure is only bootable in a known good configuration, aligned to Azure firmware, hardware, and kernel production versions.|
The ABI is implemented within two components:
- The Platform Adaptation Layer (PAL) runs as part of the pico-process.
- The host implementation runs as part of the Host.
-Pico-processes are grouped into isolation units called *sandboxes*. The sandbox defines the applications, file system, and external resources available to the pico-processes. When a process running inside a pico-process creates a new child process, it is run with its own Library OS in a separate pico-process inside the same sandbox. Each sandbox communicates to the Security Monitor and is not able to communicate with other sandboxes except via allowed I/O channels (sockets, named pipes etc.), which need to be explicitly allowed by the configuration given the default opt-in approach depending on service needs. The outcome is that code running inside a pico-process can only access its own resources and cannot directly attack the Host system or any colocated sandboxes. It is only able to affect objects inside its own sandbox.
+Pico-processes are grouped into isolation units called *sandboxes*. The sandbox defines the applications, file system, and external resources available to the pico-processes. When a process running inside a pico-process creates a new child process, it is run with its own Library OS in a separate pico-process inside the same sandbox. Each sandbox communicates to the Security Monitor and isn't able to communicate with other sandboxes except via allowed I/O channels (sockets, named pipes, and so on), which need to be explicitly allowed by the configuration given the default opt-in approach depending on service needs. The outcome is that code running inside a pico-process can only access its own resources and can't directly attack the Host system or any colocated sandboxes. It is only able to affect objects inside its own sandbox.
-When the pico-process needs system resources, it must call into the Drawbridge host to request them. The normal path for a virtual user process would be to call the Library OS to request resources and the Library OS would then call into the ABI. Unless the policy for resource allocation is set up in the driver itself, the Security Monitor would handle the ABI request by checking policy to see if the request is allowed and then servicing the request. This mechanism is used for all system primitives therefore ensuring that the code running in the pico-process cannot abuse the resources from the Host machine.
+When the pico-process needs system resources, it must call into the Drawbridge host to request them. The normal path for a virtual user process would be to call the Library OS to request resources and the Library OS would then call into the ABI. Unless the policy for resource allocation is set up in the driver itself, the Security Monitor would handle the ABI request by checking policy to see if the request is allowed and then servicing the request. This mechanism is used for all system primitives therefore ensuring that the code running in the pico-process can't abuse the resources from the Host machine.
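As a rough illustration of this mediation pattern (not Drawbridge's actual interfaces; all names below are hypothetical), every resource request from a sandbox passes through a monitor that consults policy before servicing it:

```python
# Illustrative sketch of policy-mediated resource requests: code in a
# sandbox can only obtain host resources through a monitor that checks
# policy first. Names are hypothetical, not Drawbridge's real interfaces.
class PolicyError(PermissionError):
    pass

class SecurityMonitor:
    def __init__(self, policy: dict[str, set[str]]):
        # policy maps a sandbox id to the set of resources it may request
        self.policy = policy

    def request(self, sandbox_id: str, resource: str) -> str:
        # Check policy before servicing the request; deny by default.
        if resource not in self.policy.get(sandbox_id, set()):
            raise PolicyError(f"{sandbox_id} may not access {resource}")
        return f"handle:{resource}"   # service the request

monitor = SecurityMonitor({"sandbox-1": {"socket", "tmpfile"}})
assert monitor.request("sandbox-1", "socket") == "handle:socket"

denied = False
try:
    monitor.request("sandbox-1", "/host/secrets")   # not in policy
except PolicyError:
    denied = True
assert denied
```

Because every system primitive flows through the same checkpoint, nothing the sandboxed code does can reach host resources that policy has not explicitly granted.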
In addition to being isolated inside sandboxes, pico-processes are also substantially isolated from each other. Each pico-process resides in its own virtual memory address space and runs its own copy of the Library OS with its own user-mode kernel. Each time a user process is launched in a Drawbridge sandbox, a fresh Library OS instance is booted. While this is more time-consuming than launching a non-isolated process on Windows, it is substantially faster than booting a VM while still accomplishing logical isolation.
A normal Windows process can call more than 1200 functions that result in access
Like a virtual machine, the pico-process is much easier to secure than a traditional OS interface because it is significantly smaller, stateless, and has fixed and easily described semantics. Another added benefit of the small ABI / driver syscall interface is the ability to audit / fuzz the driver code with little effort. For example, syscall fuzzers can fuzz the ABI with high coverage numbers in a relatively short amount of time.

#### User context-based isolation
-In cases where an Azure service is composed of Microsoft-controlled code and customer code is not allowed to run, the isolation is provided by a user context. These services accept only user configuration inputs and data for processing – arbitrary code is not allowed. For these services, a user context is provided to establish the data that can be accessed and what Azure role-based access control (Azure RBAC) operations are allowed. This context is established by Azure Active Directory (Azure AD) as described earlier in *[Identity-based isolation](#identity-based-isolation)* section. Once the user has been identified and authorized, the Azure service creates an application user context that is attached to the request as it moves through execution, providing assurance that user operations are separated and properly isolated.
+In cases where an Azure service is composed of Microsoft-controlled code and customer code isn't allowed to run, the isolation is provided by a user context. These services accept only user configuration inputs and data for processing – arbitrary code isn't allowed. For these services, a user context is provided to establish the data that can be accessed and what Azure role-based access control (Azure RBAC) operations are allowed. This context is established by Azure Active Directory (Azure AD) as described earlier in the *[Identity-based isolation](#identity-based-isolation)* section. Once the user has been identified and authorized, the Azure service creates an application user context that is attached to the request as it moves through execution, providing assurance that user operations are separated and properly isolated.
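A minimal sketch of this pattern, with hypothetical names rather than any actual Azure service code, shows a user context carried with each request and checked against the operations granted to that user:

```python
# Illustrative sketch of user context-based isolation: a user context is
# attached to each request and checked against allowed RBAC operations.
# All names are hypothetical; this is not actual Azure service code.
from dataclasses import dataclass

@dataclass(frozen=True)
class UserContext:
    user_id: str
    allowed_ops: frozenset[str]   # operations granted (e.g., via RBAC)

def execute(ctx: UserContext, operation: str, data: dict) -> dict:
    # The context travels with the request through execution.
    if operation not in ctx.allowed_ops:
        raise PermissionError(f"{ctx.user_id} not authorized for {operation}")
    # Results are stamped with the owning identity, keeping operations separated.
    return {"operation": operation, "owner": ctx.user_id, "data": data}

reader = UserContext("alice", frozenset({"read"}))
assert execute(reader, "read", {"k": 1})["owner"] == "alice"

rejected = False
try:
    execute(reader, "write", {"k": 2})   # not granted to this context
except PermissionError:
    rejected = True
assert rejected
```

The design point is that authorization is decided once per request from the attached context, so no code path can act outside the identity that initiated it.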
### Physical isolation

In addition to robust logical compute isolation available by design to all Azure tenants, if you desire physical compute isolation you can use Azure Dedicated Host or Isolated Virtual Machines, which are both dedicated to a single customer.
In addition to robust logical compute isolation available by design to all Azure
> [!NOTE]
> You can deploy a dedicated host using the **[portal, Azure PowerShell, and the Azure CLI](../virtual-machines/dedicated-hosts-how-to.md)**.
-You can deploy both Windows and Linux virtual machines into dedicated hosts by selecting the server and CPU type, number of cores, and extra features. Dedicated Host enables control over platform maintenance events by allowing you to opt in to a maintenance window to reduce potential impact to your provisioned services. Most maintenance events have little to no impact on your VMs; however, if you are in a highly regulated industry or with a sensitive workload, you may want to have control over any potential maintenance impact.
+You can deploy both Windows and Linux virtual machines into dedicated hosts by selecting the server and CPU type, number of cores, and extra features. Dedicated Host enables control over platform maintenance events by allowing you to opt in to a maintenance window to reduce potential impact to your provisioned services. Most maintenance events have little to no impact on your VMs; however, if you're in a highly regulated industry or with a sensitive workload, you may want to have control over any potential maintenance impact.
> [!NOTE]
> Microsoft provides detailed customer guidance on **[Windows](../virtual-machines/windows/quick-create-portal.md)** and **[Linux](../virtual-machines/linux/quick-create-portal.md)** Azure Virtual Machine provisioning using the Azure portal, Azure PowerShell, and Azure CLI.
Table 5 summarizes available security guidance for customer virtual machines pro
Azure Compute offers virtual machine sizes that are [isolated to a specific hardware type](../virtual-machines/isolation.md) and dedicated to a single customer. These VM instances allow your workloads to be deployed on dedicated physical servers. Using Isolated VMs essentially guarantees that your VM will be the only one running on that specific server node. You can also choose to further subdivide the resources on these Isolated VMs by using [Azure support for nested Virtual Machines](https://azure.microsoft.com/blog/nested-virtualization-in-azure/).

## Networking isolation
-The logical isolation of tenant infrastructure in a public multi-tenant cloud is [fundamental to maintaining security](https://azure.microsoft.com/resources/azure-network-security/). The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet cannot communicate directly with VMs in a different VNet even if both VNets are created by the same customer. [Networking isolation](../security/fundamentals/isolation-choices.md#networking-isolation) ensures that communication between your VMs remains private within a VNet. You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements.
+The logical isolation of tenant infrastructure in a public multi-tenant cloud is [fundamental to maintaining security](https://azure.microsoft.com/resources/azure-network-security/). The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet can't communicate directly with VMs in a different VNet even if both VNets are created by the same customer. [Networking isolation](../security/fundamentals/isolation-choices.md#networking-isolation) ensures that communication between your VMs remains private within a VNet. You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements.
This section describes how Azure provides isolation of network traffic among tenants and enforces that isolation with cryptographic certainty.

### Separation of tenant network traffic

Virtual networks (VNets) provide isolation of network traffic between tenants as part of their fundamental design. Your Azure subscription can contain multiple logically isolated private networks, and include firewall, load balancing, and network address translation. Each VNet is isolated from other VNets by default. Multiple deployments inside your subscription can be placed on the same VNet, and then communicate with each other through private IP addresses.
-Network access to VMs is limited by packet filtering at the network edge, at load balancers, and at the Host OS level. You can additionally configure your host firewalls to further limit connectivity, specifying for each listening port whether connections are accepted from the Internet or only from role instances within the same cloud service or VNet.
+Network access to VMs is limited by packet filtering at the network edge, at load balancers, and at the Host OS level. Moreover, you can configure your host firewalls to further limit connectivity, specifying for each listening port whether connections are accepted from the Internet or only from role instances within the same cloud service or VNet.
Azure provides network isolation for each deployment and enforces the following rules:

- Traffic between VMs always traverses through trusted packet filters.
- Protocols such as Address Resolution Protocol (ARP), Dynamic Host Configuration Protocol (DHCP), and other OSI Layer-2 traffic from a VM are controlled using rate-limiting and anti-spoofing protection.
- - VMs cannot capture any traffic on the network that is not intended for them.
-- Your VMs cannot send traffic to Azure private interfaces and infrastructure services, or to VMs belonging to other customers. Your VMs can only communicate with other VMs owned or controlled by you and with Azure infrastructure service endpoints meant for public communications.
+ - VMs can't capture any traffic on the network that isn't intended for them.
+- Your VMs can't send traffic to Azure private interfaces and infrastructure services, or to VMs belonging to other customers. Your VMs can only communicate with other VMs owned or controlled by you and with Azure infrastructure service endpoints meant for public communications.
- When you put a VM on a VNet, that VM gets its own address space that is invisible, and hence, not reachable from VMs outside of a deployment or VNet (unless configured to be visible via public IP addresses). Your environment is open only through the ports that you specify for public access; if the VM is defined to have a public IP address, then all ports are open for public access.

#### Packet flow and network path protection

Azure's hyperscale network is designed to provide uniform high capacity between servers, performance isolation between services (including customers), and Ethernet Layer-2 semantics. Azure uses several networking implementations to achieve these goals: a) flat addressing to allow service instances to be placed anywhere in the network; b) load balancing to spread traffic uniformly across network paths; and c) end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane.
-These implementations give each service the illusion that all the servers assigned to it, and only those servers, are connected by a single non-interfering Ethernet switch – a Virtual Layer 2 (VL2) – and maintain this illusion even as the size of each service varies from one server to hundreds of thousands. This VL2 implementation achieves traffic performance isolation, ensuring that it is not possible for the traffic of one service to be affected by the traffic of any other service, as if each service were connected by a separate physical switch.
+These implementations give each service the illusion that all the servers assigned to it, and only those servers, are connected by a single non-interfering Ethernet switch – a Virtual Layer 2 (VL2) – and maintain this illusion even as the size of each service varies from one server to hundreds of thousands. This VL2 implementation achieves traffic performance isolation, ensuring that it isn't possible for the traffic of one service to be affected by the traffic of any other service, as if each service were connected by a separate physical switch.
This section explains how packets flow through the Azure network, and how the topology, routing design, and directory system combine to virtualize the underlying network fabric - creating the illusion that servers are connected to a large, non-interfering datacenter-wide Layer-2 switch. The Azure network uses [two different IP-address families](/windows-server/networking/sdn/technologies/hyper-v-network-virtualization/hyperv-network-virtualization-technical-details-windows-server#packet-encapsulation):

- **Customer address (CA)** is the customer defined/chosen VNet IP address, also referred to as Virtual IP (VIP). The network infrastructure operates using CAs, which are externally routable. All switches and interfaces are assigned CAs, and switches run an IP-based (Layer-3) link-state routing protocol that disseminates only these CAs. This design allows switches to obtain the complete switch-level topology, and forward packets encapsulated with CAs along shortest paths.
-- **Provider address (PA)** is the Azure assigned internal fabric address that is not visible to users and is also referred to as Dynamic IP (DIP). No traffic goes directly from the Internet to a server; all traffic from the Internet must go through a Software Load Balancer (SLB) and be encapsulated to protect the internal Azure address space by only routing packets to valid Azure internal IP addresses and ports. Network Address Translation (NAT) separates internal network traffic from external traffic. Internal traffic uses [RFC 1918](https://datatracker.ietf.org/doc/rfc1918/) address space or private address space – the provider addresses (PAs) – that is not externally routable. The translation is performed at the SLBs. Customer addresses (CAs) that are externally routable are translated into internal provider addresses (PAs) that are only routable within Azure. These addresses remain unaltered no matter how their servers' locations change due to virtual-machine migration or reprovisioning.
+- **Provider address (PA)** is the Azure assigned internal fabric address that isn't visible to users and is also referred to as Dynamic IP (DIP). No traffic goes directly from the Internet to a server; all traffic from the Internet must go through a Software Load Balancer (SLB) and be encapsulated to protect the internal Azure address space by only routing packets to valid Azure internal IP addresses and ports. Network Address Translation (NAT) separates internal network traffic from external traffic. Internal traffic uses [RFC 1918](https://datatracker.ietf.org/doc/rfc1918/) address space or private address space – the provider addresses (PAs) – that isn't externally routable. The translation is performed at the SLBs. Customer addresses (CAs) that are externally routable are translated into internal provider addresses (PAs) that are only routable within Azure. These addresses remain unaltered no matter how their servers' locations change due to virtual-machine migration or reprovisioning.
Each PA is associated with a CA, which is the identifier of the Top of Rack (ToR) switch to which the server is connected. VL2 uses a scalable, reliable directory system to store and maintain the mapping of PAs to CAs, and this mapping is created when servers are provisioned to a service and assigned PA addresses. An agent running in the network stack on every server, called the VL2 agent, invokes the directory system's resolution service to learn the actual location of the destination and then tunnels the original packet there.
Figure 9 depicts a sample packet flow where sender S sends packets to destinatio
:::image type="content" source="./media/secure-isolation-fig9.png" alt-text="Sample packet flow":::

**Figure 9.** Sample packet flow
-A server cannot send packets to a PA if the directory service refuses to provide it with a CA through which it can route its packets, which means that the directory service enforces access control policies. Further, since the directory system knows which server is making the request when handling a lookup, it can **enforce fine-grained isolation policies**. For example, it can enforce a policy that only servers belonging to the same service can communicate with each other.
+A server can't send packets to a PA if the directory service refuses to provide it with a CA through which it can route its packets, which means that the directory service enforces access control policies. Further, since the directory system knows which server is making the request when handling a lookup, it can **enforce fine-grained isolation policies**. For example, it can enforce a policy that only servers belonging to the same service can communicate with each other.
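The directory-based access control described above can be illustrated with a minimal sketch. Everything in it (the class, the addresses, the ToR and service names) is hypothetical, not the actual VL2 implementation; it only shows why a refused lookup makes cross-service traffic unroutable:

```python
# Toy VL2-style directory service: it maps internal fabric addresses (PAs)
# to ToR locator addresses (CAs) and refuses lookups across service
# boundaries, so a server never learns a route to a PA it isn't allowed
# to reach. All mappings below are made-up illustrative values.

class DirectoryService:
    def __init__(self):
        self.pa_to_ca = {}       # PA -> CA (ToR switch locator)
        self.pa_to_service = {}  # PA -> owning service

    def register(self, pa, ca, service):
        self.pa_to_ca[pa] = ca
        self.pa_to_service[pa] = service

    def resolve(self, requester_pa, dest_pa):
        # Policy: only servers belonging to the same service may communicate.
        if self.pa_to_service.get(requester_pa) != self.pa_to_service.get(dest_pa):
            return None  # refusal: no CA means the packet can't be tunneled
        return self.pa_to_ca.get(dest_pa)

ds = DirectoryService()
ds.register("10.0.0.4", "tor-12", service="tenant-a")
ds.register("10.0.0.5", "tor-17", service="tenant-a")
ds.register("10.0.1.9", "tor-12", service="tenant-b")

assert ds.resolve("10.0.0.4", "10.0.0.5") == "tor-17"  # same service: CA returned
assert ds.resolve("10.0.0.4", "10.0.1.9") is None      # cross-service: refused
```

Because the directory system sees who is asking, the policy check happens at resolution time, before any packet is ever encapsulated.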
#### Traffic flow patterns

To route traffic between servers, which use PA addresses, on an underlying network that knows routes for CA addresses, the VL2 agent on each server captures packets from the host, and encapsulates them with the CA address of the ToR switch of the destination. Once the packet arrives at the CA (that is, the destination ToR switch), the destination ToR switch decapsulates the packet and delivers it to the destination PA carried in the inner header. The packet is first delivered to one of the Intermediate switches, decapsulated by the switch, delivered to the ToR's CA, decapsulated again, and finally sent to the destination. This approach is depicted in Figure 10 using two possible traffic patterns: 1) external traffic (orange line) traversing over Azure ExpressRoute or the Internet to a VNet, and 2) internal traffic (blue line) between two VNets. Both traffic flows follow a similar pattern to isolate and protect network traffic.
At the Internet Edge Router or the MSEE Router, the packet is encapsulated using
Azure VNets implement several mechanisms to ensure secure traffic between tenants. These mechanisms align to existing industry standards and security practices, and prevent well-known attack vectors including:

- **Prevent IP address spoofing** – Whenever encapsulated traffic is transmitted by a VNet, the service reverifies the information on the receiving end of the transmission. The traffic is looked up and encapsulated independently at the start of the transmission, and reverified at the receiving endpoint to ensure the transmission was performed appropriately. This verification is done with an internal VNet feature called SpoofGuard, which verifies that the source and destination are valid and allowed to communicate, thereby preventing mismatches in expected encapsulation patterns that might otherwise permit spoofing. The GRE encapsulation processes prevent spoofing as any GRE encapsulation and encryption not done by the Azure network fabric is treated as dropped traffic.
-- **Provide network segmentation across customers with overlapping network spaces** – Azure VNet's implementation relies on established tunneling standards such as the GRE, which in turn allows the use of customer-specific unique identifiers (VNet IDs) throughout the cloud. The VNet identifiers are used as scoping identifiers. This approach ensures that you are always operating within your unique address space, overlapping address spaces between tenants, and the Azure network fabric. Anything that has not been encapsulated with a valid VNet ID is blocked within the Azure network fabric. In the example described above, any encapsulated traffic not performed by the Azure network fabric is discarded.
-- **Prevent traffic from crossing between VNets** – Preventing traffic from crossing between VNets is done through the same mechanisms that handle address overlap and prevent spoofing.
-Traffic crossing between VNets is rendered infeasible by using unique VNet IDs established per tenant in combination with verification of all traffic at the source and destination. Users do not have access to the underlying transmission mechanisms that rely on these IDs to perform the encapsulation. Therefore, any attempt to encapsulate and simulate these mechanisms would lead to dropped traffic.
+- **Provide network segmentation across customers with overlapping network spaces** – Azure VNet's implementation relies on established tunneling standards such as the GRE, which in turn allows the use of customer-specific unique identifiers (VNet IDs) throughout the cloud. The VNet identifiers are used as scoping identifiers. This approach ensures that you're always operating within your unique address space, overlapping address spaces between tenants, and the Azure network fabric. Anything that hasn't been encapsulated with a valid VNet ID is blocked within the Azure network fabric. In the example described above, any encapsulated traffic not performed by the Azure network fabric is discarded.
+- **Prevent traffic from crossing between VNets** – Preventing traffic from crossing between VNets is done through the same mechanisms that handle address overlap and prevent spoofing. Traffic crossing between VNets is rendered infeasible by using unique VNet IDs established per tenant in combination with verification of all traffic at the source and destination. Users don't have access to the underlying transmission mechanisms that rely on these IDs to perform the encapsulation. Therefore, any attempt to encapsulate and simulate these mechanisms would lead to dropped traffic.
In addition to these key protections, all unexpected traffic originating from the Internet is dropped by default. Any packet entering the Azure network will first encounter an Edge router. Edge routers intentionally allow all inbound traffic into the Azure network except spoofed traffic. This basic traffic filtering protects the Azure network from known bad malicious traffic. Azure also implements DDoS protection at the network layer, collecting logs to throttle or block traffic based on real time and historical data analysis, and mitigates attacks on demand. Moreover, the Azure network fabric blocks traffic from any IPs originating in the Azure network fabric space that are spoofed. The Azure network fabric uses GRE and Virtual Extensible LAN (VXLAN) to validate that all allowed traffic is Azure-controlled traffic and all non-Azure GRE traffic is blocked. By using GRE tunnels and VXLAN to segment traffic using customer unique keys, Azure meets [RFC 3809](https://datatracker.ietf.org/doc/rfc3809/) and [RFC 4110](https://datatracker.ietf.org/doc/rfc4110/). When using Azure VPN Gateway in combination with ExpressRoute, Azure meets [RFC 4111](https://datatracker.ietf.org/doc/rfc4111/) and [RFC 4364](https://datatracker.ietf.org/doc/rfc4364/).

With a comprehensive approach for isolation encompassing external and internal network traffic, Azure VNets provide you with assurance that Azure successfully routes traffic between VNets, allows proper network segmentation for tenants with overlapping address spaces, and prevents IP address spoofing.
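The VNet ID scoping check described above can be approximated with a short sketch. The field names, IDs, and addresses below are hypothetical, not the actual fabric code; the point is only that traffic is delivered solely when its encapsulation carries a valid VNet ID matching both endpoints:

```python
# Hypothetical fabric-side check: a packet is delivered only if it carries
# a fabric-issued VNet ID that matches the VNet registered for both its
# source and destination; unencapsulated, spoofed, or cross-VNet traffic
# is dropped. All values are made up for illustration.

VALID_VNET_IDS = {"vnet-1001", "vnet-2002"}               # IDs the fabric issued
ENDPOINT_VNET = {"10.1.0.4": "vnet-1001", "10.1.0.5": "vnet-1001",
                 "10.2.0.7": "vnet-2002"}

def fabric_forward(packet):
    vnet_id = packet.get("vnet_id")                        # from the encapsulation header
    if vnet_id not in VALID_VNET_IDS:
        return "drop"                                      # not fabric-encapsulated
    if ENDPOINT_VNET.get(packet["src"]) != vnet_id:
        return "drop"                                      # spoofed source
    if ENDPOINT_VNET.get(packet["dst"]) != vnet_id:
        return "drop"                                      # would cross a VNet boundary
    return "deliver"

assert fabric_forward({"vnet_id": "vnet-1001", "src": "10.1.0.4", "dst": "10.1.0.5"}) == "deliver"
assert fabric_forward({"vnet_id": "vnet-1001", "src": "10.1.0.4", "dst": "10.2.0.7"}) == "drop"
assert fabric_forward({"vnet_id": "bogus", "src": "10.1.0.4", "dst": "10.1.0.5"}) == "drop"
```

Because tenants never see the encapsulation layer, they can't forge a valid VNet ID, so any simulated encapsulation simply fails the first check.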
-You are also able to use Azure services to further isolate and protect your resources. Using [network security groups](../virtual-network/manage-network-security-group.md) (NSGs), a feature of Azure Virtual Network, you can filter traffic by source and destination IP address, port, and protocol via multiple inbound and outbound security rules – essentially acting as a distributed virtual firewall and IP-based network access control list (ACL). You can apply an NSG to each NIC in a virtual machine, apply an NSG to the subnet that a NIC or another Azure resource is connected to, and directly to virtual machine scale set, allowing finer control over your infrastructure.
+You're also able to use Azure services to further isolate and protect your resources. Using [network security groups](../virtual-network/manage-network-security-group.md) (NSGs), a feature of Azure Virtual Network, you can filter traffic by source and destination IP address, port, and protocol via multiple inbound and outbound security rules – essentially acting as a distributed virtual firewall and IP-based network access control list (ACL). You can apply an NSG to each NIC in a virtual machine, apply an NSG to the subnet that a NIC or another Azure resource is connected to, and directly to a virtual machine scale set, allowing finer control over your infrastructure.
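NSG filtering can be approximated as follows: rules are evaluated in priority order (lower number first), the first matching rule decides, and unmatched traffic hits an implicit deny backstop. This is a simplified sketch with made-up rule values; real NSGs match CIDR ranges and service tags rather than exact strings:

```python
# Simplified NSG-style evaluation: lowest priority number wins, the first
# match decides, and anything unmatched falls through to deny (analogous
# to the built-in DenyAllInbound default rule). Rule values are
# illustrative only; matching is exact-string for brevity.

rules = [
    {"priority": 100, "protocol": "Tcp", "dest_port": 443, "source": "Internet", "access": "Allow"},
    {"priority": 200, "protocol": "Tcp", "dest_port": 22, "source": "10.0.0.0/24", "access": "Allow"},
    {"priority": 300, "protocol": "*", "dest_port": "*", "source": "Internet", "access": "Deny"},
]

def evaluate(protocol, dest_port, source):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if (rule["protocol"] in (protocol, "*")
                and rule["dest_port"] in (dest_port, "*")
                and rule["source"] in (source, "*")):
            return rule["access"]
    return "Deny"  # implicit backstop

assert evaluate("Tcp", 443, "Internet") == "Allow"   # matched by priority 100
assert evaluate("Tcp", 3389, "Internet") == "Deny"   # falls through to priority 300
```

Priority ordering is the key design point: a low-numbered Allow can punch a deliberate hole through a broader, higher-numbered Deny without editing it.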
-At the infrastructure layer, Azure implements a Hypervisor firewall to protect all tenants running within virtual machines on top of the Hypervisor from unauthorized access. This Hypervisor firewall is distributed as part of the NSG rules deployed to the Host, implemented in the Hypervisor, and configured by the Fabric Controller agent, as shown in Figure 4. The Host OS instances use the built-in Windows Firewall to implement fine-grained ACLs at a greater granularity than router ACLs - they are maintained by the same software that provisions tenants, so they are never out of date. The fine-grained ACLs are applied using the Machine Configuration File (MCF) to Windows Firewall.
+At the infrastructure layer, Azure implements a Hypervisor firewall to protect all tenants running within virtual machines on top of the Hypervisor from unauthorized access. This Hypervisor firewall is distributed as part of the NSG rules deployed to the Host, implemented in the Hypervisor, and configured by the Fabric Controller agent, as shown in Figure 4. The Host OS instances use the built-in Windows Firewall to implement fine-grained ACLs at a greater granularity than router ACLs – they're maintained by the same software that provisions tenants, so they're never out of date. The fine-grained ACLs are applied using the Machine Configuration File (MCF) to Windows Firewall.
-At the top of the operating system stack is the Guest OS, which you use as your operating system. By default, this layer does not allow any inbound communication to cloud service or virtual network, essentially making it part of a private network. For PaaS Web and Worker roles, remote access is not permitted by default. You can enable Remote Desktop Protocol (RDP) access as an explicit option. For IaaS VMs created using the Azure portal, RDP and remote PowerShell ports are opened by default; however, port numbers are assigned randomly. For IaaS VMs created via PowerShell, RDP and remote PowerShell ports must be opened explicitly. If the administrator chooses to keep the RDP and remote PowerShell ports open to the Internet, the account allowed to create RDP and PowerShell connections should be secured with a strong password. Even if ports are open, you can define ACLs on the public IPs for extra protection if desired.
+At the top of the operating system stack is the Guest OS, which you use as your operating system. By default, this layer doesn't allow any inbound communication to cloud service or virtual network, essentially making it part of a private network. For PaaS Web and Worker roles, remote access isn't permitted by default. You can enable Remote Desktop Protocol (RDP) access as an explicit option. For IaaS VMs created using the Azure portal, RDP and remote PowerShell ports are opened by default; however, port numbers are assigned randomly. For IaaS VMs created via PowerShell, RDP and remote PowerShell ports must be opened explicitly. If the administrator chooses to keep the RDP and remote PowerShell ports open to the Internet, the account allowed to create RDP and PowerShell connections should be secured with a strong password. Even if ports are open, you can define ACLs on the public IPs for extra protection if desired.
### Service tags

You can use Virtual Network [service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect your Azure resources from the Internet while accessing Azure services that have public endpoints. With service tags, you can define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules.

> [!NOTE]
-> You can create inbound/outbound network security group rules to deny traffic to/from the Internet and allow traffic to/from Azure. Service tags are available for a wide range of Azure services for use in network security group rules.
+> You can create inbound/outbound network security group rules to deny traffic to/from the Internet and allow traffic to/from Azure. Service tags are available for many Azure services for use in network security group rules.
>
-> *Additional resources:*
+> *Extra resources:*
> - **[Available service tags for specific Azure services](../virtual-network/service-tags-overview.md#available-service-tags)**

### Azure Private Link
From the networking isolation standpoint, key benefits of Private Link include:
> [!NOTE]
> You can use the Azure portal to manage private endpoint connections on Azure PaaS resources. For customer/partner owned Private Link services, Azure PowerShell and Azure CLI are the preferred methods for managing private endpoint connections.
>
-> *Additional resources:*
+> *Extra resources:*
> - **[How to manage private endpoint connections on Azure PaaS resources](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-azure-paas-resources)**
> - **[How to manage private endpoint connections on customer/partner owned Private Link service](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-a-customerpartner-owned-private-link-service)**
Azure provides many options for [encrypting data in transit](../security/fundame
- Microsoft datacenters as part of expected Azure service operation

#### End user's connection to Azure service
-**Transport Layer Security (TLS):** Azure uses the TLS protocol to help protect data when it is traveling between your end users and Azure services. Most of your end users will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft does not control or limit the regions from which you or your end users may access or move customer data.
+**Transport Layer Security (TLS)** – Azure uses the TLS protocol to help protect data when it is traveling between your end users and Azure services. Most of your end users will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/DPA) (DPA), Microsoft doesn't control or limit the regions from which you or your end users may access or move customer data.
> [!IMPORTANT]
> You can increase security by enabling encryption in transit. For example, you can use **[Application Gateway](../application-gateway/ssl-overview.md)** to configure **[end-to-end encryption](../application-gateway/application-gateway-end-to-end-ssl-powershell.md)** of network traffic and rely on **[Key Vault integration](../application-gateway/key-vault-certs.md)** for TLS termination.
Across Azure services, traffic to and from the service is [protected by TLS 1.2]
TLS provides strong authentication, message privacy, and integrity. [Perfect Forward Secrecy](https://en.wikipedia.org/wiki/Forward_secrecy) (PFS) protects connections between your client systems and Microsoft cloud services by generating a unique session key for every session you initiate. PFS protects past sessions against potential future key compromises. This combination makes it more difficult to intercept and access data in transit.
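On the client side, you can enforce the same protocol floor in your own code. A generic, non-Azure-specific sketch using Python's standard `ssl` module:

```python
import ssl

# Build a client-side context that refuses anything older than TLS 1.2.
# create_default_context() keeps certificate validation and hostname
# checking enabled, which are the secure defaults.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.minimum_version == ssl.TLSVersion.TLSv1_2
assert context.check_hostname and context.verify_mode == ssl.CERT_REQUIRED
```

Modern TLS stacks prefer ECDHE key exchange by default, which is what provides the forward secrecy described above.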
-**In-transit encryption for VMs:** Remote sessions to Windows and Linux VMs deployed in Azure can be conducted over protocols that ensure data encryption in transit. For example, the [Remote Desktop Protocol](/windows/win32/termserv/remote-desktop-protocol) (RDP) initiated from your client computer to Windows and Linux VMs enables TLS protection for data in transit. You can also use [Secure Shell](../virtual-machines/linux/ssh-from-windows.md) (SSH) to connect to Linux VMs running in Azure. SSH is an encrypted connection protocol available by default for remote management of Linux VMs hosted in Azure.
+**In-transit encryption for VMs** – Remote sessions to Windows and Linux VMs deployed in Azure can be conducted over protocols that ensure data encryption in transit. For example, the [Remote Desktop Protocol](/windows/win32/termserv/remote-desktop-protocol) (RDP) initiated from your client computer to Windows and Linux VMs enables TLS protection for data in transit. You can also use [Secure Shell](../virtual-machines/linux/ssh-from-windows.md) (SSH) to connect to Linux VMs running in Azure. SSH is an encrypted connection protocol available by default for remote management of Linux VMs hosted in Azure.
> [!IMPORTANT]
> You should review best practices for network security, including guidance for **[disabling RDP/SSH access to Virtual Machines](../security/fundamentals/network-best-practices.md#disable-rdpssh-access-to-virtual-machines)** from the Internet to mitigate brute force attacks to gain access to Azure Virtual Machines. Accessing VMs for remote management can then be accomplished via **[point-to-site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)**, **[site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md)**, or **[Azure ExpressRoute](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)**.
-**Azure Storage transactions:** When interacting with Azure Storage through the Azure portal, all transactions take place over HTTPS. Moreover, you can configure your storage accounts to accept requests only from secure connections by setting the &#8220;[secure transfer required](../storage/common/storage-require-secure-transfer.md)&#8221; property for the storage account. The &#8220;secure transfer required&#8221; option is enabled by default when creating a Storage account in the Azure portal.
+**Azure Storage transactions** – When interacting with Azure Storage through the Azure portal, all transactions take place over HTTPS. Moreover, you can configure your storage accounts to accept requests only from secure connections by setting the &#8220;[secure transfer required](../storage/common/storage-require-secure-transfer.md)&#8221; property for the storage account. The &#8220;secure transfer required&#8221; option is enabled by default when creating a Storage account in the Azure portal.
[Azure Files](../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry-standard [Server Message Block](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) (SMB) protocol. By default, all Azure storage accounts [have encryption in transit enabled](../storage/files/storage-files-planning.md#encryption-in-transit). Therefore, when mounting a share over SMB or accessing it through the Azure portal (or PowerShell, CLI, and Azure SDKs), Azure Files will only allow the connection if it is made with SMB 3.0+ with encryption or over HTTPS.

#### Datacenter connection to Azure region
-**VPN encryption:** [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) provides a means for Azure Virtual Machines (VMs) to act as part of your internal (on-premises) network. With VNet, you choose the address ranges of non-globally-routable IP addresses to be assigned to the VMs so that they will not collide with addresses you are using elsewhere. You have options to securely connect to a VNet from your on-premises infrastructure or remote locations.
+**VPN encryption** – [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) provides a means for Azure Virtual Machines (VMs) to act as part of your internal (on-premises) network. With VNet, you choose the address ranges of non-globally-routable IP addresses to be assigned to the VMs so that they won't collide with addresses you're using elsewhere. You have options to securely connect to a VNet from your on-premises infrastructure or remote locations.
-- **Site-to-Site** (IPsec/IKE VPN tunnel) – A cryptographically protected &#8220;tunnel&#8221; is established between Azure and your internal network, allowing an Azure VM to connect to your back-end resources as though it was directly on that network. This type of connection requires a [VPN device](../vpn-gateway/vpn-gateway-vpn-faq.md#s2s) located on-premises that has an externally facing public IP address assigned to it. You can use Azure [VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) to send encrypted traffic between your VNet and your on-premises infrastructure across the public Internet, for example, a [site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md) relies on IPsec for transport encryption. VPN Gateway supports a wide range of encryption algorithms that are FIPS 140 validated. Moreover, you can configure VPN Gateway to use [custom IPsec/IKE policy](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) with specific cryptographic algorithms and key strengths instead of relying on the default Azure policies. IPsec encrypts data at the IP level (Network Layer 3).
-- **Point-to-Site** (VPN over SSTP, OpenVPN, and IPsec) – A secure connection is established from your individual client computer to your VNet using Secure Socket Tunneling Protocol (SSTP), OpenVPN, or IPsec. As part of the [Point-to-Site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md) configuration, you need to install a certificate and a VPN client configuration package, which allow the client computer to connect to any VM within the VNet. [Point-to-Site VPN](../vpn-gateway/point-to-site-about.md) connections do not require a VPN device or a public facing IP address.
- **Site-to-Site** (IPsec/IKE VPN tunnel) – A cryptographically protected "tunnel" is established between Azure and your internal network, allowing an Azure VM to connect to your back-end resources as though it was directly on that network. This type of connection requires a [VPN device](../vpn-gateway/vpn-gateway-vpn-faq.md#s2s) located on-premises that has an externally facing public IP address assigned to it. You can use Azure [VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) to send encrypted traffic between your VNet and your on-premises infrastructure across the public Internet; for example, a [site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md) relies on IPsec for transport encryption. VPN Gateway supports many encryption algorithms that are FIPS 140 validated. Moreover, you can configure VPN Gateway to use a [custom IPsec/IKE policy](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) with specific cryptographic algorithms and key strengths instead of relying on the default Azure policies. IPsec encrypts data at the IP level (Network Layer 3).
- **Point-to-Site** (VPN over SSTP, OpenVPN, and IPsec) – A secure connection is established from your individual client computer to your VNet using Secure Socket Tunneling Protocol (SSTP), OpenVPN, or IPsec. As part of the [Point-to-Site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md) configuration, you need to install a certificate and a VPN client configuration package, which allow the client computer to connect to any VM within the VNet. [Point-to-Site VPN](../vpn-gateway/point-to-site-about.md) connections don't require a VPN device or a public-facing IP address.
In addition to controlling the type of algorithm that is supported for VPN connections, Azure lets you enforce that all traffic leaving a VNet is routed only through a VNet gateway (for example, Azure VPN Gateway). This enforcement ensures that traffic can't leave a VNet without being encrypted. A VPN Gateway can be used for [VNet-to-VNet](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connections while also providing a secure tunnel with IPsec/IKE. Azure VPN uses [Pre-Shared Key (PSK) authentication](../vpn-gateway/vpn-gateway-vpn-faq.md#how-does-my-vpn-tunnel-get-authenticated) whereby Microsoft generates the PSK when the VPN tunnel is created. You can change the autogenerated PSK to your own.
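The idea behind PSK-based tunnel authentication can be illustrated with a simplified challenge-response sketch. This is a conceptual stand-in, not the actual IKE handshake; all names here are hypothetical:

```python
import hashlib
import hmac
import secrets

# Hypothetical illustration of pre-shared-key (PSK) authentication:
# both VPN peers hold the same PSK and prove possession of it by
# answering a random challenge with an HMAC, without ever sending
# the PSK itself. Real IKE/IPsec negotiation is far more involved.

psk = secrets.token_bytes(32)          # shared secret, e.g. the autogenerated PSK

def prove(key: bytes, challenge: bytes) -> bytes:
    """Responder's proof: HMAC-SHA256 over the challenge, keyed with the PSK."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, proof: bytes) -> bool:
    """Initiator checks the proof using its own copy of the PSK."""
    return hmac.compare_digest(prove(key, challenge), proof)

challenge = secrets.token_bytes(16)    # fresh nonce from the initiator
proof = prove(psk, challenge)          # computed by the peer that knows the PSK

assert verify(psk, challenge, proof)                           # matching PSK succeeds
assert not verify(secrets.token_bytes(32), challenge, proof)   # wrong PSK fails
```

Replacing the autogenerated PSK with your own, as the paragraph above notes, simply changes the shared key both sides use in this exchange.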
**Azure ExpressRoute encryption** – [Azure ExpressRoute](../expressroute/expressroute-introduction.md) allows you to create private connections between Microsoft datacenters and your on-premises infrastructure or colocation facility. ExpressRoute connections don't go over the public Internet and offer lower latency and higher reliability than IPsec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet. You can use ExpressRoute with several data [encryption options](../expressroute/expressroute-about-encryption.md), including [MACsec](https://1.ieee802.org/security/802-1ae/), which enables you to store [MACsec encryption keys in Azure Key Vault](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq). MACsec encrypts data at the Media Access Control (MAC) level, that is, data link layer (Network Layer 2). Both AES-128 and AES-256 block ciphers are [supported for encryption](../expressroute/expressroute-about-encryption.md#which-cipher-suites-are-supported-for-encryption). You can use MACsec to encrypt the physical links between your network devices and Microsoft network devices when you connect to Microsoft via [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md). ExpressRoute Direct allows for direct fiber connections from your edge to the Microsoft Enterprise edge routers at the peering locations.
You can enable IPsec in addition to MACsec on your ExpressRoute Direct ports, as shown in Figure 11. Using VPN Gateway, you can set up an [IPsec tunnel over Microsoft Peering](../expressroute/site-to-site-vpn-over-microsoft-peering.md) of your ExpressRoute circuit between your on-premises network and your Azure VNet. MACsec secures the physical connection between your on-premises network and Microsoft. IPsec secures the end-to-end connection between your on-premises network and your VNets in Azure. MACsec and IPsec can be enabled independently.
**Figure 11.** VPN and ExpressRoute encryption for data in transit

#### Traffic across Microsoft global network backbone
Azure services such as Storage and SQL Database can be configured for geo-replication to help ensure durability and high availability, especially for disaster recovery scenarios. Azure relies on [paired regions](../availability-zones/cross-region-replication-azure.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md) (GRS) and paired regions are also recommended when configuring active [geo-replication](../azure-sql/database/active-geo-replication-overview.md) for Azure SQL Database. Paired regions are located within the same geography; however, network traffic isn't guaranteed to always follow the same path from one Azure region to another. To provide the reliability needed for the Azure cloud, Microsoft has many physical networking paths with automatic routing around failures for optimal reliability.
Moreover, all Azure traffic traveling within a region or between regions is [encrypted by Microsoft using MACsec](../security/fundamentals/encryption-overview.md#data-link-layer-encryption-in-azure), which relies on AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft [global network backbone](../networking/microsoft-global-network.md) and never enters the public Internet. The backbone is one of the largest in the world with more than 250,000 km of lit fiber optic and undersea cable systems.
> You should review Azure **[best practices](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit)** for the protection of data in transit to help ensure that all data in transit is encrypted. For key Azure PaaS storage services (for example, Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics), data encryption in transit is **[enforced by default](../azure-sql/database/security-overview.md#information-protection-and-encryption)**.

### Third-party network virtual appliances
Azure provides you with many features to help you achieve your security and isolation goals, including [Microsoft Defender for Cloud](../defender-for-cloud/defender-for-cloud-introduction.md), [Azure Monitor](../azure-monitor/overview.md), [Azure Firewall](../firewall/overview.md), [VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md), [network security groups](../virtual-network/network-security-groups-overview.md), [Application Gateway](../application-gateway/overview.md), [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md), [Network Watcher](../network-watcher/network-watcher-monitoring-overview.md), [Microsoft Sentinel](../sentinel/overview.md), and [Azure Policy](../governance/policy/overview.md). In addition to the built-in capabilities that Azure provides, you can use third-party [network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) to accommodate your specific network isolation requirements while at the same time applying existing in-house skills. Azure supports many appliances, including offerings from F5, Palo Alto Networks, Cisco, Check Point, Barracuda, Citrix, Fortinet, and many others. Network appliances support network functionality and services in the form of VMs in your virtual networks and deployments.
The cumulative effect of network isolation restrictions is that each cloud service acts as though it were on an isolated network where VMs within the cloud service can communicate with one another, identifying one another by their source IP addresses with confidence that no other parties can impersonate their peer VMs. They can also be configured to accept incoming connections from the Internet over specific ports and protocols and to ensure that all network traffic leaving your virtual networks is always encrypted.

> [!TIP]
> You should review published Azure networking documentation for guidance on how to use native security features to help protect your data.
>
> *Extra resources:*
> - **[Azure network security overview](../security/fundamentals/network-overview.md)**
> - **[Azure network security white paper](https://azure.microsoft.com/resources/azure-network-security/)**
Azure Storage provides storage for a wide variety of workloads, including:
- Network file shares in the cloud (File storage)
- Serving web pages on the Internet (static websites)
While Azure Storage supports many different externally facing customer storage scenarios, internally, the physical storage for the above services is managed by a common set of APIs. To provide durability and availability, Azure Storage relies on data replication and data partitioning across storage resources that are shared among tenants. To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers as described in this section.
### Data replication

Your data in an Azure Storage account is [always replicated](../storage/common/storage-redundancy.md) to help ensure durability and high availability. Azure Storage copies your data to protect it from transient hardware failures, network or power outages, and even massive natural disasters. You can typically choose to replicate your data within the same data center, across [availability zones within the same region](../availability-zones/az-overview.md), or across geographically separated regions. Specifically, when creating a storage account, you can select one of the following [redundancy options](../storage/common/storage-redundancy.md#summary-of-redundancy-options):
Azure provides extensive options for [data encryption at rest](../security/funda
In general, controlling key access and ensuring efficient bulk encryption and decryption of data is accomplished via the following types of encryption keys (as shown in Figure 16), although other encryption keys can be used as described in the *[Storage service encryption](#storage-service-encryption)* section.

- **Data Encryption Key (DEK)** is a symmetric AES-256 key that is used for bulk encryption and decryption of a partition or a block of data. The cryptographic modules are FIPS 140 validated as part of the [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Access to DEKs is needed by the resource provider or application instance that is responsible for encrypting and decrypting a specific block of data. A single resource may have many partitions and many DEKs. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key. The DEK is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
- **Key Encryption Key (KEK)** is an asymmetric RSA key that is optionally provided by you. This key is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. As mentioned previously in the *[Data encryption key management](#data-encryption-key-management)* section, Azure Key Vault can use FIPS 140 validated hardware security modules (HSMs) to safeguard encryption keys. These keys aren't exportable and there can be no clear-text version of the KEK outside the HSMs – the binding is enforced by the underlying HSM. The KEK is never exposed directly to the resource provider or other services. Access to the KEK is controlled by permissions in Azure Key Vault and access to Azure Key Vault must be authenticated through Azure Active Directory. These permissions can be revoked to block access to this key and, by extension, the data that is encrypted using this key as the root of the key chain.
:::image type="content" source="./media/secure-isolation-fig16.png" alt-text="Data Encryption Keys are encrypted using your key stored in Azure Key Vault":::

**Figure 16.** Data Encryption Keys are encrypted using your key stored in Azure Key Vault

Therefore, the encryption key hierarchy involves both DEK and KEK. DEK is encrypted with KEK and stored separately for efficient access by resource providers in bulk encryption and decryption operations. However, only an entity with access to the KEK can decrypt the DEK. The entity that has access to the KEK may be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEK, the KEK is effectively a single point by which the DEK can be deleted via deletion of the KEK.
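The DEK/KEK envelope-encryption hierarchy can be sketched in a few lines. This is a conceptual illustration only: a hash-derived XOR keystream stands in for the real ciphers (AES-256 for bulk data, RSA wrapping inside Azure Key Vault for the KEK) and offers no actual security.

```python
import hashlib
import secrets

# Toy "cipher": XOR with a SHA-256-derived keystream. Stand-in for AES/RSA;
# illustration only, NOT cryptographically secure.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

kek = secrets.token_bytes(32)          # stays inside the key vault / HSM
dek = secrets.token_bytes(32)          # used for bulk encryption of data blocks

ciphertext = keystream_xor(dek, b"tenant data at rest")
wrapped_dek = keystream_xor(kek, dek)  # only the wrapped DEK is stored

# Decryption requires unwrapping the DEK with the KEK first.
recovered_dek = keystream_xor(kek, wrapped_dek)
assert keystream_xor(recovered_dek, ciphertext) == b"tenant data at rest"

# Deleting (or revoking access to) the KEK leaves the wrapped DEK,
# and therefore the data, unrecoverable.
wrong_dek = keystream_xor(secrets.token_bytes(32), wrapped_dek)
assert keystream_xor(wrong_dek, ciphertext) != b"tenant data at rest"
```

The final assertion mirrors the point made above: the KEK is the single root by which access to the DEK, and all data under it, can be cut off.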
Detailed information about various [data encryption models](../security/fundamentals/encryption-models.md) and specifics on key management for many Azure platform services is available in online documentation. Moreover, some Azure services provide other [encryption models](../security/fundamentals/encryption-overview.md#azure-encryption-models), including client-side encryption, to further encrypt their data using more granular controls. The rest of this section covers encryption implementation for key Azure storage scenarios such as Storage service encryption and Azure Disk encryption for IaaS Virtual Machines, including server-side encryption for managed disks.
> [!TIP]
> You should review published Azure data encryption documentation for guidance on how to protect your data.
>
> *Extra resources:*
> - **[Encryption at rest overview](../security/fundamentals/encryption-atrest.md)**
> - **[Data encryption models](../security/fundamentals/encryption-models.md)**
> - **[Data encryption best practices](../security/fundamentals/data-encryption-best-practices.md)**
However, you can also choose to manage encryption with your own keys by specifyi
> [!NOTE]
> You can configure customer-managed keys (CMK) with Azure Key Vault using the **[Azure portal](../storage/common/customer-managed-keys-configure-key-vault.md)**, **[PowerShell](../storage/common/customer-managed-keys-configure-key-vault.md)**, or **[Azure CLI](../storage/common/customer-managed-keys-configure-key-vault.md)** command-line tool. You can **[use .NET to specify a customer-provided key](../storage/blobs/storage-blob-customer-provided-key.md)** on a request to Blob storage.
Storage service encryption is enabled by default for all new and existing storage accounts and it [can't be disabled](../storage/common/storage-service-encryption.md#about-azure-storage-encryption). As shown in Figure 17, the encryption process uses the following keys to help ensure cryptographic certainty of data isolation at rest:
- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is used for bulk encryption and is unique per storage account in Azure Storage. It is generated by the Azure Storage service as part of the storage account creation. This key is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key that is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. It is never exposed directly to the Azure Storage service or other services. You must use Azure Key Vault to store your customer-managed keys for Storage service encryption.
- *Stamp Key (SK)* is a symmetric AES-256 key that provides a third layer of encryption key security and is unique to each Azure Storage stamp, that is, a cluster of storage hardware. This key is used to perform the final wrap of the DEK, which results in the following key chain hierarchy: SK(KEK(DEK)).
These three keys are combined to protect any data that is written to Azure Storage and provide cryptographic certainty for logical data isolation in Azure Storage. As mentioned previously, Azure Storage service encryption is enabled by default and it can't be disabled.
:::image type="content" source="./media/secure-isolation-fig17.png" alt-text="Encryption flow for Storage service encryption":::

**Figure 17.** Encryption flow for Storage service encryption
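The SK(KEK(DEK)) chain described above is just two nested wraps, peeled in reverse order on decryption. As before, this sketch uses a toy hash-based XOR "wrap" purely to illustrate the layering, not the actual ciphers:

```python
import hashlib
import secrets

# Toy wrap: XOR with a hash-derived keystream (its own inverse).
# Stands in for the real AES-256 / RSA-2048 operations; illustration only.
def wrap(key: bytes, blob: bytes) -> bytes:
    stream = b""
    i = 0
    while len(stream) < len(blob):
        stream += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(blob, stream))

unwrap = wrap  # XOR wrap is symmetric, so unwrapping is the same operation

dek = secrets.token_bytes(32)  # per-storage-account bulk key
kek = secrets.token_bytes(32)  # customer-controlled key in Azure Key Vault
sk = secrets.token_bytes(32)   # per-stamp key held by the storage cluster

# What gets persisted is the doubly wrapped key: SK(KEK(DEK)).
stored = wrap(sk, wrap(kek, dek))

# Recovery peels the layers in reverse: first the stamp key, then the KEK.
assert unwrap(kek, unwrap(sk, stored)) == dek
```

Revoking the KEK in Key Vault breaks the middle layer of this chain, which is why customer-managed keys give you a kill switch over the stored data.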
Storage accounts are encrypted regardless of their performance tier (standard or
Because data encryption is performed by the Storage service, server-side encryption with CMK enables you to use any operating system types and images for your VMs. For your Windows and Linux IaaS VMs, Azure also provides Azure Disk encryption that enables you to encrypt managed disks with CMK within the Guest VM, as described in the next section. Combining Azure Storage service encryption and Disk encryption effectively enables [double encryption of data at rest](../virtual-machines/disks-enable-double-encryption-at-rest-portal.md).

#### Azure Disk encryption
Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Moreover, you may optionally use [Azure Disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to encrypt Azure [Windows](../virtual-machines/windows/disk-encryption-overview.md) and [Linux](../virtual-machines/linux/disk-encryption-overview.md) IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes [managed disks](../virtual-machines/managed-disks-overview.md), as described later in this section. Azure Disk encryption uses the industry-standard [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows and the [DM-Crypt](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt) feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.
Drive encryption through BitLocker and DM-Crypt is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers. BitLocker and DM-Crypt provide the most protection when used with a Trusted Platform Module (TPM) version 1.2 or higher. The TPM is a microcontroller designed to secure hardware through integrated cryptographic keys – it is commonly pre-installed on newer computers. BitLocker and DM-Crypt can use this technology to protect the keys used to encrypt disk volumes and provide integrity to the computer boot process.
For managed disks, Azure Disk encryption allows you to encrypt the OS and data disks used by an IaaS virtual machine; however, data disks can't be encrypted without first encrypting the OS volume. The solution relies on Azure Key Vault to help you control and manage the disk encryption keys in key vaults. You can supply your own encryption keys, which are safeguarded in Azure Key Vault to support *bring your own key* (BYOK) scenarios, as described previously in the *[Data encryption key management](#data-encryption-key-management)* section.
Azure Disk encryption isn't supported by Managed HSM or an on-premises key management service. Only key vaults managed by the Azure Key Vault service can be used to safeguard customer-managed encryption keys for Azure Disk encryption.
> [!NOTE]
> Detailed instructions are available for creating and configuring a key vault for Azure Disk encryption with both **[Windows](../virtual-machines/windows/disk-encryption-key-vault.md)** and **[Linux](../virtual-machines/linux/disk-encryption-key-vault.md)** VMs.
For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.yml), Azure Di
[Azure managed disks](../virtual-machines/managed-disks-overview.md) are block-level storage volumes that are managed by Azure and used with Azure Windows and Linux virtual machines. They simplify disk management for Azure IaaS VMs by handling storage account management transparently for you. Azure managed disks automatically encrypt your data by default using [256-bit AES encryption](../virtual-machines/disk-encryption.md) that is FIPS 140 validated. For encryption key management, you have the following choices:

- [Platform-managed keys](../virtual-machines/disk-encryption.md#platform-managed-keys) is the default choice that provides transparent data encryption at rest for managed disks whereby keys are managed by Microsoft.
- [Customer-managed keys](../virtual-machines/disk-encryption.md#customer-managed-keys) enables you to have control over your own keys that can be imported into Azure Key Vault or generated inside Azure Key Vault. This approach relies on two sets of keys as described previously: DEK and KEK. The DEK encrypts the data using AES-256 based encryption and is in turn encrypted by an RSA-2048 KEK that is stored in Azure Key Vault. Only key vaults can be used to safeguard customer-managed keys; managed HSMs don't support Azure Disk encryption.
Customer-managed keys (CMK) enable you to have [full control](../virtual-machines/disk-encryption.md#full-control-of-your-keys) over your encryption keys. You can grant access to managed disks in your Azure Key Vault so that your keys can be used for encrypting and decrypting the DEK. You can also disable your keys or revoke access to managed disks at any time. Finally, you have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing your encryption keys.

##### *Encryption at host*

Encryption at host ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled are not encrypted with Azure Storage encryption; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
+You're [always in control of your customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. You can access, extract, and delete your customer data stored in Azure at will. When you terminate your Azure subscription, Microsoft takes the necessary steps to ensure that you continue to own your customer data. A common concern upon data deletion or subscription termination is whether another customer or Azure administrator can access your deleted data. The following sections explain how data deletion, retention, and destruction work in Azure.
### Data deletion
-Storage is allocated sparsely, which means that when a virtual disk is created, disk space is not allocated for its entire capacity. Instead, a table is created that maps addresses on the virtual disk to areas on the physical disk and that table is initially empty. The first time you write data on the virtual disk, space on the physical disk is allocated and a pointer to it is placed in the table.
+Storage is allocated sparsely, which means that when a virtual disk is created, disk space isn't allocated for its entire capacity. Instead, a table is created that maps addresses on the virtual disk to areas on the physical disk and that table is initially empty. The first time you write data on the virtual disk, space on the physical disk is allocated and a pointer to it is placed in the table.
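The sparse-allocation scheme described above can be sketched as follows. This is a simplified illustration of the idea only, not Azure's actual implementation; the class and method names are invented:

```python
# Illustrative sketch (not Azure's implementation): a sparse virtual disk
# that allocates physical blocks only on first write. Reads of regions
# that were never written return zeroes, since no physical space backs them.
class SparseVirtualDisk:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.mapping = {}           # virtual block index -> physical block index
        self.physical_store = []    # grows only as blocks are first written

    def write_block(self, virtual_index, data):
        assert len(data) == self.block_size
        if virtual_index not in self.mapping:
            # First write: allocate physical space and record the pointer.
            self.mapping[virtual_index] = len(self.physical_store)
            self.physical_store.append(data)
        else:
            self.physical_store[self.mapping[virtual_index]] = data

    def read_block(self, virtual_index):
        if virtual_index not in self.mapping:
            # Unallocated region: no physical space exists, so only zeroes.
            return bytes(self.block_size)
        return self.physical_store[self.mapping[virtual_index]]

disk = SparseVirtualDisk()
disk.write_block(7, b"\xab" * 4096)
assert disk.read_block(7) == b"\xab" * 4096
assert disk.read_block(0) == bytes(4096)   # never written -> zeroes
assert len(disk.physical_store) == 1       # only one block allocated
```

This also shows why reading an unwritten region can only return zeroes: there is no physical address to read from until the mapping entry exists.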
When you delete a blob or table entity, it will immediately get deleted from the index used to locate and access the data on the primary location, and then the deletion is done asynchronously at the geo-replicated copy of the data, if you provisioned [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage). At the primary location, you can immediately try to access the blob or entity, and you won't find it in your index, since Azure provides strong consistency for the delete. So, you can verify directly that the data has been deleted.
-In Azure Storage, all disk writes are sequential. This approach minimizes the number of disk "seeks" but requires updating the pointers to objects every time they are written - new versions of pointers are also written sequentially. A side effect of this design is that it is not possible to ensure that a secret on disk is gone by overwriting it with other data. The original data will remain on the disk and the new value will be written sequentially. Pointers will be updated such that there is no way to find the deleted value anymore. Once the disk is full, however, the system has to write new logs onto disk space that has been freed up by the deletion of old data. Instead of allocating log files directly from disk sectors, log files are created in a file system running NTFS. A background thread running on Azure Storage nodes frees up space by going through the oldest log file, copying blocks that are still referenced from that oldest log file to the current log file, and updating all pointers as it goes. It then deletes the oldest log file. Therefore, there are two categories of free disk space on the disk: (1) space that NTFS knows is free, where it allocates new log files from this pool; and (2) space within those log files that Azure Storage knows is free since there are no current pointers to it.
+In Azure Storage, all disk writes are sequential. This approach minimizes the number of disk "seeks" but requires updating the pointers to objects every time they're written – new versions of pointers are also written sequentially. A side effect of this design is that it isn't possible to ensure that a secret on disk is gone by overwriting it with other data. The original data will remain on the disk and the new value will be written sequentially. Pointers will be updated such that there is no way to find the deleted value anymore. Once the disk is full, however, the system has to write new logs onto disk space that has been freed up by the deletion of old data. Instead of allocating log files directly from disk sectors, log files are created in a file system running NTFS. A background thread running on Azure Storage nodes frees up space by going through the oldest log file, copying blocks that are still referenced from that oldest log file to the current log file, and updating all pointers as it goes. It then deletes the oldest log file. Therefore, there are two categories of free disk space on the disk: (1) space that NTFS knows is free, where it allocates new log files from this pool; and (2) space within those log files that Azure Storage knows is free since there are no current pointers to it.
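The log-structured behavior described above can be sketched as a toy append-only store. This is a hedged illustration of the general technique, not Azure Storage's code; all names are invented:

```python
# Illustrative sketch (not Azure Storage's code): an append-only,
# log-structured store. Writes never overwrite in place; a pointer table
# tracks the latest offset for each key, and compaction copies only
# still-referenced records into a fresh log, freeing the old space.
class LogStructuredStore:
    def __init__(self):
        self.log = []        # sequential records: (key, value)
        self.pointers = {}   # key -> index of the live record in the log

    def put(self, key, value):
        self.pointers[key] = len(self.log)   # new version appended sequentially
        self.log.append((key, value))

    def delete(self, key):
        # Only the pointer is dropped; the old bytes remain in the log
        # until compaction reclaims the space.
        self.pointers.pop(key, None)

    def get(self, key):
        idx = self.pointers.get(key)
        return None if idx is None else self.log[idx][1]

    def compact(self):
        # Copy still-referenced records into a fresh log, like the
        # background thread walking the oldest log file.
        live = [(k, self.log[i][1]) for k, i in self.pointers.items()]
        self.log = []
        self.pointers = {}
        for k, v in live:
            self.put(k, v)

s = LogStructuredStore()
s.put("a", "v1")
s.put("a", "v2")   # supersedes v1; v1's bytes still sit in the log
s.put("b", "x")
s.delete("b")
assert s.get("a") == "v2" and s.get("b") is None
assert len(s.log) == 3      # superseded/deleted records not yet reclaimed
s.compact()
assert len(s.log) == 1 and s.get("a") == "v2"
```

Note how `delete` makes the value unreachable immediately (no pointer), while the underlying record persists until compaction, mirroring the two categories of free space described above.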
-The sectors on the physical disk associated with the deleted data become immediately available for reuse and are overwritten when the corresponding storage block is reused for storing other data. The time to overwrite varies depending on disk utilization and activity. This process is consistent with the operation of a log-structured file system where all writes are written sequentially to disk. This process is not deterministic and there is no guarantee when particular data will be gone from physical storage. **However, when exactly deleted data gets overwritten or the corresponding physical storage allocated to another customer is irrelevant for the key isolation assurance that no data can be recovered after deletion:**
+The sectors on the physical disk associated with the deleted data become immediately available for reuse and are overwritten when the corresponding storage block is reused for storing other data. The time to overwrite varies depending on disk utilization and activity. This process is consistent with the operation of a log-structured file system where all writes are written sequentially to disk. This process isn't deterministic and there is no guarantee when particular data will be gone from physical storage. **However, when exactly deleted data gets overwritten or the corresponding physical storage allocated to another customer is irrelevant for the key isolation assurance that no data can be recovered after deletion:**
- A customer can't read deleted data of another customer.
-- If anyone tries to read a region on a virtual disk that they have not yet written to, physical space will not have been allocated for that region and therefore only zeroes would be returned.
+- If anyone tries to read a region on a virtual disk that they haven't yet written to, physical space won't have been allocated for that region and therefore only zeroes would be returned.
-Customers are not provided with direct access to the underlying physical storage. Since customer software only addresses virtual disks, there is no way for another customer to express a request to read from or write to a physical address that is allocated to you or a physical address that is free.
+Customers aren't provided with direct access to the underlying physical storage. Since customer software only addresses virtual disks, there is no way for another customer to express a request to read from or write to a physical address that is allocated to you or a physical address that is free.
Conceptually, this rationale applies regardless of the software that keeps track of reads and writes. For [Azure SQL Database](../security/fundamentals/isolation-choices.md#sql-database-isolation), it is the SQL Database software that does this enforcement. For Azure Storage, it is the Azure Storage software. For non-durable drives of a VM, it is the VHD handling code of the Host OS. The mapping from virtual to physical address takes place outside of the customer VM.
Finally, as described in *[Data encryption at rest](#data-encryption-at-rest)* s
### Data retention

During the term of your Azure subscription, you can always access, extract, and delete your data stored in Azure.
-If your subscription expires or is terminated, Microsoft will preserve your customer data for a 90-day retention period to permit you to extract your data or renew your subscriptions. After this retention period, Microsoft will delete all your customer data within another 90 days, that is, your customer data will be permanently deleted 180 days after expiration or termination. Given the data retention procedure, you can control how long your data is stored by timing when you end the service with Microsoft. It is recommended that you do not terminate your service until you have extracted all data so that the initial 90-day retention period can act as a safety buffer should you later realize you missed something.
+If your subscription expires or is terminated, Microsoft will preserve your customer data for a 90-day retention period to permit you to extract your data or renew your subscriptions. After this retention period, Microsoft will delete all your customer data within another 90 days, that is, your customer data will be permanently deleted 180 days after expiration or termination. Given the data retention procedure, you can control how long your data is stored by timing when you end the service with Microsoft. We recommend that you don't terminate your service until you've extracted all data so that the initial 90-day retention period can act as a safety buffer should you later realize you missed something.
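The retention timeline above can be checked with simple date arithmetic (the termination date below is an arbitrary example):

```python
# Back-of-the-envelope check of the retention timeline: 90 days to
# extract or renew, then up to another 90 days before permanent
# deletion, i.e. 180 days after expiration or termination in total.
from datetime import date, timedelta

termination = date(2022, 1, 1)                      # example date
retention_end = termination + timedelta(days=90)    # last day to extract/renew
permanent_deletion_by = retention_end + timedelta(days=90)

assert (permanent_deletion_by - termination).days == 180
print(retention_end, permanent_deletion_by)         # 2022-04-01 2022-06-30
```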
-If you deleted an entire storage account by mistake, you should contact [Azure Support](https://azure.microsoft.com/support/options/) promptly for assistance with recovery. You can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. A storage account deleted within a subscription is retained for two weeks to allow for recovery from accidental deletion, after which it is permanently deleted. However, when a storage object (for example, blob, file, queue, table) is itself deleted, the delete operation is immediate and irreversible. Unless you made a backup, deleted storage objects can't be recovered. For Blob storage, you can implement extra protection against accidental or erroneous modifications or deletions by enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md). When [soft delete is enabled](../storage/blobs/soft-delete-blob-enable.md) for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they are deleted, within a retention period that you specified. To avoid retention of data after storage account or subscription deletion, you can delete storage objects individually before deleting the storage account or subscription.
+If you deleted an entire storage account by mistake, you should contact [Azure Support](https://azure.microsoft.com/support/options/) promptly for assistance with recovery. You can [create and manage support requests](../azure-portal/supportability/how-to-create-azure-support-request.md) in the Azure portal. A storage account deleted within a subscription is retained for two weeks to allow for recovery from accidental deletion, after which it is permanently deleted. However, when a storage object (for example, blob, file, queue, table) is itself deleted, the delete operation is immediate and irreversible. Unless you made a backup, deleted storage objects can't be recovered. For Blob storage, you can implement extra protection against accidental or erroneous modifications or deletions by enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md). When [soft delete is enabled](../storage/blobs/soft-delete-blob-enable.md) for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they're deleted, within a retention period that you specified. To avoid retention of data after storage account or subscription deletion, you can delete storage objects individually before deleting the storage account or subscription.
For accidental deletion involving Azure SQL Database, you should check backups that the service makes automatically and use point-in-time restore. For example, full database backup is done weekly, and differential database backups are done hourly. Also, individual services (such as Azure DevOps) can have their own policies for [accidental data deletion](/azure/devops/organizations/security/data-protection#mistakes-happen).

### Data destruction
-If a disk drive used for storage suffers a hardware failure, it is securely [erased or destroyed](https://www.microsoft.com/trustcenter/privacy/data-management) before decommissioning. The data on the drive is erased to ensure that the data cannot be recovered by any means. When such devices are decommissioned, Microsoft follows the [NIST SP 800-88 R1](https://csrc.nist.gov/publications/detail/sp/800-88/rev-1/final) disposal process with data classification aligned to FIPS 199 Moderate. Magnetic, electronic, or optical media are purged or destroyed in accordance with the requirements established in NIST SP 800-88 R1 where the terms are defined as follows:
+If a disk drive used for storage suffers a hardware failure, it is securely [erased or destroyed](https://www.microsoft.com/trustcenter/privacy/data-management) before decommissioning. The data on the drive is erased to ensure that the data can't be recovered by any means. When such devices are decommissioned, Microsoft follows the [NIST SP 800-88 R1](https://csrc.nist.gov/publications/detail/sp/800-88/rev-1/final) disposal process with data classification aligned to FIPS 199 Moderate. Magnetic, electronic, or optical media are purged or destroyed in accordance with the requirements established in NIST SP 800-88 R1 where the terms are defined as follows:
-- **Purge:** "a media sanitization process that protects the confidentiality of information against a laboratory attack", which involves "resources and knowledge to use nonstandard systems to conduct data recovery attempts on media outside their normal operating environment" using "signal processing equipment and specially trained personnel." Note: For hard disk drives (including ATA, SCSI, SATA, SAS, etc.) a firmware-level secure-erase command (single-pass) is acceptable, or a software-level three-pass overwrite and verification (ones, zeros, random) of the entire physical media including recovery areas, if any. For solid state disks (SSD), a firmware-level secure-erase command is necessary.
-- **Destroy:** "a variety of methods, including disintegration, incineration, pulverizing, shredding, and melting" after which the media "cannot be reused as originally intended."
+- **Purge** – "a media sanitization process that protects the confidentiality of information against a laboratory attack", which involves "resources and knowledge to use nonstandard systems to conduct data recovery attempts on media outside their normal operating environment" using "signal processing equipment and specially trained personnel." Note: For hard disk drives (including ATA, SCSI, SATA, SAS, and so on) a firmware-level secure-erase command (single-pass) is acceptable, or a software-level three-pass overwrite and verification (ones, zeros, random) of the entire physical media including recovery areas, if any. For solid state disks (SSD), a firmware-level secure-erase command is necessary.
+- **Destroy** – "a variety of methods, including disintegration, incineration, pulverizing, shredding, and melting" after which the media "can't be reused as originally intended."
Purge and Destroy operations must be performed using tools and processes approved by the Microsoft Cloud + AI Security Group. Records must be kept of the erasure and destruction of assets. Devices that fail to complete the Purge successfully must be degaussed (for magnetic media only) or destroyed.
Azure isolation assurance is further enforced by Microsoft's internal use of t
- **Bug Bounty Program** – Microsoft strongly believes that close partnership with academic and industry researchers drives a higher level of security assurance for you and your data. Security researchers play an integral role in the Azure ecosystem by discovering vulnerabilities missed in the software development process. The [Microsoft Bug Bounty Program](https://www.microsoft.com/msrc/bounty) is designed to supplement and encourage research in relevant technologies (for example, encryption, spoofing, hypervisor isolation, elevation of privileges, and so on) to better protect Azure's infrastructure and your data. As an example, for each critical vulnerability identified in the Azure Hypervisor, Microsoft compensates security researchers up to $250,000 – a significant amount to incentivize participation and vulnerability disclosure. The bounty range for [vulnerability reports on Azure services](https://www.microsoft.com/msrc/bounty-microsoft-azure) is up to $300,000.
- **Red Team activities** – Microsoft uses [Red Teaming](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf), a form of live site penetration testing against Microsoft-managed infrastructure, services, and applications. Microsoft simulates real-world breaches, continuously monitors security, and practices security incident response to test and improve the security of Azure. Red Teaming is predicated on the Assume Breach security strategy and executed by two core groups: Red Team (attackers) and Blue Team (defenders). The approach is designed to test Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the infrastructure and platform Engineering or Operations teams.
This approach tests security detection and response capabilities, and helps identify production vulnerabilities, configuration errors, invalid assumptions, or other security issues in a controlled manner. Every Red Team breach is followed by full disclosure between the Red Team and Blue Team to identify gaps, address findings, and significantly improve breach response.
-If you are accustomed to traditional on-premises data center deployment, you would typically conduct a risk assessment to gauge your threat exposure and formulate mitigating measures when migrating to the cloud. In many of these instances, security considerations for traditional on-premises deployment tend to be well understood whereas the corresponding cloud options tend to be new. The next section is intended to help you with this comparison.
+If you're accustomed to traditional on-premises data center deployment, you would typically conduct a risk assessment to gauge your threat exposure and formulate mitigating measures when migrating to the cloud. In many of these instances, security considerations for traditional on-premises deployment tend to be well understood whereas the corresponding cloud options tend to be new. The next section is intended to help you with this comparison.
## Logical isolation considerations
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](../security/fundamentals/isolation-choices.md) to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping enforce controls designed to keep other customers from accessing your data or applications. If you are migrating from traditional on-premises physically isolated infrastructure to the cloud, this section addresses concerns that may be of interest to you.
+A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](../security/fundamentals/isolation-choices.md) to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping enforce controls designed to keep other customers from accessing your data or applications. If you're migrating from traditional on-premises physically isolated infrastructure to the cloud, this section addresses concerns that may be of interest to you.
### Physical versus logical security considerations

Table 6 provides a summary of key security considerations for physically isolated on-premises deployments (bare metal) versus logically isolated cloud-based deployments (Azure). It's useful to review these considerations prior to examining risks identified to be specific to shared cloud environments.
Table 6 provides a summary of key security considerations for physically isolate
|Security consideration|On-premises|Azure|
||||
-|**Firewalls, networking**|- Physical network enforcement (switches, etc.) </br>- Physical host-based firewall can be manipulated by compromised application </br>- 2 layers of enforcement|- Physical network enforcement (switches, etc.) </br>- Hyper-V host virtual network switch enforcement cannot be changed from inside VM </br>- VM host-based firewall can be manipulated by compromised application </br>- 3 layers of enforcement|
+|**Firewalls, networking**|- Physical network enforcement (switches, and so on) </br>- Physical host-based firewall can be manipulated by compromised application </br>- 2 layers of enforcement|- Physical network enforcement (switches, and so on) </br>- Hyper-V host virtual network switch enforcement can't be changed from inside VM </br>- VM host-based firewall can be manipulated by compromised application </br>- 3 layers of enforcement|
|**Attack surface area**|- Large hardware attack surface exposed to complex workloads, enables firmware based advanced persistent threat (APT)|- Hardware not directly exposed to VM, no potential for APT to persist in firmware from VM </br>- Small software-based Hyper-V attack surface area with low historical bug counts exposed to VM|
|**Side channel attacks**|- Side channel attacks may be a factor, although reduced vs. shared hardware|- Side channel attacks assume control over VM placement across applications; may not be practical in large cloud service|
|**Patching**|- Varied effective patching policy applied across host systems </br>- Highly varied/fragile updating for hardware and firmware|- Uniform patching policy applied across host and VMs|
-|**Security analytics**|- Security analytics dependent on host-based security solutions, which assume host/security software has not been compromised|- Outside VM (hypervisor based) forensics/snapshot capability allows assessment of potentially compromised workloads|
-|**Security policy**|- Security policy verification (patch scanning, vulnerability scanning, etc.) subject to tampering by compromised host </br>- Inconsistent security policy applied across customer entities|- Outside VM verification of security policies </br>- Possible to enforce uniform security policies across customer entities|
+|**Security analytics**|- Security analytics dependent on host-based security solutions, which assume host/security software hasn't been compromised|- Outside VM (hypervisor based) forensics/snapshot capability allows assessment of potentially compromised workloads|
+|**Security policy**|- Security policy verification (patch scanning, vulnerability scanning, and so on) subject to tampering by compromised host </br>- Inconsistent security policy applied across customer entities|- Outside VM verification of security policies </br>- Possible to enforce uniform security policies across customer entities|
|**Logging and monitoring**|- Varied logging and security analytics solutions|- Common Azure platform logging and security analytics solutions </br>- Most existing on-premises / varied logging and security analytics solutions also work|
|**Malicious insider**|- Persistent threat caused by system admins having elevated access rights, typically for the duration of employment|- Greatly reduced threat because admins have no default access rights|

Listed below are key risks that are unique to shared cloud environments that may need to be addressed when accommodating sensitive data and workloads.

### Exploitation of vulnerabilities in virtualization technologies
-Compared to traditional on-premises hosted systems, Azure provides a greatly **reduced attack surface** by using a locked-down Windows Server core for the Host OS layered over the Hypervisor. Moreover, by default, guest PaaS VMs do not have any user accounts to accept incoming remote connections and the default Windows administrator account is disabled. Your software in PaaS VMs is restricted by default to running under a low-privilege account, which helps protect your service from attacks by its own end users. You can modify these permissions, and you can also choose to configure your VMs to allow remote administrative access.
+Compared to traditional on-premises hosted systems, Azure provides a greatly **reduced attack surface** by using a locked-down Windows Server core for the Host OS layered over the Hypervisor. Moreover, by default, guest PaaS VMs don't have any user accounts to accept incoming remote connections and the default Windows administrator account is disabled. Your software in PaaS VMs is restricted by default to running under a low-privilege account, which helps protect your service from attacks by its own end users. You can modify these permissions, and you can also choose to configure your VMs to allow remote administrative access.
-PaaS VMs offer more advanced **protection against persistent malware** infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. The attacker may have left behind modifications to the system that allow re-entry, and it is a challenge to find all such changes. In the extreme case, the system must be reimaged from scratch with all software reinstalled, sometimes resulting in the loss of application data. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that have not even been detected. This approach makes it more difficult for a compromise to persist.
+PaaS VMs offer more advanced **protection against persistent malware** infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. The attacker may have left behind modifications to the system that allow re-entry, and it is a challenge to find all such changes. In the extreme case, the system must be reimaged from scratch with all software reinstalled, sometimes resulting in the loss of application data. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that haven't even been detected. This approach makes it more difficult for a compromise to persist.
-When VMs belonging to different customers are running on the same physical server, it is the Hypervisor's job to ensure that they cannot learn anything important about what the other customer's VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. Known as **side-channel attacks**, these exploits have received plenty of attention in the academic press where researchers have been seeking to learn much more specific information about what is going on in a peer VM. Of particular interest are efforts to learn the cryptographic keys of a peer VM by measuring the timing of certain memory accesses and inferring which cache lines the victim's VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. There are several mitigations in Azure that reduce the risk of such an attack:
+When VMs belonging to different customers are running on the same physical server, it is the Hypervisor's job to ensure that they can't learn anything important about what the other customer's VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. Known as **side-channel attacks**, these exploits have received plenty of attention in the academic press where researchers have been seeking to learn much more specific information about what is going on in a peer VM. Of particular interest are efforts to learn the cryptographic keys of a peer VM by measuring the timing of certain memory accesses and inferring which cache lines the victim's VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. There are several mitigations in Azure that reduce the risk of such an attack:
- The standard Azure cryptographic libraries have been designed to resist such attacks by not having cache access patterns depend on the cryptographic keys being used.
- Azure uses an advanced VM host placement algorithm that is highly sophisticated and nearly impossible to predict, which helps reduce the chances of an adversary-controlled VM being placed on the same host as the target VM.
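The first mitigation above, making access patterns independent of secret data, can be illustrated with a standard-library example. This is a generic demonstration of the technique, not Azure's cryptographic library:

```python
# Hedged illustration of data-independent comparison. A naive equality
# check can return early at the first mismatching byte, leaking timing
# information about where the mismatch is; hmac.compare_digest always
# examines every byte, so its timing doesn't depend on the secret's value.
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # early exit leaks the position of the mismatch
            return False
    return True

secret = b"correct-key-material"
guess = b"corrupt-key-material"

# Both give the same answer; only the timing behavior differs.
assert naive_equal(secret, guess) is False
assert hmac.compare_digest(secret, guess) is False
assert hmac.compare_digest(secret, secret) is True
```

The same principle, no secret-dependent branches or memory accesses, is what resists the cache-line inference attacks described above.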
Azure addresses the perceived risk of resource sharing by providing a trustworth
In line with the shared responsibility model in cloud computing, this article provides you with guidance for activities that are part of your responsibility. It also explores design principles and technologies available in Azure to help you achieve your secure isolation objectives.

## Next steps
- Learn more about:
-- [Azure Security](../security/fundamentals/overview.md)
-- [Azure Compliance](../compliance/index.yml)
-- [Azure Government developer guidance](./documentation-government-developer-guide.md)
+- [Azure security fundamentals documentation](../security/fundamentals/index.yml)
+- [Azure infrastructure security](../security/fundamentals/infrastructure.md)
+- [Azure platform integrity and security](../security/fundamentals/platform.md)
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure and other Microsoft services compliance offerings](/azure/compliance/offerings/)
azure-maps Tutorial Creator Feature Stateset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-feature-stateset.md
To update the `occupied` state of the unit with feature `id` "UNIT26":
## Additional information
+* For information on how to retrieve the state of a feature using its feature id, see [Feature State - List States](/rest/api/maps/v2/feature-state/list-states).
* For information on how to delete the stateset and its resources, see [Feature State - Delete Stateset](/rest/api/maps/v2/feature-state/delete-stateset).
* For information on using the Azure Maps Creator [Feature State service](/rest/api/maps/v2/feature-state) to apply styles that are based on the dynamic properties of indoor map data features, see the how-to article [Implement dynamic styling for Creator indoor maps](indoor-map-dynamic-styling.md).
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
Resources emit metrics, which are later processed by rules. Metrics come
Virtual machine scale sets use telemetry data from Azure diagnostics agents, whereas telemetry for Web apps and Cloud services comes directly from the Azure infrastructure. Some commonly used statistics include CPU usage, memory usage, thread counts, queue length, and disk usage. For a list of the telemetry data you can use, see [Autoscale Common Metrics](autoscale-common-metrics.md).

## Custom Metrics
-You can also leverage your own custom metrics that your application(s) may be emitting. If you have configured your application(s) to send metrics to Application Insights you can leverage those metrics to make decisions on whether to scale or not.
+You can also use custom metrics that your applications emit. If you've configured your applications to send metrics to Application Insights, you can use those metrics to decide whether to scale.
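To make the scale decision concrete, the following is a deliberately simplified Python sketch of the kind of threshold rule autoscale evaluates against a metric over its look-back window. It is illustrative only — the real autoscale engine also applies cooldowns, instance-count limits, and flapping protection, and the threshold values here are arbitrary.

```python
def scale_decision(samples, scale_out_above=70, scale_in_below=30):
    """Average the metric samples over the look-back window and compare
    the average to scale-out / scale-in thresholds (illustrative only)."""
    avg = sum(samples) / len(samples)
    if avg > scale_out_above:
        return "scale-out"
    if avg < scale_in_below:
        return "scale-in"
    return "no-change"

# Sustained high load over the window triggers a scale-out decision.
print(scale_decision([85, 90, 78]))  # -> "scale-out"
```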
## Time

Schedule-based rules are based on UTC. You must set your time zone properly when setting up your rules.
You can set up autoscale via
| Logic Apps |[Adding integration service environment (ISE) capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity)|
| Spring Cloud |[Set up autoscale for microservice applications](../../spring-cloud/how-to-setup-autoscale.md)|
| Service Bus |[Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)|
+| Azure SignalR Service | [Automatically scale units of an Azure SignalR service](../../azure-signalr/signalr-howto-scale-autoscale.md) |
## Next steps

To learn more about autoscale, use the Autoscale Walkthroughs listed previously or refer to the following resources:
azure-monitor Azure Monitor Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-monitoring-reference.md
+
+ Title: Monitoring Azure Monitor data reference
+description: Important reference material needed when you monitor parts of Azure Monitor
+++++ Last updated : 04/03/2022++
+# Monitoring Azure Monitor data reference
+
+> [!NOTE]
+> This article may seem confusing because it lists the parts of the Azure Monitor service that Azure Monitor itself monitors.
+
+See [Monitoring Azure Monitor](monitor-azure-monitor.md) for an explanation of how Azure Monitor monitors itself.
+
+## Metrics
+
+This section lists all the platform metrics collected automatically for Azure Monitor into Azure Monitor.
+
+|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
+|-|--|
+| [Autoscale behaviors for VMs and AppService](/azure/azure-monitor/autoscale/autoscale-overview) | [microsoft.insights/autoscalesettings](/azure/azure-monitor/platform/metrics-supported#microsoftinsightsautoscalesettings) |
+
+While technically not about Azure Monitor operations, the following metrics are collected into Azure Monitor namespaces.
+
+|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
+|-|--|
+| Log Analytics agent gathered data for the [Metric alerts on logs](/azure/azure-monitor/alerts/alerts-metric-logs#metrics-and-dimensions-supported-for-logs) feature | [Microsoft.OperationalInsights/workspaces](/azure/azure-monitor/platform/metrics-supported#microsoftoperationalinsightsworkspaces) |
+| [Application Insights availability tests](/azure/azure-monitor/app/availability-overview) | [Microsoft.Insights/Components](/azure/azure-monitor/essentials/metrics-supported#microsoftinsightscomponents) |
+
+See a complete list of [platform metrics for other resources types](/azure/azure-monitor/platform/metrics-supported).
+
+## Metric Dimensions
+
+For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+
+The following dimensions are relevant to these areas of Azure Monitor.
+
+### Autoscale
+
+| Dimension Name | Description |
+| - | -- |
+|MetricTriggerRule | The autoscale rule that triggered the scale action |
+|MetricTriggerSource | The metric value that triggered the scale action |
+|ScaleDirection | The direction of the scale action (up or down) |
+
+## Resource logs
+
+This section lists all the Azure Monitor resource log category types collected.
+
+|Resource Log Type | Resource Provider / Type Namespace<br/> and link |
+|-|--|
+| [Autoscale for VMs and AppService](/azure/azure-monitor/autoscale/autoscale-overview) | [Microsoft.insights/autoscalesettings](/azure/azure-monitor/essentials/resource-logs-categories#microsoftinsightsautoscalesettings)|
+| [Application Insights availability tests](/azure/azure-monitor/app/availability-overview) | [Microsoft.insights/Components](/azure/azure-monitor/essentials/resource-logs-categories#microsoftinsightscomponents) |
+
+For additional reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
++
+## Azure Monitor Logs tables
+
+This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Monitor resource types and available for query by Log Analytics.
+
+|Resource Type | Notes |
+|--|-|
+| [Autoscale for VMs and AppService](/azure/azure-monitor/autoscale/autoscale-overview) | [Autoscale Tables](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-monitor-autoscale-settings) |
++
+## Activity log
+
+For a partial list of entries that the Azure Monitor service writes to the activity log, see [Azure resource provider operations](/azure/role-based-access-control/resource-provider-operations#monitor). There may be other entries not listed here.
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+
+## Schemas
+
+The following schemas are in use by Azure Monitor.
+
+### Action Groups
+
+The following schemas are relevant to action groups, which are part of the notification infrastructure for Azure Monitor. The sections that follow show example activity log entries for action group operations.
+
+#### Create Action Group
+```json
+{
+ "authorization": {
+ "action": "microsoft.insights/actionGroups/write",
+ "scope": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc"
+ },
+ "caller": "test.cam@ieee.org",
+ "channels": "Operation",
+ "claims": {
+ "aud": "https://management.core.windows.net/",
+ "iss": "https://sts.windows.net/04ebb17f-c9d2-bbbb-881f-8fd503332aac/",
+ "iat": "1627074914",
+ "nbf": "1627074914",
+ "exp": "1627078814",
+ "http://schemas.microsoft.com/claims/authnclassreference": "1",
+ "aio": "AUQAu/8TbbbbyZJhgackCVdLETN5UafFt95J8/bC1SP+tBFMusYZ3Z4PBQRZUZ4SmEkWlDevT4p7Wtr4e/R+uksbfixGGQumxw==",
+ "altsecid": "1:live.com:00037FFE809E290F",
+ "http://schemas.microsoft.com/claims/authnmethodsreferences": "pwd",
+ "appid": "c44b4083-3bb0-49c1-bbbb-974e53cbdf3c",
+ "appidacr": "2",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "test.cam@ieee.org",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "cam",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "test",
+ "groups": "d734c6d5-bbbb-4b39-8992-88fd979076eb",
+ "http://schemas.microsoft.com/identity/claims/identityprovider": "live.com",
+ "ipaddr": "73.254.xxx.xx",
+ "name": "test cam",
+ "http://schemas.microsoft.com/identity/claims/objectidentifier": "f19e58c4-5bfa-4ac6-8e75-9823bbb1ea0a",
+ "puid": "1003000086500F96",
+ "rh": "0.AVgAf7HrBNLJbkKIH4_VAzMqrINAS8SwO8FJtH2XTlPL3zxYAFQ.",
+ "http://schemas.microsoft.com/identity/claims/scope": "user_impersonation",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "SzEgbtESOKM8YsOx9t49Ds-L2yCyUR-hpIDinBsS-hk",
+ "http://schemas.microsoft.com/identity/claims/tenantid": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "live.com#test.cam@ieee.org",
+ "uti": "KuRF5PX4qkyvxJQOXwZ2AA",
+ "ver": "1.0",
+ "wids": "62e90394-bbbb-4237-9190-012177145e10",
+ "xms_tcdt": "1373393473"
+ },
+ "correlationId": "74d253d8-bd5a-4e8d-a38e-5a52b173b7bd",
+ "description": "",
+ "eventDataId": "0e9bc114-dcdb-4d2d-b1ea-d3f45a4d32ea",
+ "eventName": {
+ "value": "EndRequest",
+ "localizedValue": "End request"
+ },
+ "category": {
+ "value": "Administrative",
+ "localizedValue": "Administrative"
+ },
+ "eventTimestamp": "2021-07-23T21:21:22.9871449Z",
+ "id": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc/events/0e9bc114-dcdb-4d2d-b1ea-d3f45a4d32ea/ticks/637626720829871449",
+ "level": "Informational",
+ "operationId": "74d253d8-bd5a-4e8d-a38e-5a52b173b7bd",
+ "operationName": {
+ "value": "microsoft.insights/actionGroups/write",
+ "localizedValue": "Create or update action group"
+ },
+ "resourceGroupName": "testK-TEST",
+ "resourceProviderName": {
+ "value": "microsoft.insights",
+ "localizedValue": "Microsoft Insights"
+ },
+ "resourceType": {
+ "value": "microsoft.insights/actionGroups",
+ "localizedValue": "microsoft.insights/actionGroups"
+ },
+ "resourceId": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
+ "status": {
+ "value": "Succeeded",
+ "localizedValue": "Succeeded"
+ },
+ "subStatus": {
+ "value": "Created",
+ "localizedValue": "Created (HTTP Status Code: 201)"
+ },
+ "submissionTimestamp": "2021-07-23T21:22:22.1634251Z",
+ "subscriptionId": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a",
+ "tenantId": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
+ "properties": {
+ "statusCode": "Created",
+ "serviceRequestId": "33658bb5-fc62-4e40-92e8-8b1f16f649bb",
+ "eventCategory": "Administrative",
+ "entity": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
+ "message": "microsoft.insights/actionGroups/write",
+ "hierarchy": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a"
+ },
+ "relatedEvents": []
+}
+```
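When routing entries like the sample above to your own tooling, a few fields usually matter most for filtering: the operation, the status, and the caller. This minimal Python sketch pulls them out of a trimmed, illustrative fragment of the sample entry; the field names come from the sample itself, but the trimmed entry is not a complete activity log record.

```python
import json

# A trimmed, illustrative fragment of the "Create Action Group" entry above.
entry = json.loads("""
{
  "operationName": {"value": "microsoft.insights/actionGroups/write",
                    "localizedValue": "Create or update action group"},
  "status": {"value": "Succeeded", "localizedValue": "Succeeded"},
  "caller": "test.cam@ieee.org",
  "resourceGroupName": "testK-TEST",
  "level": "Informational"
}
""")

# Flatten the nested value objects into the fields you'd typically filter on.
summary = {
    "operation": entry["operationName"]["value"],
    "status": entry["status"]["value"],
    "caller": entry["caller"],
}
print(summary)
```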
+
+#### Delete Action Group
+```json
+{
+ "authorization": {
+ "action": "microsoft.insights/actionGroups/delete",
+ "scope": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testk-test/providers/microsoft.insights/actionGroups/TestingLogginc"
+ },
+ "caller": "test.cam@ieee.org",
+ "channels": "Operation",
+ "claims": {
+ "aud": "https://management.core.windows.net/",
+ "iss": "https://sts.windows.net/04ebb17f-c9d2-bbbb-881f-8fd503332aac/",
+ "iat": "1627076795",
+ "nbf": "1627076795",
+ "exp": "1627080695",
+ "http://schemas.microsoft.com/claims/authnclassreference": "1",
+ "aio": "AUQAu/8TbbbbTkWb9O23RavxIzqfHvA2fJUU/OjdhtHPNAjv0W4pyNnoZ3ShUOEzDut700WhNXth6ZYpd7al4XyJPACEfmtr9g==",
+ "altsecid": "1:live.com:00037FFE809E290F",
+ "http://schemas.microsoft.com/claims/authnmethodsreferences": "pwd",
+ "appid": "c44b4083-3bb0-49c1-bbbb-974e53cbdf3c",
+ "appidacr": "2",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "test.cam@ieee.org",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "cam",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "test",
+ "groups": "d734c6d5-bbbb-4b39-8992-88fd979076eb",
+ "http://schemas.microsoft.com/identity/claims/identityprovider": "live.com",
+ "ipaddr": "73.254.xxx.xx",
+ "name": "test cam",
+ "http://schemas.microsoft.com/identity/claims/objectidentifier": "f19e58c4-5bfa-4ac6-8e75-9823bbb1ea0a",
+ "puid": "1003000086500F96",
+ "rh": "0.AVgAf7HrBNLJbkKIH4_VAzMqrINAS8SwO8FJtH2XTlPL3zxYAFQ.",
+ "http://schemas.microsoft.com/identity/claims/scope": "user_impersonation",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "SzEgbtESOKM8YsOx9t49Ds-L2yCyUR-hpIDinBsS-hk",
+ "http://schemas.microsoft.com/identity/claims/tenantid": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "live.com#test.cam@ieee.org",
+ "uti": "E1BRdcfDzk64rg0eFx8vAA",
+ "ver": "1.0",
+ "wids": "62e90394-bbbb-4237-9190-012177145e10",
+ "xms_tcdt": "1373393473"
+ },
+ "correlationId": "a0bd5f9f-d87f-4073-8650-83f03cf11733",
+ "description": "",
+ "eventDataId": "8c7c920e-6a50-47fe-b264-d762e60cc788",
+ "eventName": {
+ "value": "EndRequest",
+ "localizedValue": "End request"
+ },
+ "category": {
+ "value": "Administrative",
+ "localizedValue": "Administrative"
+ },
+ "eventTimestamp": "2021-07-23T21:52:07.2708782Z",
+ "id": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testk-test/providers/microsoft.insights/actionGroups/TestingLogginc/events/8c7c920e-6a50-47fe-b264-d762e60cc788/ticks/637626739272708782",
+ "level": "Informational",
+ "operationId": "f7cb83ba-36fa-47dd-8ec4-bcac40879241",
+ "operationName": {
+ "value": "microsoft.insights/actionGroups/delete",
+ "localizedValue": "Delete action group"
+ },
+ "resourceGroupName": "testk-test",
+ "resourceProviderName": {
+ "value": "microsoft.insights",
+ "localizedValue": "Microsoft Insights"
+ },
+ "resourceType": {
+ "value": "microsoft.insights/actionGroups",
+ "localizedValue": "microsoft.insights/actionGroups"
+ },
+ "resourceId": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testk-test/providers/microsoft.insights/actionGroups/TestingLogginc",
+ "status": {
+ "value": "Succeeded",
+ "localizedValue": "Succeeded"
+ },
+ "subStatus": {
+ "value": "OK",
+ "localizedValue": "OK (HTTP Status Code: 200)"
+ },
+ "submissionTimestamp": "2021-07-23T21:54:00.1811815Z",
+ "subscriptionId": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a",
+ "tenantId": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
+ "properties": {
+ "statusCode": "OK",
+ "serviceRequestId": "88fe5ac8-ee1a-4b97-9d5b-8a3754e256ad",
+ "eventCategory": "Administrative",
+ "entity": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testk-test/providers/microsoft.insights/actionGroups/TestingLogginc",
+ "message": "microsoft.insights/actionGroups/delete",
+ "hierarchy": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a"
+ },
+ "relatedEvents": []
+}
+```
+
+#### Unsubscribe using Email
+
+```json
+{
+ "caller": "test.cam@ieee.org",
+ "channels": "Operation",
+ "claims": {
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "person@contoso.com",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn": "",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn": "",
+ "http://schemas.microsoft.com/identity/claims/objectidentifier": ""
+ },
+ "correlationId": "8f936022-18d0-475f-9704-5151c75e81e4",
+ "description": "User with email address:person@contoso.com has unsubscribed from action group:TestingLogginc, Action:testEmail_-EmailAction-",
+ "eventDataId": "9b4b7b3f-79a2-4a6a-b1ed-30a1b8907765",
+ "eventName": {
+ "value": "",
+ "localizedValue": ""
+ },
+ "category": {
+ "value": "Administrative",
+ "localizedValue": "Administrative"
+ },
+ "eventTimestamp": "2021-07-23T21:38:35.1687458Z",
+ "id": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc/events/9b4b7b3f-79a2-4a6a-b1ed-30a1b8907765/ticks/637626731151687458",
+ "level": "Informational",
+ "operationId": "",
+ "operationName": {
+ "value": "microsoft.insights/actiongroups/write",
+ "localizedValue": "Create or update action group"
+ },
+ "resourceGroupName": "testK-TEST",
+ "resourceProviderName": {
+ "value": "microsoft.insights",
+ "localizedValue": "Microsoft Insights"
+ },
+ "resourceType": {
+ "value": "microsoft.insights/actiongroups",
+ "localizedValue": "microsoft.insights/actiongroups"
+ },
+ "resourceId": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
+ "status": {
+ "value": "Succeeded",
+ "localizedValue": "Succeeded"
+ },
+ "subStatus": {
+ "value": "Updated",
+ "localizedValue": "Updated"
+ },
+ "submissionTimestamp": "2021-07-23T21:38:35.1687458Z",
+ "subscriptionId": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a",
+ "tenantId": "",
+ "properties": {},
+ "relatedEvents": []
+}
+```
+
+#### Unsubscribe using SMS
+```json
+{
+ "caller": "",
+ "channels": "Operation",
+ "claims": {
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "4252137109",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn": "",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn": "",
+ "http://schemas.microsoft.com/identity/claims/objectidentifier": ""
+ },
+ "correlationId": "e039f06d-c0d1-47ac-b594-89239101c4d0",
+ "description": "User with phone number:4255557109 has unsubscribed from action group:TestingLogginc, Action:testPhone_-SMSAction-",
+ "eventDataId": "789d0b03-2a2f-40cf-b223-d228abb5d2ed",
+ "eventName": {
+ "value": "",
+ "localizedValue": ""
+ },
+ "category": {
+ "value": "Administrative",
+ "localizedValue": "Administrative"
+ },
+ "eventTimestamp": "2021-07-23T21:31:47.1537759Z",
+ "id": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc/events/789d0b03-2a2f-40cf-b223-d228abb5d2ed/ticks/637626727071537759",
+ "level": "Informational",
+ "operationId": "",
+ "operationName": {
+ "value": "microsoft.insights/actiongroups/write",
+ "localizedValue": "Create or update action group"
+ },
+ "resourceGroupName": "testK-TEST",
+ "resourceProviderName": {
+ "value": "microsoft.insights",
+ "localizedValue": "Microsoft Insights"
+ },
+ "resourceType": {
+ "value": "microsoft.insights/actiongroups",
+ "localizedValue": "microsoft.insights/actiongroups"
+ },
+ "resourceId": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
+ "status": {
+ "value": "Succeeded",
+ "localizedValue": "Succeeded"
+ },
+ "subStatus": {
+ "value": "Updated",
+ "localizedValue": "Updated"
+ },
+ "submissionTimestamp": "2021-07-23T21:31:47.1537759Z",
+ "subscriptionId": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a",
+ "tenantId": "",
+ "properties": {},
+ "relatedEvents": []
+}
+```
+
+#### Update Action Group
+```json
+{
+ "authorization": {
+ "action": "microsoft.insights/actionGroups/write",
+ "scope": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc"
+ },
+ "caller": "test.cam@ieee.org",
+ "channels": "Operation",
+ "claims": {
+ "aud": "https://management.core.windows.net/",
+ "iss": "https://sts.windows.net/04ebb17f-c9d2-bbbb-881f-8fd503332aac/",
+ "iat": "1627074914",
+ "nbf": "1627074914",
+ "exp": "1627078814",
+ "http://schemas.microsoft.com/claims/authnclassreference": "1",
+ "aio": "AUQAu/8TbbbbyZJhgackCVdLETN5UafFt95J8/bC1SP+tBFMusYZ3Z4PBQRZUZ4SmEkWlDevT4p7Wtr4e/R+uksbfixGGQumxw==",
+ "altsecid": "1:live.com:00037FFE809E290F",
+ "http://schemas.microsoft.com/claims/authnmethodsreferences": "pwd",
+ "appid": "c44b4083-3bb0-49c1-bbbb-974e53cbdf3c",
+ "appidacr": "2",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "test.cam@ieee.org",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "cam",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "test",
+ "groups": "d734c6d5-bbbb-4b39-8992-88fd979076eb",
+ "http://schemas.microsoft.com/identity/claims/identityprovider": "live.com",
+ "ipaddr": "73.254.xxx.xx",
+ "name": "test cam",
+ "http://schemas.microsoft.com/identity/claims/objectidentifier": "f19e58c4-5bfa-4ac6-8e75-9823bbb1ea0a",
+ "puid": "1003000086500F96",
+ "rh": "0.AVgAf7HrBNLJbkKIH4_VAzMqrINAS8SwO8FJtH2XTlPL3zxYAFQ.",
+ "http://schemas.microsoft.com/identity/claims/scope": "user_impersonation",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "SzEgbtESOKM8YsOx9t49Ds-L2yCyUR-hpIDinBsS-hk",
+ "http://schemas.microsoft.com/identity/claims/tenantid": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "live.com#test.cam@ieee.org",
+ "uti": "KuRF5PX4qkyvxJQOXwZ2AA",
+ "ver": "1.0",
+ "wids": "62e90394-bbbb-4237-9190-012177145e10",
+ "xms_tcdt": "1373393473"
+ },
+ "correlationId": "5a239734-3fbb-4ff7-b029-b0ebf22d3a19",
+ "description": "",
+ "eventDataId": "62c3ebd8-cfc9-435f-956f-86c45eecbeae",
+ "eventName": {
+ "value": "BeginRequest",
+ "localizedValue": "Begin request"
+ },
+ "category": {
+ "value": "Administrative",
+ "localizedValue": "Administrative"
+ },
+ "eventTimestamp": "2021-07-23T21:24:34.9424246Z",
+ "id": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc/events/62c3ebd8-cfc9-435f-956f-86c45eecbeae/ticks/637626722749424246",
+ "level": "Informational",
+ "operationId": "5a239734-3fbb-4ff7-b029-b0ebf22d3a19",
+ "operationName": {
+ "value": "microsoft.insights/actionGroups/write",
+ "localizedValue": "Create or update action group"
+ },
+ "resourceGroupName": "testK-TEST",
+ "resourceProviderName": {
+ "value": "microsoft.insights",
+ "localizedValue": "Microsoft Insights"
+ },
+ "resourceType": {
+ "value": "microsoft.insights/actionGroups",
+ "localizedValue": "microsoft.insights/actionGroups"
+ },
+ "resourceId": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
+ "status": {
+ "value": "Started",
+ "localizedValue": "Started"
+ },
+ "subStatus": {
+ "value": "",
+ "localizedValue": ""
+ },
+ "submissionTimestamp": "2021-07-23T21:25:22.1522025Z",
+ "subscriptionId": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a",
+ "tenantId": "04ebb17f-c9d2-bbbb-881f-8fd503332aac",
+ "properties": {
+ "eventCategory": "Administrative",
+ "entity": "/subscriptions/52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a/resourceGroups/testK-TEST/providers/microsoft.insights/actionGroups/TestingLogginc",
+ "message": "microsoft.insights/actionGroups/write",
+ "hierarchy": "52c65f65-bbbb-bbbb-bbbb-7dbbfc68c57a"
+ },
+ "relatedEvents": []
+}
+```
+
+## See Also
+
+- See [Monitoring Azure Monitor](monitor-azure-monitor.md) for a description of what Azure Monitor monitors in itself.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
azure-monitor Change Analysis Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-powershell.md
+
+ Title: Azure PowerShell for Change Analysis in Azure Monitor
+description: Learn how to use Azure PowerShell in Azure Monitor's Change Analysis to determine changes to resources in your subscription
+++
+ms.contributor: cawa
+ms.devlang: azurepowershell
Last updated : 04/11/2022++++
+# Azure PowerShell for Change Analysis in Azure Monitor (preview)
+
+This article describes how you can use Change Analysis with the
+[Az.ChangeAnalysis PowerShell module](/powershell/module/az.changeanalysis/) to determine changes
+made to resources in your Azure subscription.
+
+> [!CAUTION]
+> Change analysis is currently in public preview. This preview version is provided without a
+> service level agreement. It's not recommended for production workloads. Some features might not be
+> supported or might have constrained capabilities. For more information, see
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
+before you begin.
++
+> [!IMPORTANT]
+> While the **Az.ChangeAnalysis** PowerShell module is in preview, you must install it separately using
+> the `Install-Module` cmdlet.
+
+```azurepowershell-interactive
+Install-Module -Name Az.ChangeAnalysis -Scope CurrentUser -Repository PSGallery
+```
+
+If you have multiple Azure subscriptions, choose the appropriate subscription. Select a specific
+subscription using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+
+```azurepowershell-interactive
+Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
+```
+
+## View Azure subscription changes
+
+To view changes made to all resources in your Azure subscription, you use the `Get-AzChangeAnalysis`
+command. You specify the time range for events in UTC date format using the `StartTime` and
+`EndTime` parameters.
+
+```azurepowershell-interactive
+$startDate = Get-Date -Date '2022-04-07T12:09:03.141Z' -AsUTC
+$endDate = Get-Date -Date '2022-04-10T12:09:03.141Z' -AsUTC
+Get-AzChangeAnalysis -StartTime $startDate -EndTime $endDate
+```
+
+## View Azure resource group changes
+
+To view changes made to all resources in a resource group, you use the `Get-AzChangeAnalysis`
+command and specify the `ResourceGroupName` parameter. The following example returns a list of
+changes made within the last 12 hours. Specify `StartTime` and `EndTime` in UTC date formats.
+
+```azurepowershell-interactive
+$startDate = (Get-Date -AsUTC).AddHours(-12)
+$endDate = Get-Date -AsUTC
+Get-AzChangeAnalysis -ResourceGroupName <myResourceGroup> -StartTime $startDate -EndTime $endDate
+```
+
+## View Azure resource changes
+
+To view changes made to a resource, you use the `Get-AzChangeAnalysis` command and specify the
+`ResourceId` parameter. The following example uses PowerShell splatting to return a list of the
+changes made within the last day. Specify `StartTime` and `EndTime` in UTC date formats.
+
+```azurepowershell-interactive
+$Params = @{
+ StartTime = (Get-Date -AsUTC).AddDays(-1)
+ EndTime = Get-Date -AsUTC
+ ResourceId = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/<myResourceGroup>/providers/Microsoft.Network/networkInterfaces/<myNetworkInterface>'
+}
+Get-AzChangeAnalysis @Params
+```
+
+> [!NOTE]
+> A resource not found message is returned if the specified resource has been removed or deleted.
+> Use Change Analysis at the resource group or subscription level to determine changes for resources
+> that have been removed or deleted.
+
+## View detailed information
+
+You can view more properties for any of the commands shown in this article by piping the results to
+`Select-Object -Property *`.
+
+```azurepowershell-interactive
+$startDate = (Get-Date -AsUTC).AddHours(-12)
+$endDate = Get-Date -AsUTC
+Get-AzChangeAnalysis -ResourceGroupName <myResourceGroup> -StartTime $startDate -EndTime $endDate |
+ Select-Object -Property *
+```
+
+The `PropertyChange` property is a complex object that has additional nested properties. Pipe the
+`PropertyChange` property to `Select-Object -Property *` to see the nested properties.
+
+```azurepowershell-interactive
+$startDate = (Get-Date -AsUTC).AddHours(-12)
+$endDate = Get-Date -AsUTC
+(Get-AzChangeAnalysis -ResourceGroupName <myResourceGroup> -StartTime $startDate -EndTime $endDate |
+    Select-Object -First 1).PropertyChange | Select-Object -Property *
+```
+
+## Next steps
+
+- Learn how to use [Get-AzChangeAnalysis](/powershell/module/az.changeanalysis/get-azchangeanalysis/)
+- Learn how to [use Change Analysis in Azure Monitor](change-analysis.md)
+- Learn about [visualizations in Change Analysis](change-analysis-visualizations.md)
+- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
This article describes how to set up Container insights to monitor managed Kuber
You can enable monitoring of an AKS cluster that's already deployed using one of the supported methods: * Azure CLI
-* Terraform
+* [Terraform](#enable-using-terraform)
* [From Azure Monitor](#enable-from-azure-monitor-in-the-portal) or [directly from the AKS cluster](#enable-directly-from-aks-cluster-in-the-portal) in the Azure portal * With the [provided Azure Resource Manager template](#enable-using-an-azure-resource-manager-template) by using the Azure PowerShell cmdlet `New-AzResourceGroupDeployment` or with Azure CLI.
azure-monitor Monitor Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-azure-monitor.md
+
+ Title: Monitoring Azure Monitor
+description: Learn about how Azure Monitor monitors itself
+++++ Last updated : 04/07/2022+++
+<!-- VERSION 2.2-->
+
+# Monitoring Azure Monitor
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Azure Monitor. Azure Monitor uses [itself](/azure/azure-monitor/overview) to monitor certain parts of its own functionality. You can monitor:
+
+- Autoscale operations
+- Monitoring operations in the audit log
+
+If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+
+For an overview showing where autoscale and the audit log fit into Azure Monitor, see [Introduction to Azure Monitor](overview.md).
+
+## Monitoring overview page in Azure portal
+
+The **Overview** page in the Azure portal for Azure Monitor shows links and tutorials on how to use Azure Monitor in general. It doesn't mention any of the specific resources discussed later in this article.
+
+## Monitoring data
+
+Azure Monitor collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitoring *Azure Monitor* data reference](azure-monitor-monitoring-reference.md) for detailed information on the metrics and logs metrics created by Azure Monitor.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure Monitor* are listed in [Azure Monitor monitoring data reference](azure-monitor-monitoring-reference.md#resource-logs).
+
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+
+You can analyze metrics for *Azure Monitor* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+
+For a list of the platform metrics collected for Azure Monitor into itself, see [Azure Monitor monitoring data reference](azure-monitor-monitoring-reference.md#metrics).
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported).
+
+
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schemas for autoscale resource logs are found in the [Azure Monitor Data Reference](azure-monitor-monitoring-reference.md#resource-logs).
+
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
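+
+For example, once the Activity log is routed to a Log Analytics workspace (where it lands in the `AzureActivity` table), an illustrative Kusto query might look like:
+
+```kusto
+// Count Activity log events per operation over the last day
+AzureActivity
+| where TimeGenerated > ago(1d)
+| summarize count() by OperationNameValue
+```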
+
+For a list of the types of resource logs collected for Azure Monitor, see [Monitoring Azure Monitor data reference](azure-monitor-monitoring-reference.md#resource-logs).
+
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Monitor data reference](azure-monitor-monitoring-reference.md#azure-monitor-logs-tables).
+
+### Sample Kusto queries
+
+These are now listed in the [Log Analytics user interface](./logs/queries.md).
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
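+
+As a minimal sketch (the resource, metric, and threshold are hypothetical examples), a metric alert rule can be created with the Azure CLI:
+
+```azurecli
+# Hypothetical names; adjust the scope, metric, and threshold for your scenario
+az monitor metrics alert create \
+    --name "high-cpu-alert" \
+    --resource-group "myResourceGroup" \
+    --scopes "<resource-id-of-a-virtual-machine>" \
+    --condition "avg Percentage CPU > 90" \
+    --description "Alert when average CPU exceeds 90 percent"
+```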
+
+For an in-depth discussion of using alerts with autoscale, see [Troubleshoot Azure autoscale](/azure/azure-monitor/autoscale/autoscale-troubleshoot).
+
+## Next steps
+
+- See [Monitoring Azure Monitor data reference](azure-monitor-monitoring-reference.md) for a reference of the metrics, logs, and other important values created by Azure Monitor to monitor itself.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
Output for this command will look similar to the following and specify whether a
## Log Analytics workspace

VM insights requires a Log Analytics workspace. See [Configure Log Analytics workspace for VM insights](vminsights-configure-workspace.md) for details and requirements of this workspace.
+> [!NOTE]
+> VM Insights does not support sending data to more than one Log Analytics workspace (multi-homing).
+>
## Agents

When you enable VM insights for a machine, the following two agents are installed. See [Network requirements](../agents/log-analytics-agent.md#network-requirements) for the network requirements for these agents.
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
The steps to configure VM insights are as follows. Follow each link for detailed
- [Add VMInsights solution to workspace.](./vminsights-configure-workspace.md#add-vminsights-solution-to-workspace)
- [Install agents on virtual machine and virtual machine scale set to be monitored.](./vminsights-enable-overview.md)
-Currently, VM insights does not support multi-homing.
+> [!NOTE]
+> VM Insights does not support sending data to more than one Log Analytics workspace (multi-homing).
## Next steps
azure-netapp-files Develop Rest Api Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/develop-rest-api-powershell.md
The REST API specification for Azure NetApp Files is published through [GitHub](
5. Send a test call and include the token to validate your access to the REST API:

   ```azurepowershell
- $SubId = (Get-AzureRmContext).Subscription.Id
+ $SubId = (Get-AzContext).Subscription.Id
   Invoke-RestMethod -Method Get -Headers $headers -Uri https://management.azure.com/subscriptions/$SubId/providers/Microsoft.Web/sites?api-version=2019-11-01
   ```
This section shows sample scripts for PowerShell.
## Next steps
-[See the Azure NetApp Files REST API reference](/rest/api/netapp/)
+[See the Azure NetApp Files REST API reference](/rest/api/netapp/)
azure-percept How To Set Up Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-set-up-over-the-air-updates.md
Last updated 03/30/2021

# Set up Azure IoT Hub to deploy over-the-air updates
Keep your Azure Percept DK secure and up to date using over-the-air updates. In
The final step will enable you to grant permissions to users to publish and deploy updates.
-1. In your Device Update for IoT Hub resource, click **Access control (IAM)**.
+1. In your Device Update for IoT Hub resource, select **Access control (IAM)**.
-1. Click **+Add** and then select **Add role assignment**.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. For **Role**, select **Device Update Administrator**. For **Assign access to** select **User, group, or service principle**. For **Select**, select your account or the account of the person who will be deploying updates. Click **Save**.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Device Update Administrator |
+ | Assign access to | User, group, or service principal |
+ | Members | &lt;Your account or the account deploying updates&gt; |
+
+ ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
> [!TIP]
> If you would like to give more people in your organization access, you can repeat this step and make each of these users a **Device Update Administrator**.
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
The following table lists the features of Azure SQL Database that are currently
| [Maintenance window advance notifications](../database/advance-notifications.md)| Advance notifications are available for databases configured to use a non-default [maintenance window](maintenance-window.md). Advance notifications for maintenance windows are in public preview for Azure SQL Database. |
| [Query editor in the Azure portal](connect-query-portal.md) | The query editor in the portal allows you to run queries against your Azure SQL Database directly from the [Azure portal](https://portal.azure.com).|
| [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current&preserve-view=true) | Use query hints to optimize your query execution via the OPTION clause. |
+| [Reverse migrate from Hyperscale](manage-hyperscale-database.md#reverse-migrate-from-hyperscale) | Reverse migration to the General Purpose service tier allows customers who have recently migrated an existing database in Azure SQL Database to the Hyperscale service tier to move back in an emergency, should Hyperscale not meet their needs. While reverse migration is initiated by a service tier change, it's essentially a size-of-data move between different architectures. |
| [SQL Analytics](../../azure-monitor/insights/azure-sql.md)|Azure SQL Analytics is an advanced cloud monitoring solution for monitoring performance of all of your Azure SQL databases at scale and across multiple subscriptions in a single view. Azure SQL Analytics collects and visualizes key performance metrics with built-in intelligence for performance troubleshooting.|
| [SQL insights](../../azure-monitor/insights/sql-insights-overview.md) | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance.|
| [Zone redundant configuration](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) | The zone redundant configuration feature utilizes [Azure Availability Zones](../../availability-zones/az-overview.md#availability-zones) to replicate databases across multiple physical locations within an Azure region. By selecting [zone redundancy](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview), you can make your databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. **The feature is currently in preview for the General Purpose and Hyperscale service tiers.** |
Learn about significant changes to the Azure SQL Database documentation.
| Changes | Details |
| | |
+| **Hyperscale reverse migration** | Reverse migration is now in preview. Reverse migration to the General Purpose service tier allows customers who have recently migrated an existing database in Azure SQL Database to the Hyperscale service tier to move back in an emergency, should Hyperscale not meet their needs. While reverse migration is initiated by a service tier change, it's essentially a size-of-data move between different architectures. Learn about [reverse migration from Hyperscale](manage-hyperscale-database.md#reverse-migrate-from-hyperscale). |
+| **New Hyperscale articles** | We have reorganized some existing content into new articles and added new content for Hyperscale. Learn about [Hyperscale distributed functions architecture](hyperscale-architecture.md), [how to manage a Hyperscale database](manage-hyperscale-database.md), and how to [create a Hyperscale database](hyperscale-database-create-quickstart.md). |
| **Free Azure SQL Database** | Try Azure SQL Database for free using the Azure free account. To learn more, review [Try SQL Database for free](free-sql-db-free-account-how-to-deploy.md).|
azure-sql Hyperscale Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/hyperscale-architecture.md
+
+ Title: Hyperscale distributed functions architecture
+description: Learn how Hyperscale databases are architected to scale out storage and compute resources for Azure SQL Database.
+Last updated: 2/17/2022
+# Hyperscale distributed functions architecture
++
+The [Hyperscale service tier](service-tier-hyperscale.md) utilizes an architecture with highly scalable storage and compute performance tiers. This article describes the components that enable customers to quickly scale Hyperscale databases while benefiting from nearly instantaneous backups and highly scalable transaction logging.
+
+## Hyperscale architecture overview
+
+Traditional database engines centralize data management functions in a single process: even so-called distributed databases in production today have multiple copies of a monolithic data engine.
+
+Hyperscale databases follow a different approach. Hyperscale separates the query processing engine, where the semantics of various data engines diverge, from the components that provide long-term storage and durability for the data. In this way, storage capacity can be smoothly scaled out as far as needed. The initially supported storage limit is 100 TB.
+
+High availability and named replicas share the same storage components, so no data copy is required to spin up a new replica.
+
+The following diagram illustrates the different types of nodes in a Hyperscale database:
++
+A Hyperscale database contains the following types of components: compute nodes, page servers, the log service, and Azure storage.
+
+## Compute
+
+The compute node is where the relational engine lives. The compute node is where language, query, and transaction processing occur. All user interactions with a Hyperscale database happen through compute nodes.
+
+Compute nodes have SSD-based caches called Resilient Buffer Pool Extension (RBPEX Data Cache). RBPEX Data Cache is a non-covering data cache that minimizes the number of network round trips required to fetch a page of data.
+
+Hyperscale databases have one primary compute node where the read-write workload and transactions are processed. One or more secondary compute nodes act as hot standby nodes for failover purposes. Secondary compute nodes can serve as read-only compute nodes to offload read workloads when desired. [Named replicas](service-tier-hyperscale-replicas.md#named-replica-in-preview) are secondary compute nodes designed to enable massive OLTP [read-scale out](read-scale-out.md) scenarios and to improve Hybrid Transactional and Analytical Processing (HTAP) workloads.
+
+The database engine running on Hyperscale compute nodes is the same as in other Azure SQL Database service tiers. When users interact with the database engine on Hyperscale compute nodes, the supported surface area and engine behavior are the same as in other service tiers, with the exception of [known limitations](service-tier-hyperscale.md#known-limitations).
+
+## Page server
+
+Page servers are systems representing a scaled-out storage engine. Each page server is responsible for a subset of the pages in the database. Nominally, each page server controls either up to 128 GB or up to 1 TB of data. Each page server also has a replica that is kept for redundancy and availability.
+
+The job of a page server is to serve database pages out to the compute nodes on demand, and to keep the pages updated as transactions update data. Page servers are kept up to date by playing transaction log records from the log service.
+
+Page servers also maintain covering SSD-based caches to enhance performance. Long-term storage of data pages is kept in Azure Storage for durability.
+
+## Log service
+
+The log service accepts transaction log records that correspond to data changes from the primary compute replica. Page servers then receive the log records from the log service and apply the changes to their respective slices of data. Additionally, compute secondary replicas receive log records from the log service and replay only the changes to pages already in their buffer pool or local RBPEX cache. All data changes from the primary compute replica are propagated through the log service to all the secondary compute replicas and page servers.
+
+Finally, transaction log records are pushed out to long-term storage in Azure Storage, which is a virtually infinite storage repository. This mechanism removes the need for frequent log truncation. The log service has local memory and SSD caches to speed up access to log records.
+
+The log on Hyperscale is practically infinite, with the restriction that a single transaction cannot generate more than 1 TB of log. Additionally, if using [Change Data Capture](/sql/relational-databases/track-changes/about-change-data-capture-sql-server), at most 1 TB of log can be generated since the start of the oldest active transaction. Avoid unnecessarily large transactions to stay below this limit.
+
+## Azure storage
+
+Azure Storage contains all data files in a database. Page servers keep data files in Azure Storage up to date. This storage is also used for backup purposes and may be replicated between regions based on choice of storage redundancy.
+
+Backups are implemented using storage snapshots of data files. Restore operations using snapshots are fast regardless of data size. A database can be restored to any point in time within its backup retention period.
+
+Hyperscale supports configurable storage redundancy. When creating a Hyperscale database, you can choose read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS, in preview), or locally redundant storage (LRS, in preview) Azure standard storage. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and [backup storage redundancy](automated-backups-overview.md#backup-storage-redundancy).
+
+## Next steps
+
+Learn more about Hyperscale in the following articles:
+
+- [Hyperscale service tier](service-tier-hyperscale.md)
+- [Azure SQL Database Hyperscale FAQ](service-tier-hyperscale-frequently-asked-questions-faq.yml)
+- [Quickstart: Create a Hyperscale database in Azure SQL Database](hyperscale-database-create-quickstart.md)
+- [Azure SQL Database Hyperscale named replicas FAQ](service-tier-hyperscale-named-replicas-faq.yml)
azure-sql Hyperscale Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/hyperscale-database-create-quickstart.md
+
+ Title: Create a Hyperscale database
+description: Create a Hyperscale database in Azure SQL Database using the Azure portal, Transact-SQL, PowerShell, or the Azure CLI.
+Last updated: 2/17/2022
+# Quickstart: Create a Hyperscale database in Azure SQL Database
+
+In this quickstart, you create a [logical server in Azure](logical-servers.md) and a [Hyperscale](service-tier-hyperscale.md) database in Azure SQL Database using the Azure portal, a PowerShell script, or an Azure CLI script, with the option to create one or more [High Availability (HA) replicas](service-tier-hyperscale-replicas.md#high-availability-replica). If you would like to use an existing logical server in Azure, you can also create a Hyperscale database using Transact-SQL.
+
+## Prerequisites
+
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- The latest version of either [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli-windows), if you would like to follow the quickstart programmatically. Alternately, you can complete the quickstart in the Azure portal.
+- An existing [logical server](logical-servers.md) in Azure is required if you would like to create a Hyperscale database with Transact-SQL. For this approach, you will need to install [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), or the client of your choice to run Transact-SQL commands ([sqlcmd](/sql/tools/sqlcmd-utility), etc.).
+
+## Create a Hyperscale database
+
+This quickstart creates a single database in the [Hyperscale service tier](service-tier-hyperscale.md).
+
+# [Portal](#tab/azure-portal)
+
+To create a single database in the Azure portal, this quickstart starts at the Azure SQL page.
+
+1. Browse to the [Select SQL Deployment option](https://portal.azure.com/#create/Microsoft.AzureSQL) page.
+1. Under **SQL databases**, leave **Resource type** set to **Single database**, and select **Create**.
+
+ :::image type="content" source="media/hyperscale-database-create-quickstart/azure-sql-create-resource.png" alt-text="Screenshot of the Azure SQL page in the Azure portal. The page offers the ability to select a deployment option including creating SQL databases, SQL managed instances, and SQL virtual machines." lightbox="media/hyperscale-database-create-quickstart/azure-sql-create-resource.png":::
+
+1. On the **Basics** tab of the **Create SQL Database** form, under **Project details**, select the desired Azure **Subscription**.
+1. For **Resource group**, select **Create new**, enter *myResourceGroup*, and select **OK**.
+1. For **Database name**, enter *mySampleDatabase*.
+1. For **Server**, select **Create new**, and fill out the **New server** form with the following values:
+ - **Server name**: Enter *mysqlserver*, and add some characters for uniqueness. We can't provide an exact server name to use because server names must be globally unique for all servers in Azure, not just unique within a subscription. Enter a name such as *mysqlserver12345*, and the portal will let you know if it's available.
+ - **Server admin login**: Enter *azureuser*.
+ - **Password**: Enter a password that meets requirements, and enter it again in the **Confirm password** field.
+ - **Location**: Select a location from the dropdown list.
+
+ Select **OK**.
+
+1. Under **Compute + storage**, select **Configure database**.
+1. This quickstart creates a Hyperscale database. For **Service tier**, select **Hyperscale**.
+
+ :::image type="content" source="media/hyperscale-database-create-quickstart/create-database-select-hyperscale-service-tier.png" alt-text="Screenshot of the service and compute tier configuration page for a new database in Azure SQL Database. The Hyperscale service tier has been selected." lightbox="media/hyperscale-database-create-quickstart/create-database-select-hyperscale-service-tier.png":::
+
+1. Under **Compute Hardware**, select **Change configuration**. Review the available hardware configurations and select the most appropriate configuration for your database. For this example, we will select the **Gen5** configuration.
+1. Select **OK** to confirm the hardware generation.
+1. Under **Save money**, review if you qualify to use Azure Hybrid Benefit for this database. If so, select **Yes** and then confirm you have the required license.
+1. Optionally, adjust the **vCores** slider if you would like to increase the number of vCores for your database. For this example, we will select 2 vCores.
+1. Adjust the **High-Availability Secondary Replicas** slider to create one [High Availability (HA) replica](service-tier-hyperscale-replicas.md#high-availability-replica).
+1. Select **Apply**.
+1. Carefully consider the configuration option for **Backup storage redundancy** when creating a Hyperscale database. Storage redundancy can only be specified during the database creation process for Hyperscale databases. You may choose locally redundant (preview), zone-redundant (preview), or geo-redundant storage. The selected storage redundancy option will be used for the lifetime of the database for both [data storage redundancy](hyperscale-architecture.md#azure-storage) and [backup storage redundancy](automated-backups-overview.md#backup-storage-redundancy). Existing databases can migrate to different storage redundancy using [database copy](database-copy.md) or point in time restore.
+
+ :::image type="content" source="media/hyperscale-database-create-quickstart/azure-sql-create-database-basics-tab.png" alt-text="Screenshot of the basics tab in the create database process after the Hyperscale service tier has been selected and configured." lightbox="media/hyperscale-database-create-quickstart/azure-sql-create-database-basics-tab.png":::
++
+1. Select **Next: Networking** at the bottom of the page.
+1. On the **Networking** tab, for **Connectivity method**, select **Public endpoint**.
+1. For **Firewall rules**, set **Add current client IP address** to **Yes**. Leave **Allow Azure services and resources to access this server** set to **No**.
+1. Select **Next: Security** at the bottom of the page.
+
+ :::image type="content" source="media/hyperscale-database-create-quickstart/azure-sql-database-configure-network.png" alt-text="Screenshot of the networking configuration page for a new database in Azure SQL Database that enables you to configure endpoints and optionally add a firewall rule for your client IP address." lightbox="media/hyperscale-database-create-quickstart/azure-sql-database-configure-network.png":::
+
+1. Optionally, enable [Microsoft Defender for SQL](../database/azure-defender-for-sql.md).
+1. Select **Next: Additional settings** at the bottom of the page.
+1. On the **Additional settings** tab, in the **Data source** section, for **Use existing data**, select **Sample**. This creates an AdventureWorksLT sample database so there are some tables and data to query and experiment with, as opposed to an empty database.
+1. Select **Review + create** at the bottom of the page:
+
+ :::image type="content" source="media/hyperscale-database-create-quickstart/azure-sql-create-database-sample-data.png" alt-text="Screenshot of the 'Additional Settings' screen to create a database in Azure SQL Database allows you to select sample data." lightbox="media/hyperscale-database-create-quickstart/azure-sql-create-database-sample-data.png":::
+
+1. On the **Review + create** page, after reviewing, select **Create**.
+
+# [Azure CLI](#tab/azure-cli)
+
+The Azure CLI code blocks in this section create a resource group, server, single database, and server-level IP firewall rule for access to the server. Make sure to record the generated resource group and server names, so you can manage these resources later.
++++
+### Set parameter values
+
+The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure, so the `$RANDOM` variable is used to create the server name.
+
+Before running the sample code, change the `location` as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address.
++
+```azurecli-interactive
+let "randomIdentifier=$RANDOM*$RANDOM"
+location="East US"
+resourceGroupName="myResourceGroup"
+tag="create-and-configure-database"
+serverName="mysqlserver-$randomIdentifier"
+databaseName="mySampleDatabase"
+login="azureuser"
+password='Pa$$w0rD-'$randomIdentifier # single quotes keep the literal $$ (otherwise Bash expands $$ to the shell PID)
+# Specify appropriate IP address values for your environment
+# to limit access to the SQL Database server
+startIp=0.0.0.0
+endIp=0.0.0.0
+
+echo "Using resource group $resourceGroupName with login: $login, password: $password..."
+
+```
+
+### Create a resource group
+
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group in the location specified for the `location` parameter in the prior step:
+
+```azurecli-interactive
+echo "Creating $resourceGroupName in $location..."
+az group create --name $resourceGroupName --location "$location" --tag $tag
+
+```
+
+### Create a server
+
+Create a [logical server](logical-servers.md) with the [az sql server create](/cli/azure/sql/server) command.
+
+```azurecli-interactive
+
+echo "Creating $serverName in $location..."
+az sql server create --name $serverName --resource-group $resourceGroupName --location "$location" --admin-user $login --admin-password $password
+
+```
+
+### Configure a server-based firewall rule
+
+Create a firewall rule with the [az sql server firewall-rule create](/cli/azure/sql/server/firewall-rule) command.
+
+```azurecli-interactive
+echo "Configuring firewall..."
+az sql server firewall-rule create --resource-group $resourceGroupName --server $serverName -n AllowYourIp --start-ip-address $startIp --end-ip-address $endIp
+
+```
+
+### Create a single database
+
+Create a database in the [Hyperscale service tier](service-tier-hyperscale.md) with the [az sql db create](/cli/azure/sql/db) command.
+
+When creating a Hyperscale database, carefully consider the setting for `backup-storage-redundancy`. Storage redundancy can only be specified during the database creation process for Hyperscale databases. You may choose locally redundant (preview), zone-redundant (preview), or geo-redundant storage. The selected storage redundancy option will be used for the lifetime of the database for both [data storage redundancy](hyperscale-architecture.md#azure-storage) and [backup storage redundancy](automated-backups-overview.md#backup-storage-redundancy). Existing databases can migrate to different storage redundancy using [database copy](database-copy.md) or point in time restore. Allowed values for the `backup-storage-redundancy` parameter are: `Local`, `Zone`, `Geo`. Unless explicitly specified, databases will be configured to use geo-redundant backup storage.
+
+Run the following command to create a Hyperscale database populated with AdventureWorksLT sample data. The database uses Gen5 hardware with 2 vCores. Geo-redundant backup storage is used for the database. The command also creates one [High Availability (HA) replica](service-tier-hyperscale-replicas.md#high-availability-replica).
+
+```azurecli
+az sql db create \
+ --resource-group $resourceGroupName \
+ --server $serverName \
+ --name $databaseName \
+ --sample-name AdventureWorksLT \
+ --edition Hyperscale \
+ --compute-model Provisioned \
+ --family Gen5 \
+ --capacity 2 \
+ --backup-storage-redundancy Geo \
+ --ha-replicas 1
+
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+You can create a resource group, server, and single database using Azure PowerShell.
+
+### Launch Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com).
+
+When Cloud Shell opens, verify that **PowerShell** is selected for your environment; subsequent sessions will then use PowerShell. Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and press **Enter** to run them.
+
+### Set parameter values
+
+The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure so the Get-Random cmdlet is used to create the server name.
+
+Before running the sample code, change the `location` as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address.
+
+```azurepowershell-interactive
+ # Set variables for your server and database
+ $resourceGroupName = "myResourceGroup"
+ $location = "eastus"
+ $adminLogin = "azureuser"
+ $password = 'Pa$$w0rD-' + (Get-Random) # single quotes keep the literal $$, which PowerShell would otherwise expand inside double quotes
+ $serverName = "mysqlserver-$(Get-Random)"
+ $databaseName = "mySampleDatabase"
+
+ # The ip address range that you want to allow to access your server
+ $startIp = "0.0.0.0"
+ $endIp = "0.0.0.0"
+
+ # Show randomized variables
+ Write-host "Resource group name is" $resourceGroupName
+ Write-host "Server name is" $serverName
+ Write-host "Password is" $password
+
+```
+
+### Create resource group
+
+Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed.
+
+```azurepowershell-interactive
+ Write-host "Creating resource group..."
+ $resourceGroup = New-AzResourceGroup -Name $resourceGroupName -Location $location -Tag @{Owner="SQLDB-Samples"}
+ $resourceGroup
+
+```
+
+### Create a server
+
+Create a server with the [New-AzSqlServer](/powershell/module/az.sql/new-azsqlserver) cmdlet.
+
+```azurepowershell-interactive
+ Write-host "Creating primary server..."
+ $server = New-AzSqlServer -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -Location $location `
+ -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential `
+ -ArgumentList $adminLogin, $(ConvertTo-SecureString -String $password -AsPlainText -Force))
+ $server
+
+```
+
+### Create a firewall rule
+
+Create a server firewall rule with the [New-AzSqlServerFirewallRule](/powershell/module/az.sql/new-azsqlserverfirewallrule) cmdlet.
+
+```azurepowershell-interactive
+ Write-host "Configuring server firewall rule..."
+ $serverFirewallRule = New-AzSqlServerFirewallRule -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -FirewallRuleName "AllowedIPs" -StartIpAddress $startIp -EndIpAddress $endIp
+ $serverFirewallRule
+
+```
+
+### Create a single database
+
+Create a single database with the [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) cmdlet.
+
+When creating a Hyperscale database, carefully consider the setting for `BackupStorageRedundancy`. Storage redundancy can only be specified during the database creation process for Hyperscale databases. You may choose locally redundant (preview), zone-redundant (preview), or geo-redundant storage. The selected storage redundancy option will be used for the lifetime of the database for both [data storage redundancy](hyperscale-architecture.md#azure-storage) and [backup storage redundancy](automated-backups-overview.md#backup-storage-redundancy). Existing databases can migrate to a different storage redundancy option using [database copy](database-copy.md) or point-in-time restore. Allowed values for the `BackupStorageRedundancy` parameter are: `Local`, `Zone`, `Geo`. Unless explicitly specified, databases will be configured to use geo-redundant backup storage.
+
+Run the following command to create a Hyperscale database populated with AdventureWorksLT sample data. The database uses Gen5 hardware with 2 vCores. Geo-redundant backup storage is used for the database. The command also creates one [High Availability (HA) replica](service-tier-hyperscale-replicas.md#high-availability-replica).
+
+```azurepowershell-interactive
+ Write-host "Creating a gen5 2 vCore Hyperscale database..."
+ $database = New-AzSqlDatabase -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -DatabaseName $databaseName `
+ -Edition Hyperscale `
+ -ComputeModel Provisioned `
+ -ComputeGeneration Gen5 `
+ -VCore 2 `
+ -MinimumCapacity 2 `
+ -SampleName "AdventureWorksLT" `
+ -BackupStorageRedundancy Geo `
+ -HighAvailabilityReplicaCount 1
+ $database
+
+```
+
+# [Transact-SQL](#tab/t-sql)
+
+To create a Hyperscale database with Transact-SQL, you must first [create or identify connection information for an existing logical server](logical-servers.md) in Azure.
+
+Connect to the master database using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), or the client of your choice to run Transact-SQL commands ([sqlcmd](/sql/tools/sqlcmd-utility), etc.).
+
+When creating a Hyperscale database, carefully consider the setting for `BACKUP_STORAGE_REDUNDANCY`. Storage redundancy can only be specified during the database creation process for Hyperscale databases. You may choose locally redundant (preview), zone-redundant (preview), or geo-redundant storage. The selected storage redundancy option will be used for the lifetime of the database for both [data storage redundancy](hyperscale-architecture.md#azure-storage) and [backup storage redundancy](automated-backups-overview.md#backup-storage-redundancy). Existing databases can migrate to a different storage redundancy option using [database copy](database-copy.md) or point-in-time restore. Allowed values for the `BACKUP_STORAGE_REDUNDANCY` option are: `LOCAL`, `ZONE`, `GEO`. Unless explicitly specified, databases will be configured to use geo-redundant backup storage.
+
+Run the following Transact-SQL command to create a new Hyperscale database with Gen5 hardware, 2 vCores, and geo-redundant backup storage. You must specify both the edition and service objective in the `CREATE DATABASE` statement. Refer to the [resource limits](./resource-limits-vcore-single-databases.md#hyperscaleprovisioned-computegen4) for a list of valid service objectives, such as `HS_Gen5_2`.
+
+This example code creates an empty database. If you would like to create a database with sample data, use the Azure portal, Azure CLI, or PowerShell examples in this quickstart.
+
+```sql
+CREATE DATABASE [myHyperscaleDatabase]
+ (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_2', BACKUP_STORAGE_REDUNDANCY= 'GEO');
+GO
+```
+
+Refer to [CREATE DATABASE (Transact-SQL)](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-current&preserve-view=true) for more parameters and options.
+
+To add one or more [High Availability (HA) replicas](service-tier-hyperscale-replicas.md#high-availability-replica) to your database, use the **Compute and storage** pane for the database in the Azure portal, the [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) PowerShell command, or the [az sql db update](/cli/azure/sql/db#az_sql_db_update) Azure CLI command.
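+
+For example, a sketch with placeholder resource names (adjust them to match your own server and database) that sets the replica count with PowerShell:
+
+```azurepowershell-interactive
+ # Placeholder names; replace with your resource group, server, and database
+ Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
+     -ServerName "mysqlserver" `
+     -DatabaseName "myHyperscaleDatabase" `
+     -HighAvailabilityReplicaCount 2
+```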
+++
+## Query the database
+
+Once your database is created, you can use the **Query editor (preview)** in the Azure portal to connect to the database and query data. Alternatively, you can query the database by [connecting with Azure Data Studio](/sql/azure-data-studio/quickstart-sql-database), [SQL Server Management Studio (SSMS)](connect-query-ssms.md), or the client of your choice to run Transact-SQL commands ([sqlcmd](/sql/tools/sqlcmd-utility), etc.).
+
+1. In the portal, search for and select **SQL databases**, and then select your database from the list.
+1. On the page for your database, select **Query editor (preview)** in the left menu.
+1. Enter your server admin login information, and select **OK**.
+
+ :::image type="content" source="media/hyperscale-database-create-quickstart/query-editor-azure-portal-authenticate.png" alt-text="Screenshot of the Query editor (preview) pane in Azure SQL Database gives two options for authentication. In this example, we have filled in Login and Password under SQL server authentication." lightbox="media/hyperscale-database-create-quickstart/query-editor-azure-portal-authenticate.png":::
+
+1. If you created your Hyperscale database from the AdventureWorksLT sample database, enter the following query in the **Query editor** pane.
+
+ ```sql
+ SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
+ FROM SalesLT.ProductCategory pc
+ JOIN SalesLT.Product p
+ ON pc.productcategoryid = p.productcategoryid;
+ ```
+
+ If you created an empty database using [the Transact-SQL sample code](?tabs=t-sql#create-a-hyperscale-database), enter another example query in the **Query editor** pane, such as the following:
+
+ ```sql
+ CREATE TABLE dbo.TestTable(
+ TestTableID int IDENTITY(1,1) NOT NULL,
+ TestTime datetime NOT NULL,
+ TestMessage nvarchar(4000) NOT NULL,
+ CONSTRAINT PK_TestTable_TestTableID PRIMARY KEY CLUSTERED (TestTableID ASC)
+ )
+ GO
+
+ ALTER TABLE dbo.TestTable ADD CONSTRAINT DF_TestTable_TestTime DEFAULT (getdate()) FOR TestTime
+ GO
+
+ INSERT dbo.TestTable (TestMessage)
+ VALUES (N'This is a test');
+ GO
+
+ SELECT TestTableID, TestTime, TestMessage
+ FROM dbo.TestTable;
+ GO
+ ```
+
+1. Select **Run**, and then review the query results in the **Results** pane.
+
+ :::image type="content" source="media/hyperscale-database-create-quickstart/query-editor-azure-portal-run-query.png" alt-text="Screenshot of the Query editor (preview) pane in Azure SQL Database after a query has been run against AdventureWorks sample data." lightbox="media/hyperscale-database-create-quickstart/query-editor-azure-portal-run-query.png":::
+
+1. Close the **Query editor** page, and select **OK** when prompted to discard your unsaved edits.
+
+## Clean up resources
+
+Keep the resource group, server, and single database to go on to the next steps, and learn how to connect and query your database with different methods.
+
+When you're finished using these resources, you can delete the resource group you created, which will also delete the server and single database within it.
+
+# [Portal](#tab/azure-portal)
+
+To delete **myResourceGroup** and all its resources using the Azure portal:
+
+1. In the portal, search for and select **Resource groups**, and then select **myResourceGroup** from the list.
+1. On the resource group page, select **Delete resource group**.
+1. Under **Type the resource group name**, enter *myResourceGroup*, and then select **Delete**.
+
+# [Azure CLI](#tab/azure-cli)
+
+Unless you have an ongoing need for these resources, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it. Some of these resources may take a while to create, and to delete.
+
+```azurecli-interactive
+az group delete --name myResourceGroup
+
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+To delete the resource group and all its resources, run the following PowerShell cmdlet, using the name of your resource group:
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name $resourceGroupName
+
+```
+
+# [Transact-SQL](#tab/t-sql)
+
+This option deletes only the Hyperscale database. It doesn't remove any logical SQL servers or resource groups that you may have created in addition to the database.
+
+To delete a Hyperscale database with Transact-SQL, connect to the master database using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), or the client of your choice to run Transact-SQL commands ([sqlcmd](/sql/tools/sqlcmd-utility), etc.).
+
+Run the following Transact-SQL command to drop the database:
+
+```sql
+DROP DATABASE [myHyperscaleDatabase];
+GO
+```
+++
+## Next steps
+
+[Connect and query](connect-query-content-reference-guide.md) your database using different tools and languages:
+- [Connect and query using SQL Server Management Studio](connect-query-ssms.md)
+- [Connect and query using Azure Data Studio](/sql/azure-data-studio/quickstart-sql-database?toc=/azure/sql-database/toc.json)
+
+Learn more about Hyperscale databases in the following articles:
+
+- [Hyperscale service tier](service-tier-hyperscale.md)
+- [Azure SQL Database Hyperscale FAQ](service-tier-hyperscale-frequently-asked-questions-faq.yml)
+- [Hyperscale secondary replicas](service-tier-hyperscale-replicas.md)
+- [Azure SQL Database Hyperscale named replicas FAQ](service-tier-hyperscale-named-replicas-faq.yml)
azure-sql Manage Hyperscale Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/manage-hyperscale-database.md
+
+ Title: How to manage a Hyperscale database
+description: How to manage a Hyperscale database, including migrating to Hyperscale, restoring to a different region, and reverse migration.
+++++++ Last updated : 2/17/2022++
+# How to manage a Hyperscale database
++
+The [Hyperscale service tier](service-tier-hyperscale.md) provides a highly scalable storage and compute performance tier that leverages the Azure architecture to scale out storage and compute resources for an Azure SQL Database substantially beyond the limits available for the General Purpose and Business Critical service tiers. This article describes how to carry out essential administration tasks for Hyperscale databases, including migrating an existing database to Hyperscale, restoring a Hyperscale database to a different region, reverse migrating from Hyperscale to another service tier, and monitoring the status of ongoing and recent operations against a Hyperscale database.
+
+Learn how to create a new Hyperscale database in [Quickstart: Create a Hyperscale database in Azure SQL Database](hyperscale-database-create-quickstart.md).
+
+## Migrate an existing database to Hyperscale
+
+You can migrate existing databases in Azure SQL Database to Hyperscale using the Azure portal, the Azure CLI, PowerShell, or Transact-SQL.
+
+The time required to move an existing database to Hyperscale consists of the time to copy data and the time to replay the changes made in the source database while copying data. The data copy time is proportional to data size. We recommend migrating to Hyperscale during a period of lower write activity so that the time to replay accumulated changes will be shorter.
+
+You will only experience a short period of downtime, generally a few minutes, during the final cutover to the Hyperscale service tier.
+
+### Prerequisites
+
+To move a database that is a part of a [geo-replication](active-geo-replication-overview.md) relationship, either as the primary or as a secondary, to Hyperscale, you need to first terminate data replication between the primary and secondary replica. Databases in a [failover group](auto-failover-group-overview.md) must be removed from the group first.
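+
+As a sketch (the server, group, and database names below are placeholders), you can remove a database from a failover group with [Remove-AzSqlDatabaseFromFailoverGroup](/powershell/module/az.sql/remove-azsqldatabasefromfailovergroup), and terminate a geo-replication link with [Remove-AzSqlDatabaseSecondary](/powershell/module/az.sql/remove-azsqldatabasesecondary):
+
+```powershell-interactive
+# Placeholder names; replace with your own resources
+$resourceGroupName = "myResourceGroup"
+$serverName = "primaryServer"
+$databaseName = "mySampleDatabase"
+
+# Remove the database from a failover group, if it belongs to one
+$database = Get-AzSqlDatabase -ResourceGroupName $resourceGroupName `
+    -ServerName $serverName -DatabaseName $databaseName
+Remove-AzSqlDatabaseFromFailoverGroup -ResourceGroupName $resourceGroupName `
+    -ServerName $serverName -FailoverGroupName "myFailoverGroup" -Database $database
+
+# Terminate geo-replication between the primary and a secondary server
+Remove-AzSqlDatabaseSecondary -ResourceGroupName $resourceGroupName `
+    -ServerName $serverName -DatabaseName $databaseName `
+    -PartnerResourceGroupName "mySecondaryResourceGroup" `
+    -PartnerServerName "secondaryServer"
+```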
+
+Once a database has been moved to Hyperscale, you can create a new Hyperscale geo-replica for that database. Geo-replication for Hyperscale is in preview with certain [limitations](active-geo-replication-overview.md).
+
+### How to migrate a database to the Hyperscale service tier
+
+To migrate an existing database in Azure SQL Database to the Hyperscale service tier, first identify your target service objective. Review [resource limits for single databases](resource-limits-vcore-single-databases.md#hyperscaleprovisioned-computegen4) if you aren't sure which service objective is right for your database. In many cases, you can choose a service objective with the same number of vCores and the same hardware generation as the original database. If needed, you will be able to [adjust this later with minimal downtime](scale-resources.md).
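+
+If you'd like to list the Hyperscale service objectives available in a region before choosing, one option is [az sql db list-editions](/cli/azure/sql/db#az_sql_db_list_editions) (the location below is an example; substitute your own):
+
+```azurecli-interactive
+az sql db list-editions --location eastus --edition Hyperscale --output table
+```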
+
+Select the tab for your preferred tool to migrate your database:
+
+# [Portal](#tab/azure-portal)
+
+The Azure portal enables you to migrate to the Hyperscale service tier by modifying the pricing tier for your database.
++
+1. Navigate to the database you wish to migrate in the Azure portal.
+1. In the left navigation bar, select **Compute + storage**.
+1. Select the **Service tier** drop-down to expand the options for service tiers.
+1. Select **Hyperscale (On-demand scalable storage)** from the dropdown menu.
+1. Review the **Hardware Configuration** listed. If desired, select **Change configuration** to select the appropriate hardware configuration for your workload.
+1. Review the option to **Save money**. Select it if you qualify for Azure Hybrid Benefit and wish to use it for this database.
+1. Select the **vCores** slider if you wish to change the number of vCores available for your database under the Hyperscale service tier.
+1. Select the **High-Availability Secondary Replicas** slider if you wish to change the number of replicas under the Hyperscale service tier.
+1. Select **Apply**.
+
+You can [monitor operations for a Hyperscale database](#monitor-operations-for-a-hyperscale-database) while the operation is ongoing.
+
+# [Azure CLI](#tab/azure-cli)
+
+This code sample calls [az sql db update](/cli/azure/sql/db#az_sql_db_update) to migrate an existing database in Azure SQL Database to the Hyperscale service tier. You must specify both the edition and service objective.
+
+Replace `resourceGroupName`, `serverName`, `databaseName`, and `serviceObjective` with the appropriate values before running the following code sample:
+
+```azurecli-interactive
+resourceGroupName="myResourceGroup"
+serverName="server01"
+databaseName="mySampleDatabase"
+serviceObjective="HS_Gen5_2"
+
+az sql db update -g $resourceGroupName -s $serverName -n $databaseName \
+ --edition Hyperscale --service-objective $serviceObjective
+
+```
+
+You can [monitor operations for a Hyperscale database](#monitor-operations-for-a-hyperscale-database) while the operation is ongoing.
+
+# [PowerShell](#tab/azure-powershell)
+
+The following example uses the [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) cmdlet to migrate an existing database in Azure SQL Database to the Hyperscale service tier. You must specify both the edition and service objective.
+
+Replace `$resourceGroupName`, `$serverName`, `$databaseName`, and `$serviceObjective` with the appropriate values before running this code sample:
+
+```powershell-interactive
+$resourceGroupName = "myResourceGroup"
+$serverName = "server01"
+$databaseName = "mySampleDatabase"
+$serviceObjective = "HS_Gen5_2"
+
+Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName `
+ -DatabaseName $databaseName -Edition "Hyperscale" `
+ -RequestedServiceObjectiveName $serviceObjective
+
+```
+
+You can [monitor operations for a Hyperscale database](#monitor-operations-for-a-hyperscale-database) while the operation is ongoing.
+
+# [Transact-SQL](#tab/t-sql)
+
+To migrate an existing database in Azure SQL Database to the Hyperscale service tier with Transact-SQL, first connect to the master database on your [logical SQL server](logical-servers.md) using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+
+You must specify both the edition and service objective in the [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?preserve-view=true&view=azuresqldb-current) statement.
+
+This example statement migrates a database named `mySampleDatabase` to the Hyperscale service tier with the `HS_Gen5_2` service objective. Replace the database name with the appropriate value before executing the statement.
+
+```sql
+ALTER DATABASE [mySampleDatabase]
+ MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_2');
+GO
+```
+
+You can [monitor operations for a Hyperscale database](#monitor-operations-for-a-hyperscale-database) while the operation is ongoing.
+++
+## <a id="reverse-migrate-from-hyperscale"></a>Reverse migrate from Hyperscale (preview)
+
+Reverse migration to the General Purpose service tier allows customers who have recently migrated an existing database in Azure SQL Database to the Hyperscale service tier to move back in an emergency, should Hyperscale not meet their needs. While reverse migration is initiated by a service tier change, it's essentially a size-of-data move between different architectures.
+
+### Limitations for reverse migration
+
+Reverse migration is available under the following conditions:
+
+- Reverse migration is only available within 45 days of the original migration to Hyperscale.
+- Databases originally created in the Hyperscale service tier are not eligible for reverse migration.
+- You may reverse migrate to the [General Purpose](service-tier-general-purpose.md) service tier only. Your migration from Hyperscale to General Purpose can target either the serverless or provisioned compute tiers. If you wish to migrate the database to another service tier, such as [Business Critical](service-tier-business-critical.md) or a [DTU based service tier](service-tiers-dtu.md), first reverse migrate to the General Purpose service tier, then change the service tier.
+
+### Duration and downtime
+
+Unlike regular service level objective change operations in Hyperscale, migrating to Hyperscale and reverse migration to General Purpose are size-of-data operations.
+
+The duration of a reverse migration depends mainly on the size of the database and concurrent write activities happening during the migration. The number of vCores you assign to the target General Purpose database will also impact the duration of the reverse migration. We recommend that the target General Purpose database be provisioned with a number of vCores greater than or equal to the number of vCores assigned to the source Hyperscale database to sustain similar workloads.
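+
+To confirm how many vCores the source Hyperscale database currently uses before sizing the target, you can query its current SKU. A sketch with placeholder names:
+
+```azurecli-interactive
+az sql db show --resource-group myResourceGroup --server server01 \
+    --name mySampleDatabase --query "currentSku.capacity"
+```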
+
+During reverse migration, the source Hyperscale database may experience performance degradation if under substantial load. Specifically, transaction log rate may be reduced (throttled) to ensure that reverse migration is making progress.
+
+You will only experience a short period of downtime, generally a few minutes, during the final cutover to the new target General Purpose database.
+
+### Prerequisites
+
+Before you initiate a reverse migration from Hyperscale to the General Purpose service tier, you must ensure that your database meets the [limitations for reverse migration](#limitations-for-reverse-migration) and:
+
+- Your database does not have Geo Replication enabled.
+- Your database does not have named replicas.
+- Your database (allocated size) is small enough to fit into the target service tier.
+- If you specify max database size for the target General Purpose database, ensure the allocated size of the database is small enough to fit into that maximum size.
+
+Prerequisite checks will occur before a reverse migration starts. If prerequisites are not met, the reverse migration will fail immediately.
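+
+One way to check the database's space usage against the target size is to query file space in the user database. A sketch (assumes 8-KB pages; space reporting details can vary by service tier):
+
+```sql
+-- Approximate used and allocated data space for the current database, in GB
+SELECT SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint)) * 8 / 1048576. AS used_space_gb,
+       SUM(CAST(size AS bigint)) * 8 / 1048576. AS allocated_space_gb
+FROM sys.database_files
+WHERE type_desc = 'ROWS';
+```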
+
+### Backup policies
+
+You will be [billed using the regular pricing](automated-backups-overview.md?tabs=single-database#backup-storage-costs) for all existing database backups within the [configured retention period](automated-backups-overview.md#backup-retention). You will be billed for the Hyperscale backup storage snapshots and for size-of-data storage blobs that must be retained to be able to restore the backup.
+
+You can migrate a database to Hyperscale and reverse migrate back to General Purpose multiple times. Only backups from the current and once-previous tier of your database will be available for restore. If you have moved from the General Purpose service tier to Hyperscale and back to General Purpose, the only backups available are the ones from the current General Purpose database and the immediately previous Hyperscale database. These retained backups are billed as per Azure SQL Database billing. Any previous tiers tried won't have backups available and will not be billed.
+
+For example, you could migrate between Hyperscale and non-Hyperscale service tiers:
+
+1. General Purpose
+1. Migrate to Hyperscale
+1. Reverse migrate to General Purpose
+1. Service tier change to Business Critical
+1. Migrate to Hyperscale
+1. Reverse migrate to General Purpose
+
+In this case, the only backups available would be from steps 5 and 6 of the timeline, if they are still within the [configured retention period](automated-backups-overview.md#backup-retention). Any backups from previous steps would be unavailable. This should be a careful consideration when attempting multiple reverse migrations from Hyperscale to the General Purpose tier.
+
+### How to reverse migrate a Hyperscale database to the General Purpose service tier
+
+To reverse migrate an existing Hyperscale database in Azure SQL Database to the General Purpose service tier, first identify your target service objective in the General Purpose service tier and whether you wish to migrate to the provisioned or serverless compute tier. Review [resource limits for single databases](resource-limits-vcore-single-databases.md#gen5-compute-generation-part-1) if you aren't sure which service objective is right for your database.
+
+If you wish to perform an additional service tier change after reverse migrating to General Purpose, identify your eventual target service objective as well and ensure that your database's allocated size is small enough to fit in that service objective.
+
+Select the tab for your preferred method to reverse migrate your database:
+
+# [Portal](#tab/azure-portal)
+
+The Azure portal enables you to reverse migrate to the General Purpose service tier by modifying the pricing tier for your database.
++
+1. Navigate to the database you wish to migrate in the Azure portal.
+1. In the left navigation bar, select **Compute + storage**.
+1. Select the **Service tier** drop-down to expand the options for service tiers.
+1. Select **General Purpose (Scalable compute and storage options)** from the dropdown menu.
+1. Review the **Hardware Configuration** listed. If desired, select **Change configuration** to select the appropriate hardware configuration for your workload.
+1. Review the option to **Save money**. Select it if you qualify for Azure Hybrid Benefit and wish to use it for this database.
+1. Select the **vCores** slider if you wish to change the number of vCores available for your database under the General Purpose service tier.
+1. Select **Apply**.
+
+# [Azure CLI](#tab/azure-cli)
+
+This code sample calls [az sql db update](/cli/azure/sql/db#az_sql_db_update) to reverse migrate an existing Hyperscale database to the General Purpose service tier. You must specify both the edition and service objective. You may select either `Provisioned` or `Serverless` for the target compute model.
+
+Replace `resourceGroupName`, `serverName`, `databaseName`, and `serviceObjective` with the appropriate values before running the following code sample:
+
+```azurecli-interactive
+resourceGroupName="myResourceGroup"
+serverName="server01"
+databaseName="mySampleDatabase"
+serviceObjective="GP_Gen5_2"
+computeModel="Provisioned"
+
+az sql db update -g $resourceGroupName -s $serverName -n $databaseName \
+ --edition GeneralPurpose --service-objective $serviceObjective \
+ --compute-model $computeModel
+
+```
+
+You can optionally include the `--max-size` argument. If the `--max-size` value exceeds the valid maximum size for the target service objective, an error will be returned. If the `--max-size` argument is not specified, the operation will default to the maximum size available for the given service objective. The following example specifies `--max-size`:
+
+```azurecli-interactive
+resourceGroupName="myResourceGroup"
+serverName="server01"
+databaseName="mySampleDatabase"
+serviceObjective="GP_Gen5_2"
+computeModel="Provisioned"
+maxsize="200GB"
+
+az sql db update -g $resourceGroupName -s $serverName -n $databaseName \
+ --edition GeneralPurpose --service-objective $serviceObjective \
+ --compute-model $computeModel --max-size $maxsize
+
+```
+
+You can [monitor operations for a Hyperscale database](#monitor-operations-for-a-hyperscale-database) while the operation is ongoing.
+
+# [PowerShell](#tab/azure-powershell)
+
+This code sample uses the [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) cmdlet to reverse migrate an existing database from the Hyperscale service tier to the General Purpose service tier. You must specify both the edition and service objective. You may select either `Provisioned` or `Serverless` for the target compute tier.
+
+Replace `$resourceGroupName`, `$serverName`, `$databaseName`, `$serviceObjective`, and `$computeModel` with the appropriate values before running this code sample:
+
+```powershell-interactive
+$resourceGroupName = "myResourceGroup"
+$serverName = "server01"
+$databaseName = "mySampleDatabase"
+$serviceObjective = "GP_Gen5_2"
+$computeModel = "Provisioned"
+
+Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName `
+    -DatabaseName $databaseName -Edition "GeneralPurpose" -ComputeModel $computeModel `
+ -RequestedServiceObjectiveName $serviceObjective
+
+```
+
+You can optionally include the `MaxSizeBytes` parameter. If the `MaxSizeBytes` value exceeds the valid maximum size for the target service objective, an error will be returned. If the `MaxSizeBytes` parameter is not specified, the operation will default to the maximum size available for the given service objective. The following example specifies `MaxSizeBytes`:
+
+```powershell-interactive
+$resourceGroupName = "myResourceGroup"
+$serverName = "server01"
+$databaseName = "mySampleDatabase"
+$serviceObjective = "GP_Gen5_2"
+$computeModel = "Provisioned"
+$maxSizeBytes = "268435456000"
+
+Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName `
+    -DatabaseName $databaseName -Edition "GeneralPurpose" -ComputeModel $computeModel `
+ -RequestedServiceObjectiveName $serviceObjective -MaxSizeBytes $maxSizeBytes
+
+```
+
+You can [monitor operations for a Hyperscale database](#monitor-operations-for-a-hyperscale-database) while the operation is ongoing.
+
+# [Transact-SQL](#tab/t-sql)
+
+To reverse migrate a Hyperscale database to the General Purpose service tier with Transact-SQL, first connect to the master database on your [logical SQL server](logical-servers.md) using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+
+You must specify both the edition and service objective in the [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?preserve-view=true&view=azuresqldb-current) statement.
+
+This example statement migrates a database named `mySampleDatabase` to the General Purpose service tier with the `GP_Gen5_2` service objective. Replace the database name and service objective with the appropriate values before executing the statement.
+
+```sql
+ALTER DATABASE [mySampleDatabase]
+ MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2');
+GO
+```
+
+You can optionally include the `MAXSIZE` option. If the `MAXSIZE` value exceeds the valid maximum size for the target service objective, an error will be returned. If the `MAXSIZE` option is not specified, the operation will default to the maximum size available for the given service objective. The following example specifies `MAXSIZE`:
+
+```sql
+ALTER DATABASE [mySampleDatabase]
+ MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2', MAXSIZE = 200 GB);
+GO
+```
+
+You can [monitor operations for a Hyperscale database](#monitor-operations-for-a-hyperscale-database) while the operation is ongoing.
+++
+## Monitor operations for a Hyperscale database
+
+You can monitor the status of ongoing or recently completed operations for a database in Azure SQL Database using the Azure portal, the Azure CLI, PowerShell, or Transact-SQL.
+
+Select the tab for your preferred method to monitor operations.
+
+# [Portal](#tab/azure-portal)
+
+The Azure portal shows a notification for a database in Azure SQL Database when an operation such as a migration, reverse migration, or restore is in progress.
++
+1. Navigate to the database in the Azure portal.
+1. In the left navigation bar, select **Overview**.
+1. Review the **Notifications** section at the bottom of the right pane. If operations are ongoing, a notification box will appear.
+1. Select the notification box to view details.
+1. The **Ongoing operations** pane will open. Review the details of the ongoing operations.
++
+# [Azure CLI](#tab/azure-cli)
+
+This code sample calls [az sql db op list](/cli/azure/sql/db/op#az-sql-db-op-list) to return recent or ongoing operations for a database in Azure SQL Database.
+
+Replace `resourceGroupName`, `serverName`, and `databaseName` with the appropriate values before running the following code sample:
+
+```azurecli-interactive
+resourceGroupName="myResourceGroup"
+serverName="server01"
+databaseName="mySampleDatabase"
+
+az sql db op list -g $resourceGroupName -s $serverName --database $databaseName
+
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+The [Get-AzSqlDatabaseActivity](/powershell/module/az.sql/get-azsqldatabaseactivity) cmdlet returns recent or ongoing operations for a database in Azure SQL Database.
+
+Set the `$resourceGroupName`, `$serverName`, and `$databaseName` variables to the appropriate values for your database before running the sample code:
+
+```powershell-interactive
+$resourceGroupName = "myResourceGroup"
+$serverName = "server01"
+$databaseName = "mySampleDatabase"
+
+Get-AzSqlDatabaseActivity -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $databaseName
+
+```
+
+# [Transact-SQL](#tab/t-sql)
+
+To monitor operations for a Hyperscale database, first connect to the master database on your [logical server](logical-servers.md) using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), or the client of your choice to run Transact-SQL commands.
+
+Query the [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) Dynamic Management View to review information about recent operations performed on databases on your [logical server](logical-servers.md).
+
+This code sample returns all entries in `sys.dm_operation_status` for the specified database, with the most recently started operations first. Replace the database name with the appropriate value before running the code sample.
+
+```sql
+SELECT *
+FROM sys.dm_operation_status
+WHERE major_resource_id = 'mySampleDatabase'
+ORDER BY start_time DESC;
+GO
+```
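For long-running operations such as migration to Hyperscale, it can be useful to watch only the operations still in flight. As a sketch (reusing the sample database name from above), this query filters `sys.dm_operation_status` to in-progress entries and shows how far along each one is:

```sql
SELECT operation, state_desc, percent_complete, start_time, last_modify_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'mySampleDatabase'
    AND state_desc = 'IN_PROGRESS'
ORDER BY start_time DESC;
GO
```

Re-run the query periodically to track `percent_complete` until the operation reaches a terminal state.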
+++
+## View databases in the Hyperscale service tier
+
+After migrating a database to Hyperscale or reconfiguring a database within the Hyperscale service tier, you may want to view or document the configuration of your Hyperscale database.
+
+# [Portal](#tab/azure-portal)
+
+The Azure portal shows a list of all databases on a [logical server](logical-servers.md). The **Pricing tier** column includes the service tier for each database.
++
+1. Navigate to your [logical server](logical-servers.md) in the Azure portal.
+1. In the left navigation bar, select **Overview**.
+1. Scroll to the list of resources at the bottom of the pane. The window will display SQL elastic pools and databases on the logical server.
+1. Review the **Pricing tier** column to identify databases in the Hyperscale service tier.
+
+# [Azure CLI](#tab/azure-cli)
+
+This Azure CLI code sample calls [az sql db list](/cli/azure/sql/db#az-sql-db-list) to list Hyperscale databases on a [logical server](logical-servers.md) with their name, location, service level objective, maximum size, and number of high availability replicas.
+
+Replace `resourceGroupName` and `serverName` with the appropriate values before running the following code sample:
+
+```azurecli-interactive
+resourceGroupName="myResourceGroup"
+serverName="server01"
+
+az sql db list -g $resourceGroupName -s $serverName --query "[].{Name:name, Location:location, SLO:currentServiceObjectiveName, Tier:currentSku.tier, maxSizeBytes:maxSizeBytes,HAreplicas:highAvailabilityReplicaCount}[?Tier=='Hyperscale']" --output table
+
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+The Azure PowerShell [Get-AzSqlDatabase](/powershell/module/az.sql/get-azsqldatabase) cmdlet returns a list of Hyperscale databases on a [logical server](logical-servers.md) with their name, location, service level objective, maximum size, and number of high availability replicas.
+
+Set the `$resourceGroupName` and `$serverName` parameters to the appropriate values before running the sample code:
+
+```powershell-interactive
+$resourceGroupName = "myResourceGroup"
+$serverName = "server01"
+
+Get-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName | `
+ Where-Object { $_.Edition -eq 'Hyperscale' } | `
+ Select-Object DatabaseName, Location, currentServiceObjectiveName, Edition, `
+ MaxSizeBytes, HighAvailabilityReplicaCount | `
+ Format-Table
+
+```
+
+Review the **Edition** column to identify databases in the Hyperscale service tier.
+
+# [Transact-SQL](#tab/t-sql)
+
+To review the service tiers of all Hyperscale databases on a [logical server](logical-servers.md) with Transact-SQL, first connect to the master database using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+
+Query the [sys.database_service_objectives](/sql/relational-databases/system-catalog-views/sys-database-service-objectives-azure-sql-database) system catalog view to review databases in the Hyperscale service tier:
+
+```sql
+SELECT d.name, dso.edition, dso.service_objective
+FROM sys.database_service_objectives AS dso
+JOIN sys.databases as d on dso.database_id = d.database_id
+WHERE dso.edition = 'Hyperscale';
+GO
+```
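When connected to a user database rather than `master`, you can also check that database's own tier with `DATABASEPROPERTYEX`. A minimal sketch; both property names (`Edition` and `ServiceObjective`) are documented for Azure SQL Database:

```sql
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;
GO
```

For a Hyperscale database, the `edition` column returns `Hyperscale`.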
+++
+## Next steps
+
+Learn more about Hyperscale databases in the following articles:
+
+- [Quickstart: Create a Hyperscale database in Azure SQL Database](hyperscale-database-create-quickstart.md)
+- [Hyperscale service tier](service-tier-hyperscale.md)
+- [Azure SQL Database Hyperscale FAQ](service-tier-hyperscale-frequently-asked-questions-faq.yml)
+- [Hyperscale secondary replicas](service-tier-hyperscale-replicas.md)
+- [Azure SQL Database Hyperscale named replicas FAQ](service-tier-hyperscale-named-replicas-faq.yml)
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-vcore-single-databases.md
This article provides the detailed resource limits for single databases in Azure
> [!IMPORTANT]
> Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md).
-Each read-only replica of a database has its own resources, such as vCores, memory, data IOPS, TempDB, workers, and sessions. Each read-only replica is subject to the resource limits detailed later in this article.
+Each read-only replica of a database has its own resources, such as vCores, memory, data IOPS, tempdb, workers, and sessions. Each read-only replica is subject to the resource limits detailed later in this article.
You can set the service tier, compute size (service objective), and storage amount for a single database using:
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A| |Max data size (GB)|512|1024|1024|1024|1536| |Max log size (GB) <sup>2</sup>|154|307|307|307|461|
-|TempDB max data size (GB)|32|64|128|192|256|
+|Tempdb max data size (GB)|32|64|128|192|256|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>3</sup>|320|640|1280|1920|2560|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A| |Max data size (GB)|1536|3072|3072|3072| |Max log size (GB) <sup>1</sup>|461|461|461|922|
-|TempDB max data size (GB)|320|384|448|512|
+|Tempdb max data size (GB)|320|384|448|512|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|3200|3840|4480|5120|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A| |Max data size (GB)|3072|3072|4096|4096|4096| |Max log size (GB) <sup>1</sup>|922|922|1024|1024|1024|
-|TempDB max data size (GB)|576|640|768|1024|1280|
+|Tempdb max data size (GB)|576|640|768|1024|1280|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|5760|6400|7680|10240|12800|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Compute generation|Gen4|Gen4|Gen4|Gen4|Gen4|Gen4| |vCores|1|2|3|4|5|6| |Memory (GB)|7|14|21|28|35|42|
-|[RBPEX](service-tier-hyperscale.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|
+|[RBPEX](hyperscale-architecture.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|
|Columnstore support|Yes|Yes|Yes|Yes|Yes|Yes| |In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A| |Max data size (TB)|100 |100 |100 |100 |100 |100| |Max log size (TB)|Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |
-|TempDB max data size (GB)|32|64|96|128|160|192|
+|Tempdb max data size (GB)|32|64|96|128|160|192|
|Storage type| [Note 1](#notes) |[Note 1](#notes)|[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) | |Max local SSD IOPS <sup>1</sup>|4000 |8000 |12000 |16000 |20000 |24000 | |Max log rate (MBps)|100 |100 |100 |100 |100 |100 |
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|
-<sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
+<sup>1</sup> Besides local SSD IO, workloads will use remote [page server](hyperscale-architecture.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
### Gen4 compute generation (part 2)
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Compute generation|Gen4|Gen4|Gen4|Gen4|Gen4|Gen4| |vCores|7|8|9|10|16|24| |Memory (GB)|49|56|63|70|112|159.5|
-|[RBPEX](service-tier-hyperscale.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|
+|[RBPEX](hyperscale-architecture.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|
|Columnstore support|Yes|Yes|Yes|Yes|Yes|Yes| |In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A| |Max data size (TB)|100 |100 |100 |100 |100 |100 | |Max log size (TB)|Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |
-|TempDB max data size (GB)|224|256|288|320|512|768|
+|Tempdb max data size (GB)|224|256|288|320|512|768|
|Storage type| [Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) | |Max local SSD IOPS <sup>1</sup>|28000 |32000 |36000 |40000 |64000 |76800 | |Max log rate (MBps)|100 |100 |100 |100 |100 |100 |
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|
-<sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
+<sup>1</sup> Besides local SSD IO, workloads will use remote [page server](hyperscale-architecture.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
## Hyperscale - provisioned compute - Gen5
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Compute generation|Gen5|Gen5|Gen5|Gen5|Gen5|Gen5|Gen5| |vCores|2|4|6|8|10|12|14| |Memory (GB)|10.4|20.8|31.1|41.5|51.9|62.3|72.7|
-|[RBPEX](service-tier-hyperscale.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|
+|[RBPEX](hyperscale-architecture.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|
|Columnstore support|Yes|Yes|Yes|Yes|Yes|Yes|Yes| |In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A|N/A| |Max data size (TB)|100 |100 |100 |100 |100 |100 |100| |Max log size (TB)|Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |
-|TempDB max data size (GB)|64|128|192|256|320|384|448|
+|Tempdb max data size (GB)|64|128|192|256|320|384|448|
|Storage type| [Note 1](#notes) |[Note 1](#notes)|[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) | |Max local SSD IOPS <sup>1</sup>|8000 |16000 |24000 |32000 |40000 |48000 |56000 | |Max log rate (MBps)|100 |100 |100 |100 |100 |100 |100 |
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|7 days|
-<sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
+<sup>1</sup> Besides local SSD IO, workloads will use remote [page server](hyperscale-architecture.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
### Gen5 compute generation (part 2)
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Compute generation|Gen5|Gen5|Gen5|Gen5|Gen5|Gen5|Gen5| |vCores|16|18|20|24|32|40|80| |Memory (GB)|83|93.4|103.8|124.6|166.1|207.6|415.2|
-|[RBPEX](service-tier-hyperscale.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|
+|[RBPEX](hyperscale-architecture.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|3X Memory|
|Columnstore support|Yes|Yes|Yes|Yes|Yes|Yes|Yes| |In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A|N/A| |Max data size (TB)|100 |100 |100 |100 |100 |100 |100 | |Max log size (TB)|Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |
-|TempDB max data size (GB)|512|576|640|768|1024|1280|2560|
+|Tempdb max data size (GB)|512|576|640|768|1024|1280|2560|
|Storage type| [Note 1](#notes) |[Note 1](#notes)|[Note 1](#notes)|[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) | |Max local SSD IOPS <sup>1</sup>|64000 |72000 |80000 |96000 |128000 |160000 |204800 | |Max log rate (MBps)|100 |100 |100 |100 |100 |100 |100 |
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|7 days|
-<sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
+<sup>1</sup> Besides local SSD IO, workloads will use remote [page server](hyperscale-architecture.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
#### Notes
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Compute generation|DC-series|DC-series|DC-series|DC-series| |vCores|2|4|6|8| |Memory (GB)|9|18|27|36|
-|[RBPEX](service-tier-hyperscale.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|
+|[RBPEX](hyperscale-architecture.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|
|Columnstore support|Yes|Yes|Yes|Yes| |In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A| |Max data size (TB)|100 |100 |100 |100 | |Max log size (TB)|Unlimited |Unlimited |Unlimited |Unlimited |
-|TempDB max data size (GB)|64|128|192|256|
+|Tempdb max data size (GB)|64|128|192|256|
|Storage type| [Note 1](#notes) |[Note 1](#notes)|[Note 1](#notes) |[Note 1](#notes) | |Max local SSD IOPS <sup>1</sup>|14000|28000|42000|44800| |Max log rate (MBps)|100 |100 |100 |100 |
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Backup storage retention|7 days|7 days|7 days|7 days|
-<sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
+<sup>1</sup> Besides local SSD IO, workloads will use remote [page server](hyperscale-architecture.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
### Notes
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A| |Max data size (GB)|1024|1024|1536|1536|1536|3072| |Max log size (GB) <sup>1</sup>|307|307|461|461|461|922|
-|TempDB max data size (GB)|32|64|96|128|160|192|
+|Tempdb max data size (GB)|32|64|96|128|160|192|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|320|640|960|1280|1600|1920|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A| |Max data size (GB)|3072|3072|3072|3072|4096|4096| |Max log size (GB) <sup>1</sup>|922|922|922|922|1229|1229|
-|TempDB max data size (GB)|224|256|288|320|512|768|
+|Tempdb max data size (GB)|224|256|288|320|512|768|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read) |Max data IOPS <sup>2</sup>|2240|2560|2880|3200|5120|7680|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A|N/A| |Max data size (GB)|1024|1024|1536|1536|1536|3072|3072| |Max log size (GB) <sup>1</sup>|307|307|461|461|461|922|922|
-|TempDB max data size (GB)|64|128|192|256|320|384|384|
+|Tempdb max data size (GB)|64|128|192|256|320|384|384|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|640|1280|1920|2560|3200|3840|4480|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A|N/A| |Max data size (GB)|3072|3072|3072|4096|4096|4096|4096| |Max log size (GB) <sup>1</sup>|922|922|922|1024|1024|1024|1024|
-|TempDB max data size (GB)|512|576|640|768|1024|1280|2560|
+|Tempdb max data size (GB)|512|576|640|768|1024|1280|2560|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|5120|5760|6400|7680|10240|12800|12800|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A| |Max data size (GB)|1024|1024|1024|1024|1536| |Max log size (GB) <sup>1</sup>|336|336|336|336|512|
-|TempDB max data size (GB)|37|46|56|65|74|
+|Tempdb max data size (GB)|37|46|56|65|74|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|2560|3200|3840|4480|5120|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|N/A| |Max data size (GB)|1536|1536|1536|3072|3072|4096| |Max log size (GB) <sup>1</sup>|512|512|512|1024|1024|1024|
-|TempDB max data size (GB)|83|93|111|148|167|333|
+|Tempdb max data size (GB)|83|93|111|148|167|333|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|5760|6400|7680|10240|11520|12800|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A| |Max data size (GB)|1024|1536|3072|3072| |Max log size (GB) <sup>1</sup>|307|461|922|922|
-|TempDB max data size (GB)|64|128|192|256|
+|Tempdb max data size (GB)|64|128|192|256|
|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD| |IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)| |Max data IOPS <sup>2</sup>|640|1280|1920|2560|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Storage type|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD| |Max data size (GB)|1024|1024|1024|1024|1024|1024| |Max log size (GB) <sup>1</sup>|307|307|307|307|307|307|
-|TempDB max data size (GB)|32|64|96|128|160|192|
+|Tempdb max data size (GB)|32|64|96|128|160|192|
|[Max local storage size](resource-limits-logical-server.md#storage-space-governance) (GB)|1356|1356|1356|1356|1356|1356| |IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS <sup>2</sup>|4,000|8,000|12,000|16,000|20,000|24,000|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Storage type|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD| |Max data size (GB)|1024|1024|1024|1024|1024|1024| |Max log size (GB) <sup>1</sup>|307|307|307|307|307|307|
-|TempDB max data size (GB)|224|256|288|320|512|768|
+|Tempdb max data size (GB)|224|256|288|320|512|768|
|[Max local storage size](resource-limits-logical-server.md#storage-space-governance) (GB)|1356|1356|1356|1356|1356|1356| |IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)| |Max data IOPS <sup>2</sup>|28,000|32,000|36,000|40,000|64,000|76,800|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|1.57|3.14|4.71|6.28|8.65|11.02|13.39| |Max data size (GB)|1024|1024|1536|1536|1536|3072|3072| |Max log size (GB) <sup>1</sup>|307|307|461|461|461|922|922|
-|TempDB max data size (GB)|64|128|192|256|320|384|448|
+|Tempdb max data size (GB)|64|128|192|256|320|384|448|
|[Max local storage size](resource-limits-logical-server.md#storage-space-governance) (GB)|4829|4829|4829|4829|4829|4829|4829| |Storage type|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD| |IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|15.77|18.14|20.51|25.25|37.94|52.23|131.64| |Max data size (GB)|3072|3072|3072|4096|4096|4096|4096| |Max log size (GB) <sup>1</sup>|922|922|922|1024|1024|1024|1024|
-|TempDB max data size (GB)|512|576|640|768|1024|1280|2560|
+|Tempdb max data size (GB)|512|576|640|768|1024|1280|2560|
|[Max local storage size](resource-limits-logical-server.md#storage-space-governance) (GB)|4829|4829|4829|4829|4829|4829|4829| |Storage type|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD| |IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|64|80|96|112|128|150| |Max data size (GB)|512|640|768|896|1024|1152| |Max log size (GB) <sup>1</sup>|171|213|256|299|341|384|
-|TempDB max data size (GB)|256|320|384|448|512|576|
+|Tempdb max data size (GB)|256|320|384|448|512|576|
|[Max local storage size](resource-limits-logical-server.md#storage-space-governance) (GB)|13836|13836|13836|13836|13836|13836| |Storage type|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD| |IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|172|216|304|704|1768| |Max data size (GB)|1280|1536|2048|4096|4096| |Max log size (GB) <sup>1</sup>|427|512|683|1024|1024|
-|TempDB max data size (GB)|640|768|1024|2048|4096|
+|Tempdb max data size (GB)|640|768|1024|2048|4096|
|[Max local storage size](resource-limits-logical-server.md#storage-space-governance) (GB)|13836|13836|13836|13836|13836| |Storage type|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD| |IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|1.7|3.7|5.9|8.2| |Max data size (GB)|768|768|768|768| |Max log size (GB) <sup>1</sup>|230|230|230|230|
-|TempDB max data size (GB)|64|128|192|256|
+|Tempdb max data size (GB)|64|128|192|256|
|[Max local storage size](resource-limits-logical-server.md#storage-space-governance) (GB)|1406|1406|1406|1406| |Storage type|Local SSD|Local SSD|Local SSD|Local SSD| |IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|
azure-sql Scale Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scale-resources.md
The service tier, compute tier, and resource limits for a database, elastic pool
> [!NOTE]
> Notable exceptions where you cannot change the service tier of a database are:
-> - Databases in the Hyperscale service tier cannot currently be changed to a different service tier.
> - Databases using features which are [only available](features-comparison.md#features-of-sql-database-and-sql-managed-instance) in the Business Critical / Premium service tiers, cannot be changed to use the General Purpose / Standard service tier.
+> - Databases originally created in the Hyperscale service tier cannot be migrated to other service tiers. If you migrate an existing database in Azure SQL Database to the Hyperscale service tier, you can reverse migrate to the General Purpose service tier within 45 days of the original migration to Hyperscale. If you wish to migrate the database to another service tier, such as Business Critical, first reverse migrate to the General Purpose service tier, then perform a further migration. Learn more in [How to reverse migrate from Hyperscale](manage-hyperscale-database.md#reverse-migrate-from-hyperscale).
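As a sketch, a reverse migration to the General Purpose tier can be initiated with the documented `ALTER DATABASE ... MODIFY` syntax from a connection to the logical server; the database name and service objective below are placeholders to replace with your own:

```sql
ALTER DATABASE [mySampleDatabase]
    MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_4');
GO
```

The statement returns immediately; monitor progress with `sys.dm_operation_status` as described earlier in this article.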
You can adjust the resources allocated to your database by changing service objective, or scaling, to meet workload demands. This also enables you to only pay for the resources that you need, when you need them. Please refer to the [note](#impact-of-scale-up-or-scale-down-operations) on the potential impact that a scale operation might have on an application.
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-hyperscale.md
Last updated 03/02/2022
# Hyperscale service tier [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-Azure SQL Database is based on SQL Server Database Engine architecture that is adjusted for the cloud environment in order to ensure 99.99% availability even in the cases of infrastructure failures. There are three architectural models that are used in Azure SQL Database:
+Azure SQL Database is based on SQL Server Database Engine architecture that is adjusted for the cloud environment to ensure [high availability](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) even in cases of infrastructure failures. There are three architectural models that are used in Azure SQL Database:
- General Purpose/Standard - Hyperscale
The Hyperscale service tier in Azure SQL Database is the newest service tier in
The Hyperscale service tier in Azure SQL Database provides the following additional capabilities: -- Support for up to 100 TB of database size-- Nearly instantaneous database backups (based on file snapshots stored in Azure Blob storage) regardless of size with no IO impact on compute resources -- Fast database restores (based on file snapshots) in minutes rather than hours or days (not a size of data operation)-- Higher overall performance due to higher transaction log throughput and faster transaction commit times regardless of data volumes-- Rapid scale out - you can provision one or more [read-only replicas](service-tier-hyperscale-replicas.md) for offloading your read workload and for use as hot-standbys
+- Support for up to 100 TB of database size.
+- Nearly instantaneous database backups (based on file snapshots stored in Azure Blob storage) regardless of size with no IO impact on compute resources.
+- Fast database restores (based on file snapshots) in minutes rather than hours or days (not a size of data operation).
+- Higher overall performance due to higher transaction log throughput and faster transaction commit times regardless of data volumes.
+- Rapid scale out - you can provision one or more [read-only replicas](service-tier-hyperscale-replicas.md) for offloading your read workload and for use as hot-standbys.
- Rapid scale up - you can, in constant time, scale up your compute resources to accommodate heavy workloads when needed, and then scale the compute resources back down when not needed.

The Hyperscale service tier removes many of the practical limits traditionally seen in cloud databases. Where most other databases are limited by the resources available in a single node, databases in the Hyperscale service tier have no such limits. With its flexible storage architecture, storage grows as needed. In fact, Hyperscale databases aren't created with a defined max size. A Hyperscale database grows as needed - and you're billed only for the capacity you use. For read-intensive workloads, the Hyperscale service tier provides rapid scale-out by provisioning additional replicas as needed for offloading read workloads.
vCore resource limits are listed in the following articles, please be sure to up
/managed-instance/resource-limits.md >
-The vCore-based service tiers are differentiated based on database availability and storage type, performance, and maximum storage size, as described in the following table:
+The vCore-based service tiers are differentiated based on database availability and storage type, performance, and maximum storage size, as described in the following table:
-|| **General Purpose** | **Hyperscale** | **Business Critical** |
+| | **General Purpose** | **Hyperscale** | **Business Critical** |
|::|::|::|::|
-| **Best for** | Offers budget oriented balanced compute and storage options.|Most business workloads. Autoscaling storage size up to 100 TB,fast vertical and horizontal compute scaling, fast database restore.| OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas.|
-| **Compute size** | 1 to 80 vCores | 1 to 80 vCores<sup>1</sup> | 1 to 80 vCores |
-| **Storage type** | Premium remote storage (per instance) | De-coupled storage with local SSD cache (per instance) | Super-fast local SSD storage (per instance)|
-| **Storage size**<sup>1</sup> | 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB |
-| **IOPS** | 500 IOPS per vCore with 7000 maximum IOPS | Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS will depend on the workload. | 5000 IOPS with 200,000 maximum IOPS |
-| **Availability** | 1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache | Multiple replicas, up to 4 Read Scale-out, zone-redundant HA (preview), partial local cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage |
-| **Backups** | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 7 day retention. | A choice of geo-redundant,zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) |
+|**Best for** |Offers budget oriented balanced compute and storage options. |Most business workloads. Autoscaling storage size up to 100 TB, fast vertical and horizontal compute scaling, fast database restore. |OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas. |
+|**Compute size** |1 to 80 vCores |1 to 80 vCores<sup>1</sup> |1 to 80 vCores |
+|**Storage type** |Premium remote storage (per instance) |De-coupled storage with local SSD cache (per instance) |Super-fast local SSD storage (per instance) |
+|**Storage size**<sup>1</sup> |5 GB – 4 TB |Up to 100 TB |5 GB – 4 TB |
+|**IOPS** |500 IOPS per vCore with 7,000 maximum IOPS. |Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS will depend on the workload. |5,000 IOPS with 200,000 maximum IOPS. |
+|**Availability** |1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache. |Multiple replicas, up to 4 Read Scale-out, zone-redundant HA (preview), partial local cache. |3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage. |
+|**Backups** |A choice of geo-redundant, zone-redundant, or locally redundant backup storage, 1-35 day retention (default 7 days). |A choice of geo-redundant, zone-redundant, or locally redundant backup storage, 7 day retention. |A choice of geo-redundant, zone-redundant, or locally redundant backup storage, 1-35 day retention (default 7 days). |
<sup>1</sup> Elastic pools are not supported in the Hyperscale service tier.

## Distributed functions architecture
-Unlike traditional database engines that have centralized all of the data management functions in one location/process (even so called distributed databases in production today have multiple copies of a monolithic data engine), a Hyperscale database separates the query processing engine, where the semantics of various data engines diverge, from the components that provide long-term storage and durability for the data. In this way, the storage capacity can be smoothly scaled out as far as needed (initial target is 100 TB). High-availability and named replicas share the same storage components so no data copy is required to spin up a new replica.
+Hyperscale separates the query processing engine from the components that provide long-term storage and durability for the data. This architecture provides the ability to smoothly scale storage capacity as far as needed (initial target is 100 TB), as well as the ability to scale compute resources rapidly.
The following diagram illustrates the different types of nodes in a Hyperscale database:

![architecture](./media/service-tier-Hyperscale/Hyperscale-architecture.png)
-A Hyperscale database contains the following different types of components:
-
-### Compute
-
-The compute node is where the relational engine lives. This is where language, query, and transaction processing occur. All user interactions with a Hyperscale database happen through these compute nodes. Compute nodes have SSD-based caches (labeled RBPEX - Resilient Buffer Pool Extension in the preceding diagram) to minimize the number of network round trips required to fetch a page of data. There is one primary compute node where all the read-write workloads and transactions are processed. There are one or more secondary compute nodes that act as hot standby nodes for failover purposes, as well as act as read-only compute nodes for offloading read workloads (if this functionality is desired).
-
-The database engine running on Hyperscale compute nodes is the same as in other Azure SQL Database service tiers. When users interact with the database engine on Hyperscale compute nodes, the supported surface area and engine behavior are the same as in other service tiers, with the exception of [known limitations](#known-limitations).
-
-### Page server
-
-Page servers are systems representing a scaled-out storage engine. Each page server is responsible for a subset of the pages in the database. Nominally, each page server controls either up to 128 GB or up to 1 TB of data. No data is shared on more than one page server (outside of page server replicas that are kept for redundancy and availability). The job of a page server is to serve database pages out to the compute nodes on demand, and to keep the pages updated as transactions update data. Page servers are kept up to date by playing transaction log records from the log service. Page servers also maintain covering SSD-based caches to enhance performance. Long-term storage of data pages is kept in Azure Storage for additional reliability.
-
-### Log service
-
-The log service accepts transaction log records from the primary compute replica, persists them in a durable cache, and forwards the log records to the rest of compute replicas (so they can update their caches) as well as the relevant page server(s), so that the data can be updated there. In this way, all data changes from the primary compute replica are propagated through the log service to all the secondary compute replicas and page servers. Finally, transaction log records are pushed out to long-term storage in Azure Storage, which is a virtually infinite storage repository. This mechanism removes the need for frequent log truncation. The log service also has local memory and SSD caches to speed up access to log records. The log on hyperscale is practically infinite, with the restriction that a single transaction cannot generate more than 1 TB of log. Additionally, if using [Change Data Capture](/sql/relational-databases/track-changes/about-change-data-capture-sql-server), at most 1 TB of log can be generated since the start of the oldest active transaction. It is recommended to avoid unnecessarily large transactions to stay below this limit.
-
-### Azure storage
-
-Azure Storage contains all data files in a database. Page servers keep data files in Azure Storage up to date. This storage is used for backup purposes, as well as for replication between Azure regions. Backups are implemented using storage snapshots of data files. Restore operations using snapshots are fast regardless of data size. A database can be restored to any point in time within its backup retention period.
+Learn more about the [Hyperscale distributed functions architecture](hyperscale-architecture.md).
## Scale and performance advantages

With the ability to rapidly spin up/down additional read-only compute nodes, the Hyperscale architecture allows significant read scale capabilities and can also free up the primary compute node for serving more write requests. Also, the compute nodes can be scaled up/down rapidly due to Hyperscale's shared-storage architecture.
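Because compute and storage are decoupled, a compute scale operation is a change of service objective rather than a data move. As a sketch (the database name and service objective are illustrative, mirroring the `ALTER DATABASE` pattern shown elsewhere on this page), scaling a Hyperscale database to a larger compute size looks like:

```sql
-- Scale an existing Hyperscale database to 8 vCores on Gen5 hardware.
-- Database name and service objective are illustrative.
ALTER DATABASE [HyperscaleDB1] MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_8');
GO
```

The same change can be made from the Azure portal, PowerShell, or the Azure CLI.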
-## Create a Hyperscale database
+## Create and manage Hyperscale databases
-A Hyperscale database can be created using the [Azure portal](https://portal.azure.com), [T-SQL](/sql/t-sql/statements/create-database-transact-sql), [PowerShell](/powershell/module/azurerm.sql/new-azurermsqldatabase), or [CLI](/cli/azure/sql/db#az-sql-db-create). Hyperscale databases are available only using the [vCore-based purchasing model](service-tiers-vcore.md).
+You can create and manage Hyperscale databases using the [Azure portal](https://portal.azure.com), [Transact-SQL](/sql/t-sql/statements/create-database-transact-sql), [PowerShell](/powershell/module/azurerm.sql/new-azurermsqldatabase), and the [Azure CLI](/cli/azure/sql/db#az_sql_db_create).
-The following T-SQL command creates a Hyperscale database. You must specify both the edition and service objective in the `CREATE DATABASE` statement. Refer to the [resource limits](./resource-limits-vcore-single-databases.md#hyperscaleprovisioned-computegen4) for a list of valid service objectives.
-```sql
--- Create a Hyperscale Database
-CREATE DATABASE [HyperscaleDB1] (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');
-GO
-```
-
-This will create a Hyperscale database on Gen5 hardware with four cores.
-
-## Upgrade existing database to Hyperscale
-
-You can move your existing databases in Azure SQL Database to Hyperscale using the [Azure portal](https://portal.azure.com), [T-SQL](/sql/t-sql/statements/alter-database-transact-sql), [PowerShell](/powershell/module/azurerm.sql/set-azurermsqldatabase), or [CLI](/cli/azure/sql/db#az-sql-db-update). At this time, this is a one-way migration. You can't move databases from Hyperscale to another service tier, other than by exporting and importing data. For proofs of concept (POCs), we recommend making a copy of your production databases, and migrating the copy to Hyperscale. Migrating an existing database in Azure SQL Database to the Hyperscale tier is a size of data operation.
-
-The following T-SQL command moves a database into the Hyperscale service tier. You must specify both the edition and service objective in the `ALTER DATABASE` statement.
-
-```sql
--- Alter a database to make it a Hyperscale Database
-ALTER DATABASE [DB2] MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');
-GO
-```
-
-> [!NOTE]
-> To move a database that is a part of a [geo-replication](active-geo-replication-overview.md) relationship, either as the primary or as a secondary, to Hyperscale, you have to stop replication. Databases in a [failover group](auto-failover-group-overview.md) must be removed from the group first.
->
-> Once a database has been moved to Hyperscale, you can create a new Hyperscale geo-replica for that database. Geo-replication for Hyperscale is in preview with certain [limitations](active-geo-replication-overview.md).
+| **Operation** | **Details** | **Learn more** |
+|:|:|:|
+|**Create a Hyperscale database**| Hyperscale databases are available only using the [vCore-based purchasing model](service-tiers-vcore.md). | Find examples to create a Hyperscale database in [Quickstart: Create a Hyperscale database in Azure SQL Database](hyperscale-database-create-quickstart.md). |
+| **Upgrade an existing database to Hyperscale** | Migrating an existing database in Azure SQL Database to the Hyperscale tier is a size of data operation. | Learn [how to migrate an existing database to Hyperscale](manage-hyperscale-database.md#migrate-an-existing-database-to-hyperscale).|
+| **Reverse migrate a Hyperscale database to the General Purpose service tier (preview)** | If you previously migrated an existing Azure SQL Database to the Hyperscale service tier, you can reverse migrate the database to the General Purpose service tier within 45 days of the original migration to Hyperscale.<BR/><BR/>If you wish to migrate the database to another service tier, such as Business Critical, first reverse migrate to the General Purpose service tier, then change the service tier. | Learn [how to reverse migrate from Hyperscale](manage-hyperscale-database.md#reverse-migrate-from-hyperscale), including the [limitations for reverse migration](manage-hyperscale-database.md#limitations-for-reverse-migration).|
+| | | |
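For reference, the T-SQL forms of the create and migrate operations summarized above (database names and service objectives are illustrative; both the edition and service objective must be specified):

```sql
-- Create a new Hyperscale database on Gen5 hardware with 4 vCores.
CREATE DATABASE [HyperscaleDB1] (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');
GO

-- Migrate an existing database to Hyperscale (a size-of-data operation).
ALTER DATABASE [DB2] MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');
GO
```

See the linked quickstart and management articles for the Azure portal, PowerShell, and Azure CLI equivalents.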
## Database high availability in Hyperscale
Learn more in [restoring a Hyperscale database to a different region](automated-
## <a name=regions></a>Available regions
-The Azure SQL Database Hyperscale tier is enabled in the vast majority of Azure regions. If you want to create a Hyperscale database in a region where Hyperscale is not enabled by default, you can send an onboarding request via Azure portal. For instructions, see [Request quota increases for Azure SQL Database](quota-increase-request.md) for instructions. When submitting your request, use the following guidelines:
+The Azure SQL Database Hyperscale tier is enabled in the vast majority of Azure regions. If you want to create a Hyperscale database in a region where Hyperscale is not enabled by default, you can send an onboarding request via Azure portal. For instructions, see [Request quota increases for Azure SQL Database](quota-increase-request.md). When submitting your request, use the following guidelines:
- Use the [Region access](quota-increase-request.md#region) SQL Database quota type.
- In the description, add the compute SKU/total cores including high-availability and named replicas, and indicate that you are requesting Hyperscale capacity.
- Also specify a projection of the total size of all databases over time in TB.

## Known limitations
-These are the current limitations to the Hyperscale service tier as of GA. We're actively working to remove as many of these limitations as possible.
+These are the current limitations of the Hyperscale service tier. We're actively working to remove as many of these limitations as possible.
| Issue | Description |
| :- | :- |
| Backup retention is currently seven days; long-term retention policies aren't yet supported. | Hyperscale has a unique method for managing backups, so a non-Hyperscale database can't be restored as a Hyperscale database, and a Hyperscale database can't be restored as a non-Hyperscale database.<BR/><BR/>For databases migrated to Hyperscale from other Azure SQL Database service tiers, pre-migration backups are kept for the duration of [backup retention](automated-backups-overview.md#backup-retention) period of the source database, including long-term retention policies. Restoring a pre-migration backup within the backup retention period of the database is supported [programmatically](recovery-using-backups.md#programmatic-recovery-using-automated-backups). You can restore these backups to any non-Hyperscale service tier.|
+| Service tier change from Hyperscale to another tier is not supported directly | Reverse migration to the General Purpose service tier allows customers who have recently migrated an existing database in Azure SQL Database to the Hyperscale service tier to move back in an emergency, should Hyperscale not meet their needs. While reverse migration is initiated by a service tier change, it's essentially a size-of-data move between different architectures. Databases created in the Hyperscale service tier are not eligible for reverse migration. Learn the [limitations for reverse migration](manage-hyperscale-database.md#limitations-for-reverse-migration). <BR/><BR/> For databases that don't qualify for reverse migration, the only way to migrate from Hyperscale to a non-Hyperscale service tier is to export/import using a bacpac file or other data movement technologies (Bulk Copy, Azure Data Factory, Azure Databricks, SSIS, etc.) Bacpac export/import from Azure portal, from PowerShell using [New-AzSqlDatabaseExport](/powershell/module/az.sql/new-azsqldatabaseexport) or [New-AzSqlDatabaseImport](/powershell/module/az.sql/new-azsqldatabaseimport), from Azure CLI using [az sql db export](/cli/azure/sql/db#az_sql_db_export) and [az sql db import](/cli/azure/sql/db#az_sql_db_import), and from [REST API](/rest/api/sql/) is not supported. Bacpac import/export for smaller Hyperscale databases (up to 200 GB) is supported using SSMS and [SqlPackage](/sql/tools/sqlpackage) version 18.4 and later. For larger databases, bacpac export/import may take a long time, and may fail for various reasons. |
| When changing Azure SQL Database service tier to Hyperscale, the operation fails if the database has any data files larger than 1 TB | In some cases, it may be possible to work around this issue by [shrinking](file-space-manage.md#shrinking-data-files) the large files to be less than 1 TB before attempting to change the service tier to Hyperscale. Use the following query to determine the current size of database files. `SELECT file_id, name AS file_name, size * 8. / 1024 / 1024 AS file_size_GB FROM sys.database_files WHERE type_desc = 'ROWS';` |
| SQL Managed Instance | Azure SQL Managed Instance isn't currently supported with Hyperscale databases. |
| Elastic Pools | Elastic Pools aren't currently supported with Hyperscale. |
-| Migration to Hyperscale is currently a one-way operation | Once a database is migrated to Hyperscale, it can't be migrated directly to a non-Hyperscale service tier. At present, the only way to migrate a database from Hyperscale to non-Hyperscale is to export/import using a bacpac file or other data movement technologies (Bulk Copy, Azure Data Factory, Azure Databricks, SSIS, etc.) Bacpac export/import from Azure portal, from PowerShell using [New-AzSqlDatabaseExport](/powershell/module/az.sql/new-azsqldatabaseexport) or [New-AzSqlDatabaseImport](/powershell/module/az.sql/new-azsqldatabaseimport), from Azure CLI using [az sql db export](/cli/azure/sql/db#az-sql-db-export) and [az sql db import](/cli/azure/sql/db#az-sql-db-import), and from [REST API](/rest/api/sql/) is not supported. Bacpac import/export for smaller Hyperscale databases (up to 200 GB) is supported using SSMS and [SqlPackage](/sql/tools/sqlpackage) version 18.4 and later. For larger databases, bacpac export/import may take a long time, and may fail for various reasons.|
| Migration of databases with In-Memory OLTP objects | Hyperscale supports a subset of In-Memory OLTP objects, including memory-optimized table types, table variables, and natively compiled modules. However, when any kind of In-Memory OLTP objects are present in the database being migrated, migration from Premium and Business Critical service tiers to Hyperscale is not supported. To migrate such a database to Hyperscale, all In-Memory OLTP objects and their dependencies must be dropped. After the database is migrated, these objects can be recreated. Durable and non-durable memory-optimized tables are not currently supported in Hyperscale, and must be changed to disk tables.|
| Geo-replication | [Geo-replication](active-geo-replication-overview.md) and [auto-failover groups](auto-failover-group-overview.md) on Hyperscale is now in public preview. |
| Intelligent Database Features | With the exception of the "Force Plan" option, all other Automatic Tuning options aren't yet supported on Hyperscale: options may appear to be enabled, but there won't be any recommendations or actions made. |
| Query Performance Insights | Query Performance Insights is currently not supported for Hyperscale databases. |
| Shrink Database | DBCC SHRINKDATABASE or DBCC SHRINKFILE isn't currently supported for Hyperscale databases. |
| Database integrity check | DBCC CHECKDB isn't currently supported for Hyperscale databases. DBCC CHECKTABLE ('TableName') WITH TABLOCK and DBCC CHECKFILEGROUP WITH TABLOCK may be used as a workaround. See [Data Integrity in Azure SQL Database](https://azure.microsoft.com/blog/data-integrity-in-azure-sql-database/) for details on data integrity management in Azure SQL Database. |
-| Elastic Jobs | Using a Hyperscale database as the Job database is not supported. However, elastic jobs can target Hyperscale databases in the same way as any other Azure SQL database. |
+| Elastic Jobs | Using a Hyperscale database as the Job database is not supported. However, elastic jobs can target Hyperscale databases in the same way as any other database in Azure SQL Database. |
|Data Sync| Using a Hyperscale database as a Hub or Sync Metadata database is not supported. However, a Hyperscale database can be a member database in a Data Sync topology. |
|Import Export | Import-Export service is currently not supported for Hyperscale databases. |

## Next steps
+Learn more about Hyperscale in Azure SQL Database in the following articles:
+
- For an FAQ on Hyperscale, see [Frequently asked questions about Hyperscale](service-tier-hyperscale-frequently-asked-questions-faq.yml).
-- For information about service tiers, see [Service tiers](purchasing-models.md)
+- For information about service tiers, see [Service tiers](purchasing-models.md).
- See [Overview of resource limits on a server](resource-limits-logical-server.md) for information about limits at the server and subscription levels.
- For purchasing model limits for a single database, see [Azure SQL Database vCore-based purchasing model limits for a single database](resource-limits-vcore-single-databases.md).
- For a features and comparison list, see [SQL common features](features-comparison.md).
+- Learn about the [Hyperscale distributed functions architecture](hyperscale-architecture.md).
+- Learn [How to manage a Hyperscale database](manage-hyperscale-database.md).
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/single-database-create-quickstart.md
Use the [az sql up](/cli/azure/sql#az-sql-up) command to create and configure a
# [PowerShell](#tab/azure-powershell)
-You can create a resource group, server, and single database using Windows PowerShell.
+You can create a resource group, server, and single database using Azure PowerShell.
### Launch Azure Cloud Shell
azure-sql Instance Pools Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/instance-pools-configure.md
az sql instance-pool create
--capacity 8 --tier GeneralPurpose --family Gen5
- --resrouce-group myResourceGroup
+ --resource-group myResourceGroup
--subnet miPoolSubnet --vnet-name miPoolVirtualNetwork ```
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/resource-limits.md
Support for the premium-series hardware (public preview) is currently available
| Region | **Premium-series** | **Memory optimized premium-series** |
|: |: |: |
-| Australia Central | Yes | |
| Australia East | Yes | Yes |
| Canada Central | Yes | |
+| Canada East | Yes | |
+| Central US | Yes | Yes |
+| Germany West Central | Yes | Yes |
| Japan East | Yes | |
| Korea Central | Yes | |
-| North Central US | Yes | |
+| North Central US | Yes | Yes |
+| North Europe | Yes | |
| South Central US | Yes | Yes |
| Southeast Asia | Yes | |
+| UK South | Yes | Yes |
| West Europe | | Yes |
-| West US | Yes | Yes |
+| West US | Yes | |
| West US 2 | Yes | Yes |
| West US 3 | Yes | Yes |
azure-sql Sql Server To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md
ms.devlang:
- Previously updated : 04/06/2022+ Last updated : 04/11/2022 # Migration guide: SQL Server to Azure SQL Managed Instance+ [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)] This guide helps you migrate your SQL Server instance to Azure SQL Managed Instance.
For more migration information, see the [migration overview](sql-server-to-manag
To migrate your SQL Server to Azure SQL Managed Instance, make sure you have: - Chosen a [migration method](sql-server-to-managed-instance-overview.md#compare-migration-options) and the corresponding tools for your method.
+- Installed the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
- Installed the [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) on a machine that can connect to your source SQL Server.
- Created a target [Azure SQL Managed Instance](../../managed-instance/instance-create-quickstart.md).
- Configured connectivity and proper permissions to access both source and target.
- Reviewed the SQL Server database engine features [available in Azure SQL Managed Instance](../../database/features-comparison.md).

## Pre-migration

After you've verified that your source environment is supported, start with the pre-migration stage. Discover all of the existing data sources, assess migration feasibility, and identify any blocking issues that might prevent your migration.
Proceed to the following steps to assess and migrate databases to Azure SQL Mana
:::image type="content" source="media/sql-server-to-managed-instance-overview/migration-process-sql-managed-instance-steps.png" alt-text="Steps for migration to Azure SQL Managed Instance":::

- [Assess SQL Managed Instance compatibility](#assess) where you should ensure that there are no blocking issues that can prevent your migrations.
- This step also includes creation of a [performance baseline](sql-server-to-managed-instance-performance-baseline.md#create-a-baseline) to determine resource usage on your source SQL Server instance. This step is needed if you want to deploy a properly sized managed instance and verify that performance after migration is not affected.
+ This step also includes creation of a [performance baseline](sql-server-to-managed-instance-performance-baseline.md#create-a-baseline) to determine resource usage on your source SQL Server instance. This step is needed if you want to deploy a properly sized managed instance and verify that performance after migration isn't affected.
- [Choose app connectivity options](../../managed-instance/connect-application-instance.md).-- [Deploy to an optimally sized managed instance](#deploy-to-an-optimally-sized-managed-instance) where you will choose technical characteristics (number of vCores, amount of memory) and performance tier (Business Critical, General Purpose) of your managed instance.
+- [Deploy to an optimally sized managed instance](#deploy-to-an-optimally-sized-managed-instance) where you'll choose technical characteristics (number of vCores, amount of memory) and performance tier (Business Critical, General Purpose) of your managed instance.
- [Select migration method and migrate](sql-server-to-managed-instance-overview.md#compare-migration-options) where you migrate your databases using offline migration or online migration options.
- [Monitor and remediate applications](#monitor-and-remediate-applications) to ensure that you have expected performance.

### Assess

[!INCLUDE [assess-estate-with-azure-migrate](../../../../includes/azure-migrate-to-assess-sql-data-estate.md)]
-Determine whether SQL Managed Instance is compatible with the database requirements of
-your application. SQL Managed Instance is designed to provide easy lift and shift migration for
-the majority of existing applications that use SQL Server. However, you may sometimes require
-features or capabilities that are not yet supported and the cost of implementing a workaround is
-too high.
+Determine whether SQL Managed Instance is compatible with the database requirements of your application. SQL Managed Instance is designed to provide easy lift and shift migration for most existing applications that use SQL Server. However, you may sometimes require features or capabilities that aren't yet supported and the cost of implementing a workaround is too high.
+
+The [Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) provides a seamless wizard-based experience to assess, get Azure recommendations for, and migrate your on-premises SQL Server databases to Azure SQL Managed Instance. Besides highlighting any migration blockers or warnings, the extension can also collect your databases' performance data [to recommend a right-sized Azure SQL Managed Instance](../../../dms/ads-sku-recommend.md) that meets the performance needs of your workload at the lowest cost.
-You can use the Data Migration Assistant (version 4.1 and later) to assess databases to get:
+You can also use the Data Migration Assistant (version 4.1 and later) to assess databases to get:
- [Azure target recommendations](/sql/dma/dma-assess-sql-data-estate-to-sqldb)
- [Azure SKU recommendations](/sql/dma/dma-sku-recommend-sql-db)
To assess your environment using the Database Migration Assessment, follow these
1. Specify a project name, select SQL Server as the source server type, and then select Azure SQL Managed Instance as the target server type.
1. Select the type(s) of assessment reports that you want to generate. For example, database compatibility and feature parity. Based on the type of assessment, the permissions required on the source SQL Server can be different. DMA will highlight the permissions required for the chosen advisor before running the assessment.
   - The **feature parity** category provides a comprehensive set of recommendations, alternatives available in Azure, and mitigating steps to help you plan your migration project. (sysadmin permissions required)
- - The **compatibility issues** category identifies partially supported or unsupported feature compatibility issues that might block migration as well as recommendations to address them (`CONNECT SQL`, `VIEW SERVER STATE`, and `VIEW ANY DEFINITION` permissions required).
+ - The **compatibility issues** category identifies partially supported or unsupported feature compatibility issues that might block migration, and recommendations to address them (`CONNECT SQL`, `VIEW SERVER STATE`, and `VIEW ANY DEFINITION` permissions required).
1. Specify the source connection details for your SQL Server and connect to the source database.
1. Select **Start assessment**.
1. When the process is complete, select and review the assessment reports for migration blocking and feature parity issues. The assessment report can also be exported to a file that can be shared with other teams or personnel in your organization.
To assess your environment using the Database Migration Assessment, follow these
To learn more, see [Perform a SQL Server migration assessment with Data Migration Assistant](/sql/dma/dma-assesssqlonprem).
-If SQL Managed Instance is not a suitable target for your workload, SQL Server on Azure VMs might be a viable alternative target for your business.
+If SQL Managed Instance isn't a suitable target for your workload, SQL Server on Azure VMs might be a viable alternative target for your business.
-#### Scaled Assessments and Analysis
+#### Scaled assessments and analysis
-Data Migration Assistant supports performing scaled assessments and consolidation of the assessment reports for analysis. If you have multiple servers and databases that need to be assessed and analyzed at scale to provide a wider view of the data estate, click on the following links to learn more.
+If you have multiple servers or databases that require an Azure readiness assessment, you can automate the process with scripts by using one of the following options. To learn more about scripting, see [Migrate databases at scale using automation](../../../dms/migration-dms-powershell-cli.md).
+
+- [Az.DataMigration PowerShell module](/powershell/module/az.datamigration)
+- [az datamigration CLI extension](/cli/azure/datamigration)
+- [Data Migration Assistant command-line interface](/sql/dma/dma-commandline)
+
+Data Migration Assistant also supports consolidation of the assessment reports for analysis. If you have multiple servers and databases that need to be assessed and analyzed at scale to provide a wider view of the data estate, see the following links to learn more.
- [Performing scaled assessments using PowerShell](/sql/dma/dma-consolidatereports)
- [Analyzing assessment reports using Power BI](/sql/dma/dma-consolidatereports#dma-reports)

> [!IMPORTANT]
+>
> Running assessments at scale for multiple databases can also be automated using [DMA's Command Line Utility](/sql/dma/dma-commandline) which also allows the results to be uploaded to [Azure Migrate](/sql/dma/dma-assess-sql-data-estate-to-sqldb#view-target-readiness-assessment-results) for further analysis and target readiness.

### Deploy to an optimally sized managed instance
-Based on the information in the discover and assess phase, create an appropriately-sized target SQL Managed Instance. You can do so by using the [Azure portal](../../managed-instance/instance-create-quickstart.md), [PowerShell](../../managed-instance/scripts/create-configure-managed-instance-powershell.md), or an [Azure Resource Manager (ARM) Template](../../managed-instance/create-template-quickstart.md).
+You can use the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to get a right-sized Azure SQL Managed Instance recommendation. The extension collects performance data from your source SQL Server instance to provide a right-sized Azure recommendation that meets your workload's performance needs with minimal cost. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md).
-SQL Managed Instance is tailored for on-premises workloads that are planning to move to the cloud. It introduces a [purchasing model](../../database/service-tiers-vcore.md) that provides greater flexibility in selecting the right level of resources for your workloads. In the on-premises world, you are probably accustomed to sizing these workloads by using physical cores and IO bandwidth. The purchasing model for managed instance is based upon virtual cores, or "vCores," with additional storage and IO available separately. The vCore model is a simpler way to understand your compute requirements in the cloud versus what you use on-premises today. This purchasing model enables you to right-size your destination environment in the cloud. Some general guidelines that might help you to choose the right service tier and characteristics are described here:
+Based on the information in the discover and assess phase, create an appropriately sized target SQL Managed Instance. You can do so by using the [Azure portal](../../managed-instance/instance-create-quickstart.md), [PowerShell](../../managed-instance/scripts/create-configure-managed-instance-powershell.md), or an [Azure Resource Manager (ARM) Template](../../managed-instance/create-template-quickstart.md).
-- Based on the baseline CPU usage, you can provision a managed instance that matches the number of cores that you are using on SQL Server, having in mind that CPU characteristics might need to be scaled to match [VM characteristics where the managed instance is installed](../../managed-instance/resource-limits.md#hardware-configuration-characteristics).
-- Based on the baseline memory usage, choose [the service tier that has matching memory](../../managed-instance/resource-limits.md#hardware-configuration-characteristics). The amount of memory cannot be directly chosen, so you would need to select the managed instance with the amount of vCores that has matching memory (for example, 5.1 GB/vCore in Gen5).
+SQL Managed Instance is tailored for on-premises workloads that are planning to move to the cloud. It introduces a [purchasing model](../../database/service-tiers-vcore.md) that provides greater flexibility in selecting the right level of resources for your workloads. In the on-premises world, you're probably accustomed to sizing these workloads by using physical cores and IO bandwidth. The purchasing model for managed instance is based upon virtual cores, or "vCores," with additional storage and IO available separately. The vCore model is a simpler way to understand your compute requirements in the cloud versus what you use on-premises today. This purchasing model enables you to right-size your destination environment in the cloud. Some general guidelines that might help you to choose the right service tier and characteristics are described here:
+
+- Based on the baseline CPU usage, you can provision a managed instance that matches the number of cores that you're using on SQL Server, having in mind that CPU characteristics might need to be scaled to match [VM characteristics where the managed instance is installed](../../managed-instance/resource-limits.md#hardware-configuration-characteristics).
+- Based on the baseline memory usage, choose [the service tier that has matching memory](../../managed-instance/resource-limits.md#hardware-configuration-characteristics). The amount of memory can't be directly chosen, so you would need to select the managed instance with the amount of vCores that has matching memory (for example, 5.1 GB/vCore in Gen5).
- Based on the baseline IO latency of the file subsystem, choose between the General Purpose (latency greater than 5 ms) and Business Critical (latency less than 3 ms) service tiers.
- Based on baseline throughput, pre-allocate the size of data or log files to get expected IO performance.
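To capture a coarse starting point for these baselines, you can query the source instance. The following T-SQL is an illustrative sketch only, not an official sizing tool:

```sql
-- Coarse CPU and memory baseline on the source SQL Server
SELECT cpu_count,
       physical_memory_kb / 1024 / 1024 AS physical_memory_gb
FROM sys.dm_os_sys_info;

-- Average IO latency per database file, to compare against the
-- 5 ms (General Purpose) / 3 ms (Business Critical) guidance above
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0)   AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;
```

These numbers are cumulative since instance startup, so capture them during representative workload periods rather than immediately after a restart.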
You can choose compute and storage resources at deployment time and then change
To learn how to create the VNet infrastructure and a managed instance, see [Create a managed instance](../../managed-instance/instance-create-quickstart.md).

> [!IMPORTANT]
+>
> It is important to keep your destination VNet and subnet in accordance with [managed instance VNet requirements](../../managed-instance/connectivity-architecture-overview.md#network-requirements). Any incompatibility can prevent you from creating new instances or using those that you already created. Learn more about [creating new](../../managed-instance/virtual-network-subnet-create-arm-template.md) and [configuring existing](../../managed-instance/vnet-existing-add-subnet.md) networks.

## Migrate
-After you have completed tasks associated with the Pre-migration stage, you are ready to perform the schema and data migration.
+After you have completed tasks associated with the Pre-migration stage, you're ready to perform the schema and data migration.
Migrate your data using your chosen [migration method](sql-server-to-managed-instance-overview.md#compare-migration-options).
-SQL Managed Instance targets user scenarios requiring mass database migration from on-premises or
-Azure VM database implementations. They are the optimal choice when you need to lift and shift
-the back end of the applications that regularly use instance level and/or cross-database
-functionalities. If this is your scenario, you can move an entire instance to a corresponding
-environment in Azure without the need to re-architect your applications.
+SQL Managed Instance targets user scenarios requiring mass database migration from on-premises or Azure VM database implementations. It's the optimal choice when you need to lift and shift the back end of applications that regularly use instance-level and/or cross-database functionality. If this is your scenario, you can move an entire instance to a corresponding environment in Azure without the need to rearchitect your applications.
To move SQL instances, you need to plan carefully:

- The migration of all databases that need to be collocated (ones running on the same instance).
-- The migration of instance-level objects that your application depends on, including logins,
-credentials, SQL Agent jobs and operators, and server-level triggers.
+- The migration of instance-level objects that your application depends on, including logins, credentials, SQL Agent jobs and operators, and server-level triggers.
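To inventory the instance-level objects mentioned above before migrating, queries along these lines can help (a hedged sketch; adjust the filters to your environment):

```sql
-- Logins (SQL, Windows, and Windows-group), excluding internal ## logins
SELECT name, type_desc
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G') AND name NOT LIKE '##%';

-- SQL Agent jobs
SELECT name, enabled
FROM msdb.dbo.sysjobs;

-- Server-level triggers
SELECT name
FROM sys.server_triggers;
```

Run these on the source instance and compare the results against what you script out and re-create on the target.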
-SQL Managed Instance is a managed service that allows you to delegate some of the regular DBA
-activities to the platform as they are built in. Therefore, some instance-level data does not
-need to be migrated, such as maintenance jobs for regular backups or Always On configuration, as
-[high availability](../../database/high-availability-sla.md) is built in.
+SQL Managed Instance is a managed service that allows you to delegate some of the regular DBA activities to the platform as they're built in. Therefore, some instance-level data doesn't need to be migrated, such as maintenance jobs for regular backups or Always On configuration, as [high availability](../../database/high-availability-sla.md) is built in.
This article covers two of the recommended migration options:

-- Azure Database Migration Service - migration with near-zero downtime.
-- Native `RESTORE DATABASE FROM URL` - uses native backups from SQL Server and requires some
-downtime.
+- Azure SQL Migration extension for Azure Data Studio - migration with near-zero downtime.
+- Native `RESTORE DATABASE FROM URL` - uses native backups from SQL Server and requires some downtime.
This guide describes the two most popular options - Azure Database Migration Service (DMS) and native backup and restore. For other migration tools, see [Compare migration options](sql-server-to-managed-instance-overview.md#compare-migration-options).
-### Database Migration Service
-
-To perform migrations using DMS, follow the steps below:
-
-1. [Register the **Microsoft.DataMigration** resource provider](../../../dms/quickstart-create-data-migration-service-portal.md#register-the-resource-provider) in your subscription if you are performing this for the first time.
-1. Create an Azure Database Migration Service Instance in a desired location of your choice (preferably in the same region as your target Azure SQL Managed Instance) and select an existing virtual network or create a new one to host your DMS instance.
-1. After creating your DMS instance, create a new migration project and specify the source server type as **SQL Server** and the target server type as **Azure SQL Database Managed Instance**. Choose the type of activity in the project creation blade - online or offline data migration.
-1. Specify the source SQL Server details on the **Migration source** details page and the target Azure SQL Managed Instance details on the **Migration target** details page. Select **Next**.
-1. Choose the database you want to migrate.
-1. Provide configuration settings to specify the **SMB Network Share** that contains your database backup files. Use Windows User credentials with DMS that can access the network share. Provide your **Azure Storage account details**.
-1. Review the migration summary, and choose **Run migration**. You can then monitor the migration activity and check the progress of your database migration.
-1. After database is restored, choose **Start cutover**. The migration process copies the tail-log backup once you make it available in the SMB network share and restore it on the target.
-1. Stop all incoming traffic to your source database and update the connection string to the new Azure SQL Managed Instance database.
-
-For a detailed step-by-step tutorial of this migration option, see [Migrate SQL Server to an Azure SQL Managed Instance online using DMS](../../../dms/tutorial-sql-server-managed-instance-online.md).
+### Migrate using the Azure SQL Migration extension for Azure Data Studio (minimal downtime)
+
+To perform a minimal downtime migration using Azure Data Studio, follow the high-level steps below. For a detailed step-by-step tutorial, see [Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio](../../../dms/tutorial-sql-server-managed-instance-online-ads.md):
+
+1. Download and install [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) and the [Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+1. Launch the Migrate to Azure SQL wizard in the extension in Azure Data Studio.
+1. Select databases for assessment and view migration readiness or issues (if any). Additionally, collect performance data and get a right-sized Azure recommendation.
+1. Select your Azure account and your target Azure SQL Managed Instance from your subscription.
+1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.
+1. Create a new Azure Database Migration Service using the wizard in Azure Data Studio. If you've previously created an Azure Database Migration Service using Azure Data Studio, you can reuse it if desired.
+1. *Optional*: If your backups are on an on-premises network share, download and install [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717) on a machine that can connect to the source SQL Server, and the location containing the backup files.
+1. Start the database migration and monitor the progress in Azure Data Studio. You can also monitor the progress under the Azure Database Migration Service resource in the Azure portal.
+1. Complete the cutover.
+ 1. Stop all incoming transactions to the source database.
+ 1. Make application configuration changes to point to the target database in Azure SQL Managed Instance.
+ 1. Take any tail log backups for the source database in the backup location specified.
+ 1. Ensure all database backups have the status Restored in the monitoring details page.
+ 1. Select Complete cutover in the monitoring details page.
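For illustration only, the tail-log backup in the cutover steps above might look like the following T-SQL on the source SQL Server (the share path and database name are placeholders; `NORECOVERY` leaves the source database in a restoring state so it accepts no further transactions):

```sql
-- Final tail-log backup before cutover (placeholder path and database name)
BACKUP LOG [SampleDb]
TO DISK = N'\\fileshare\backups\SampleDb_taillog.trn'
WITH NORECOVERY, CHECKSUM;
```

Taking the tail-log backup with `NORECOVERY` is a common way to guarantee that no writes occur on the source after the final log backup is captured.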
-### Backup and restore
+### Backup and restore
-One of the key capabilities of Azure SQL Managed Instance to enable quick and easy database migration is the native restore of database backup (`.bak`) files stored on on [Azure Storage](https://azure.microsoft.com/services/storage/). Backup and restore is an asynchronous operation based on the size of your database.
+One of the key capabilities of Azure SQL Managed Instance to enable quick and easy database migration is the native restore of database backup (`.bak`) files stored on [Azure Storage](https://azure.microsoft.com/services/storage/). Backing up and restoring are asynchronous operations whose duration depends on the size of your database.
The following diagram provides a high-level overview of the process:
-![Diagram shows SQL Server with an arrow labeled BACKUP / Upload to URL flowing to Azure Storage and a second arrow labeled RESTORE from URL flowing from Azure Storage to a Managed Instance of SQL.](./media/sql-server-to-managed-instance-overview/migration-restore.png)
> [!NOTE]
+>
> The time to take the backup, upload it to Azure storage, and perform a native restore operation to Azure SQL Managed Instance is based on the size of the database. Factor in sufficient downtime to accommodate the operation for large databases.

The following table provides more information regarding the methods you can use depending on
-source SQL Server version you are running:
+source SQL Server version you're running:
|Step|SQL Engine and version|Backup/restore method|
||||
source SQL Server version you are running:
> [!IMPORTANT]
>
-> - When you're migrating a database protected by [Transparent Data Encryption](../../database/transparent-data-encryption-tde-overview.md) to a managed instance using native restore option,
-the corresponding certificate from the on-premises or Azure VM SQL Server needs to be migrated
-before database restore. For detailed steps, see [Migrate a TDE cert to a managed instance](../../managed-instance/tde-certificate-migrate.md).
-> - Restore of system databases is not supported. To migrate instance-level objects (stored in
-master or msdb databases), we recommend to script them out and run T-SQL scripts on the
-destination instance.
+> - When you're migrating a database protected by [Transparent Data Encryption](../../database/transparent-data-encryption-tde-overview.md) to a managed instance using native restore option, the corresponding certificate from the on-premises or Azure VM SQL Server needs to be migrated before database restore. For detailed steps, see [Migrate a TDE cert to a managed instance](../../managed-instance/tde-certificate-migrate.md).
+> - Restore of system databases is not supported. To migrate instance-level objects (stored in `master` or `msdb` databases), we recommend scripting them out and running T-SQL scripts on the destination instance.
To migrate using backup and restore, follow these steps:
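As a hedged sketch of the backup and restore flow (the storage URL, SAS secret, and database name are placeholders, and the SAS token must be created separately on the storage account):

```sql
-- On the source SQL Server: create a credential for the container,
-- then back up directly to Azure Blob Storage (Upload to URL)
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/migration]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS-token-without-leading-question-mark>';

BACKUP DATABASE [SampleDb]
TO URL = 'https://mystorageaccount.blob.core.windows.net/migration/SampleDb.bak'
WITH COPY_ONLY, CHECKSUM, COMPRESSION;

-- On the target SQL Managed Instance: create the same credential,
-- then restore from the uploaded backup file
RESTORE DATABASE [SampleDb]
FROM URL = 'https://mystorageaccount.blob.core.windows.net/migration/SampleDb.bak';
```

`COPY_ONLY` keeps the migration backup from disturbing any existing backup chain on the source.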
To learn more about this migration option, see [Restore a database to Azure SQL Managed Instance with SSMS](../../managed-instance/restore-sample-database-quickstart.md).

> [!NOTE]
-> A database restore operation is asynchronous and retryable. You might get an error in SQL Server Management Studio if the connection breaks or a time-out expires. Azure SQL Database will keep trying to restore database in the background, and you can track the progress of the restore using the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) and [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) views.
-
+>
+> A database restore operation is asynchronous and can be retried. You might get an error in SQL Server Management Studio if the connection breaks or a time-out expires. Azure SQL Database will keep trying to restore the database in the background, and you can track the progress of the restore using the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) and [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) views.
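For example, the two views mentioned in the note can be queried like this on the target instance (the database name is a placeholder):

```sql
-- In-flight RESTORE requests with percent complete
SELECT session_id, command, percent_complete, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';

-- Status of the asynchronous restore operation
SELECT operation, state_desc, percent_complete, start_time, last_modify_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'SampleDb';
```

Polling these views periodically is usually enough to confirm whether a seemingly stalled restore is still progressing in the background.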
## Data sync and cutover

When using migration options that continuously replicate / sync data changes from source to the target, the source data and schema can change and drift from the target. During data sync, ensure that all changes on the source are captured and applied to the target during the migration process.
-After you verify that data is the same on both source and target, you can cutover from the source to the target environment. It is important to plan the cutover process with business / application teams to ensure minimal interruption during cutover does not affect business continuity.
+After you verify that data is the same on both source and target, you can cut over from the source to the target environment. It's important to plan the cutover process with business / application teams to ensure minimal interruption during cutover doesn't affect business continuity.
> [!IMPORTANT]
+>
> For details on the specific steps associated with performing a cutover as part of migrations using DMS, see [Performing migration cutover](../../../dms/tutorial-sql-server-managed-instance-online.md#performing-migration-cutover).

## Post-migration
-After you have successfully completed the migration stage, go through a series of post-migration tasks to ensure that everything is functioning smoothly and efficiently.
+After you've successfully completed the migration stage, go through a series of post-migration tasks to ensure that everything is functioning smoothly and efficiently.
-The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, and addressing performance issues with the workload.
### Monitor and remediate applications
-Once you have completed the migration to a managed instance, you should track the application behavior and performance of your workload. This process includes the following activities:
+Once you've completed the migration to a managed instance, you should track the application behavior and performance of your workload. This process includes the following activities:
- [Compare performance of the workload running on the managed instance](sql-server-to-managed-instance-performance-baseline.md#compare-performance) with the [performance baseline that you created on the source SQL Server instance](sql-server-to-managed-instance-performance-baseline.md#create-a-baseline).
- Continuously [monitor performance of your workload](sql-server-to-managed-instance-performance-baseline.md#monitor-performance) to identify potential issues and improvements.
Once you have completed the migration to a managed instance, you should track th
The test approach for database migration consists of the following activities:
-1. **Develop validation tests**: To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
+1. **Develop validation tests**: To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you've defined.
1. **Set up test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
1. **Run validation tests**: Run the validation tests against the source and the target, and then analyze the results.
1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
+## Use advanced features
-## Leverage advanced features
-
-Be sure to take advantage of the advanced cloud-based features offered by SQL Managed Instance, such as [built-in high availability](../../database/high-availability-sla.md), [threat detection](../../database/azure-defender-for-sql.md), and [monitoring and tuning your workload](../../database/monitor-tune-overview.md).
+You can take advantage of the advanced cloud-based features offered by SQL Managed Instance, such as [built-in high availability](../../database/high-availability-sla.md), [threat detection](../../database/azure-defender-for-sql.md), and [monitoring and tuning your workload](../../database/monitor-tune-overview.md).
[Azure SQL Analytics](../../../azure-sql/database/monitor-tune-overview.md) allows you to monitor a large set of managed instances in a centralized manner.

Some SQL Server features are only available once the [database compatibility level](/sql/relational-databases/databases/view-or-change-the-compatibility-level-of-a-database) is changed to the latest compatibility level (150).

## Next steps

-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
+- See [Service and tools for data migration](../../../dms/dms-tools-matrix.md) for a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks.
- To learn more about Azure SQL Managed Instance see:
   - [Service Tiers in Azure SQL Managed Instance](../../managed-instance/sql-managed-instance-paas-overview.md#service-tiers)
   - [Differences between SQL Server and Azure SQL Managed Instance](../../managed-instance/transact-sql-tsql-differences-sql-server.md)
   - [Azure total Cost of Ownership Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
- To learn more about the framework and adoption cycle for Cloud migrations, see:
   - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
   - [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
- To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)
+- For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
ms.devlang:
- Previously updated : 04/06/2022
+ Last updated : 04/11/2022

# Migration overview: SQL Server to Azure SQL Managed Instance

[!INCLUDE[appliesto--sqlmi](../../includes/appliesto-sqlmi.md)]

Learn about the options and considerations for migrating your SQL Server databases to Azure SQL Managed Instance.
One of the key benefits of migrating your SQL Server databases to SQL Managed In
- Instance-level objects required for your application, including logins, credentials, SQL Agent jobs and operators, and server-level triggers

> [!NOTE]
-> Azure SQL Managed Instance guarantees 99.99 percent availability, even in critical scenarios. Overhead caused by some features in SQL Managed Instance can't be disabled. For more information, see the [Key causes of performance differences between SQL Managed Instance and SQL Server](https://azure.microsoft.com/blog/key-causes-of-performance-differences-between-sql-managed-instance-and-sql-server/) blog entry.
-
+>
+> Azure SQL Managed Instance guarantees 99.99 percent availability, even in critical scenarios. Overhead caused by some features in SQL Managed Instance can't be disabled. For more information, see the [Key causes of performance differences between SQL Managed Instance and SQL Server](https://azure.microsoft.com/blog/key-causes-of-performance-differences-between-sql-managed-instance-and-sql-server/) blog entry.

## Choose an appropriate target
## Choose an appropriate target
+You can use the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to get a right-sized Azure SQL Managed Instance recommendation. The extension collects performance data from your source SQL Server instance to provide a right-sized Azure recommendation that meets your workload's performance needs with minimal cost. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md).
+
The following general guidelines can help you choose the right service tier and characteristics of SQL Managed Instance to help match your [performance baseline](sql-server-to-managed-instance-performance-baseline.md):

-- Use the CPU usage baseline to provision a managed instance that matches the number of cores that your instance of SQL Server uses. It might be necessary to scale resources to match the [hardware characteristics](../../managed-instance/resource-limits.md#hardware-configuration-characteristics).
+- Use the CPU usage baseline to provision a managed instance that matches the number of cores that your instance of SQL Server uses. It might be necessary to scale resources to match the [hardware configuration characteristics](../../managed-instance/resource-limits.md#hardware-configuration-characteristics).
- Use the memory usage baseline to choose a [vCore option](../../managed-instance/resource-limits.md#service-tier-characteristics) that appropriately matches your memory allocation.
- Use the baseline I/O latency of the file subsystem to choose between the General Purpose (latency greater than 5 ms) and Business Critical (latency less than 3 ms) service tiers.
- Use the baseline throughput to preallocate the size of the data and log files to achieve expected I/O performance.
The following general guidelines can help you choose the right service tier and
You can choose compute and storage resources during deployment and then [change them afterward by using the Azure portal](../../database/scale-resources.md), without incurring downtime for your application.

> [!IMPORTANT]
+>
> Any discrepancy in the [virtual network requirements for managed instances](../../managed-instance/connectivity-architecture-overview.md#network-requirements) can prevent you from creating new instances or using existing ones. Learn more about [creating new](../../managed-instance/virtual-network-subnet-create-arm-template.md) and [configuring existing](../../managed-instance/vnet-existing-add-subnet.md) networks.

Another key consideration in the selection of the target service tier in Azure SQL Managed Instance (General Purpose versus Business Critical) is the availability of certain features, like In-Memory OLTP, that are available only in the Business Critical tier.
We recommend the following migration tools:
|[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | This cloud service is enabled for SQL Managed Instance based on SQL Server log-shipping technology. It's a migration option for customers who can provide full, differential, and log database backups to Azure Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance.|
|[Managed Instance link](../../managed-instance/managed-instance-link-feature-overview.md) | This feature enables online migration to Managed Instance using Always On technology. It's a migration option for customers who require the database on Managed Instance to be accessible in R/O mode while migration is in progress, who need to keep the migration running for prolonged periods of time (weeks or months at a time), who require true online replication to the Business Critical service tier, and who require the most performant minimum-downtime migration. |

The following table lists alternative migration tools:

|**Technology** |**Description** |
|||
|[Transactional replication](../../managed-instance/replication-transactional-overview.md) | Replicate data from source SQL Server database tables to SQL Managed Instance by providing a publisher-subscriber type migration option while maintaining transactional consistency. |
-|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)| The [bulk copy program (bcp) tool](/sql/tools/bcp-utility) copies data from an instance of SQL Server into a data file. Use the tool to export the data from your source and import the data file into the target SQL managed instance. </br></br> For high-speed bulk copy operations to move data to Azure SQL Managed Instance, you can use the [Smart Bulk Copy tool](/samples/azure-samples/smartbulkcopy/smart-bulk-copy/) to maximize transfer speed by taking advantage of parallel copy tasks. |
+|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)| The [bulk copy program (bcp) tool](/sql/tools/bcp-utility) copies data from an instance of SQL Server into a data file. Use the tool to export the data from your source and import the data file into the target SQL managed instance.<br/><br/>For high-speed bulk copy operations to move data to Azure SQL Managed Instance, you can use the [Smart Bulk Copy tool](/samples/azure-samples/smartbulkcopy/smart-bulk-copy/) to maximize transfer speed by taking advantage of parallel copy tasks. |
|[Import Export Wizard/BACPAC](../../database/database-import.md?tabs=azure-powershell)| [BACPAC](/sql/relational-databases/data-tier-applications/data-tier-applications#bacpac) is a Windows file with a .bacpac extension that encapsulates a database's schema and data. You can use BACPAC to both export data from a SQL Server source and import the data back into Azure SQL Managed Instance. |
-|[Azure Data Factory](../../../data-factory/connector-azure-sql-managed-instance.md)| The [Copy activity](../../../data-factory/copy-activity-overview.md) in Azure Data Factory migrates data from source SQL Server databases to SQL Managed Instance by using built-in connectors and an [integration runtime](../../../data-factory/concepts-integration-runtime.md).</br> </br> Data Factory supports a wide range of [connectors](../../../data-factory/connector-overview.md) to move data from SQL Server sources to SQL Managed Instance. |
+|[Azure Data Factory](../../../data-factory/connector-azure-sql-managed-instance.md)| The [Copy activity](../../../data-factory/copy-activity-overview.md) in Azure Data Factory migrates data from source SQL Server databases to SQL Managed Instance by using built-in connectors and an [integration runtime](../../../data-factory/concepts-integration-runtime.md).<br/><br/>Data Factory supports a wide range of [connectors](../../../data-factory/connector-overview.md) to move data from SQL Server sources to SQL Managed Instance. |
## Compare migration options
The following table compares the recommended migration options:
|Migration option |When to use |Considerations |
||||
-|[Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) | - Migrate single databases or multiple databases at scale. </br> - Can run in both online (minimal downtime) and offline (acceptable downtime) modes. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Easy to setup and get started. </br> - Requires setup of self-hosted integration runtime to access on-premises SQL Server and backups. </br> - Includes both assessment and migration capabilities. |
-|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | - Migrate single databases or multiple databases at scale. </br> - Can accommodate downtime during the migration process. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Migrations at scale can be automated via [PowerShell](../../../dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md). </br> - Time to complete migration depends on database size and is affected by backup and restore time. </br> - Sufficient downtime might be required. |
-|[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | - Migrate individual line-of-business application databases. </br> - Quick and easy migration without a separate migration service or tool. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Database backup uses multiple threads to optimize data transfer to Azure Blob Storage, but partner bandwidth and database size can affect transfer rate. </br> - Downtime should accommodate the time required to perform a full backup and restore (which is a size of data operation).|
-|[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> </br> Supported sources: </br> - SQL Server (2008 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. </br> - Databases being restored during the migration process will be in a restoring mode and can't be used for read or write workloads until the process is complete.|
-|[Managed Instance link](../../managed-instance/managed-instance-link-feature-overview.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> - Minimum downtime migration is needed. </br> </br> Supported sources: </br> - SQL Server (2016 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - GCP Compute SQL Server VM | - The migration entails establishing a network connection between SQL Server and SQL Managed Instance, and opening communication ports. </br> - Uses [Always On availability group](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server) technology to replicate database near real-time, making an exact replica of the SQL Server database on SQL Managed Instance. </br> - The database can be used for read-only access on SQL Managed Instance while migration is in progress. </br> - Provides the best performance during migration with minimum downtime. |
-
+|[Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) | - Migrate single databases or multiple databases at scale.<br/>- Can run in both online (minimal downtime) and offline (acceptable downtime) modes.<br/><br/>Supported sources:<br/>- SQL Server (2005 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM | - Easy to set up and get started.<br/>- Requires setup of a self-hosted integration runtime to access on-premises SQL Server and backups.<br/>- Includes both assessment and migration capabilities. |
+|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | - Migrate single databases or multiple databases at scale.<br/>- Can accommodate downtime during the migration process.<br/><br/>Supported sources:<br/>- SQL Server (2005 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM | - Migrations at scale can be automated via [PowerShell](../../../dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md).<br/>- Time to complete migration depends on database size and is affected by backup and restore time.<br/>- Sufficient downtime might be required. |
+|[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | - Migrate individual line-of-business application databases.<br/>- Quick and easy migration without a separate migration service or tool.<br/><br/>Supported sources:<br/>- SQL Server (2005 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM | - Database backup uses multiple threads to optimize data transfer to Azure Blob Storage, but partner bandwidth and database size can affect transfer rate.<br/>- Downtime should accommodate the time required to perform a full backup and restore (which is a size of data operation).|
+|[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | - Migrate individual line-of-business application databases.<br/>- More control is needed for database migrations.<br/><br/>Supported sources:<br/>- SQL Server (2008 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM | - The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance.<br/>- Databases being restored during the migration process will be in a restoring mode and can't be used for read or write workloads until the process is complete.|
+|[Link feature for Azure SQL Managed Instance](../../managed-instance/managed-instance-link-feature-overview.md) | - Migrate individual line-of-business application databases.<br/>- More control is needed for database migrations.<br/>- Minimum downtime migration is needed.<br/><br/>Supported sources:<br/>- SQL Server (2016 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- GCP Compute SQL Server VM | - The migration entails establishing a network connection between SQL Server and SQL Managed Instance, and opening communication ports.<br/>- Uses [Always On availability group](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server) technology to replicate the database in near real time, making an exact replica of the SQL Server database on SQL Managed Instance.<br/>- The database can be used for read-only access on SQL Managed Instance while migration is in progress.<br/>- Provides the best performance during migration with minimum downtime. |
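As an illustration of the native backup and restore option above, the following is a minimal T-SQL sketch. The storage URL and database name are hypothetical, and both the source instance and the target managed instance need a credential (for example, a SAS-based credential) for the container:

```sql
-- Hypothetical storage URL and database name; adjust for your environment.
-- On the source SQL Server: back up directly to Azure Blob Storage.
BACKUP DATABASE [SalesDB]
TO URL = N'https://mystorageacct.blob.core.windows.net/backups/SalesDB_full.bak'
WITH COPY_ONLY, COMPRESSION, CHECKSUM;

-- On the target SQL Managed Instance: restore from the same URL.
RESTORE DATABASE [SalesDB]
FROM URL = N'https://mystorageacct.blob.core.windows.net/backups/SalesDB_full.bak';
```

Note that restore on SQL Managed Instance doesn't accept `WITH MOVE`; file placement is managed by the service.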
The following table compares the alternative migration options:

|Method or technology |When to use |Considerations |
||||
-|[Transactional replication](../../managed-instance/replication-transactional-overview.md) | - Migrate by continuously publishing changes from source database tables to target SQL Managed Instance database tables. </br> - Do full or partial database migrations of selected tables (subset of a database). </br> </br> Supported sources: </br> - SQL Server (2012 to 2019) with some limitations </br> - AWS EC2 </br> - GCP Compute SQL Server VM | </br> - Setup is relatively complex compared to other migration options. </br> - Provides a continuous replication option to migrate data (without taking the databases offline).</br> - Transactional replication has limitations to consider when you're setting up the publisher on the source SQL Server instance. See [Limitations on publishing objects](/sql/relational-databases/replication/publish/publish-data-and-database-objects#limitations-on-publishing-objects) to learn more. </br> - Capability to [monitor replication activity](/sql/relational-databases/replication/monitor/monitoring-replication) is available. |
-|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)| - Do full or partial data migrations. </br> - Can accommodate downtime. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Requires downtime for exporting data from the source and importing into the target. </br> - The file formats and data types used in the export or import need to be consistent with table schemas. |
-|[Import Export Wizard/BACPAC](../../database/database-import.md)| - Migrate individual line-of-business application databases. </br>- Suited for smaller databases. </br> Does not require a separate migration service or tool. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | </br> - Requires downtime because data needs to be exported at the source and imported at the destination. </br> - The file formats and data types used in the export or import need to be consistent with table schemas to avoid truncation or data-type mismatch errors. </br> - Time taken to export a database with a large number of objects can be significantly higher. |
-|[Azure Data Factory](../../../data-factory/connector-azure-sql-managed-instance.md)| - Migrate and/or transform data from source SQL Server databases.</br> - Merging data from multiple sources of data to Azure SQL Managed Instance is typically for business intelligence (BI) workloads. </br> - Requires creating data movement pipelines in Data Factory to move data from source to destination. </br> - [Cost](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) is an important consideration and is based on factors like pipeline triggers, activity runs, and duration of data movement. |
-
+|[Transactional replication](../../managed-instance/replication-transactional-overview.md) | - Migrate by continuously publishing changes from source database tables to target SQL Managed Instance database tables.<br/>- Do full or partial database migrations of selected tables (subset of a database).<br/><br/>Supported sources:<br/>- SQL Server (2012 to 2019) with some limitations<br/>- AWS EC2<br/>- GCP Compute SQL Server VM |<br/>- Setup is relatively complex compared to other migration options.<br/>- Provides a continuous replication option to migrate data (without taking the databases offline).<br/>- Transactional replication has limitations to consider when you're setting up the publisher on the source SQL Server instance. See [Limitations on publishing objects](/sql/relational-databases/replication/publish/publish-data-and-database-objects#limitations-on-publishing-objects) to learn more.<br/>- Capability to [monitor replication activity](/sql/relational-databases/replication/monitor/monitoring-replication) is available. |
+|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)| - Do full or partial data migrations.<br/>- Can accommodate downtime.<br/><br/>Supported sources:<br/>- SQL Server (2005 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM | - Requires downtime for exporting data from the source and importing into the target.<br/>- The file formats and data types used in the export or import need to be consistent with table schemas. |
+|[Import Export Wizard/BACPAC](../../database/database-import.md)| - Migrate individual line-of-business application databases.<br/>- Suited for smaller databases.<br/>Doesn't require a separate migration service or tool.<br/><br/>Supported sources:<br/>- SQL Server (2005 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM |<br/>- Requires downtime because data needs to be exported at the source and imported at the destination.<br/>- The file formats and data types used in the export or import need to be consistent with table schemas to avoid truncation or data-type mismatch errors.<br/>- Time taken to export a database with a large number of objects can be significantly higher. |
+|[Azure Data Factory](../../../data-factory/connector-azure-sql-managed-instance.md)| - Migrate and/or transform data from source SQL Server databases.<br/>- Merging data from multiple sources to Azure SQL Managed Instance is typically for business intelligence (BI) workloads. | - Requires creating data movement pipelines in Data Factory to move data from source to destination.<br/>- [Cost](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) is an important consideration and is based on factors like pipeline triggers, activity runs, and duration of data movement. |
## Feature interoperability
Migrate SQL Server Integration Services (SSIS) packages and projects in SSISDB t
Only SSIS packages in SSISDB starting with SQL Server 2012 are supported for migration. Convert older SSIS packages before migration. See the [project conversion tutorial](/sql/integration-services/lesson-6-2-converting-the-project-to-the-project-deployment-model) to learn more. - ### SQL Server Reporting Services You can migrate SQL Server Reporting Services (SSRS) reports to paginated reports in Power BI. Use the [RDL Migration Tool](https://github.com/microsoft/RdlMigration) to help prepare and migrate your reports. Microsoft developed this tool to help customers migrate Report Definition Language (RDL) reports from their SSRS servers to Power BI. It's available on GitHub, and it documents an end-to-end walkthrough of the migration scenario.
Beyond the high-availability architecture that's included in SQL Managed Instanc
Use the offline Azure Database Migration Service option to migrate [SQL Agent jobs](../../../dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md). Otherwise, script the jobs in Transact-SQL (T-SQL) by using SQL Server Management Studio and then manually re-create them on the target SQL managed instance. > [!IMPORTANT]
+>
> Currently, Azure Database Migration Service supports only jobs with T-SQL subsystem steps. Jobs with SSIS package steps have to be manually migrated. ### Logins and groups
By default, Azure Database Migration Service supports migrating only SQL logins.
- Ensuring that the target SQL managed instance has Azure Active Directory (Azure AD) read access. A user who has the Global Administrator role can configure that access via the Azure portal. - Configuring Azure Database Migration Service to enable Windows user or group login migrations. You set this up via the Azure portal, on the **Configuration** page. After you enable this setting, restart the service for the changes to take effect.
-After you restart the service, Windows user or group logins appear in the list of logins available for migration. For any Windows user or group logins that you migrate, you're prompted to provide the associated domain name. Service user accounts (accounts with the domain name NT AUTHORITY) and virtual user accounts (accounts with the domain name NT SERVICE) are not supported. To learn more, see [How to migrate Windows users and groups in a SQL Server instance to Azure SQL Managed Instance using T-SQL](../../managed-instance/migrate-sql-server-users-to-instance-transact-sql-tsql-tutorial.md).
+After you restart the service, Windows user or group logins appear in the list of logins available for migration. For any Windows user or group logins that you migrate, you're prompted to provide the associated domain name. Service user accounts (accounts with the domain name NT AUTHORITY) and virtual user accounts (accounts with the domain name NT SERVICE) aren't supported. To learn more, see [How to migrate Windows users and groups in a SQL Server instance to Azure SQL Managed Instance using T-SQL](../../managed-instance/migrate-sql-server-users-to-instance-transact-sql-tsql-tutorial.md).
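One way to re-create a migrated Windows account as an Azure AD login on the managed instance is sketched below. The account name and role are hypothetical, and the statement requires the managed instance to have Azure AD read access configured:

```sql
-- Hypothetical Azure AD account; requires Azure AD read access on the
-- managed instance.
CREATE LOGIN [user1@contoso.com] FROM EXTERNAL PROVIDER;

-- Grant server-level role membership as appropriate (example role).
ALTER SERVER ROLE [dbcreator] ADD MEMBER [user1@contoso.com];
```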
Alternatively, you can use the [PowerShell utility](https://www.microsoft.com/download/details.aspx?id=103111) specially designed by Microsoft data migration architects. The utility uses PowerShell to create a T-SQL script to re-create logins and select database users from the source to the target.
-The PowerShell utility automatically maps Windows Server Active Directory accounts to Azure AD accounts, and it can do a UPN lookup for each login against the source Active Directory instance. The utility scripts custom server and database roles, along with role membership and user permissions. Contained databases are not yet supported, and only a subset of possible SQL Server permissions are scripted.
+The PowerShell utility automatically maps Windows Server Active Directory accounts to Azure AD accounts, and it can do a UPN lookup for each login against the source Active Directory instance. The utility scripts custom server and database roles, along with role membership and user permissions. Contained databases aren't yet supported, and only a subset of possible SQL Server permissions is scripted.
### Encryption
When you're migrating databases protected by [Transparent Data Encryption](../
### System databases
-Restore of system databases is not supported. To migrate instance-level objects (stored in the master and msdb databases), script them by using T-SQL and then re-create them on the target managed instance.
+Restore of system databases isn't supported. To migrate instance-level objects (stored in the `master` and `msdb` databases), script them by using T-SQL and then re-create them on the target managed instance.
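Before scripting, you can inventory common instance-level objects stored in `master` and `msdb` with queries like the following (a sketch; extend the list to cover the objects your instance actually uses):

```sql
-- Inventory instance-level objects to script with T-SQL and re-create
-- on the target managed instance.
SELECT name FROM sys.servers WHERE is_linked = 1;                    -- linked servers (master)
SELECT name FROM msdb.dbo.sysjobs;                                   -- SQL Agent jobs (msdb)
SELECT name FROM sys.server_principals WHERE type IN ('S','U','G');  -- logins (master)
```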
### In-Memory OLTP (memory-optimized tables) SQL Server provides an In-Memory OLTP capability. It allows usage of memory-optimized tables, memory-optimized table types, and natively compiled SQL modules to run workloads that have high-throughput and low-latency requirements for transactional processing. > [!IMPORTANT]
+>
> In-Memory OLTP is supported only in the Business Critical tier in Azure SQL Managed Instance. It's not supported in the General Purpose tier. If you have memory-optimized tables or memory-optimized table types in your on-premises SQL Server instance and you want to migrate to Azure SQL Managed Instance, you should either:
For more assistance, see the following resources that were developed for real-wo
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform. - ## Next steps - To start migrating your SQL Server databases to Azure SQL Managed Instance, see the [SQL Server to Azure SQL Managed Instance migration guide](sql-server-to-managed-instance-guide.md).
azure-sql Sql Server To Sql On Azure Vm Individual Databases Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide.md
ms.devlang:
- Previously updated : 03/19/2021+ Last updated : 04/11/2022 # Migration guide: SQL Server to SQL Server on Azure Virtual Machines [!INCLUDE[appliesto--sqlmi](../../includes/appliesto-sqlvm.md)]
-In this guide, you learn how to *discover*, *assess*, and *migrate* your user databases from SQL Server to an instance of SQL Server on Azure Virtual Machines by using backup and restore and log shipping that uses [Data Migration Assistant](/sql/dma/dma-overview) for assessment.
+In this guide, you learn how to *discover*, *assess*, and *migrate* your user databases from SQL Server to an instance of SQL Server on Azure Virtual Machines by using tools and techniques based on your requirements.
You can migrate SQL Server running on-premises or on:
For information about extra migration strategies, see the [SQL Server VM migrati
Migrating to SQL Server on Azure Virtual Machines requires the following resources: -- [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595).-- An [Azure Migrate project](../../../migrate/create-manage-projects.md).
+- [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+- An [Azure Migrate project](../../../migrate/create-manage-projects.md) (only required for SQL Server discovery in your data estate).
- A prepared target [SQL Server on Azure Virtual Machines](../../virtual-machines/windows/create-sql-vm-portal.md) instance that's the same or greater version than the SQL Server source. - [Connectivity between Azure and on-premises](/azure/architecture/reference-architectures/hybrid-networking). - [Choosing an appropriate migration strategy](sql-server-to-sql-on-azure-vm-migration-overview.md#migrate).
For more discovery tools, see the [services and tools](../../../dms/dms-tools-ma
### Assess
+When you migrate from on-premises SQL Server to SQL Server on Azure Virtual Machines, you're unlikely to have compatibility or feature parity issues if the source and target SQL Server versions are the same. If you're *not* upgrading the version of SQL Server, skip this step and move to the [Migrate](#migrate) section.
-After you've discovered all the data sources, use [Data Migration Assistant](/sql/dma/dma-overview) to assess on-premises SQL Server instances migrating to an instance of SQL Server on Azure Virtual Machines to understand the gaps between the source and target instances.
+Before migration, it's still a good practice to run an assessment of your SQL Server databases to identify migration blockers (if any). The [Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) performs this assessment before migration.
-> [!NOTE]
-> If you're _not_ upgrading the version of SQL Server, skip this step and move to the [Migrate](#migrate) section.
#### Assess user databases
-Data Migration Assistant assists your migration to a modern data platform by detecting compatibility issues that can affect database functionality in your new version of SQL Server. Data Migration Assistant recommends performance and reliability improvements for your target environment and also allows you to move your schema, data, and login objects from your source server to your target server.
+The [Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) provides a seamless wizard-based experience to assess your on-premises SQL Server databases, get Azure recommendations, and migrate them to SQL Server on Azure Virtual Machines. Besides highlighting any migration blockers or warnings, the extension can also collect your databases' performance data and recommend a right-sized SQL Server on Azure Virtual Machines configuration that meets the performance needs of your workload at the lowest cost.
-To learn more, see [Assessment](/sql/dma/dma-migrateonpremsql).
+To learn more about Azure recommendations, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md).
> [!IMPORTANT]
->Based on the type of assessment, the permissions required on the source SQL Server can be different:
- > - For the *feature parity* advisor, the credentials provided to connect to the source SQL Server database must be a member of the *sysadmin* server role.
- > - For the *compatibility issues* advisor, the credentials provided must have at least `CONNECT SQL`, `VIEW SERVER STATE`, and `VIEW ANY DEFINITION` permissions.
- > - Data Migration Assistant will highlight the permissions required for the chosen advisor before running the assessment.
+> To assess databases by using the Azure SQL Migration extension, ensure that the logins used to connect to the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
+
+If you're upgrading to an instance of SQL Server on Azure Virtual Machines with a higher SQL Server version, use [Data Migration Assistant](/sql/dma/dma-overview) to assess your on-premises SQL Server instances and understand the gaps between the source and target versions.
#### Assess the applications
During the assessment of user databases, use Data Migration Assistant to [import
#### Assessments at scale
-If you have multiple servers that require a Data Migration Assistant assessment, you can automate the process by using the [command-line interface](/sql/dma/dma-commandline). Using the interface, you can prepare assessment commands in advance for each SQL Server instance in the scope for migration.
+If you have multiple servers that require an Azure readiness assessment, you can automate the process by using one of the following scripting options. To learn more, see [Migrate databases at scale using automation](../../../dms/migration-dms-powershell-cli.md).
+- [Az.DataMigration PowerShell module](/powershell/module/az.datamigration)
+- [az datamigration CLI extension](/cli/azure/datamigration)
+- [Data Migration Assistant command-line interface](/sql/dma/dma-commandline)
-For summary reporting across large estates, Data Migration Assistant assessments can now be [consolidated into Azure Migrate](/sql/dma/dma-assess-sql-data-estate-to-sqldb).
+For summary reporting across large estates, Data Migration Assistant assessments can also be [consolidated into Azure Migrate](/sql/dma/dma-assess-sql-data-estate-to-sqldb).
-#### Refactor databases with Data Migration Assistant
+#### Upgrade databases with Data Migration Assistant
-Based on the Data Migration Assistant assessment results, you might have a series of recommendations to ensure your user databases perform and function correctly after migration. Data Migration Assistant provides details on the impacted objects and resources for how to resolve each issue. Make sure to resolve all breaking changes and behavior changes before you start production migration.
+For the upgrade scenario, you might have a series of recommendations to ensure your user databases perform and function correctly after the upgrade. Data Migration Assistant provides details on the affected objects and resources, and explains how to resolve each issue. Make sure to resolve all breaking changes and behavior changes before you start the production upgrade.
For deprecated features, you can choose to run your user databases in their original [compatibility](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level) mode if you want to avoid making these changes and speed up migration. This action will prevent [upgrading your database compatibility](/sql/database-engine/install-windows/compatibility-certification#compatibility-levels-and-database-engine-upgrades) until the deprecated items have been resolved.
-You need to script all Data Migration Assistant fixes and apply them to the target SQL Server database during the [post-migration](#post-migration) phase.
- > [!CAUTION] > Not all SQL Server versions support all compatibility modes. Check that your [target SQL Server version](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level) supports your chosen database compatibility. For example, SQL Server 2019 doesn't support databases with level 90 compatibility (which is SQL Server 2005). These databases would require, at least, an upgrade to compatibility level 100. >
After you've completed the pre-migration steps, you're ready to migrate the user
The following sections provide steps for performing either a minimal downtime migration by using the Azure SQL Migration extension for Azure Data Studio or a migration by using backup and restore.
+### Migrate using the Azure SQL Migration extension for Azure Data Studio (minimal downtime)
+
+To perform a minimal downtime migration by using Azure Data Studio, follow the high-level steps below. For a detailed step-by-step tutorial, see [Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio](../../../dms/tutorial-sql-server-to-virtual-machine-online-ads.md):
+
+1. Download and install [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) and the [Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+1. Launch the **Migrate to Azure SQL** wizard in the extension in Azure Data Studio.
+1. Select databases for assessment and view migration readiness or issues (if any). Additionally, collect performance data and get a right-sized Azure recommendation.
+1. Select your Azure account and your target SQL Server on Azure Virtual Machine from your subscription.
+1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure Blob Storage container.
+1. Create a new Azure Database Migration Service instance by using the wizard in Azure Data Studio. If you previously created an Azure Database Migration Service instance by using Azure Data Studio, you can reuse it if desired.
+1. *Optional*: If your backups are on an on-premises network share, download and install the [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717) on a machine that can connect to the source SQL Server and the location containing the backup files.
+1. Start the database migration and monitor the progress in Azure Data Studio. You can also monitor the progress under the Azure Database Migration Service resource in the Azure portal.
+1. Complete the cutover.
+ 1. Stop all incoming transactions to the source database.
+ 1. Make application configuration changes to point to the target database in SQL Server on Azure Virtual Machine.
+    1. Take any tail-log backups for the source database in the backup location specified.
+    1. Ensure all database backups have the status **Restored** in the monitoring details page.
+    1. Select **Complete cutover** in the monitoring details page.
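The tail-log backup in the cutover steps above can be taken with T-SQL similar to the following (the database name and backup share are hypothetical):

```sql
-- Back up the tail of the log and leave the source database in the
-- RESTORING state, which blocks any further writes after cutover.
BACKUP LOG [SalesDB]
TO DISK = N'\\fileshare\backups\SalesDB_taillog.trn'
WITH NORECOVERY, INIT, CHECKSUM;
```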
+ ### Backup and restore To perform a standard migration by using backup and restore:
To perform a standard migration by using backup and restore:
1. Copy your on-premises backup files to your VM by using a remote desktop, [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or the [AzCopy command-line utility](../../../storage/common/storage-use-azcopy-v10.md). (Greater than 2-TB backups are recommended.)
1. Restore full database backups to the SQL Server on Azure Virtual Machines.
-### Log shipping (minimize downtime)
-
-To perform a minimal downtime migration by using backup and restore and log shipping:
-
-1. Set up connectivity to the SQL Server on Azure Virtual Machines based on your requirements. For more information, see [Connect to a SQL Server virtual machine on Azure (Resource Manager)](../../virtual-machines/windows/ways-to-connect-to-sql.md).
-1. Ensure on-premises user databases to be migrated are in full or bulk-logged recovery model.
-1. Perform a full database backup to an on-premises location, and modify any existing full database backups jobs to use the [COPY_ONLY](/sql/relational-databases/backup-restore/copy-only-backups-sql-server) keyword to preserve the log chain.
-1. Copy your on-premises backup files to your VM by using a remote desktop, [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or the [AzCopy command-line utility](../../../storage/common/storage-use-azcopy-v10.md). (Greater than 1-TB backups are recommended.)
-1. Restore full database backups on SQL Server on Azure Virtual Machines.
-1. Set up [log shipping](/sql/database-engine/log-shipping/configure-log-shipping-sql-server) between the on-premises database and SQL Server on Azure Virtual Machines. Be sure not to reinitialize the databases because this task was already completed in the previous steps.
-1. Cut over to the target server.
- 1. Pause or stop applications by using databases to be migrated.
- 1. Ensure user databases are inactive by using [single user mode](/sql/relational-databases/databases/set-a-database-to-single-user-mode).
- 1. When you're ready, perform a log shipping [controlled failover](/sql/database-engine/log-shipping/fail-over-to-a-log-shipping-secondary-sql-server) of on-premises databases to SQL Server on Azure Virtual Machines.
 -
### Migrate objects outside user databases

More SQL Server objects might be required for the seamless operation of your user databases post migration.
azure-sql Sql Server To Sql On Azure Vm Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md
Save on costs by bringing your own license with the [Azure Hybrid Benefit licens
## Choose appropriate target

Azure Virtual Machines run in many different regions of Azure and also offer a variety of [machine sizes](../../../virtual-machines/sizes.md) and [Storage options](../../../virtual-machines/disks-types.md).
-When determining the correct size of VM and Storage for your SQL Server workload, refer to the [Performance Guidelines for SQL Server on Azure Virtual Machines.](../../virtual-machines/windows/performance-guidelines-best-practices-checklist.md#vm-size). To determine the VM size and storage requirements for your workload. it is recommended that these are sized through a Performance-Based [Azure Migrate Assessment](../../../migrate/concepts-assessment-calculation.md#types-of-assessments). If this is not an available option, see the following article on creating your own [baseline for performance](https://azure.microsoft.com/services/virtual-machines/sql-server/).
+When determining the correct size of VM and Storage for your SQL Server workload, refer to the [Performance Guidelines for SQL Server on Azure Virtual Machines](../../virtual-machines/windows/performance-guidelines-best-practices-checklist.md#vm-size).
+
+You can use the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to get a right-sized SQL Server on Azure Virtual Machines recommendation. The extension collects performance data from your source SQL Server instance to provide a right-sized Azure recommendation that meets your workload's performance needs at minimal cost. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md).
+
+To determine the VM size and storage requirements for all your workloads in your data estate, it's recommended that you size them through a performance-based [Azure Migrate assessment](../../../migrate/concepts-assessment-calculation.md#types-of-assessments). If this isn't an available option, see the following article on creating your own [baseline for performance](https://azure.microsoft.com/services/virtual-machines/sql-server/).
Consideration should also be given to the correct installation and configuration of SQL Server on a VM. It's recommended to use the [Azure SQL virtual machine image gallery](../../virtual-machines/windows/create-sql-vm-portal.md), as this allows you to create a SQL Server VM with the right version, edition, and operating system. This also registers the Azure VM with the SQL Server [Resource Provider](../../virtual-machines/windows/create-sql-vm-portal.md) automatically, enabling features such as Automated Backups and Automated Patching.
The appropriate approach for your business typically depends on the following fa
- Supportability life cycle of your existing products
- Window for application downtime during migration

The following table describes differences in the two migration strategies: <br />

| **Migration strategy** | **Description** | **When to use** |
| --- | --- | --- |
| **Lift & shift** | Use the lift and shift migration strategy to move the entire physical or virtual SQL Server from its current location onto an instance of SQL Server on Azure VM without any changes to the operating system, or SQL Server version. To complete a lift and shift migration, see [Azure Migrate](../../../migrate/migrate-services-overview.md). <br /><br /> The source server remains online and services requests while the source and destination server synchronize data allowing for an almost seamless migration. | Use for single to very large-scale migrations, even applicable to scenarios such as data center exit. <br /><br /> Minimal to no code changes required to user SQL databases or applications, allowing for faster overall migrations. <br /><br />No additional steps required for migrating the Business Intelligence services such as [SSIS](/sql/integration-services/sql-server-integration-services), [SSRS](/sql/reporting-services/create-deploy-and-manage-mobile-and-paginated-reports), and [SSAS](/analysis-services/analysis-services-overview). |
-|**Migrate** | Use a migrate strategy when you want to upgrade the target SQL Server and/or operating system version. <br /> <br /> Select an Azure VM from Azure Marketplace or a prepared SQL Server image that matches the source SQL Server version. <br/> <br/> Use the [Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) to migrate SQL Server database(s) to SQL Server on Azure virtual machines with minimal downtime. | Use when there is a requirement or desire to use features available in newer versions of SQL Server, or if there is a requirement to upgrade legacy SQL Server and/or OS versions that are no longer in support. <br /> <br /> May require some application or user database changes to support the SQL Server upgrade. <br /><br />There may be additional considerations for migrating [Business Intelligence](#business-intelligence) services if in the scope of migration. |
+|**Migrate** | Use a migration strategy when you want to upgrade the target SQL Server and/or operating system version. <br /> <br /> Select an Azure VM from Azure Marketplace or a prepared SQL Server image that matches the source SQL Server version. <br/> <br/> Use the [Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) to assess, get recommendations for right-sized Azure configuration (VM series, compute and storage) and migrate SQL Server database(s) to SQL Server on Azure virtual machines with minimal downtime. | Use when there is a requirement or desire to migrate to SQL Server on Azure Virtual Machines, or if there is a requirement to upgrade legacy SQL Server and/or OS versions that are no longer in support. <br /> <br /> May require some application or user database changes to support the SQL Server upgrade. <br /><br />There may be additional considerations for migrating [Business Intelligence](#business-intelligence) services if in the scope of migration. |
## Lift and shift
The following table details all available methods to migrate your SQL Server dat
|**Method** | **Minimum source version** | **Minimum target version** | **Source backup size constraint** | **Notes** |
| --- | --- | --- | --- | --- |
-| **[Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md)** | SQL Server 2005 | SQL Server 2008 | [Azure VM storage limit](../../../index.yml) | This is an easy to use wizard based extension in Azure Data Studio for migrating SQL Server database(s) to SQL Server on Azure virtual machines. Use compression to minimize backup size for transfer. <br /><br /> The Azure SQL Migration extension for Azure Data Studio provides both assessment and migration capabilities in a simple user interface. |
+| **[Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md)** | SQL Server 2008 | SQL Server 2008 | [Azure VM storage limit](../../../index.yml) | This is an easy-to-use, wizard-based extension in Azure Data Studio for migrating SQL Server database(s) to SQL Server on Azure virtual machines. Use compression to minimize backup size for transfer. <br /><br /> The Azure SQL Migration extension for Azure Data Studio provides assessment, Azure recommendation, and migration capabilities in a simple user interface and supports minimal downtime migrations. |
| **[Distributed availability group](sql-server-distributed-availability-group-migrate-prerequisites.md)** | SQL Server 2016| SQL Server 2016 | [Azure VM storage limit](../../../index.yml) | A [distributed availability group](/sql/database-engine/availability-groups/windows/distributed-availability-groups) is a special type of availability group that spans two separate availability groups. The availability groups that participate in a distributed availability group do not need to be in the same location and include cross-domain support. <br /><br /> This method minimizes downtime, use when you have an availability group configured on-premises. <br /><br /> **Automation & scripting**: [T-SQL](/sql/t-sql/statements/alter-availability-group-transact-sql) |
| **[Backup to a file](sql-server-to-sql-on-azure-vm-individual-databases-guide.md#migrate)** | SQL Server 2008 SP4 | SQL Server 2008 SP4| [Azure VM storage limit](../../../index.yml) | This is a simple and well-tested technique for moving databases across machines. Use compression to minimize backup size for transfer. <br /><br /> **Automation & scripting**: [Transact-SQL (T-SQL)](/sql/t-sql/statements/backup-transact-sql) and [AzCopy to Blob storage](../../../storage/common/storage-use-azcopy-v10.md) |
| **[Backup to URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url)** | SQL Server 2012 SP1 CU2 | SQL Server 2012 SP1 CU2| 12.8 TB for SQL Server 2016, otherwise 1 TB | An alternative way to move the backup file to the VM using Azure storage. Use compression to minimize backup size for transfer. <br /><br /> **Automation & scripting**: [T-SQL or maintenance plan](/sql/relational-databases/backup-restore/sql-server-backup-to-url) |
The following is a list of key points to consider when reviewing migration metho
- For optimum data transfer performance, migrate databases and files onto an instance of SQL Server on Azure VM using a compressed backup file. For larger databases, in addition to compression, [split the backup file into smaller files](/sql/relational-databases/backup-restore/back-up-files-and-filegroups-sql-server) for increased performance during backup and transfer.
- If migrating from SQL Server 2014 or higher, consider [encrypting the backups](/sql/relational-databases/backup-restore/backup-encryption) to protect data during network transfer.
-- To minimize downtime during database migration, use the Always On availability group option.
-- To minimize downtime without the overhead of configuring an availability group, use the log shipping option.
+- To minimize downtime during database migration, use the Azure SQL Migration extension in Azure Data Studio, or the Always On availability group option.
- For limited to no network options, use offline migration methods such as backup and restore, or [disk transfer services](../../../storage/common/storage-solution-large-dataset-low-network.md) available in Azure.
- To also change the version of SQL Server on a SQL Server on Azure VM, see [change SQL Server edition](../../virtual-machines/windows/change-sql-server-edition.md).
azure-vmware Configure Site To Site Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-site-to-site-vpn-gateway.md
Last updated 04/11/2022
# Configure a site-to-site VPN in vWAN for Azure VMware Solution
-In this article, you'll establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel terminating in the Microsoft Azure Virtual WAN hub. The hub contains the Azure VMware Solution ExpressRoute gateway and the site-to-site VPN gateway. It connects an on-premise VPN device with an Azure VMware Solution endpoint.
+In this article, you'll establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel terminating in the Microsoft Azure Virtual WAN hub. The hub contains the Azure VMware Solution ExpressRoute gateway and the site-to-site VPN gateway. It connects an on-premises VPN device with an Azure VMware Solution endpoint.
:::image type="content" source="media/create-ipsec-tunnel/vpn-s2s-tunnel-architecture.png" alt-text="Diagram showing VPN site-to-site tunnel architecture." border="false":::
A virtual hub is a virtual network that is created and used by Virtual WAN. It's
>[!IMPORTANT]
>This is an optional step and applies only to policy-based VPNs.
-[Policy-based VPN setups](../virtual-wan/virtual-wan-custom-ipsec-portal.md) require on-premise and Azure VMware Solution networks to be specified, including the hub ranges. These ranges specify the encryption domain of the policy-based VPN tunnel on-premise endpoint. The Azure VMware Solution side only requires the policy-based traffic selector indicator to be enabled.
+[Policy-based VPN setups](../virtual-wan/virtual-wan-custom-ipsec-portal.md) require on-premises and Azure VMware Solution networks to be specified, including the hub ranges. These ranges specify the encryption domain of the policy-based VPN tunnel on-premises endpoint. The Azure VMware Solution side only requires the policy-based traffic selector indicator to be enabled.
1. In the Azure portal, go to your Virtual WAN hub site and, under **Connectivity**, select **VPN (Site to site)**.
A virtual hub is a virtual network that is created and used by Virtual WAN. It's
1. Select your VPN site name and then select **Connect VPN sites**.
-1. In the **Pre-shared key** field, enter the key previously defined for the on-premise endpoint.
+1. In the **Pre-shared key** field, enter the key previously defined for the on-premises endpoint.
>[!TIP]
>If you don't have a previously defined key, you can leave this field blank. A key is generated for you automatically.
A virtual hub is a virtual network that is created and used by Virtual WAN. It's
1. Select **Add** to establish the link.
-1. Test your connection by [creating an NSX-T Data Center segment](./tutorial-nsx-t-network-segment.md) and provisioning a VM on the network. Ping both the on-premise and Azure VMware Solution endpoints.
+1. Test your connection by [creating an NSX-T Data Center segment](./tutorial-nsx-t-network-segment.md) and provisioning a VM on the network. Ping both the on-premises and Azure VMware Solution endpoints.
>[!NOTE]
>Wait approximately 5 minutes before you test connectivity from a client behind your ExpressRoute circuit, for example, a VM in the VNet that you created earlier.
backup Backup Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md
This article describes the features of a Backup vault. A Backup vault is a stora
## Storage settings in the Backup vault
-A Backup vault is an entity that stores the backups and recovery points created over time. The Backup vault also contains the backup policies that are associated with the protected virtual machines.
+A Backup vault is an entity that stores the backups and recovery points created over time. The Backup vault also contains the backup policies that are associated with the protected resources.
- Azure Backup automatically handles storage for the vault. Choose the storage redundancy that matches your business needs when creating the Backup vault.
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-features.md
The following table compares the features available with each product.
| [Real-time alerts](cdn-real-time-alerts.md) | | | |**&#x2713;** |
| | | | | |
| **Ease of use** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
-| Easy integration with Azure services, such as [Storage](cdn-create-a-storage-account-with-cdn.md), [Web Apps](cdn-add-to-web-app.md), and [Media Services](/media-services/previous/media-services-portal-manage-streaming-endpoints) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
+| Easy integration with Azure services, such as [Storage](cdn-create-a-storage-account-with-cdn.md), [Web Apps](cdn-add-to-web-app.md), and [Media Services](/azure/media-services/previous/media-services-portal-manage-streaming-endpoints) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| Management via [REST API](/rest/api/cdn/), [.NET](cdn-app-dev-net.md), [Node.js](cdn-app-dev-node.md), or [PowerShell](cdn-manage-powershell.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| [Compression MIME types](./cdn-improve-performance.md) |Configurable |Configurable |Configurable |Configurable |
| Compression encodings |gzip, brotli |gzip |gzip, deflate, bzip2 |gzip, deflate, bzip2 |
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/language-support.md
| Azerbaijani | `az` |✔|✔||||
| Bangla | `bn` |✔|✔|✔||✔|
| Bashkir | `ba` |✔|||||
+| 🆕Basque | `eu` |✔|||||
| Bosnian (Latin) | `bs` |✔|✔|✔||✔|
| Bulgarian | `bg` |✔|✔|✔|✔|✔|
| Cantonese (Traditional) | `yue` |✔|✔||||
| Finnish | `fi` |✔|✔|✔|✔|✔|
| French | `fr` |✔|✔|✔|✔|✔|
| French (Canada) | `fr-ca` |✔|✔||||
+| 🆕Galician | `gl` |✔|||||
| Georgian | `ka` |✔|||✔||
| German | `de` |✔|✔|✔|✔|✔|
| Greek | `el` |✔|✔|✔|✔|✔|
| Serbian (Latin) | `sr-Latn` |✔|✔|✔|✔|✔|
| Slovak | `sk` |✔|✔|✔|✔|✔|
| Slovenian | `sl` |✔|✔|✔|✔|✔|
-| 🆕Somali | `so` |✔|||✔||
+| Somali | `so` |✔|||✔||
| Spanish | `es` |✔|✔|✔|✔|✔|
| Swahili | `sw` |✔|✔|✔|✔|✔|
| Swedish | `sv` |✔|✔|✔|✔|✔|
| Vietnamese | `vi` |✔|✔|✔|✔|✔|
| Welsh | `cy` |✔|✔|✔|✔|✔|
| Yucatec Maya | `yua` |✔|✔||✔||
-| 🆕Zulu | `zu` |✔|||||
+| Zulu | `zu` |✔|||||
> [!NOTE]
> Language code `pt` will default to `pt-br`, Portuguese (Brazil).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
Previously updated : 11/02/2021 Last updated : 04/12/2022 ms.devlang: csharp, java, javascript, python
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
The Pre-Call API enables developers to programmatically validate a client's re
## Accessing Pre-Call APIs
-To Access the Pre-Call API, you will need to initialize a `callClient` and provision an Azure Communication Services access token. Then you can access the `NetworkTest` feature and run it.
+To access the Pre-Call API, you will need to initialize a `callClient` and provision an Azure Communication Services access token. Then you can access the `Diagnostics` feature and the `preCallTest` method.
```javascript
import { CallClient, Features} from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from '@azure/communication-common';

const tokenCredential = new AzureCommunicationTokenCredential();
-const networkTest = await callClient.feature(Features.NetworkTest).beginTest(tokenCredential);
+const preCallTest = await callClient.feature(Features.Diagnostics).preCallTest(tokenCredential);
```
export declare type CallDiagnosticsResult = {
```
-Individual result objects can be accessed as such using the `networkTest` constant above.
+Individual result objects can be accessed using the `preCallTest` constant above.
### Browser support

Browser compatibility check. Checks for `Browser` and `OS` compatibility and provides a `Supported` or `NotSupported` value back.

```javascript
-const browserSupport = await networkTest.browserSupport;
+const browserSupport = await preCallTest.browserSupport;
if(browserSupport) {
  console.log(browserSupport.browser)
  console.log(browserSupport.os)
Permission check. Checks whether video and audio devices are available from a pe
```javascript
- const deviceAccess = await networkTest.deviceAccess;
+ const deviceAccess = await preCallTest.deviceAccess;
 if(deviceAccess) {
   console.log(deviceAccess.audio)
   console.log(deviceAccess.video)
Device availability. Checks whether microphone, camera and speaker devices are d
```javascript
- const deviceEnumeration = await networkTest.deviceEnumeration;
+ const deviceEnumeration = await preCallTest.deviceEnumeration;
 if(deviceEnumeration) {
   console.log(deviceEnumeration.microphone)
   console.log(deviceEnumeration.camera)
Performs a quick call to check in-call metrics for audio and video and provides
```javascript
- const inCallDiagnostics = await networkTest.inCallDiagnostics;
+ const inCallDiagnostics = await preCallTest.inCallDiagnostics;
 if(inCallDiagnostics) {
   console.log(inCallDiagnostics.connected)
   console.log(inCallDiagnostics.bandWidth)
At this step, there are multiple failure points to watch out for:
- If bandwidth is `Bad`, the user should be prompted to try out a different network or verify the bandwidth availability on their current one. Ensure no other high-bandwidth activities might be taking place.

### Media stats
-For granular stats on quality metrics like jitter, packet loss, rtt, etc. `callMediaStatistics` are provided as part of the `NetworkTest` feature. You can subscribe to the call media stats to get full collection of them.
+For granular stats on quality metrics like jitter, packet loss, and round-trip time (RTT), `callMediaStatistics` is provided as part of the `PreCallTest` feature. You can subscribe to the call media stats to get the full collection of them.
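The individual checks above can be combined into a single readiness verdict before joining a call. The following is a minimal sketch; the helper name and result shapes are hypothetical, based only on the values this article describes (`Supported`/`NotSupported`, boolean device access, and a `Bad` bandwidth value):

```javascript
// Hypothetical helper (not part of the SDK): reduce the individual
// pre-call results described above into a single go/no-go verdict.
// Assumed shapes, per the article:
//   browserSupport:    { browser, os } with "Supported"/"NotSupported"
//   deviceAccess:      { audio, video } booleans
//   inCallDiagnostics: { connected, bandWidth } with "Bad" as a failure value
function summarizePreCallResults({ browserSupport, deviceAccess, inCallDiagnostics }) {
  const problems = [];
  if (browserSupport &&
      (browserSupport.browser === "NotSupported" || browserSupport.os === "NotSupported")) {
    problems.push("Unsupported browser or OS");
  }
  if (deviceAccess && (!deviceAccess.audio || !deviceAccess.video)) {
    problems.push("Missing microphone or camera permission");
  }
  if (inCallDiagnostics &&
      (!inCallDiagnostics.connected || inCallDiagnostics.bandWidth === "Bad")) {
    problems.push("Poor connectivity or bandwidth");
  }
  return { ready: problems.length === 0, problems };
}
```

A verdict like this can drive the prompts suggested earlier, for example asking the user to grant device permissions or switch networks before starting the call.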
## Pricing
communication-services Handle Calling Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/handle-calling-events.md
+
+ Title: Quickstart - Handle voice and video calling events
+
+description: Learn how to handle voice and video calling events using Azure Communication Services.
++++ Last updated : 12/10/2021+++++
+# Quickstart: Handle voice and video calling events
+
+Get started with Azure Communication Services by using Azure Event Grid to handle Communication Services voice and video calling events.
+
+## About Azure Event Grid
+
+[Azure Event Grid](../../../event-grid/overview.md) is a cloud-based eventing service. In this article, you'll learn how to subscribe to [communication service events](../../../event-grid/event-schema-communication-services.md) and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. Here, we'll send the events to a web app that collects and displays the messages.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Communication Service resource. Further details can be found in the [Create an Azure Communication Services resource](../create-communication-resource.md) quickstart.
+- An Azure Communication Services voice and video calling enabled client. [Add voice calling to your app](../voice-video-calling/getting-started-with-calling.md).
+
+## Setting up
+
+### Enable Event Grid resource provider
+
+If you haven't previously used Event Grid in your Azure subscription, you may need to register the Event Grid resource provider following the steps below:
+
+In the Azure portal:
+
+1. Select **Subscriptions** on the left menu.
+2. Select the subscription you're using for Event Grid.
+3. On the left menu, under **Settings**, select **Resource providers**.
+4. Find **Microsoft.EventGrid**.
+5. If not registered, select **Register**.
+
+It may take a moment for the registration to finish. Select **Refresh** to update the status. When **Status** is **Registered**, you're ready to continue.
+
+### Event Grid Viewer deployment
+
+For this quickstart, we will use the [Azure Event Grid Viewer Sample](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/) to view events in near-real time. This will provide the user with the experience of a real-time feed. In addition, the payload of each event should be available for inspection as well.
+
+## Subscribe to voice and video calling events using web hooks
+
+In the portal, navigate to the Azure Communication Services resource that you created. Inside the resource, select **Events** from the left menu of the **Communication Services** page.
++
+Press **Add Event Subscription** to enter the creation wizard.
+
+On the **Create Event Subscription** page, enter a **name** for the event subscription.
+
+You can subscribe to specific events to tell Event Grid which of the voice and video events you want to receive, and where to send them. Select the events you'd like to subscribe to from the dropdown menu. For voice and video calling, you'll have the option to choose `Call Started`, `Call Ended`, `Call Participant Added`, and `Call Participant Removed`.
+
+If you're prompted to provide a **System Topic Name**, feel free to provide a unique string. This field has no impact on your experience and is used for internal telemetry purposes.
+
+Check out the full list of [events supported by Azure Communication Services](../../../event-grid/event-schema-communication-services.md).
++
+Select **Web Hook** for **Endpoint type**.
++
+For **Endpoint**, click **Select an endpoint**, and enter the URL of your web app.
+
+In this case, we will use the URL from the [Azure Event Grid Viewer Sample](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/) we set up earlier in the quickstart. The URL for the sample will be in the format: `https://{{site-name}}.azurewebsites.net/api/updates`
+
+Then select **Confirm Selection**.
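Before Event Grid delivers any events to a webhook, it posts a `Microsoft.EventGrid.SubscriptionValidationEvent` and expects the endpoint to echo the validation code back. The Event Grid Viewer sample handles this handshake for you; if you later point the subscription at your own endpoint, the core logic looks roughly like this sketch (the handler name and return shape are hypothetical):

```javascript
// Minimal sketch of the Event Grid webhook validation handshake.
// `events` is the JSON array Event Grid POSTs to the endpoint.
function handleEventGridPost(events) {
  for (const event of events) {
    if (event.eventType === "Microsoft.EventGrid.SubscriptionValidationEvent") {
      // Echo the code so Event Grid marks the webhook as validated.
      return { status: 200, body: { validationResponse: event.data.validationCode } };
    }
  }
  // Normal delivery: process the calling events here.
  return { status: 200, body: {} };
}
```

Until the handshake succeeds, the subscription stays in a pending state and no calling events are delivered.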
++
+## Viewing voice and video calling events
+
+### Triggering voice and video calling events
+
+To view event triggers, we must first generate the events.
+
+- `Call Started` events are generated when an Azure Communication Services voice and video call is started. To trigger this event, just start a call attached to your Communication Services resource.
+- `Call Ended` events are generated when an Azure Communication Services voice and video call is ended. To trigger this event, just end a call attached to your Communication Services resource.
+- `Call Participant Added` events are generated when a participant is added to an Azure Communication Services voice and video call. To trigger this event, add a participant to an Azure Communication Services voice and video call attached to your Communication Services resource.
+- `Call Participant Removed` events are generated when a participant is removed from an Azure Communication Services voice and video call. To trigger this event, remove a participant from an Azure Communication Services voice and video call attached to your Communication Services resource.
+
+Check out the full list of [events supported by Azure Communication Services](../../../event-grid/event-schema-communication-services.md).
+
+### Receiving voice and video calling events
+
+Once you complete any of the actions above, voice and video calling events are sent to your endpoint. These events will show up in the [Azure Event Grid Viewer Sample](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/) we set up at the beginning. You can press the eye icon next to the event to see the entire payload.
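Each delivery is a JSON array of events, and an endpoint typically filters it down to the event types it cares about. A small sketch of that filtering for the four calling events in this quickstart; the `Microsoft.Communication.*` strings follow the naming pattern Communication Services events use, so confirm the exact type names against the event schema article linked in this quickstart:

```javascript
// Assumed event type names (verify against the Communication Services
// event schema documentation before relying on them).
const CALLING_EVENT_TYPES = new Set([
  "Microsoft.Communication.CallStarted",
  "Microsoft.Communication.CallEnded",
  "Microsoft.Communication.CallParticipantAdded",
  "Microsoft.Communication.CallParticipantRemoved",
]);

// Keep only the calling events from a delivered batch.
function filterCallingEvents(events) {
  return events.filter((e) => CALLING_EVENT_TYPES.has(e.eventType));
}
```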
+
+Learn more about the [event schemas and other eventing concepts](../../../event-grid/event-schema-communication-services.md).
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
++
+You may also want to:
+
+ - [Learn about event handling concepts](../../../event-grid/event-schema-communication-services.md)
+ - [Learn about Event Grid](../../../event-grid/overview.md)
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers.md
You can enable confidential containers in Azure Partners and Open Source Softwar
### Fortanix
-[Fortanix](https://www.fortanix.com/) has portal and Command Line Interface (CLI) experiences to convert their containerized applications to SGX-capable confidential containers. You don't need to modify or recompile the application. Fortanix provides the flexibility to run and manage a broad set of applications. You can use existing applications, new enclave-native applications, and pre-packaged applications. Start with Fortanix's [Enclave Manager](https://em.fortanix.com/) UI or [REST APIs](https://www.fortanix.com/api/em/). Create confidential containers using the Fortanix's [quickstart guide for AKS](https://support.fortanix.com/hc/en-us/articles/360049658291-Fortanix-Confidential-Container-on-Azure-Kubernetes-Service).
+[Fortanix](https://www.fortanix.com/) has portal and Command Line Interface (CLI) experiences to convert your containerized applications to SGX-capable confidential containers. You don't need to modify or recompile the application. Fortanix provides the flexibility to run and manage a broad set of applications. You can use existing applications, new enclave-native applications, and pre-packaged applications. Start with Fortanix's [Enclave Manager](https://em.fortanix.com/) UI or [REST APIs](https://www.fortanix.com/api/em/). Create confidential containers using Fortanix's [quickstart guide for AKS](https://hubs.li/Q017JnNt0).
![Diagram of Fortanix deployment process, showing steps to move applications to confidential containers and deploy.](./media/confidential-containers/fortanix-confidential-containers-flow.png)
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
Title: About Azure DCasv5/ECasv5-series confidential virtual machines (preview)
-description: Azure confidential computing offers confidential virtual machines (confidential VMs) for tenants with high security and confidentiality requirements.
+ Title: DCasv5 and ECasv5 series confidential VMs (preview)
+description: Learn about Azure DCasv5, DCadsv5, ECasv5, and ECadsv5 series confidential virtual machines (confidential VMs). These series are for tenants with high security and confidentiality requirements.
-+ Previously updated : 11/15/2021 Last updated : 3/27/2022
-# About Azure DCasv5/ECasv5-series confidential virtual machines (preview)
+# DCasv5 and ECasv5 series confidential VMs (preview)
> [!IMPORTANT] > Azure DCasv5/ECasv5-series confidential virtual machines are currently in Preview. Use is subject to your [Azure subscription](https://azure.microsoft.com/support/legal/) and terms applicable to "Previews" as detailed in the Universal License Terms for Online Services section of the [Microsoft Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage) and the [Microsoft Products and Services Data Protection Addendum](https://www.microsoft.com/licensing/docs/view/Microsoft-Products-and-Services-Data-Protection-Addendum-DPA) ("DPA").
If the compute platform is missing critical settings for your VM's isolation the
Full-disk encryption is optional, because this process can lengthen the initial VM creation time. You can choose between:
+ - A confidential VM with full OS disk encryption before VM deployment that uses platform-managed keys (PMK) or a customer-managed key (CMK).
- A confidential VM without OS disk encryption before VM deployment. For further integrity and protection, confidential VMs offer [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot) by default.
With Secure Boot, trusted publishers must sign OS boot components (including the
Confidential VMs use both the OS disk and a small encrypted virtual machine guest state (VMGS) disk of several megabytes. The VMGS disk contains the security state of the VM's components. Some components include the vTPM and UEFI bootloader. The small VMGS disk might incur a monthly storage cost.
-Starting in 2022, encrypted OS disks will begin to incur higher costs. This change is because encrypted OS disks use more space, and compression isn't possible. For more information, see [the pricing guide for managed disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+From July 2022, encrypted OS disks will incur higher costs. This change is because encrypted OS disks use more space, and compression isn't possible. For more information, see [the pricing guide for managed disks](https://azure.microsoft.com/pricing/details/managed-disks/).
## Attestation and TPM
Confidential VMs *don't support*:
- Azure Backup - Azure Site Recovery - Azure Dedicated Host -- Virtual machine scale set
+- Microsoft Azure Virtual Machine Scale Sets for encrypted OS disks
- Capturing an image of a VM - Azure Compute Gallery - Ephemeral OS disks
Confidential VMs *don't support*:
- Accelerated Networking - User-attestable platform reports - Live migration-- Customer-managed keys for OS disk pre-encryption+ ## Next steps
confidential-computing Quick Create Confidential Vm Arm Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md
Title: Create an Azure AMD-based confidential VM with ARM template (preview)
-description: Learn how to quickly create an AMD-based confidential virtual machine (confidential VM) using an ARM template. Deploy the confidential VM from the Azure portal or the Azure CLI.
+description: Learn how to quickly create and deploy an AMD-based DCasv5 or ECasv5 series Azure confidential virtual machine (confidential VM) using an ARM template.
-+ Previously updated : 11/15/2021 Last updated : 3/21/2022 ms.devlang: azurecli
ms.devlang: azurecli
> Confidential virtual machines (confidential VMs) in Azure Confidential Computing is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-You can use an Azure Resource Manager template (ARM template) to create a [confidential VM](confidential-vm-overview.md) quickly. The confidential VM you create runs on AMD processors backed by AMD SEV-SNP to achieve VM memory encryption and isolation. For more information, see [Confidential VM Overview](confidential-vm-overview.md).
+You can use an Azure Resource Manager template (ARM template) to create an Azure [confidential VM](confidential-vm-overview.md) quickly. Confidential VMs run on AMD processors backed by AMD SEV-SNP to achieve VM memory encryption and isolation. For more information, see [Confidential VM Overview](confidential-vm-overview.md).
This tutorial covers deployment of a confidential VM with a custom configuration.
This tutorial covers deployment of a confidential VM with a custom configuration
- An Azure subscription. Free trial accounts don't have access to the VMs used in this tutorial. One option is to use a [pay as you go subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/). - If you want to deploy from the Azure CLI, [install PowerShell](/powershell/azure/install-az-ps) and [install the Azure CLI](/cli/azure/install-azure-cli).
-## Deploy confidential VM template from Azure portal
-
-To create and deploy a confidential VM using an ARM template in the Azure portal:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. [Open the confidential VM ARM template](./quick-create-confidential-vm-portal-amd.md).
-
- 1. For **Subscription**, select an Azure subscription that meets the [prerequisites](#prerequisites).
-
- 1. For **Resource group**, select an existing resource group from the drop-down menu. Or, select **Create new**, enter a unique name, then select **OK**.
-
- 1. For **Region**, select the Azure region in which to deploy the VM.
-
- 1. For **Vm Name**, enter a name for your VM.
-
- 1. For **Vm Location**, select a location for your VM.
-
- > [!NOTE]
- > Confidential VMs are not available in all locations. For currently supported locations, see which [VM products are available by Azure region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
-
- 1. For **Vm Size**, select the VM size to use.
-
- 1. For **Os Image Name**, select the OS image to use for your VM.
-
- 1. For **Os Disk Type**, select the OS disk type to use.
-
- 1. For **Admin Username**, enter an administrator username for your VM.
-
- 1. For **Admin Password Or Key**, enter a password for the administrator account. Make sure your password meets the complexity requirements for [Linux VMs](../virtual-machines/linux/faq.yml#what-are-the-password-requirements-when-creating-a-vm-) or [Windows VMs](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
-
- 1. For **Boot Diagnostics**, select whether you want to use boot diagnostics for your VM. The default setting is **false**.
-
- 1. For **Security Type**, select whether you want to use full OS disk encryption before VM deployment. The option **VMGuestStateOnly** doesn't offer OS disk encryption. The option **DiskWithVMGuestState** enables full OS disk encryption using platform-managed keys.
-
- 1. For **Secure Boot Enabled**, select **true**. This setting makes sure only properly signed boot components can load.
-
-1. Select **Review + create** to validate your configuration.
-
-1. Wait for validation to complete. If necessary, fix any validation issues, then select **Review + create** again.
-
-1. In the **Review + create** pane, select **Create** to deploy the VM.
- ## Deploy confidential VM template with Azure CLI
-To create and deploy a confidential VM using an ARM template through the Azure CLI:
+You can deploy a confidential VM template that has optional OS disk confidential encryption through a platform-managed key.
+
+To create and deploy your confidential VM using an ARM template through the Azure CLI:
1. Sign in to your Azure account in the Azure CLI.
To create and deploy a confidential VM using an ARM template through the Azure C
1. Set the variables for your confidential VM. Provide the deployment name (`$deployName`), the resource group (`$resourceGroup`), the VM name (`$vmName`), and the Azure region (`$region`). Replace the sample values with your own information. > [!NOTE]
- > Confidential VMs are not available in all locations. For currently supported locations, see which [VM products are available by Azure region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
+ > Confidential VMs are not available in all locations. For currently supported locations, see [which VM products are available by Azure region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
```powershell-interactive $deployName="<deployment-name>"
To create and deploy a confidential VM using an ARM template through the Azure C
az group create -n $resourceGroup -l $region ```
-1. Deploy your VM to Azure using ARM template with custom parameter file
+1. Deploy your VM to Azure using an ARM template with a custom parameter file
```azurecli
To create and deploy a confidential VM using an ARM template through the Azure C
### Define custom parameter file
-When you create your confidential VM using the Azure CLI, you need to define custom parameter file. To create a custom JSON parameter file:
+When you create a confidential VM through the Azure Command-Line Interface (Azure CLI), you need to define a custom parameter file. To create a custom JSON parameter file:
-1. Sign into your Azure account in the Azure CLI.
+1. Sign in to your Azure account through the Azure CLI.
1. Create a JSON parameter file. For example, `azuredeploy.parameters.json`.
-1. Depending on the OS image you're using, copy in the [example Windows parameter file](#example-windows-parameter-file) or the [example Linux parameter file](#example-linux-parameter-file).
+1. Depending on the OS image you're using, copy either the [example Windows parameter file](#example-windows-parameter-file) or the [example Linux parameter file](#example-linux-parameter-file) into your parameter file.
+
+1. Edit the JSON code in the parameter file as needed. For example, update the OS image name (`osImageName`) or the administrator username (`adminUsername`).
+
+1. Configure your security type setting (`securityType`). Choose `VMGuestStateOnly` for no OS disk confidential encryption. Or, choose `DiskWithVMGuestState` for OS disk confidential encryption with a platform-managed key.
-1. Edit the JSON code in the parameter file as needed. For example, you might want to update the OS image name (`osImageName`), the administrator username (`adminUsername`), and more.
+1. Save your parameter file.
#### Example Windows parameter file
Use this example to create a custom parameter file for a Windows-based confident
"value": "testuser" }, "adminPasswordOrKey": {
- "value": "Password123@@"
+ "value": "<your password>"
} } }
Use this example to create a custom parameter file for a Linux-based confidentia
"value": "sshPublicKey" }, "adminPasswordOrKey": {
- "value": {your ssh public key}
+ "value": "<your SSH public key>"
} } } ```
+## Deploy confidential VM template with OS disk confidential encryption via customer-managed key
+
+1. Sign in to your Azure account through the Azure CLI.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+1. Set your Azure subscription. Replace `<subscription-id>` with your subscription identifier. Make sure to use a subscription that meets the [prerequisites](#prerequisites).
+
+ ```azurecli
+ az account set --subscription <subscription-id>
+ ```
+1. Grant the service principal `Confidential VM Orchestrator` access to your tenant.
+ ```azurepowershell
+ Connect-AzureAD -Tenant "your tenant ID"
+ New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ ```
+1. Set up your Azure key vault. For how to use an Azure Key Vault Managed HSM instead, see the next step.
+
+ 1. Create a resource group for your key vault. Your key vault instance and your confidential VM must be in the same Azure region.
+
+ ```azurecli
+ $resourceGroup = <key vault resource group>
+ $region = <Azure region>
+ az group create --name $resourceGroup --location $region
+ ```
+
+ 1. Create a key vault instance with a premium SKU in your preferred region.
+
+ ```azurecli
+ $KeyVault = <name of key vault>
+ az keyvault create --name $KeyVault --resource-group $resourceGroup --location $region --sku Premium --enable-purge-protection
+ ```
+
+ 1. Make sure that you have an **owner** role in this key vault.
+
+ 1. Give `Confidential VM Orchestrator` permissions to `get` and `release` the key vault.
+
+ ```azurecli
+ $cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json
+ az keyvault set-policy --name $KeyVault --object-id $cvmAgent.objectId --key-permissions get release
+ ```
+
+1. (Optional) If you don't want to use an Azure key vault, you can create an Azure Key Vault Managed HSM instead.
+
+ 1. Follow the [quickstart to create an Azure Key Vault Managed HSM](../key-vault/managed-hsm/quick-create-cli.md) to provision and activate Azure Key Vault Managed HSM.
+
+ 1. Enable purge protection on the Azure Managed HSM. This step is required to enable key release.
+
+ ```azurecli
+ az keyvault update-hsm --subscription $subscriptionId -g $resourceGroup --hsm-name $hsm --enable-purge-protection true
+ ```
++
+ 1. Give `Confidential VM Orchestrator` permissions to managed HSM.
+
+ ```azurecli
+ $cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json
+ az keyvault role assignment create --hsm-name $hsm --assignee $cvmAgent.objectId --role "Managed HSM Crypto Service Release User" --scope /keys/$KeyName
+ ```
+
+1. Create a new key using Azure Key Vault. For how to use an Azure Managed HSM instead, see the next step.
+
+ 1. Prepare and download the [key release policy](https://cvmprivatepreviewsa.blob.core.windows.net/cvmpublicpreviewcontainer/skr-policy.json) to your local disk.
+
+ 1. Create a new key.
+
+ ```azurecli
+ $KeyName = <name of key>
+ $KeySize = 3072
+ az keyvault key create --vault-name $KeyVault --name $KeyName --ops wrapKey unwrapKey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json"
+ ```
+
+ 1. Get information about the key that you created.
+
+ ```azurecli
+ $encryptionKeyVaultId = ((az keyvault show -n $KeyVault -g $resourceGroup) | ConvertFrom-Json).id
+ $encryptionKeyURL= ((az keyvault key show --vault-name $KeyVault --name $KeyName) | ConvertFrom-Json).key.kid
+ ```
+
+ 1. Deploy a Disk Encryption Set (DES) using a [DES ARM template](https://cvmprivatepreviewsa.blob.core.windows.net/cvmpublicpreviewcontainer/deploymentTemplate/deployDES.json) (`deployDES.json`).
+
+ ```azurecli
+ $desName = <name of DES>
+ $deployName = <name of deployment>
+ $desArmTemplate = <name of DES ARM template file>
+ az deployment group create `
+ -g $resourceGroup `
+ -n $deployName `
+ -f $desArmTemplate `
+ -p desName=$desName `
+ -p encryptionKeyURL=$encryptionKeyURL `
+ -p encryptionKeyVaultId=$encryptionKeyVaultId `
+ -p region=$region
+ ```
+
+ 1. Assign key access to the DES file.
+
+ ```azurecli
+ $desIdentity = (az disk-encryption-set show -n $desName -g $resourceGroup --query [identity.principalId] -o tsv)
+ az keyvault set-policy -n $KeyVault `
+ -g $resourceGroup `
+ --object-id $desIdentity `
+ --key-permissions wrapkey unwrapkey get
+ ```
+
+ 1. (Optional) Create a new key from an Azure Managed HSM.
+
+ 1. Prepare and download the [key release policy](https://cvmprivatepreviewsa.blob.core.windows.net/cvmpublicpreviewcontainer/skr-policy.json) to your local disk.
+
+ 1. Create the new key.
+
+ ```azurecli
+ $KeyName = <name of key>
+ $KeySize = 3072
+ az keyvault key create --hsm-name $hsm --name $KeyName --ops wrapKey unwrapKey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json"
+ ```
+
+ 1. Get information about the key that you created.
+
+ ```azurecli
+ $encryptionKeyURL = ((az keyvault key show --hsm-name $hsm --name $KeyName) | ConvertFrom-Json).key.kid
+ ```
+
+ 1. Deploy a DES.
+
+ ```azurecli
+ $desName = <name of DES>
+ az disk-encryption-set create -n $desName `
+ -g $resourceGroup `
+ --key-url $encryptionKeyURL
+ ```
+
+ 1. Assign key access to the DES.
+
+ ```azurecli
+ desIdentity=$(az disk-encryption-set show -n $desName -g $resourceGroup --query [identity.principalId] -o tsv)
+ az keyvault set-policy -n $hsm `
+ -g $resourceGroup `
+ --object-id $desIdentity `
+ --key-permissions wrapkey unwrapkey get
+ ```
+
+1. Deploy your confidential VM with the customer-managed key.
+
+ 1. Get the resource ID for the DES.
+
+ ```azurecli
+ $desID = (az disk-encryption-set show -n $desName -g $resourceGroup --query [id] -o tsv)
+ ```
+
+ 1. Deploy your confidential VM using the [confidential VM ARM template](https://cvmprivatepreviewsa.blob.core.windows.net/cvmpublicpreviewcontainer/deploymentTemplate/deployCPSCVM_cmk.json) (`deployCPSCVM_cmk.json`) and a [deployment parameter file](#example-deployment-parameter-file) (for example, `azuredeploy.parameters.win2022.json`) with the customer-managed key.
+
+ ```azurecli
+ $deployName = <name of deployment>
+ $vmName = <name of confidential VM>
+ $cvmArmTemplate = <name of confidential VM ARM template file>
+ $cvmParameterFile = <name of confidential VM parameter file>
+
+ az deployment group create `
+ -g $resourceGroup `
+ -n $deployName `
+ -f $cvmArmTemplate `
+ -p $cvmParameterFile `
+ -p diskEncryptionSetId=$desID `
+ -p vmName=$vmName
+ ```
+
+1. Connect to your confidential VM to make sure the creation was successful.
+
+### Example deployment parameter file
+
+This is an example parameter file for a Windows Server 2022 Gen 2 confidential VM:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+
+ "vmSize": {
+ "value": "Standard_DC2as_v5"
+ },
+ "osImageName": {
+ "value": "Windows Server 2022 Gen 2"
+ },
+ "osDiskType": {
+ "value": "StandardSSD_LRS"
+ },
+ "securityType": {
+ "value": "DiskWithVMGuestState"
+ },
+ "adminUsername": {
+ "value": "testuser"
+ },
+ "adminPasswordOrKey": {
+ "value": "<Your-Password>"
+ }
+ }
+}
+```
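Before running `az deployment group create`, it can help to confirm the parameter file is well-formed JSON. This is an optional sanity check, not part of the documented flow; the file name matches the example above and the contents are trimmed for brevity:

```shell
# Write a trimmed copy of the example parameter file (illustrative contents),
# then confirm it parses as JSON and list the parameter names it defines.
cat > azuredeploy.parameters.win2022.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmSize": { "value": "Standard_DC2as_v5" },
    "securityType": { "value": "DiskWithVMGuestState" }
  }
}
EOF
python3 -c 'import json; print(sorted(json.load(open("azuredeploy.parameters.win2022.json"))["parameters"]))'
```

If the file parses, the last command prints the sorted parameter names; if not, `json.load` raises an error before the deployment ever starts.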
+ ## Next steps > [!div class="nextstepaction"]
confidential-computing Quick Create Confidential Vm Portal Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal-amd.md
Title: Create an Azure AMD-based confidential VM in the Azure portal (preview)
description: Learn how to quickly create an AMD-based confidential virtual machine (confidential VM) in the Azure portal using Azure Marketplace images. -+ Previously updated : 11/15/2021 Last updated : 3/27/2022
You can use the Azure portal to create a [confidential VM](confidential-vm-overv
## Prerequisites - An Azure subscription. Free trial accounts don't have access to the VMs used in this tutorial. One option is to use a [pay as you go subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/).-- If you're using a Linux-based confidential VM, have a BASH shell to use for SSH or install an SSH client, such as [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).
+- If you're using a Linux-based confidential VM, use a BASH shell for SSH or install an SSH client, such as [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).
+- If Confidential disk encryption with a customer-managed key is required, run the following command to opt in the service principal `Confidential VM Orchestrator` to your tenant.
+
+ ```azurepowershell
+ Connect-AzureAD -Tenant "your tenant ID"
+ New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ ```
## Create confidential VM
To create a confidential VM in the Azure portal using an Azure Marketplace image
1. Under **Disk options**, enable **Confidential compute encryption** if you want to encrypt your VM's OS disk during creation. 1. For **Confidential compute encryption type**, select the type of encryption to use.
+
+ 1. If **Confidential disk encryption with a customer-managed key** is selected, create a **Confidential disk encryption set** before creating your confidential VM.
+
+1. (Optional) If necessary, create a **Confidential disk encryption set** as follows.
+
+ 1. [Create an Azure Key Vault](../key-vault/general/quick-create-portal.md). For the pricing tier, select **Premium (includes support for HSM backed keys)**. Or, [create an Azure Key Vault managed Hardware Security Module (HSM)](../key-vault/managed-hsm/quick-create-cli.md).
+
+ 1. In the Azure portal, search for and select **Disk Encryption Sets**.
+
+ 1. Select **Create**.
+
+ 1. For **Subscription**, select which Azure subscription to use.
+
+ 1. For **Resource group**, select or create a new resource group to use.
+
+ 1. For **Disk encryption set name**, enter a name for the set.
+
+ 1. For **Region**, select an available Azure region.
+
+ 1. For **Encryption type**, select **Confidential disk encryption with a customer-managed key**.
+
+ 1. For **Key Vault**, select the key vault you already created.
+
+ 1. Under **Key Vault**, select **Create new** to create a new key.
+
+ > [!NOTE]
+ > If you selected an Azure managed HSM previously, [use PowerShell or the Azure CLI to create the new key](../confidential-computing/quick-create-confidential-vm-arm-amd.md) instead.
+
+ 1. For **Name**, enter a name for the key.
+
+ 1. For the key type, select **RSA-HSM**.
+
+ 1. Select your key size.
+
+ 1. Select **Create** to finish creating the key.
+
+ 1. Select **Review + create** to create the new disk encryption set. Wait for the resource creation to complete successfully.
+
+ 1. Go to the disk encryption set resource in the Azure portal.
+
+ 1. Select the pink banner to grant permissions to Azure Key Vault.
+
+ > [!IMPORTANT]
+ > You must perform this step to successfully create the confidential VM.
1. As needed, make changes to settings under the tabs **Networking**, **Management**, **Guest Config**, and **Tags**.
container-apps Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/billing.md
The following resources are free during each calendar month, per subscription:
This article describes how to calculate the cost of running your container app. For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/).
+> [!NOTE]
+> If you use Container Apps with [your own virtual network](vnet-custom.md#managed-resources) or your apps use other Azure resources, additional charges may apply.
+ ## Resource consumption charges Azure Container Apps runs replicas of your application based on the [scaling rules and replica count limits](scale-app.md) you configure. You're charged for the amount of resources allocated to each replica while it's running.
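The arithmetic behind that charge can be sketched as follows. The per-second rates, replica size, and runtime below are made-up placeholders (real rates are on the pricing page), and the sketch ignores the monthly free grants:

```shell
# Estimate the active-usage cost of one replica over 30 days.
# Rates are hypothetical placeholders, NOT actual Container Apps prices.
awk 'BEGIN {
  vcpu_rate = 0.000024   # assumed $ per vCPU-second
  mem_rate  = 0.000003   # assumed $ per GiB-second
  vcpu = 0.5; mem_gib = 1.0        # replica resource allocation
  seconds = 30 * 24 * 3600         # replica running continuously for 30 days
  printf "$%.2f\n", (vcpu * vcpu_rate + mem_gib * mem_rate) * seconds
}'
```

With these placeholder rates the replica costs $38.88 for the month; substitute the current published rates and your own replica sizes to get a real estimate.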
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
+
+ Title: Observability in Azure Container Apps Preview
+description: Monitor your running app in Azure Container Apps Preview
++++ Last updated : 03/25/2022+++
+# Observability in Azure Container Apps Preview
+
+Azure Container Apps provides built-in observability features that give you a holistic view of the behavior, performance, and health of your running container apps.
+
+These features include:
+
+- Azure Monitor metrics
+- Azure Monitor Log Analytics
+- Azure Monitor Alerts
+
+>[!NOTE]
+> While not a built-in feature, [Azure Monitor's Application Insights](../azure-monitor/app/app-insights-overview.md) is a powerful tool to monitor your web and background applications.
+> Although Container Apps doesn't support the Application Insights auto-instrumentation agent, you can instrument your application code using Application Insights SDKs.
+
+## Azure Monitor metrics
+
+The Azure Monitor metrics feature allows you to monitor your app's compute and network usage. These metrics are available to view and analyze through the [metrics explorer in the Azure portal](../azure-monitor/essentials/metrics-getting-started.md). Metric data is also available through the [Azure CLI](/cli/azure/monitor/metrics) and Azure [PowerShell cmdlets](/powershell/module/az.monitor/get-azmetric).
+
+### Available metrics for Container Apps
+
+Container Apps provides the following metrics for your container app.
+
+|Title | Description | Metric ID |Unit |
+|||||
+|CPU usage nanocores | CPU usage in nanocores (1,000,000,000 nanocores = 1 core) | UsageNanoCores| nanocores|
+|Memory working set bytes |Working set memory used in bytes |WorkingSetBytes |bytes|
+|Network in bytes|Network received bytes|RxBytes|bytes|
+|Network out bytes|Network transmitted bytes|TxBytes|bytes|
+|Requests|Requests processed|Requests|n/a|
+
+The metrics namespace is `microsoft.app/containerapps`.
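A raw `UsageNanoCores` sample can be converted to cores with the factor from the table above (the sample value here is made up):

```shell
# Convert a UsageNanoCores metric sample to cores: 1,000,000,000 nanocores = 1 core.
nanocores=250000000   # hypothetical metric sample
awk -v n="$nanocores" 'BEGIN { printf "%.2f cores\n", n / 1000000000 }'
```

So a sample of 250,000,000 nanocores corresponds to a quarter of one CPU core.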
+
+### View a snapshot of your app's metrics
+
+Using the Azure portal, navigate to your container app's **Overview** page. The **Monitoring** section displays the current CPU, memory, and network utilization for your container app.
++
+From this view, you can pin one or more charts to your dashboard. When you select a chart, it's opened in the metrics explorer.
+
+### View and analyze metric data with metrics explorer
+
+The Azure Monitor metrics explorer is available from the Azure portal, through the **Metrics** menu option in your container app page or the Azure **Monitor**->**Metrics** page.
+
+The metrics page allows you to create and view charts to display your container apps metrics. Refer to [Getting started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md) to learn more.
+
+When you first navigate to the metrics explorer, you'll see the main page. From here, select the metric that you want to display. You can add more metrics to the chart by selecting **Add Metric** in the upper left.
++
+You can filter your metrics by revision or replica. For example, to filter by a replica, select **Add filter**, then select a replica from the *Value* drop-down. You can also filter by your container app's revision.
++
+You can split the information in your chart by revision or replica. For example, to split by revision, select **Apply splitting** and select **Revision** as the value. Splitting is only available when the chart contains a single metric.
++
+You can view metrics across multiple container apps to view resource utilization over your entire application.
++
+## Azure Monitor Log Analytics
+
+Application logs are available through Azure Monitor Log Analytics. Each Container Apps environment includes a Log Analytics workspace, which provides a common log space for all container apps in the environment.
+
+Application logs, consisting of the logs written to `stdout` and `stderr` from the container(s) in each container app, are collected and stored in the Log Analytics workspace. Additionally, if your container app is using Dapr, log entries from the Dapr sidecar are also collected.
+
+To view these logs, you create Log Analytics queries. The log entries are stored in the `ContainerAppConsoleLogs_CL` table in the `CustomLogs` group.
+
+The most commonly used Container Apps-specific columns in `ContainerAppConsoleLogs_CL` are:
+
+|Column |Type |Description |
+||||
+|ContainerAppName_s | string | container app name |
+|ContainerGroupName_g| string |replica name|
+|ContainerId|string|container identifier|
+|ContainerImage_s |string| container image name |
+|EnvironmentName_s|string|Container Apps environment name|
+|Log_s |string| log message|
+|RevisionName_s|string|revision name|
+
+You can run Log Analytics queries via the Azure portal, the Azure CLI, or PowerShell.
+
+### Log Analytics via the Azure portal
+
+In the Azure portal, logs are available from either the **Monitor**->**Logs** page or by navigating to your container app and selecting the **Logs** menu item. From the Log Analytics interface, you can query the logs based on the **CustomLogs>ContainerAppConsoleLogs_CL** table.
++
+Here's an example of a simple query that displays log entries for the container app named *album-api*.
+
+```kusto
+ContainerAppConsoleLogs_CL
+| where ContainerAppName_s == 'album-api'
+| project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s
+| take 100
+```
+
+For more information regarding the Log Analytics interface and log queries, see the [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md).
+
+### Log Analytics via the Azure CLI and PowerShell
+
+Application logs can be queried from the [Azure CLI](/cli/azure/monitor/log-analytics) and [PowerShell cmdlets](/powershell/module/az.operationalinsights/invoke-azoperationalinsightsquery).
+
+Example Azure CLI query to display the log entries for a container app:
+
+```azurecli
+az monitor log-analytics query --workspace <workspace-id> --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s, LogLevel_s | take 100" --out table
+```
+
+For more information, see [Viewing Logs](monitor.md#viewing-logs).
+
+## Azure Monitor alerts
+
+You can configure alerts to send notifications based on metrics values and Log Analytics queries. Alerts can be added from the metrics explorer and the Log Analytics interface in the Azure portal.
+
+In the metrics explorer and the Log Analytics interface, alerts are based on existing charts and queries. You can manage your alerts from the **Monitor>Alerts** page. From this page, you can create metric and log alerts without existing metric charts or log queries. To learn more about alerts, refer to [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
+
+### Setting alerts in metrics explorer
+
+Metric alerts monitor metric data at set intervals and trigger when an alert rule condition is met. For more information, see [Metric alerts](../azure-monitor/alerts/alerts-metric-overview.md).
+
+In metrics explorer, you can create metric alerts based on Container Apps metrics. Once you create a metric chart, you're able to create alert rules based on the chart's settings. You can create an alert rule by selecting **New alert rule**.
++
+When you create a new alert rule, the rule creation pane is opened to the **Condition** tab. An alert condition is started for you based on the metric that you selected for the chart. You then edit the condition to configure threshold and other settings.
++
+You can add more conditions to your alert rule by selecting the **Add condition** option in the **Create an alert rule** pane.
++
+When you add an alert condition, the **Select a signal** pane is opened. This pane lists the Container Apps metrics from which you can select for the condition.
++
+After you've selected the metric, you can configure the settings for your alert condition. For more information about configuring alerts, see [Manage metric alerts](../azure-monitor/alerts/alerts-metric.md).
+
+You can add alert splitting to the condition so you can receive individual alerts for specific revisions or replicas.
+
+Example of setting a dimension for a condition:
++
+Once you create the alert rule, it's a resource in your resource group. To manage your alert rules, navigate to **Monitor>Alerts**.
+
+To learn more about configuring alerts, see [Create a metric alert for an Azure resource](../azure-monitor/alerts/tutorial-metric-alert.md).
+
+### Setting alerts using Log Analytics queries
+
+You can use Log Analytics queries to periodically monitor logs and trigger alerts based on the results. The Log Analytics interface allows you to add alert rules to your queries. Once you have created and run a query, you're able to create an alert rule.
++
+Selecting **New alert rule** opens the **Create an alert rule** editor, where you can configure the settings for your alert.
++
+To learn more about creating a log alert, see [Manage log alerts](../azure-monitor/alerts/alerts-log.md).
+
+Enabling splitting will send individual alerts for each dimension you define. Container Apps supports the following alert splitting dimensions:
+
+- app name
+- revision
+- container
+- log message
++
+To learn more about log alerts, refer to [Log alerts in Azure Monitor](../azure-monitor/alerts/alerts-unified-log.md).
+
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+- [Health probes in Azure Container Apps](health-probes.md)
+- [Monitor an App in Azure Container Apps](monitor.md)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
You can provision throughput at a container-level or a database-level in terms o
### Minimum throughput limits
-A Cosmos container (or shared throughput database) must have a minimum throughput of 400 RU/s. As the container grows, Cosmos DB requires a minimum throughput to ensure the database or container has sufficient resource for its operations.
+A Cosmos container (or shared throughput database) using manual throughput must have a minimum throughput of 400 RU/s. As the container grows, Cosmos DB requires a minimum throughput to ensure the database or container has sufficient resources for its operations.
The current and minimum throughput of a container or a database can be retrieved from the Azure portal or the SDKs. For more information, see [Provision throughput on containers and databases](set-throughput.md).
Example: Suppose you have a database provisioned with 400 RU/s, 15 GB of storage
**Note:** the minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
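The two rules above (the 400 RU/s floor and the 10 RU/s-per-GB minimum) can be sketched as a small calculation. `min_manual_throughput` is a hypothetical helper, not an SDK API, and it encodes only the factors stated here; the live service may consider additional factors, and accounts in the "high storage / low throughput" program get a lower per-GB rate:

```python
def min_manual_throughput(storage_gb: float) -> int:
    """Illustrative sketch: minimum manual throughput is the 400 RU/s
    floor, raised by 10 RU/s for every GB of stored data."""
    return max(400, int(storage_gb * 10))

# 15 GB keeps the minimum at the floor: 15 * 10 = 150 < 400.
print(min_manual_throughput(15))    # 400
# 100 GB raises the floor to 1000 RU/s.
print(min_manual_throughput(100))   # 1000
```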
-In summary, here are the minimum provisioned RU limits.
+In summary, here are the minimum provisioned RU limits when using manual throughput.
| Resource | Default limit |
| --- | --- |
-| Minimum RUs per container ([dedicated throughput provisioned mode](./account-databases-containers-items.md#azure-cosmos-containers)) | 400 |
-| Minimum RUs per database ([shared throughput provisioned mode](./account-databases-containers-items.md#azure-cosmos-containers)) | 400 RU/s for first 25 containers. |
+| Minimum RUs per container ([dedicated throughput provisioned mode with manual throughput](./account-databases-containers-items.md#azure-cosmos-containers)) | 400 |
+| Minimum RUs per database ([shared throughput provisioned mode with manual throughput](./account-databases-containers-items.md#azure-cosmos-containers)) | 400 RU/s for first 25 containers. |
Cosmos DB supports programmatic scaling of throughput (RU/s) per container or database via the SDKs or portal.
-Depending on the current RU/s provisioned and resource settings, each resource can scale synchronously and immediately between the minimum RU/s to up to 100x the minimum RU/s. If the requested throughput value is outside the range, scaling is performed asynchronously. Asynchronous scaling may take minutes to hours to complete depending on the requested throughput and data storage size in the container.
+Depending on the current RU/s provisioned and resource settings, each resource can scale synchronously and immediately between the minimum RU/s to up to 100x the minimum RU/s. If the requested throughput value is outside the range, scaling is performed asynchronously. Asynchronous scaling may take minutes to hours to complete depending on the requested throughput and data storage size in the container. [Learn more.](scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus)
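The synchronous/asynchronous boundary described above can be sketched as a range check. `scaling_mode` is a hypothetical helper for illustration only, based solely on the 100x rule stated in the text:

```python
def scaling_mode(current_min_rus: int, requested_rus: int) -> str:
    """A request within [minimum, 100 x minimum] applies synchronously
    and immediately; anything outside that range is asynchronous and
    may take minutes to hours depending on data size."""
    if current_min_rus <= requested_rus <= 100 * current_min_rus:
        return "synchronous"
    return "asynchronous"

print(scaling_mode(400, 40_000))  # synchronous (exactly 100x the minimum)
print(scaling_mode(400, 50_000))  # asynchronous (beyond 100x)
```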
### Serverless
See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article
| Minimum RU/s the system can scale to | `0.1 * Tmax`|
| Current RU/s the system is scaled to | `0.1*Tmax <= T <= Tmax`, based on usage|
| Minimum billable RU/s per hour| `0.1 * Tmax` <br></br>Billing is done on a per-hour basis, where you are billed for the highest RU/s the system scaled to in the hour, or `0.1*Tmax`, whichever is higher. |
-| Minimum autoscale max RU/s for a container | `MAX(4000, highest max RU/s ever provisioned / 10, current storage in GB * 100)` rounded to nearest 1000 RU/s |
-| Minimum autoscale max RU/s for a database | `MAX(4000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 4000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per additional container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 9000 RU/s (scales between 900 - 9000 RU/s).
+| Minimum autoscale max RU/s for a container | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100)` rounded to nearest 1000 RU/s |
+| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per additional container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).
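The table's formulas can be written out as code. These helper names are hypothetical, and the rounding direction is an assumption (the table only says "rounded to nearest 1000 RU/s"; a minimum is treated as rounding up here):

```python
import math

def round_up_1000(rus: float) -> int:
    # Assumption: a minimum is rounded up to the next 1000 RU/s.
    return math.ceil(rus / 1000) * 1000

def min_autoscale_max_rus_container(highest_max_ever: int, storage_gb: float) -> int:
    """MAX(1000, highest max RU/s ever provisioned / 10, storage GB * 100)."""
    return round_up_1000(max(1000, highest_max_ever / 10, storage_gb * 100))

def min_autoscale_max_rus_database(highest_max_ever: int, storage_gb: float,
                                   containers: int) -> int:
    """Same as the container rule, plus 1000 RU/s per container beyond 25."""
    return round_up_1000(max(1000, highest_max_ever / 10, storage_gb * 100,
                             1000 + max(containers - 25, 0) * 1000))

# 30 containers -> 1000 + 5 * 1000 = 6000 RU/s (scales between 600 - 6000 RU/s).
print(min_autoscale_max_rus_database(1000, 5, 30))  # 6000
```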
## SQL query limits
cosmos-db Distribute Data Globally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/distribute-data-globally.md
Last updated 01/06/2021
+adobe-target: true
# Distribute your data globally with Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
Read more about global distribution in the following articles:
* [Programmable consistency models in Cosmos DB](consistency-levels.md)
* [Choose the right consistency level for your application](./consistency-levels.md)
* [Consistency levels across Azure Cosmos DB APIs](./consistency-levels.md)
-* [Availability and performance tradeoffs for various consistency levels](./consistency-levels.md)
+* [Availability and performance tradeoffs for various consistency levels](./consistency-levels.md)
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/free-tier.md
Previously updated : 05/25/2021 Last updated : 03/29/2022 # Azure Cosmos DB free tier
cosmos-db How To Choose Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-choose-offer.md
description: Learn how to choose between standard (manual) provisioned throughpu
Previously updated : 08/19/2020 Last updated : 04/01/2022
Whether you plan to use standard (manual) or autoscale, here's what you should c
If you provision standard (manual) RU/s at the entry point of 400 RU/s, you won't be able to consume above 400 RU/s, unless you manually change the throughput. You'll be billed for 400 RU/s at the standard (manual) provisioned throughput rate, per hour.
-If you provision autoscale throughput at the entry point of max RU/s of 4000 RU/s, the resource will scale between 400 to 4000 RU/s. Since the autoscale throughput billing rate per RU/s is 1.5x of the standard (manual) rate, for hours where the system has scaled down to the minimum of 400 RU/s, your bill will be higher than if you provisioned 400 RU/s manually. However, with autoscale, at any time, if your application traffic spikes, you can consume up to 4000 RU/s with no user action required. In general, you should weigh the benefit of being able to consume up to the max RU/s at any time with the 1.5x rate of autoscale.
+If you provision autoscale throughput with max RU/s of 4000 RU/s, the resource will scale between 400 to 4000 RU/s. Since the autoscale throughput billing rate per RU/s is 1.5x of the standard (manual) rate, for hours where the system has scaled down to the minimum of 400 RU/s, your bill will be higher than if you provisioned 400 RU/s manually. However, with autoscale, at any time, if your application traffic spikes, you can consume up to 4000 RU/s with no user action required. In general, you should weigh the benefit of being able to consume up to the max RU/s at any time with the 1.5x rate of autoscale.
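The trade-off above can be made concrete with a sketch. Only the 1.5x ratio and the billed-for-highest-RU/s-per-hour behavior come from the text; the unit price below is made up purely for illustration:

```python
MANUAL_RATE = 1.0                    # illustrative price per 100 RU/s per hour (not a real price)
AUTOSCALE_RATE = 1.5 * MANUAL_RATE   # autoscale bills at 1.5x the manual rate

def hourly_cost_manual(provisioned_rus: int) -> float:
    # Manual throughput bills the provisioned RU/s every hour.
    return provisioned_rus / 100 * MANUAL_RATE

def hourly_cost_autoscale(highest_rus_in_hour: int) -> float:
    # Autoscale bills the highest RU/s the system scaled to in the hour.
    return highest_rus_in_hour / 100 * AUTOSCALE_RATE

# Idle hour at the 400 RU/s floor: autoscale costs 1.5x the manual price.
print(hourly_cost_manual(400), hourly_cost_autoscale(400))  # 4.0 6.0
# Spike hour: manual 400 RU/s can't serve the spike; autoscale absorbs it.
print(hourly_cost_autoscale(4000))                          # 60.0
```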
Use the Azure Cosmos DB [capacity calculator](estimate-ru-with-capacity-planner.md) to estimate your throughput requirements.
cosmos-db Limit Total Account Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/limit-total-account-throughput.md
description: Learn how to limit the total throughput provisioned on your Azure C
Previously updated : 11/04/2021 Last updated : 03/31/2022
When creating a new Azure Cosmos DB account from the portal, you have the option
:::image type="content" source="./media/limit-total-account-throughput/create-account.png" alt-text="Screenshot of the Azure portal showing how to limit total account throughput when creating a new account" border="true":::
-Checking this option will limit your account's total throughput to 4,000 RU/s. You can change this value after your account has been created.
+Checking this option will limit your account's total throughput to 1,000 RU/s for a [free tier account](free-tier.md) and 4,000 RU/s for a regular, non-free tier account. You can change this value after your account has been created.
### Existing account
cosmos-db Provision Throughput Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-throughput-autoscale.md
Previously updated : 05/18/2021 Last updated : 04/01/2022
The use cases of autoscale include:
* **Variable or unpredictable workloads:** When your workloads have variable or unpredictable spikes in usage, autoscale helps by automatically scaling up and down based on usage. Examples include retail websites that have different traffic patterns depending on seasonality; IOT workloads that have spikes at various times during the day; line of business applications that see peak usage a few times a month or year, and more. With autoscale, you no longer need to manually provision for peak or average capacity.
-* **New applications:** If you're developing a new application and not sure about the throughput (RU/s) you need, autoscale makes it easy to get started. You can start with the autoscale entry point of 400 - 4000 RU/s, monitor your usage, and determine the right RU/s over time.
+* **New applications:** If you're developing a new application and not sure about the throughput (RU/s) you need, autoscale makes it easy to get started. You can start with the autoscale entry point of 100 - 1000 RU/s, monitor your usage, and determine the right RU/s over time.
* **Infrequently used applications:** If you have an application that's only used for a few hours several times a day, week, or month, such as a low-volume application/web/blog site, autoscale adjusts the capacity to handle peak usage and scales down when it's over.
When configuring containers and databases with autoscale, you specify the maximu
Each hour, you will be billed for the highest throughput `T` the system scaled to within the hour.
-The entry point for autoscale maximum throughput `Tmax` starts at 4000 RU/s, which scales between 400 - 4000 RU/s. You can set `Tmax` in increments of 1000 RU/s and change the value at any time.
+The entry point for autoscale maximum throughput `Tmax` starts at 1000 RU/s, which scales between 100 - 1000 RU/s. You can set `Tmax` in increments of 1000 RU/s and change the value at any time.
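A small sketch of the `Tmax` constraints and the resulting scale range, under the rules stated above (the 1000 RU/s entry point, 1000 RU/s increments, and the 0.1x minimum). `autoscale_range` is a hypothetical helper, not an SDK API:

```python
def autoscale_range(t_max: int) -> tuple:
    """Validate Tmax and return the (minimum, maximum) RU/s scale range."""
    if t_max < 1000 or t_max % 1000 != 0:
        raise ValueError("Tmax must be at least 1000 RU/s, in increments of 1000 RU/s")
    return (t_max // 10, t_max)  # system scales between 0.1 * Tmax and Tmax

print(autoscale_range(1000))    # (100, 1000)  - the entry point
print(autoscale_range(50_000))  # (5000, 50000)
```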
## Enable autoscale on existing resources
For any value of `Tmax`, the database or container can store a total of `0.01 *
For example, if you start with a maximum RU/s of 50,000 RU/s (scales between 5000 - 50,000 RU/s), you can store up to 500 GB of data. If you exceed 500 GB - e.g. storage is now 600 GB, the new maximum RU/s will be 60,000 RU/s (scales between 6000 - 60,000 RU/s).
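The storage rule in that example can be sketched as two small helpers (hypothetical names, encoding only the `0.01 * Tmax` relationship described above):

```python
def max_storage_gb(t_max: int) -> float:
    # A resource with autoscale max RU/s of Tmax can store 0.01 * Tmax GB.
    return 0.01 * t_max

def required_t_max(storage_gb: float) -> int:
    # Exceeding that storage raises the new max RU/s to storage * 100.
    return int(storage_gb * 100)

print(max_storage_gb(50_000))  # 500.0 GB at a max of 50,000 RU/s
print(required_t_max(600))     # 60000 RU/s once storage reaches 600 GB
```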
-When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 4000 (scales between 400 - 4000 RU/s), as long as you don't exceed 40 GB of storage. See this [documentation](autoscale-faq.yml#can-i-change-the-max-ru-s-on-the-database-or-container--) for more information.
+When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 1000 (scales between 100 - 1000 RU/s), as long as you don't exceed 10 GB of storage. See this [documentation](autoscale-faq.yml#can-i-change-the-max-ru-s-on-the-database-or-container--) for more information.
## Comparison - containers configured with manual vs autoscale throughput

For more detail, see this [documentation](how-to-choose-offer.md) on how to choose between standard (manual) and autoscale throughput.
cosmos-db Relational Nosql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/relational-nosql.md
Last updated 12/16/2019 -
+adobe-target: true
# Understanding the differences between NoSQL and relational databases
Learn how to manage your Azure Cosmos account and other concepts:
* [VNET service endpoint for your Azure Cosmos account](how-to-configure-vnet-service-endpoint.md)
* [IP-firewall for your Azure Cosmos account](how-to-configure-firewall.md)
* [How-to add and remove Azure regions to your Azure Cosmos account](how-to-manage-database-account.md)
-* [Azure Cosmos DB SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_2/)
+* [Azure Cosmos DB SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_2/)
cosmos-db How To Provision Autoscale Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-provision-autoscale-throughput.md
Previously updated : 05/18/2021 Last updated : 04/01/2022
Use [version 3.9 or higher](https://www.nuget.org/packages/Microsoft.Azure.Cosmo
CosmosClient cosmosClient = new CosmosClient(Endpoint, PrimaryKey); // Autoscale throughput settings
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.CreateAutoscaleThroughput(4000); //Set autoscale max RU/s
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.CreateAutoscaleThroughput(1000); //Set autoscale max RU/s
//Create the database with autoscale enabled database = await cosmosClient.CreateDatabaseAsync(DatabaseName, throughputProperties: autoscaleThroughputProperties);
Database database = await cosmosClient.GetDatabase("DatabaseName");
// Container and autoscale throughput settings ContainerProperties autoscaleContainerProperties = new ContainerProperties("ContainerName", "/partitionKey");
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.CreateAutoscaleThroughput(4000); //Set autoscale max RU/s
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.CreateAutoscaleThroughput(1000); //Set autoscale max RU/s
// Create the container with autoscale enabled container = await database.CreateContainerAsync(autoscaleContainerProperties, autoscaleThroughputProperties);
CosmosAsyncClient client = new CosmosClientBuilder()
.buildAsyncClient(); // Autoscale throughput settings
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(4000); //Set autoscale max RU/s
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
//Create the database with autoscale enabled CosmosAsyncDatabase database = client.createDatabase(databaseName, autoscaleThroughputProperties).block().getDatabase();
CosmosClient client = new CosmosClientBuilder()
.buildClient(); // Autoscale throughput settings
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(4000); //Set autoscale max RU/s
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
//Create the database with autoscale enabled CosmosDatabase database = client.createDatabase(databaseName, autoscaleThroughputProperties).getDatabase();
CosmosAsyncDatabase database = client.createDatabase("DatabaseName").block().get
// Container and autoscale throughput settings CosmosContainerProperties autoscaleContainerProperties = new CosmosContainerProperties("ContainerName", "/partitionKey");
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(4000); //Set autoscale max RU/s
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
// Create the container with autoscale enabled CosmosAsyncContainer container = database.createContainer(autoscaleContainerProperties, autoscaleThroughputProperties, new CosmosContainerRequestOptions())
CosmosDatabase database = client.createDatabase("DatabaseName").getDatabase();
// Container and autoscale throughput settings CosmosContainerProperties autoscaleContainerProperties = new CosmosContainerProperties("ContainerName", "/partitionKey");
-ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(4000); //Set autoscale max RU/s
+ThroughputProperties autoscaleThroughputProperties = ThroughputProperties.createAutoscaledThroughput(1000); //Set autoscale max RU/s
// Create the container with autoscale enabled CosmosContainer container = database.createContainer(autoscaleContainerProperties, autoscaleThroughputProperties, new CosmosContainerRequestOptions())
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-query-sdk.md
Previously updated : 04/01/2022 Last updated : 04/11/2022 ms.devlang: csharp, java
IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
> [!NOTE]
-> Cross-partition queries require the SDK to visit all existing partitions to check for results. The more [physical partitions](../partitioning-overview.md#physical-partitions) the container has, the slowed they can potentially be.
+> Cross-partition queries require the SDK to visit all existing partitions to check for results. The more [physical partitions](../partitioning-overview.md#physical-partitions) the container has, the slower they can potentially be.
### Avoid recreating the iterator unnecessarily
cosmos-db Understand Your Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/understand-your-bill.md
Previously updated : 08/26/2021 Last updated : 03/31/2022
With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of stor
### Billing example - container with autoscale throughput
+> [!TIP]
+> When using autoscale, the entry point scale range you can set is 100 - 1000 RU/s. If you want to use autoscale and keep your free tier account completely free, create either one container with this scale range, or a shared throughput database with up to 25 containers inside. The example below illustrates how billing works if you provision throughput higher than the 100 - 1000 RU/s scale range.
+- Let's suppose in a free tier account, we create a container with autoscale enabled, with a maximum RU/s of 4000 RU/s. This resource will automatically scale between 400 RU/s - 4000 RU/s.
+- Suppose in hour 1 through hour 10, the resource is scaled to 1000 RU/s. During hour 11, the resource scales up to 1600 RU/s and then back down to 1000 RU/s within the hour.
+- In hours 1 through 10, you will be billed $0 for throughput, as the 1000 RU/s were covered by free tier.
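The billing logic in that example can be sketched as follows. The 1000 RU/s free tier allowance comes from the text above; `billable_rus` is a hypothetical helper for illustration:

```python
FREE_TIER_RUS = 1000  # the first 1000 RU/s are free on a free tier account

def billable_rus(highest_rus_in_hour: int) -> int:
    """RU/s billed for one hour: autoscale bills the highest RU/s the
    resource scaled to in the hour, minus the free tier allowance."""
    return max(highest_rus_in_hour - FREE_TIER_RUS, 0)

print(billable_rus(1000))  # 0   -> hours 1-10 are fully covered by free tier
print(billable_rus(1600))  # 600 -> hour 11 is billed for 600 RU/s
```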
cost-management-billing Reservation Renew https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-renew.md
Emails are sent to different people depending on your purchase method:
- EA customers - Emails are sent to the notification contacts set on the EA portal or Enterprise Administrators who are automatically enrolled to receive usage notifications.
- Individual subscription customers with pay-as-you-go rates - Emails are sent to users who are set up as account administrators.
-- Cloud Solution Provider customers - Emails are sent to the partner notification contact.
+- Cloud Solution Provider customers - Emails are sent to the partner notification contact. This notification isn't currently supported for Microsoft Customer Agreement subscriptions (CSP Azure Plan subscription).
## Next steps

- To learn more about Azure Reservations, see [What are Azure Reservations?](save-compute-costs-reservations.md)
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
Title: Analyze unexpected Azure charges
-description: Learn how to analyze unexpected charges for your Azure subscription.
+ Title: Identify anomalies and unexpected changes in cost
+
+description: Learn how to identify anomalies and unexpected changes in cost.
-+ Previously updated : 10/07/2021 Last updated : 04/02/2022
-# Analyze unexpected charges
+# Identify anomalies and unexpected changes in cost
-The cloud resource infrastructure that you've built for your organization is likely complex. Many Azure resource types can have different types of charges. Azure resources might be owned by different teams in your organization and they might have different billing model types that apply to various resources. To gain a better understanding of the charges, begin your analysis using one or more of the strategies in the following sections.
+The article helps you identify anomalies and unexpected changes in your cloud costs using Cost Management and Billing. You'll start with anomaly detection for subscriptions in cost analysis to identify any atypical usage patterns based on your cost and usage trends. You'll then learn how to drill into cost information to find and investigate cost spikes and dips.
-## Review invoice for resource responsible for charge
+In general, there are three types of changes that you might want to investigate:
-How you purchase your Azure services helps you determine the methodology and tools that are available to you as you identify the resource associated with a charge. To determine which methodology applies to you, first [determine your Azure offer type](../costs/understand-cost-mgt-data.md#determine-your-offer-type). Then, identify your customer category in the list of [supported Azure offers](../costs/understand-cost-mgt-data.md#supported-microsoft-azure-offers).
+- New costs: For example, a resource that was started or added, such as a virtual machine. New costs often appear as a cost starting from zero.
+- Removed costs: For example, a resource that was stopped or deleted. Removed costs often appear as costs ending in zero.
+- Changed costs (increased or decreased): For example, a resource was changed in some way that caused a cost increase or decrease. Some changes, like resizing a virtual machine, might be surfaced as a new meter that replaces a removed meter, both under the same resource.
-The following articles provide detailed steps that explain how to review your bill based on your customer type. In each article there are instructions about how to download a CSV file containing usage and cost details for a given billing period.
+## Identify cost anomalies
-- [Pay-As-You-Go bill review process](review-individual-bill.md#charges)
-- [Enterprise Agreement bill review process](review-enterprise-agreement-bill.md)
-- [Microsoft Customer Agreement review process](review-customer-agreement-bill.md#analyze-your-azure-usage-charges)
-- [Microsoft Partner Agreement review process](review-partner-agreement-bill.md#analyze-your-azure-usage-charges)
+The cloud comes with the promise of significant cost savings compared to on-premises costs. However, savings require diligence to proactively plan, govern, and monitor your cloud solutions. Even with proactive processes, cost surprises can still happen. For example, you might notice that something has changed, but you're not sure what. Using Cost Management anomaly detection for your subscriptions can help minimize surprises.
-Your Azure bill aggregates charges for the month on a per-_meter_ basis. Meters are used to track a resource's usage over time and are used to calculate your bill. When you create a single Azure resource, like a virtual machine, one or more meter instances are created for the resource.
+Whether or not you already have cost anomalies, Cost analysis informs you if it finds anything unusual as part of Insights. If not, Cost analysis shows **No anomalies detected**.
-Filter the usage CSV file based on the _MeterName_ as shown on the bill that you want to analyze to see all line items that apply to the meter. The _InstanceID_ for the line item corresponds to the actual Azure resource that generated the charge.
+### View anomalies in Cost analysis
-When you've identified the resource in question, you can use Cost analysis in Cost Management to further analyze the costs related to the resource. To learn more about using cost analysis, see [Start analyzing costs](../costs/quick-acm-cost-analysis.md).
+Anomaly detection is available in Cost analysis (preview) when you select a subscription scope. You'll see your anomaly status as part of **Insights**. And as with [other insights](https://azure.microsoft.com/blog/azure-cost-management-and-billing-updates-february-2021/#insights), the experience is simple.
-## Review invoiced charges in Cost analysis
+In the Azure portal, navigate to Cost Management from Azure Home. Select a subscription scope and then in the left menu, select **Cost analysis**. In the view list, select any view under **Preview views**. In the following example, the **Resources** preview view is selected. If you have a cost anomaly, you'll see an insight.
-To view your invoice details in the Azure portal, navigate to Cost analysis for the scope associated with the invoice that you're analyzing. Select the **Invoice details** view. Invoice details show you the charges as seen on the invoice.
-[![Example showing invoice details](./media/analyze-unexpected-charges/invoice-details.png)](./media/analyze-unexpected-charges/invoice-details.png#lightbox)
+If you don't have any anomalies, you'll see a **No anomalies detected** insight, confirming the dates that were evaluated.
-Viewing invoice details, you can identify the service that has unexpected costs and determine which resources are directly associated with the resource in Cost analysis. For example, if you want to analyze charges for the Virtual Machines service, navigate to the **Accumulated cost** view. Then, set the granularity to **Daily** and filter charges **Service name: Virtual machines** and group charges by **Resource**.
-[![Example showing accumulated costs for virtual machines](./media/analyze-unexpected-charges/virtual-machines.png)](./media/analyze-unexpected-charges/virtual-machines.png#lightbox)
+### Drill into anomaly details
-## Identify spikes in cost over time
+To drill into the underlying data for something that has changed, select the insight link to open a view in classic cost analysis and review your daily usage by resource group for the time range that was evaluated.
-Sometimes you might not know what recent costs resulted in changes to your billed charges. To understand what changed, you can use Cost analysis to [see a daily or monthly breakdown of costs over time](../costs/cost-analysis-common-uses.md#view-costs-per-day-or-by-month). After you create the view, group your charges by either **Service** or **Resource** to identify the changes. You can also change your view to a **Line** chart to better visualize the data.
+Continuing from the previous example of the anomaly labeled **Daily run rate down 748% on Sep 28**, let's examine its details after the link is selected. The following example image shows details about the anomaly. Notice the large increase in costs, a cost spike, and the eventual drop from a temporary, short-lived resource.
-![Example showing costs over time in cost analysis](./media/analyze-unexpected-charges/costs-over-time.png)
-## Determine resource pricing and billing model
+Cost anomalies are evaluated for subscriptions daily and compare the day's total cost to a forecasted total based on the last 60 days to account for common patterns in your recent usage. For example, spikes every Monday. Anomaly detection runs 36 hours after the end of the day (UTC) to ensure a complete data set is available.
-A single resource can accrue charges across multiple Azure products and services. View the [Azure pricing by product](https://azure.microsoft.com/pricing/#product-pricing) page to learn more about the pricing for each Azure service. For example, a single virtual machine (VM) created in Azure can have the following meters created to track its usage. Each might have different pricing.
+Anomaly detection is available to every subscription monitored using the cost analysis preview. To enable anomaly detection for your subscriptions, open the cost analysis preview and select your subscription from the scope selector at the top of the page. You'll see a notification informing you that your subscription is onboarded and you'll start to see your anomaly detection status within 24 hours.
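Cost Management's actual anomaly model isn't documented here, but the idea of comparing a day's total cost to a forecast built from a trailing 60-day window can be illustrated with a simple, purely hypothetical z-score check (`is_cost_anomaly` and the threshold are assumptions, not the service's algorithm):

```python
from statistics import mean, stdev

def is_cost_anomaly(history: list, today: float, threshold: float = 3.0) -> bool:
    """Illustrative only: flag a day whose cost deviates from the mean of
    the trailing 60-day window by more than `threshold` standard deviations."""
    window = history[-60:]
    forecast = mean(window)
    spread = stdev(window) or 1e-9  # avoid dividing by zero on flat histories
    return abs(today - forecast) / spread > threshold

daily_costs = [95.0, 105.0] * 30      # 60 days hovering around $100/day
print(is_cost_anomaly(daily_costs, 102.0))  # False: within normal variation
print(is_cost_anomaly(daily_costs, 850.0))  # True: a spike worth investigating
```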
-- Compute Hours
-- IP Address Hours
-- Data Transfer In
-- Data Transfer Out
-- Standard Managed Disk
-- Standard Managed Disk Operations
-- Standard IO-Disk
-- Standard IO-Block Blob Read
-- Standard IO-Block Blob Write
-- Standard IO-Block Blob Delete
+## Manually find unexpected cost changes
-When the VM is created, each meter begins emitting usage records. The usage and the meter's price are tracked in the Azure metering system. You can see the meters that were used to calculate your bill in the usage CSV file.
+Let's look at a more detailed example of finding a change in cost. When you navigate to Cost analysis and then select a subscription scope, you'll start with the **Accumulated costs** view. The following screenshot shows an example of what you might see.
-## Find people responsible for the resource and engage
-Often, the team responsible for a given resource will know about changes that were made for a resource. Engaging them is useful as you identify why charges might appear. For example, the owning team may have recently created the resource, updated its SKU (thereby changing the resource rate) or increased the load on the resource due to code changes. Continue reading the following sections for more techniques to determine who owns a resource.
+With the default view and current month (March 2022), the example image doesn't show any dips or spikes.
+
+Change the view to **Daily costs** and then expand the date range to Last year (2021). Then, set the granularity to **Monthly**. In the following image, notice that there's a significant increase in costs for the `articmustang` resource group starting in July.
++
+Let's examine the increase in cost for the resource group more fully. To drill into the time frame of the change, change the date range. In the following example, we set a custom date range from June to July 2021 and then set the Granularity to **Daily**. In the example, the daily cost for the resource group was about $4.56. On June 30, the cost increased to $20.68. Later on July 1 and after, the daily cost went to $30.22.
++
+So far, we've found an increase in cost for the `articmustang` resource group at the end of June and the beginning of July. You might notice that the cost increase spanned two days. The change took two days because a change made in the middle of a day doesn't show its full effect until the following full day.
+
+Let's continue drilling into the data to find out more about the cost increase. Select the item that increased in cost (`articmustang`) to automatically set a filter for the resource group name. Then, change the **Group by** list to **Resource** and set the date range to a smaller period, for example, June 28 to July 4. In the following example image, the increase in cost is clearly shown. The type of resource is shown as _microsoft.network/virtualnetworkgateways_.
++
+Next, select the resource in the chart that increased in cost (`articring`) to set another filter for the resource. Now, costs are shown for just that resource. Then, set the **Group by** list to **Meter**.
++
+In the preceding example, you can see that use of the virtual private network resource named VpnGw1 stopped on June 30 and that a more expensive virtual private network resource named VpnGw3 came into use the same day.
+
+At this point, you know what changed and by how much costs changed. However, you might not know _why_ the change happened. To find out, contact the people who created or used the resource. Continue to the next section to learn more.
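The manual drill-down above can also be approximated in code. As a hypothetical sketch (the figures below are copied from the example, not read from a live billing export), a simple day-over-day comparison over exported daily costs flags the same kind of jump:

```python
# Hypothetical daily costs (USD) for a resource group, mirroring the example above.
daily_cost = {
    "2021-06-28": 4.56,
    "2021-06-29": 4.56,
    "2021-06-30": 20.68,  # partial-day effect of the mid-day change
    "2021-07-01": 30.22,
    "2021-07-02": 30.22,
}

def find_spikes(costs, threshold=2.0):
    """Return the days whose cost exceeds `threshold` times the previous day's cost."""
    days = sorted(costs)
    return [day for prev, day in zip(days, days[1:])
            if costs[day] > threshold * costs[prev]]

print(find_spikes(daily_cost))  # ['2021-06-30']
```

A real script would read dates and costs from a Cost analysis CSV export rather than a hard-coded dictionary, and the threshold is arbitrary — you'd tune it to your own spend patterns.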
+
+## Find people responsible for changed resource use
+
+Using Cost analysis, you might have found resources that had sudden changes in usage. However, it might not be obvious who is responsible for the resource or why the change was made. Often, the team responsible for a given resource will know about changes that were made to a resource. Engaging them is useful as you identify why charges might appear. For example, the owning team may have recently created the resource, updated its SKU (thereby changing the resource rate), or increased the load on the resource due to code changes.
+
+The [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md) article for Azure Resource Graph might help you to find additional information about configuration changes to resources.
+
+Continue reading the following sections for more techniques to determine who owns a resource.
### Analyze the audit logs for the resource
-If you have permissions to view a resource, you should be able to access its audit logs. Review the logs to find the user who was responsible for the most recent changes to a resource. To learn more, see [View and retrieve Azure Activity log events](../../azure-monitor/essentials/activity-log.md#view-the-activity-log).
+If you have permission to view a resource, you should be able to access its audit logs. Review the logs to find the user who was responsible for the most recent changes to a resource. To learn more, see [View and retrieve Azure Activity log events](../../azure-monitor/essentials/activity-log.md#view-the-activity-log).
### Analyze user permissions to the resource's parent scope
-People that have write access to a subscription or resource group typically have information about the resources were created. They should be able to explain the purpose of a resource or point you to the person who knows. To identify the people with permissions for a Subscription scope, see [Check access for a user to Azure resources](../../role-based-access-control/check-access.md). You can use a similar process for resource groups.
+People who have write access to a subscription or resource group typically have information about the resources that were created or updated. They should be able to explain the purpose of a resource or point you to the person who knows. To identify the people with permissions for a subscription scope, see [Check access for a user to Azure resources](../../role-based-access-control/check-access.md). You can use a similar process for billing scopes, resource groups, and management groups.
+
+### Examine tagged resources
+
+If you have an existing policy of [tagging resources](../costs/cost-mgt-best-practices.md#tag-shared-resources), the resource might be tagged with identifying information. For example, resources might be tagged with owner, cost center, or development environment information. If you don't already have a resource tagging policy in place, consider adopting one to help identify resources in the future.
## Get help to identify charges
-If you've used the preceding strategies and you still don't understand why you received a charge or if you need other help with billing issues, please [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+If you've used the preceding strategies and you still don't understand why you received a charge or if you need other help with billing issues, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
## Next steps
data-factory Data Factory Azure Datalake Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-datalake-connector.md
Last updated 10/22/2021 -+ # Copy data to and from Data Lake Storage Gen1 by using Data Factory
For details about the Data Factory classes used in the code, see the [AzureDataL
1. Make sure the `subscriptionId` and `resourceGroupName` you specify in the linked service `typeProperties` are indeed the ones that your data lake account belongs to.
-2. Make sure you grant at least **Reader** role to the user or service principal on the data lake account. Here is how to make it:
+1. Grant, at a minimum, the **Reader** role to the user or service principal on the data lake account.
- 1. Go to the Azure portal -> your Data Lake Store account
- 2. Click **Access control (IAM)** on the blade of the Data Lake Store
- 3. Click **Add role assignment**
- 4. Set **Role** as **Reader**, and select the user or the service principal you use for copy to grant access
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-3. If you don't want to grant **Reader** role to the user or service principal, alternative is to [explicitly specify an execution location](data-factory-data-movement-activities.md#global) in copy activity with the location of your Data Lake Store. Example:
+1. If you don't want to grant the **Reader** role to the user or service principal, an alternative is to [explicitly specify an execution location](data-factory-data-movement-activities.md#global) in copy activity with the location of your Data Lake Store. Example:
```json {
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
To configure the network for a 2-node device, follow these steps on the first no
1. In the **Advanced networking** page, choose the topology for the cluster and storage traffic between nodes from the following options: - **Switchless**. Use this option when high-speed switches aren't available for storage and clustering traffic.
- - **Use switches and NIC teaming**. Use this option when you need port level redundancy through teaming. NIC Teaming allows you to group two physical ports on the device node, Port 3 and Port 4 in this case, into two software-based virtual network interfaces. These teamed network interfaces provide fast performance and fault tolerance in the event of a network interface failure. For more information, see [NIC teaming on Windows Server](/windows-server/networking/technologies/nic-teaming/nic-teaming).
+ - **Use switches and NIC teaming**. Use this option when you need port level redundancy through teaming. NIC Teaming allows you to group two physical ports on the device node, Port 3 and Port 4 in this case, into two software-based virtual network interfaces. These teamed network interfaces provide fast performance and fault tolerance in the event of a network interface failure. For more information, see [NIC teaming on Windows Server](/windows-server/networking/windows-server-supported-networking-scenarios#bkmk_nicteam).
- **Use switches without NIC teaming**. Use this option if you need an extra port for workload traffic and port level redundancy is not required. ![Local web UI "Network" page with "Use switches and NIC teaming" option selected](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/select-network-topology-1m.png)
databox Data Box Deploy Copy Data Via Copy Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-copy-service.md
Previously updated : 03/11/2021 Last updated : 04/04/2021 #Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure. # Tutorial: Use the data copy service to copy data into Azure Data Box (preview)
-This tutorial describes how to ingest data by using the data copy service without an intermediate host. The data copy service runs locally on Microsoft Azure Data Box, connects to your network-attached storage (NAS) device via SMB, and copies data to Data Box.
+This tutorial describes how to ingest data by using the data copy service without an intermediate host. The data copy service runs locally on Microsoft Azure Data Box, connects to your network-attached storage (NAS) device via SMB, and copies data to Data Box.
Use the data copy service:
To copy data by using the data copy service, you need to create a job:
|**Destination type** |Select the target storage type from the list: **Block Blob**, **Page Blob**, **Azure Files**, or **Block Blob (Archive)**. | |**Destination container/share** |Enter the name of the container or share that you want to upload data to in your destination storage account. The name can be a share name or a container name. For example, use `myshare` or `mycontainer`. You can also enter the name in the format `sharename\directory_name` or `containername\virtual_directory_name`. | |**Copy files matching pattern** | You can enter the file-name matching pattern in the following two ways:<ul><li>**Use wildcard expressions:** Only `*` and `?` are supported in wildcard expressions. For example, the expression `*.vhd` matches all the files that have the `.vhd` extension. Similarly, `*.dl?` matches all the files with either the extension `.dl` or that start with `.dl`, such as `.dll`. Likewise, `*foo` matches all the files whose names end with `foo`.<br>You can directly enter the wildcard expression in the field. By default, the value you enter in the field is treated as a wildcard expression.</li><li>**Use regular expressions:** POSIX-based regular expressions are supported. For example, the regular expression `.*\.vhd` will match all the files that have the `.vhd` extension. For regular expressions, provide the `<pattern>` directly as `regex(<pattern>)`. For more information about regular expressions, go to [Regular expression language - a quick reference](/dotnet/standard/base-types/regular-expression-language-quick-reference).</li><ul>|
- |**File optimization** |When this feature is enabled, files smaller than 1 MB are packed during ingestion. This packing speeds up the data copy for small files. It also saves a significant amount of time when the number of files far exceeds the number of directories.</br>If you use file optimization:<ul><li>After you run prepare to ship, you can [download a BOM file](data-box-logs.md#inspect-bom-during-prepare-to-ship), which lists the original file names, to help you ensure that all the right files are copied.</li><li>Don't delete the packed files, whose file names begin with "ADB_PACK_". If you delete a packed file, the original file isn't uploaded during future data copies.</li><li>Don't copy the same files that you copy with the Copy Service via other protocols such as SMB, NFS, or REST API. Using different protocols can result in conflicts and failure during data uploads. </li></ul> |
+ |**File optimization** |When this feature is enabled, files smaller than 1 MB are packed during ingestion. This packing speeds up the data copy for small files. It also saves a significant amount of time when the number of files far exceeds the number of directories.</br>If you use file optimization:<ul><li>After you run prepare to ship, you can [download a BOM file](data-box-logs.md#inspect-bom-during-prepare-to-ship), which lists the original file names, to help you ensure that all the right files are copied.</li><li>Don't delete the packed files, whose file names begin with "ADB_PACK_". If you delete a packed file, the original file isn't uploaded during future data copies.</li><li>Don't copy the same files that you copy with the Copy Service via other protocols such as SMB, NFS, or REST API. Using different protocols can result in conflicts and failure during data uploads. </li><li>File optimization is not supported for Azure Files. To see what timestamps, file attributes, and ACLs are copied for a non-optimized data copy job, view the [transferred metadata](data-box-file-acls-preservation.md). </li></ul> |
4. Select **Start**. The inputs are validated, and if the validation succeeds, then the job starts. It might take a few minutes for the job to start.
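The wildcard and regular-expression behaviors described in the **Copy files matching pattern** row above can be sketched locally. This is an illustration of the matching semantics only, using Python's `fnmatch` and `re` modules on hypothetical file names — not the Data Box implementation, whose exact semantics may differ slightly:

```python
import fnmatch
import re

# Hypothetical file names to illustrate the two matching modes.
files = ["disk1.vhd", "app.dll", "lib.dl", "notes.txt", "demofoo"]

# Wildcard mode: only * and ? are supported.
assert fnmatch.filter(files, "*.vhd") == ["disk1.vhd"]  # all .vhd files
assert fnmatch.filter(files, "*.dl?") == ["app.dll"]    # .dl plus exactly one more character
assert fnmatch.filter(files, "*foo") == ["demofoo"]     # names ending in foo

# Regular-expression mode: the pattern supplied inside regex(<pattern>), e.g. regex(.*\.vhd)
vhd = re.compile(r".*\.vhd$")
assert [f for f in files if vhd.match(f)] == ["disk1.vhd"]
```

Testing a candidate pattern against a sample of your file names this way, before starting a copy job, is a cheap check that the pattern selects the files you expect.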
databox Data Box File Acls Preservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-file-acls-preservation.md
Previously updated : 01/21/2022 Last updated : 04/11/2022
Azure Data Box lets you preserve access control lists (ACLs), timestamps, and file attributes when sending data to Azure. This article describes the metadata that you can transfer when copying data to Data Box via Server Message Block (SMB) to upload it to Azure Files.
-Specific steps are provided to copy metadata with Windows and Linux data copy tools. Metadata isn't preserved when transferring data to blob storage.
+## Transferred metadata
-In this article, the ACLs, timestamps, and file attributes that are transferred are referred to collectively as *metadata*.
+ACLs, timestamps, and file attributes are the metadata that's transferred when data from Data Box is uploaded to Azure Files. In this article, they're referred to collectively as *metadata*.
-## Transferred metadata
+The metadata can be copied with Windows and Linux data copy tools. Metadata isn't preserved when transferring data to blob storage.
-The following metadata is transferred when data from the Data Box is uploaded to Azure Files.
+The following sections describe in detail how timestamps, file attributes, and ACLs are transferred when data from Data Box is uploaded to Azure Files.
-#### Timestamps
+## Timestamps
The following timestamps are transferred: - CreationTime
The following timestamps are transferred:
The following timestamp isn't transferred: - LastAccessTime
-
-#### File attributes
+
+## File attributes
File attributes on both files and directories are transferred unless otherwise noted.
The following file attributes aren't transferred:
Read-only attributes on directories aren't transferred.
-#### ACLs
+## ACLs
<!--ACLs DEFINITION
Transfer of ACLs is enabled by default. You might want to disable this setting i
> [!NOTE] > Files with ACLs containing conditional access control entry (ACE) strings are not copied. This is a known issue. To work around this, copy these files to the Azure Files share manually by mounting the share and then using a copy tool that supports copying ACLs.
-**ACLs transfer over SMB**
+### ACLs transfer over SMB
During an [SMB file transfer](./data-box-deploy-copy-data.md), the following ACLs are transferred: -- Discretionary ACLs (DACLs) and system ACLs (SACLs) for directories and files that you copy to your Data Box
+- Discretionary ACLs (DACLs) and system ACLs (SACLs) for directories and files that you copy to your Data Box.
- If you use a Linux client, only Windows NT ACLs are transferred.<!--Kyle asked: What are Windows NT ACLs.-->
-ACLs aren't transferred when you [copy data over NFS](./data-box-deploy-copy-data-via-nfs.md) or [use the data copy service](data-box-deploy-copy-data-via-copy-service.md). The data copy service reads data directly from your shares and can't read ACLs.
+### ACLs transfer over Data Copy Service
+
+During a [data copy service file transfer](data-box-deploy-copy-data-via-copy-service.md), the following ACLs are transferred:
+
+- Discretionary ACLs (DACLs) and system ACLs (SACLs) for directories and files that you copy to your Data Box.
+
+To copy SACLs from your files, you must provide credentials for a user with **SeBackupPrivilege**. Users in the Administrators or Backup Operators group have this privilege by default.
+
+If you do not have **SeBackupPrivilege**:
+- You will not be able to copy SACLs for Azure Files copy service jobs.
+- You may experience access issues and receive this error in the error log: *Could not read SACLs from share due to insufficient privileges*.
+
+ For more information, learn more about [SeBackupPrivilege](/windows/win32/secauthz/privilege-constants).
+
+### ACLs transfer over NFS
+
+ACLs aren't transferred when you copy data over [NFS](data-box-deploy-copy-data-via-nfs.md).
+
-**Default ACLs transfer**
+### Default ACLs transfer
Even if your data copy tool doesn't copy ACLs, the default ACLs on directories and files are transferred to Azure Files when you use a Windows client. The default ACLs aren't transferred when you use a Linux client.
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
There are four triggers for an image scan:
- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image. -- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for container Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
+- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
- **Continuous scan**- This trigger has two modes:
- - A Continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
+ - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
- (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
Defender for Cloud filters, and classifies findings from the scanner. When an im
### View vulnerabilities for running images
-The recommendation **Running container images should have vulnerability findings resolved** shows vulnerabilities for running images by using the scan results from ACR registeries and information on running images from the Defender security profile/extension. Images that are deployed from a non ACR registry, will appear under the Not applicable tab.
+The recommendation **Running container images should have vulnerability findings resolved** shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender security profile/extension. Images that are deployed from a non-ACR registry will appear under the **Not applicable** tab.
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable" lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png":::
The following describes the components necessary in order to receive the full pr
## FAQ - Defender for Containers - [What are the options to enable the new plan at scale?](#what-are-the-options-to-enable-the-new-plan-at-scale)
+- [Does Microsoft Defender for Containers support AKS clusters with virtual machines scale set (VMSS)?](#does-microsoft-defender-for-containers-support-aks-clusters-with-virtual-machines-scale-set-vmss)
+- [Does Microsoft Defender for Containers support AKS without scale set (default)?](#does-microsoft-defender-for-containers-support-aks-without-scale-set-default)
+- [Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?](#do-i-need-to-install-the-log-analytics-vm-extension-on-my-aks-nodes-for-security-protection)
### What are the options to enable the new plan at scale? WeΓÇÖve rolled out a new policy in Azure Policy, **Configure Microsoft Defender for Containers to be enabled**, to make it easier to enable the new plan at scale.
WeΓÇÖve rolled out a new policy in Azure Policy, **Configure Microsoft Defender
### Does Microsoft Defender for Containers support AKS clusters with virtual machines scale set (VMSS)? Yes.
-### Does Microsoft Defender for Containers support AKS without scale set (default) ?
+### Does Microsoft Defender for Containers support AKS without scale set (default)?
No. Only Azure Kubernetes Service (AKS) clusters that use virtual machine scale sets for the nodes are supported. ### Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Defender for Cloud is offered in two modes:
## FAQ - Pricing and billing -- [Microsoft Defender for Cloud's enhanced security features](#microsoft-defender-for-clouds-enhanced-security-features)
- - [What are the benefits of enabling enhanced security features?](#what-are-the-benefits-of-enabling-enhanced-security-features)
- - [FAQ - Pricing and billing](#faqpricing-and-billing)
- - [How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud?](#how-can-i-track-who-in-my-organization-enabled-a-microsoft-defender-plan-in-defender-for-cloud)
- - [What are the plans offered by Defender for Cloud?](#what-are-the-plans-offered-by-defender-for-cloud)
- - [How do I enable Defender for Cloud's enhanced security for my subscription?](#how-do-i-enable-defender-for-clouds-enhanced-security-for-my-subscription)
- - [Can I enable Microsoft Defender for Servers on a subset of servers in my subscription?](#can-i-enable-microsoft-defender-for-servers-on-a-subset-of-servers-in-my-subscription)
- - [If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)
- - [My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?](#my-subscription-has-microsoft-defender-for-servers-enabled-do-i-pay-for-not-running-servers)
- - [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed)
- - [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice)
- - [If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)
- - [Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine)
- - [What data types are included in the 500-MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
- - [Next steps](#next-steps)
-
+- [How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud?](#how-can-i-track-who-in-my-organization-enabled-a-microsoft-defender-plan-in-defender-for-cloud)
+- [What are the plans offered by Defender for Cloud?](#what-are-the-plans-offered-by-defender-for-cloud)
+- [How do I enable Defender for Cloud's enhanced security for my subscription?](#how-do-i-enable-defender-for-clouds-enhanced-security-for-my-subscription)
+- [Can I enable Microsoft Defender for Servers on a subset of servers in my subscription?](#can-i-enable-microsoft-defender-for-servers-on-a-subset-of-servers-in-my-subscription)
+- [If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)
+- [My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?](#my-subscription-has-microsoft-defender-for-servers-enabled-do-i-pay-for-not-running-servers)
+- [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed)
+- [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice)
+- [If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)
+- [Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine)
+- [What data types are included in the 500-MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
### How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud? Azure Subscriptions may have multiple administrators with permissions to change the pricing settings. To find out which user made a change, use the Azure Activity Log.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
The discount will be effective starting from the approval date, and won't take p
## Does Microsoft Defender for Servers support the new unified Microsoft Defender for Endpoint agent for Windows Server 2012 R2 and 2016?
-In October 2021, we released [a new Microsoft Defender for Endpoint solution stack](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292) to public preview for Windows Server 2012 R2 and 2016. The new solution stack does not use or require installation of the Microsoft Monitoring Agent (MMA).
-
-The new version of Microsoft Defender for Endpoint is deployed by Defender for Servers Plan 1 for Windows Server 2012 R2 and 2016.
+Defender for Servers Plan 1 deploys [the new Microsoft Defender for Endpoint solution stack](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292) for Windows Server 2012 R2 and 2016, which does not use or require installation of the Microsoft Monitoring Agent (MMA).
### How do I switch from a third-party EDR tool? Full instructions for switching from a non-Microsoft endpoint solution are available in the Microsoft Defender for Endpoint documentation: [Migration overview](/windows/security/threat-protection/microsoft-defender-atp/switch-to-microsoft-defender-migration).
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Last updated 02/22/2022
-# Migrate databases with Azure SQL Migration extension for Azure Data Studio (Preview)
+# Migrate databases with Azure SQL Migration extension for Azure Data Studio
The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to assess, get Azure recommendations for, and migrate your SQL Server databases to Azure.
+The key benefits of using the Azure SQL Migration extension for Azure Data Studio are:
+1. Assess your SQL Server databases for Azure readiness, or identify any migration blockers, before migrating them to Azure. You can assess SQL Server databases running on both Windows and Linux operating systems using the Azure SQL Migration extension.
+1. Get right-sized Azure recommendations based on performance data collected from your source SQL Server databases. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](ads-sku-recommend.md).
+1. Perform online (minimal downtime) and offline database migrations using an easy-to-use wizard. For a step-by-step example, see [Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio with DMS](tutorial-sql-server-managed-instance-online-ads.md).
+1. Monitor all migrations started in Azure Data Studio from the Azure portal. To learn more, see [Monitor database migration progress from the Azure portal](#monitor-database-migration-progress-from-the-azure-portal).
+1. Assess and migrate databases at scale using automation with Azure PowerShell and Azure CLI. To learn more, see [Migrate databases at scale using automation](migration-dms-powershell-cli.md).
+ ## Architecture of Azure SQL Migration extension for Azure Data Studio Azure Database Migration Service (DMS) is one of the core components in the overall architecture. DMS provides a reliable migration orchestrator to enable database migrations to Azure SQL.
Azure Database Migration Service prerequisites that are common across all suppor
- We recommend up to 10 concurrent database migrations per self-hosted integration runtime on a single machine. To increase the number of concurrent database migrations, scale out self-hosted runtime up to four nodes or create separate self-hosted integration runtime on different machines. - Configure self-hosted integration runtime to auto-update to automatically apply any new features, bug fixes, and enhancements that are released. To learn more, see [Self-hosted Integration Runtime Auto-update](../data-factory/self-hosted-integration-runtime-auto-update.md).
+## Monitor database migration progress from the Azure portal
+When you migrate a database by using the Azure SQL Migration extension for Azure Data Studio, the migration is orchestrated by the Azure Database Migration Service instance that you selected in the wizard. To monitor database migrations from the Azure portal:
+- Open the [Azure portal](https://portal.azure.com/).
+- Search for your Azure Database Migration Service instance by its resource name.
+ :::image type="content" source="media/migration-using-azure-data-studio/search-dms-portal.png" alt-text="Search Azure Database Migration Service resource in portal":::
+- Select the **Monitor migrations** tile in the **Overview** page to view the details of your database migrations.
+ :::image type="content" source="media/migration-using-azure-data-studio/dms-ads-monitor-portal.png" alt-text="Monitor migrations in Azure portal":::
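The portal steps above can also be scripted. As a minimal sketch, assuming the service's resource type is `Microsoft.DataMigration/sqlMigrationServices` (verify this against your deployment), you could locate your Database Migration Service resources with Az PowerShell:

```powershell
# Sketch: list Database Migration Service resources in the current subscription.
# The resource type string is an assumption; confirm it for your environment.
Get-AzResource -ResourceType "Microsoft.DataMigration/sqlMigrationServices" |
    Select-Object Name, ResourceGroupName, Location
```

From the returned names you can open the matching resource in the portal and select the **Monitor migrations** tile as described above.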
++ ## Known issues and limitations - Overwriting existing databases using DMS in your target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine isn't supported. - Configuring high availability and disaster recovery on your target to match source topology is not supported by DMS.
Azure Database Migration Service prerequisites that are common across all suppor
- When migrating to SQL Server on Azure Virtual Machines, SQL Server 2014 and below as target versions are not supported currently. - Migrating to Azure SQL Database isn't supported. - Azure storage accounts secured by specific firewall rules or configured with a private endpoint are not supported for migrations.-- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL Migration extension in Azure Data Studio and can be reused for further database migrations.
-> [!IMPORTANT]
-> **Known issue when migrating multiple databases to SQL Server on Azure VM:** Concurrently migrating multiple databases to the same SQL Server on Azure VM results in migration failures for most databases. Ensure you only migrate a single database to a SQL Server on Azure VM at any point in time.
+- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL Migration extension in Azure Data Studio and can be reused for further database migrations.
## Pricing - Azure Database Migration Service is free to use with the Azure SQL Migration extension in Azure Data Studio. You can migrate multiple SQL Server databases using the Azure Database Migration Service at no charge for using the service or the Azure SQL Migration extension.
Azure Database Migration Service prerequisites that are common across all suppor
- Provide your own machine or on-premises server to install Azure Data Studio. - A self-hosted integration runtime is needed to access database backups from your on-premises network share.
+## Regional availability
+For the list of Azure regions that support database migrations using the Azure SQL Migration extension for Azure Data Studio (powered by Azure DMS), see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration).
+ ## Next steps - For an overview and installation of the Azure SQL Migration extension, see [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Last updated 10/05/2021
-# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using Azure Data Studio with DMS (Preview)
+# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using Azure Data Studio with DMS
You can use the Azure SQL Migration extension in Azure Data Studio to migrate the database(s) from a SQL Server instance to Azure SQL Managed Instance. For methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
After all database backups are restored on Azure SQL Managed Instance, an automa
* For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](../azure-sql/managed-instance/restore-sample-database-quickstart.md). * For information about SQL Managed Instance, see [What is SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md). * For information about connecting apps to SQL Managed Instance, see [Connect applications](../azure-sql/managed-instance/connect-application-instance.md).+
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Last updated 10/05/2021
-# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio with DMS (preview)
+# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio with DMS
Use the Azure SQL Migration extension in Azure Data Studio to migrate database(s) from a SQL Server instance to an [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Last updated 10/05/2021
-# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine offline using Azure Data Studio with DMS (Preview)
+# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine offline using Azure Data Studio with DMS
Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Last updated 10/05/2021
-# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio with DMS (Preview)
+# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio with DMS
Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/overview.md
# What is Azure Event Grid on Azure IoT Edge?+
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
+ Event Grid on IoT Edge brings the power and flexibility of Azure Event Grid to the edge. Create topics, publish events, and subscribe to multiple destinations whether they're modules on the same device, other edge devices, or services in the cloud. As in the cloud, the Event Grid on IoT Edge module handles routing, filtering, and reliable delivery of events at scale. Filter events to ensure that only relevant events are sent to different event handlers using advanced string, numerical, and boolean filters. Retry logic makes sure that the event reaches the target destination even if it's not available at the time of publish. It allows you to use Event Grid on IoT Edge as a powerful store and forward mechanism.
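The string, numerical, and boolean filters mentioned above can be sketched as an event subscription payload. This is an illustration only, assuming the cloud Event Grid advanced-filter schema also applies on IoT Edge; the endpoint URL and event keys are hypothetical:

```json
{
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": { "endpointUrl": "https://handler-module:4438" }
    },
    "filter": {
      "advancedFilters": [
        { "operatorType": "StringBeginsWith", "key": "subject", "values": ["/sensors/"] },
        { "operatorType": "NumberGreaterThan", "key": "data.temperature", "value": 30 },
        { "operatorType": "BoolEquals", "key": "data.isCritical", "value": true }
      ]
    }
  }
}
```

With a subscription like this sketch, only events whose subject begins with `/sensors/`, whose temperature exceeds 30, and which are flagged critical would be delivered to the handler.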
Report any issues with using Event Grid on IoT Edge at [https://github.com/Azure
* [Publish, subscribe to events in cloud](pub-sub-events-webhook-cloud.md) * [Forward events to Event Grid cloud](forward-events-event-grid-cloud.md) * [Forward events to IoTHub](forward-events-iothub.md)
-* [React to Blob Storage events locally](react-blob-storage-events-locally.md)
+* [React to Blob Storage events locally](react-blob-storage-events-locally.md)
event-grid Handler Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-functions.md
We recommend that you use the first approach (Event Grid trigger) as it has the
- Event Grid automatically validates Event Grid triggers. With generic HTTP triggers, you must implement the [validation response](webhook-event-delivery.md) yourself. - Event Grid automatically adjusts the rate at which events are delivered to a function triggered by an Event Grid event based on the perceived rate at which the function can process events. This rate match feature averts delivery errors that stem from the inability of a function to process events as the function's event processing rate can vary over time. To improve efficiency at high throughput, enable batching on the event subscription. For more information, see [Enable batching](#enable-batching).
- > [!NOTE]
- > Currently, you can't use an Event Grid trigger for a function app when the event is delivered in the **CloudEvents** schema. Instead, use an HTTP trigger.
- ## Tutorials |Title |Description |
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
Title: 'Azure ExpressRoute: Configure ExpressRoute Direct'
description: Learn how to use Azure PowerShell to configure Azure ExpressRoute Direct to connect directly to the Microsoft global network. - Last updated 12/14/2020
Once enrolled, verify that the **Microsoft.Network** resource provider is regist
Reference the recently created ExpressRoute Direct resource, input a customer name to write the LOA to and (optionally) define a file location to store the document. If a file path is not referenced, the document will download to the current directory.
+### Azure PowerShell
+ ```powershell New-AzExpressRoutePortLOA -ExpressRoutePort $ERDirect -CustomerName TestCustomerName -Destination "C:\Users\SampleUser\Downloads" ```
Reference the recently created ExpressRoute Direct resource, input a customer na
Written Letter of Authorization To: C:\Users\SampleUser\Downloads\LOA.pdf ```
+### Cloud Shell
+
+1. Replace `USERNAME` with the username displayed in the prompt, then run the command to generate the Letter of Authorization. Use the exact path defined in the command.
+
+ ```azurepowershell-interactive
+ New-AzExpressRoutePortLOA -ExpressRoutePort $ERDirect -CustomerName TestCustomerName -Destination /home/USERNAME/loa.pdf
+ ```
+
+1. Select the **Upload/Download** button, and then select **Download**. Choose the `loa.pdf` file, and then select **Download**.
+
+ :::image type="content" source="./media/expressroute-howto-erdirect/download.png" alt-text="Screenshot of download button from Azure Cloud Shell.":::
+ ## <a name="state"></a>Change Admin State of links This process should be used to conduct a Layer 1 test, ensuring that each cross-connection is properly patched into each router for primary and secondary.
expressroute Expressroute Howto Set Global Reach Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-portal.md
Enable connectivity between your on-premises networks. There are separate sets o
1. Select **Save** to complete the Global Reach configuration. When the operation completes, you'll have connectivity between your two on-premises networks through both ExpressRoute circuits.
+ > [!NOTE]
+ > The Global Reach configuration is bidirectional. Once you create the connection from one circuit, the other circuit will also have the configuration.
+ >
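Because the configuration is bidirectional, you can confirm it from either circuit. A minimal Az PowerShell sketch, where the circuit and resource group names are placeholders for your own resources:

```powershell
# Sketch: inspect the Global Reach circuit connections configured on one circuit.
# "Circuit1" and "MyResourceGroup" are placeholder names.
$circuit = Get-AzExpressRouteCircuit -Name "Circuit1" -ResourceGroupName "MyResourceGroup"
Get-AzExpressRouteCircuitConnectionConfig -ExpressRouteCircuit $circuit
```

The output should list the circuit connection created by the Global Reach configuration, even if you configured it from the peer circuit.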
+ ### ExpressRoute circuits in different Azure subscriptions If the two circuits aren't in the same Azure subscription, you'll need authorization. In the following configuration, authorization is generated from circuit 2's subscription. The authorization key is then passed to circuit 1.
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
If you are remote and do not have fiber connectivity or you want to explore othe
| **[X2nsat Inc.](https://www.x2nsat.com/expressroute/)** |Coresite |Silicon Valley, Silicon Valley 2| | **Zain** |Equinix |London| | **[Zertia](https://www.zertia.es)**| Level 3 | Madrid |
-| **[Zirro](https://zirro.com/services/)**| Cologix, Equinix | Montreal, Toronto |
+| **Zirro**| Cologix, Equinix | Montreal, Toronto |
## Connectivity through datacenter providers
expressroute Expressroute Optimize Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-optimize-routing.md
To optimize routing for both office users, you need to know which prefix is from
![ExpressRoute Case 1 solution - use BGP Communities](./media/expressroute-optimize-routing/expressroute-case1-solution.png) > [!NOTE]
-> The same technique, using Local Preference, can be applied to routing from customer to Azure Virtual Network. We don't tag BGP Community value to the prefixes advertised from Azure to your network. However, since you know which of your Virtual Network deployment is close to which of your office, you can configure your routers accordingly to prefer one ExpressRoute circuit to another.
+> The same technique, using Local Preference, can be applied to routing from customer to Azure virtual network when using private peering. Microsoft doesn't tag BGP community values to the prefixes advertised from Azure to your network. However, since you know which of your virtual network deployment is close to which of your office, you can configure your routers accordingly to prefer one ExpressRoute circuit over another.
> >
The solution is simple. Since you know where the VNets and the circuits are, you
> [!NOTE] > You can also influence routing from VNet to your on-premises network, if you have multiple ExpressRoute circuits, by configuring the weight of a connection instead of applying AS PATH prepending, a technique described in the second scenario above. For each prefix, we will always look at the connection weight before the AS Path length when deciding how to send traffic. >
->
+>
expressroute Expressroute Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-routing.md
A Private AS Number is allowed with public peering.
## Dynamic route exchange Routing exchange will be over eBGP protocol. EBGP sessions are established between the MSEEs and your routers. Authentication of BGP sessions is not a requirement. If required, an MD5 hash can be configured. See the [Configure routing](how-to-routefilter-portal.md) and [Circuit provisioning workflows and circuit states](expressroute-workflows.md) for information about configuring BGP sessions.
-## Autonomous System numbers
+## Autonomous System numbers (ASN)
Microsoft uses AS 12076 for Azure public, Azure private, and Microsoft peering. We have reserved ASNs from 65515 to 65520 for internal use. Both 16-bit and 32-bit AS numbers are supported. There are no requirements around data transfer symmetry. The forward and return paths may traverse different router pairs. Identical routes must be advertised from either side across multiple circuit pairs belonging to you. Route metrics are not required to be identical.
expressroute Using Expressroute For Microsoft365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/using-expressroute-for-microsoft365.md
documentationcenter: na
- Previously updated : 4/29/2021 Last updated : 4/12/2022 # Using ExpressRoute for routing Microsoft 365 traffic
-An ExpressRoute circuit provides private connectivity to Microsoft backbone network.
-* It offers *Private peering* to connect to private endpoints of your IaaS deployment in Azure regions
-* Also, it offers *Microsoft peering* to connect to public endpoints of IaaS, PaaS, and SaaS services in Microsoft network.
-
-For more information about ExpressRoute, see the [Introduction to ExpressRoute][ExR-Intro] article.
+An ExpressRoute circuit provides private connectivity to the Microsoft backbone network.
+* It offers *Private peering* to connect to private endpoints of your IaaS deployment in Azure regions.
+* Also, it offers *Microsoft peering* to connect to public endpoints of IaaS, PaaS, and SaaS services in the Microsoft network.
-Often there's a confusion whether ExpressRoute can be used or not for routing Microsoft 365 SaaS traffic.
+For more information about ExpressRoute, see the [Introduction to ExpressRoute][ExR-Intro] article.
-* One side argument: ExpressRoute does offer Microsoft peering, using which you can reach most of the public endpoints in Microsoft network.
-In fact, using a *Route Filter* you can select Microsoft 365 service prefixes that need to be advertised via Microsoft peering to your on-premises network.
-These routes advertisement enables routing Microsoft 365 service traffic over the ExpressRoute circuit.
-* The counter argument: Microsoft 365 is a distributed service. It is designed to enable customers all over the world to connect to the service using the Internet.
-So, it's recommended not to use ExpressRoute for Microsoft 365.
+There's often confusion about whether or not ExpressRoute can be used for routing Microsoft 365 SaaS traffic. ExpressRoute offers Microsoft peering, which allows you to access most public endpoints in the Microsoft network. With the use of a *Route Filter*, you can select Microsoft 365 service prefixes that you want to advertise over Microsoft peering to your on-premises network. These routes enable routing Microsoft 365 service traffic over the ExpressRoute circuit.
-The goals of this article are:
-* to provide technical reasoning for the arguments, and
-* objectively discuss when to use ExpressRoute for routing Microsoft 365 traffic and when not to use it.
+In this article, you'll learn about when it's necessary to use ExpressRoute to route Microsoft 365 traffic.
## Network requirements of Microsoft 365 traffic
-Microsoft 365 service often includes real-time traffic such as voice & video calls, online meetings, and real-time collaboration. This real-time traffic has stringent network performance requirements in terms of latency and jitter. Within certain limits of network latency, jitter can be effectively handled using buffer at the client device. Network latency is a function of physical distance traffic need to travel, link bandwidth, and network processing latency.
+
+Microsoft 365 services often include real-time traffic such as voice & video calls, online meetings, and real-time collaboration. This real-time traffic has stringent network performance requirements in terms of latency and jitter. Within certain limits of network latency, jitter can be effectively handled by using a buffer at the client device. Network latency is a function of the physical distance traffic needs to travel, the link bandwidth, and the network processing latency.
## Network optimization features of Microsoft 365
-Microsoft strives to optimize network performance of all the cloud applications both in terms of architecture and features. To begin with, Microsoft owns one of the largest global networks, which is optimized to achieve the core objective of offering best network performance. Microsoft network is software defined, and it's a "Cold Potato" network. "Cold Potato" network in the sense, it attracts and egress traffic as close as possible to client-device/customer-network. Besides, Microsoft network is highly redundant and highly available. For more information about architecture optimization, see [How Microsoft builds its fast and reliable global network][MGN].
+Microsoft strives to optimize network performance of all the cloud applications, both in terms of architecture and features. To start, Microsoft owns one of the largest global networks, which is optimized to achieve the core objective of offering the best network performance. Microsoft's network is software defined, and uses a method called "cold potato" routing. In a "cold potato" network, traffic ingresses and egresses as close as possible to the client device or customer network. Microsoft's network is designed with redundancy and is highly available. For more information about architecture optimization, see [How Microsoft builds its fast and reliable global network][MGN].
To address the stringent network latency requirements, Microsoft 365 shortens route length by:
-* dynamically routing the end-user connection to the nearest Microsoft 365 entry point, and
-* from the entry point efficiently routing them within the Microsoft global network to the nearest (and authorized) Microsoft 365 data center.
+* Dynamically routing the end-user connection to the nearest Microsoft 365 entry point.
+* From the entry point, traffic is efficiently routed within Microsoft's global network to the nearest Microsoft 365 data center.
-The Microsoft 365 entry points are serviced by Azure Front Door (AFD). AFD is a widely distributed service present at Microsoft global edge network and it helps to create fast, secure, and highly scalable SaaS applications. To further understand how AFD accelerates web application performance, see [What is Azure Front Door?][AFD]. While choosing the nearest Microsoft 365 data center, Microsoft does take into consideration data sovereignty regulations within the geo-political region.
+Microsoft 365 entry points are serviced by Azure Front Door. Azure Front Door is a widely distributed service present at the Microsoft global edge network that helps create fast, secure, and highly scalable SaaS applications. For more information about how Azure Front Door accelerates web application performance, see [What is Azure Front Door?][AFD]. When choosing the nearest Microsoft 365 data center, Microsoft takes into consideration data sovereignty regulations within the geo-political region.
## What is geo-pinning connections?
-Between a client-server when you force the traffic to flow through certain network device(s) located in a geographical location, then it's referred to as geo-pinning the network connections. Traditional network architecture, with the underlying design principle that the clients-servers are statically located, commonly geo-pins the connections.
-For example, when you force your enterprise Internet connections traverse through your corporate network, and egress from a central location (typically via a set of proxy-servers or firewalls), you're geo-pinning the Internet connections.
-
-Similarly, in SaaS application architecture if you force route the traffic through an intermediate datacenter (for example, cloud security) in a region or via one or more intermediate network devices (for example, ExpressRoute) in a specific location then you're geo-pinning the SaaS connections.
+When you force client-server traffic to pass through certain network device(s) located in a geographical location, that's referred to as geo-pinning the network connection. In a traditional network architecture, the underlying design principle is that clients and servers are statically located, which commonly geo-pins connections.
-## When not to use ExpressRoute for Microsoft 365?
+For example, when you force your enterprise Internet connections to traverse your corporate network and egress from a central location, typically via a set of proxy servers or firewalls, you're geo-pinning the Internet connections. Another example of geo-pinning is a SaaS application architecture in which you force traffic through an intermediate datacenter in a region, or through one or more intermediate network devices.
-Because of its ability to dynamically shorten the route length and dynamically choose the closest server datacenter depending on the location of the clients, Microsoft 365 is said to be designed for the Internet.
-Besides, certain Microsoft 365 traffic is routed only through the Internet.
-When you have your SaaS clients widely distributed across a region or globally, and if you geo-pin the connections to a particular location then you are forcing the clients further away from the geo-pined location to experience higher network latency.
-Higher network latency results in suboptimal network performance and poor application performance.
+## When is ExpressRoute not appropriate for Microsoft 365?
-Therefore, in scenarios where you have widely distributed SaaS clients or clients that are highly mobile, you don't want to geo-pin connections by any means including forcing the traffic through an ExpressRoute circuit in a specific peering location.
+Microsoft 365 has the ability to dynamically shorten the route length and dynamically choose the closest server datacenter depending on the location of the clients. Microsoft 365 is said to be designed for the Internet.
+Some Microsoft 365 traffic can only be routed through the Internet.
+When you have your SaaS clients widely distributed across a region or globally, and you're geo-pinning the connections to a particular location, you're forcing clients farther away from the geo-pinned location to experience higher network latency. The higher network latency can result in suboptimal network performance and poor application performance.
+Therefore, in scenarios where you have widely distributed SaaS clients or clients that are mostly mobile, you don't want to geo-pin connections by any means, including forcing the traffic through an ExpressRoute circuit in a specific peering location.
## When to use ExpressRoute for Microsoft 365? The following are some of the reasons why you may want to use ExpressRoute for routing Microsoft 365 traffic:
-* Your SaaS clients are concentrated in a geo-location and the most optimal way to connect to Microsoft global network is via ExpressRoute circuits
-* Your SaaS clients are concentrated in multiple global locations and each location has its own ExpressRoute circuits that provide optimal connectivity to Microsoft global network
-* You're required by law to route cloud-bound traffic via private connections
-* You're required to route all the SaaS traffic to a geo-pinned centralized location (be it a private or a public datacenter) and the optimal way to connect the centralized location to the Microsoft global network is via ExpressRoute
-* For some of your static SaaS clients only ExpressRoute provides optimal connectivity, while for the other clients you use Internet
-While you use ExpressRoute, you can apply the route filter associated with Microsoft peering of ExpressRoute to route only a subset of Microsoft 365 services and/or Azure PaaS services over the ExpressRoute circuit. For more information, see [Tutorial: Configure route filters for Microsoft peering][ExRRF].
+* Your SaaS clients are concentrated in a geo-location and the most optimal way to connect to Microsoft global network is using ExpressRoute.
+* Your SaaS clients are concentrated in multiple global locations and each location has its own ExpressRoute connection that provides optimal connectivity to Microsoft's global network.
+* You're required by law to route cloud-bound traffic with a private connection.
+* You're required to route all the SaaS traffic to a geo-pinned centralized location whether it be a private or a public datacenter. The only optimal way to connect the centralized location to the Microsoft global network is by using ExpressRoute.
+* For some of your static SaaS clients, only ExpressRoute can provide optimal connectivity, while other clients can use the Internet.
+
+When you're using ExpressRoute, you can apply a route filter to Microsoft peering to only advertise a subset of Microsoft 365 services and/or Azure PaaS services prefixes over the ExpressRoute circuit. For more information, see [Tutorial: Configure route filters for Microsoft peering][ExRRF].
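A hedged Az PowerShell sketch of creating such a route filter follows. The resource names are placeholders, and the BGP community value shown is assumed to correspond to one Microsoft 365 service; verify current community values with `Get-AzBgpServiceCommunity` before use:

```powershell
# Sketch: create a route filter rule that allows only selected service prefixes,
# then create the route filter with that rule attached.
# "12076:5010", "M365RouteFilter", and "MyResourceGroup" are placeholder values.
$rule = New-AzRouteFilterRuleConfig -Name "Allow-M365" -Access Allow `
    -RouteFilterRuleType Community -CommunityList "12076:5010"

New-AzRouteFilter -Name "M365RouteFilter" -ResourceGroupName "MyResourceGroup" `
    -Location "westus" -Rule $rule
```

Once the route filter exists, you attach it to the Microsoft peering of your circuit so that only the allowed service prefixes are advertised, as described in the linked route filter tutorial.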
## Next steps
governance Extension For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/extension-for-vscode.md
Title: Azure Policy extension for Visual Studio Code description: Learn how to use the Azure Policy extension for Visual Studio Code to look up Azure Resource Manager aliases. Previously updated : 09/01/2021 Last updated : 04/12/2022 + # Use Azure Policy extension for Visual Studio Code > Applies to Azure Policy extension version **0.1.2** and newer
-Learn how to use the Azure Policy extension for Visual Studio Code to look up
+Learn how to use the Azure Policy extension for Visual Studio Code (VS Code) to look up
[aliases](../concepts/definition-structure.md#aliases), review resources and policy definitions, export objects, and evaluate policy definitions. First, we'll describe how to install the Azure Policy extension in Visual Studio Code. Then we'll walk through how to look up aliases.
-The Azure Policy extension for Visual Studio Code can be installed on Linux, Mac, and Windows.
+The Azure Policy extension for Visual Studio Code can be installed on Linux, macOS, and Windows.
## Prerequisites
The following items are required for completing the steps in this article:
## Install and configure the Azure Policy extension
-After you meet the prerequisites, you can install Azure Policy extension for Visual Studio Code by
+After you meet the prerequisites, you can install the [Azure Policy extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=AzurePolicy.azurepolicyextension) by
following these steps: 1. Open Visual Studio Code.
following these steps:
For a national cloud user, follow these steps to set the Azure environment first:
-1. Select **File\Preferences\Settings**.
+1. Select **File** > **Preferences** > **Settings**.
1. Search on the following string: _Azure: Cloud_ 1. Select the national cloud from the list:
definitions, follow these steps:
1. Start the subscription command from the Command Palette or the window footer.
- - Command Palette:
+ - Command Palette
From the menu bar, go to **View** > **Command Palette**, and enter **Azure: Select Subscriptions**.
definitions, follow these steps:
### Search for and view resources The Azure Policy extension lists resources in the selected subscriptions by Resource Provider and by
-resource group in the **Resources** pane. The treeview includes the following groupings of resources
+resource group in the **Resources** pane. The tree view includes the following groupings of resources
within the selected subscription or at the subscription level: - **Resource Providers**
resource with the following steps:
From the Azure Policy extension, hover over the **Resources** panel and select the ellipsis, then select **Search Resources**.
- - Command Palette:
+ - Command Palette
From the menu bar, go to **View** > **Command Palette**, and enter **Azure Policy: Search Resources**.
resource with the following steps:
### Discover aliases for resource properties When a resource is selected, whether through the search interface or by selecting it in the
-treeview, the Azure Policy extension opens the JSON file representing that resource and all its
+tree view, the Azure Policy extension opens the JavaScript Object Notation (JSON) file representing that resource and all its
Azure Resource Manager property values. Once a resource is open, hovering over the Resource Manager property name or value displays the
matching aliases.
### Search for and view policy definitions and assignments
-The Azure Policy extension lists policy types and policy assignments as a treeview for the
+The Azure Policy extension lists policy types and policy assignments as a tree view for the
subscriptions selected to be displayed in the **Policies** pane. Customers with hundreds or thousands of policy definitions or assignments in a single subscription may prefer a searchable way to locate their policy definitions or assignments. The Azure Policy extension makes it possible to
search for a specific policy or assignment with the following steps:
From the Azure Policy extension, hover over the **Policies** panel and select the ellipsis, then select **Search Policies**.
- - Command Palette:
+ - Command Palette
From the menu bar, go to **View** > **Command Palette**, and enter **Azure Policy: Search Policies**.
search for a specific policy or assignment with the following steps:
policy definition or policy assignment. When selecting a policy or assignment, whether through the search interface or by selecting it in
-the treeview, the Azure Policy extension opens the JSON that represents the policy or assignment and
+the tree view, the Azure Policy extension opens the JSON that represents the policy or assignment and
all its Resource Manager property values. The extension can validate the opened Azure Policy JSON schema.
example:
The VS Code extension can create a policy definition from an existing [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) GateKeeper v3
-[constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates). The YAML
+[constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates). The YAML Ain't Markup Language (YAML)
file must be open in VS Code for the Command Palette to be an option.

1. Open a valid OPA GateKeeper v3 constraint template YAML file.
From the menu bar, go to **View** > **Command Palette**, and then enter **Azure:
## Next steps

- Review examples at [Azure Policy samples](../samples/index.md).
-- Review the [Azure Policy definition structure](../concepts/definition-structure.md).
-- Review [Understanding policy effects](../concepts/effects.md).
+- Study the [Azure Policy definition structure](../concepts/definition-structure.md).
+- Read [Understanding policy effects](../concepts/effects.md).
- Understand how to [programmatically create policy definitions](programmatically-create.md).
- Learn how to [remediate non-compliant resources](remediate-resources.md).
-- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
+- Grasp what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
hdinsight Connect Install Beeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/connect-install-beeline.md
Title: Connect to or install Apache Beeline - Azure HDInsight
+ Title: Connect to Hive using Beeline or install Beeline locally to connect from your local machine - Azure HDInsight
description: Learn how to connect to the Apache Beeline client to run Hive queries with Hadoop on HDInsight. Beeline is a utility for working with HiveServer2 over JDBC. Last updated 04/07/2021
-# Connect to Apache Beeline on HDInsight or install it locally
+# Connect to Hive using Beeline, or install Beeline locally to connect from your local machine
-[Apache Beeline](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Beeline–NewCommandLineShell) is a Hive client that is included on the head nodes of your HDInsight cluster. This article describes how to connect to the Beeline client installed on your HDInsight cluster across different types of connections. It also discusses how to [Install the Beeline client locally](#install-beeline-client).
+[Apache Beeline](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Beeline–NewCommandLineShell) is a Hive client that is included on the head nodes of your HDInsight cluster. This article describes how to connect to Hive using the Beeline client installed on your HDInsight cluster across different types of connections. It also discusses how to [Install the Beeline client locally](#install-beeline-client).
## Types of connections
beeline -u 'jdbc:hive2://clustername-int.azurehdinsight.net:443/;ssl=true;transp
Private endpoints point to a basic load balancer, which can only be accessed from the VNETs peered in the same region. See [constraints on global VNet peering and load balancers](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) for more information. You can use the `curl` command with the `-v` option to troubleshoot any connectivity problems with public or private endpoints before using Beeline.
-#### From cluster head or inside Azure Virtual Network with Apache Spark
+#### From cluster head node or inside Azure Virtual Network with Apache Spark
When connecting directly from the cluster head node, or from a resource inside the same Azure Virtual Network as the HDInsight cluster, port `10002` should be used for Spark Thrift server instead of `10001`. The following example shows how to connect directly to the head node:
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Last updated 03/21/2022-+ # Features
Below is a summary of the supported RESTful capabilities. For more information o
| update | Yes | Yes | |
| update with optimistic locking | Yes | Yes |
| update (conditional) | Yes | Yes |
-| patch | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We've included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).|
-| patch (conditional) | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We've included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).
+| patch | Yes | Yes | Support for [JSON Patch and FHIRPath Patch](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md#patch-and-conditional-patch) only. |
+| patch (conditional) | Yes | Yes | Support for [JSON Patch and FHIRPath Patch](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md#patch-and-conditional-patch) only. |
| history | Yes | Yes |
| create | Yes | Yes | Support both POST/PUT |
| create (conditional) | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) |
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
Last updated 02/15/2022-+ # FHIR REST API capabilities for Azure API for FHIR
After you've found the record you want to restore, use the `PUT` operation to re
## Patch and Conditional Patch
-Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three types of ways to Patch resources in FHIR: JSON Patch, XML Patch, and FHIR Path Patch. Azure API for FHIR supports JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on a search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http).
+Patch is a valuable RESTful operation when you need to update only a portion of a FHIR resource. Using patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three ways to patch resources: JSON Patch, XML Patch, and FHIRPath Patch. Azure API for FHIR supports both JSON Patch and FHIRPath Patch, along with Conditional JSON Patch and Conditional FHIRPath Patch (which allow you to patch a resource based on search criteria instead of a resource ID). To walk through some examples, refer to the sample [FHIRPath Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http) and the [JSON Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/JsonPatchRequests.http) for each approach. For additional details, read the [HL7 documentation for patch operations with FHIR](https://www.hl7.org/fhir/http.html#patch).
> [!NOTE] > When using `PATCH` against STU3, and if you are requesting a History bundle, the patched resource's `Bundle.entry.request.method` is mapped to `PUT`. This is because STU3 doesn't contain a definition for the `PATCH` verb in the [HTTPVerb value set](http://hl7.org/fhir/STU3/valueset-http-verb.html).
-### Testing Patch
+### Patch with FHIRPath Patch
-Within Patch, there's a test operation that allows you to validate that a condition is true before doing the patch. For example, if you wanted to set a patient deceased, only if they weren't already marked as deceased, you could use the example below:
+This method of patch is the most powerful, as it leverages [FHIRPath](https://hl7.org/fhirpath/) to select the element to target. One common scenario is using FHIRPath Patch to update an element in a list without knowing the order of the list. For example, if you want to delete a patient's home telecom information without knowing the index, you can use the example below.
-PATCH `http://{FHIR-SERVICE-NAME}/Patient/{PatientID}`
+PATCH `http://{FHIR-SERVICE-HOST-NAME}/Patient/{PatientID}`<br/>
+Content-type: `application/fhir+json`
+
+```
+{
+ "resourceType": "Parameters",
+ "parameter": [
+ {
+ "name": "operation",
+ "part": [
+ {
+ "name": "type",
+ "valueCode": "delete"
+ },
+ {
+ "name": "path",
+ "valueString": "Patient.telecom.where(use = 'home')"
+ }
+ ]
+ }
+ ]
+}
+```
+
+Any FHIRPath Patch operation must have the `application/fhir+json` Content-Type header set. FHIRPath Patch supports add, insert, delete, remove, and move operations. FHIRPath Patch operations can also be easily integrated into Bundles. For more examples, look at the sample [FHIRPath Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http).
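As an illustration, the Parameters payload shown above can be generated programmatically. The helper below is a hypothetical sketch (it isn't part of the FHIR service or any SDK); it builds the FHIRPath Patch body for a delete operation:

```python
import json

def fhirpath_patch_delete(fhirpath: str) -> dict:
    """Build a FHIRPath Patch Parameters payload for a delete operation.

    `fhirpath` is a FHIRPath expression selecting the element to remove.
    Hypothetical helper for illustration; not part of any FHIR SDK.
    """
    return {
        "resourceType": "Parameters",
        "parameter": [
            {
                "name": "operation",
                "part": [
                    {"name": "type", "valueCode": "delete"},
                    {"name": "path", "valueString": fhirpath},
                ],
            }
        ],
    }

# Serialize the payload; send it in a PATCH request with the
# `application/fhir+json` Content-Type header.
body = json.dumps(fhirpath_patch_delete("Patient.telecom.where(use = 'home')"))
```

Sending the request itself is omitted here, since the host name and authentication details depend on your deployment.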
+
+### Patch with JSON Patch
+
+JSON Patch in the FHIR service conforms to the widely used [specification defined by the Internet Engineering Task Force](https://datatracker.ietf.org/doc/html/rfc6902). The payload format doesn't use FHIR resources; instead, it uses a JSON document that leverages JSON Pointers for element selection. JSON Patch is more compact and has a test operation that allows you to validate that a condition is true before doing the patch. For example, if you want to set a patient as deceased only if they're not already marked as deceased, you can use the example below.
+
+PATCH `http://{FHIR-SERVICE-HOST-NAME}/Patient/{PatientID}`<br/>
Content-type: `application/json-patch+json` ``` [ {
- "op": "test",
- "path": "/deceasedBoolean",
- "value": false
+ "op": "test",
+ "path": "/deceasedBoolean",
+ "value": false
}, {
- "op": "replace"
- "path": "/deceasedBoolean",
- "value": true
+ "op": "replace",
+ "path": "/deceasedBoolean",
+ "value": true
} ]- ```
-### Patch in Bundles
+Any JSON Patch operation must have the `application/json-patch+json` Content-Type header set. JSON Patch supports add, remove, replace, copy, move, and test operations. For more examples, look at the sample [JSON Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/JsonPatchRequests.http).
+
+#### JSON Patch in Bundles
-By default, JSON Patch isn't supported in Bundle resources. This is because a Bundle only supports with FHIR resources and JSON Patch isn't a FHIR resource. To work around this, we'll treat Binary resources with a content-type of `"application/json-patch+json"`as base64 encoding of JSON string when a Bundle is executed. For information about this workaround, log in to [Zulip](https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Transaction.20with.20PATCH.20request).
+By default, JSON Patch isn't supported in Bundle resources. This is because a Bundle only supports FHIR resources, and the JSON Patch payload isn't a FHIR resource. To work around this, we'll use Binary resources with a Content-Type of `"application/json-patch+json"` and the base64 encoding of the JSON payload inside of a Bundle. For information about this workaround, view this topic on the [FHIR Zulip chat](https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Transaction.20with.20PATCH.20request).
In the example below, we want to change the gender on the patient to female. We've taken the JSON patch `[{"op":"replace","path":"/gender","value":"female"}]` and encoded it to base64.
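The encoding step can be reproduced with the Python standard library (a quick sketch to show where the `data` value comes from):

```python
import base64
import json

# The JSON Patch document to embed in the Bundle's Binary resource.
patch = [{"op": "replace", "path": "/gender", "value": "female"}]

# Serialize without extra whitespace, then base64-encode the UTF-8 bytes.
encoded = base64.b64encode(
    json.dumps(patch, separators=(",", ":")).encode("utf-8")
).decode("ascii")

print(encoded)
# → W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9nZW5kZXIiLCJ2YWx1ZSI6ImZlbWFsZSJ9XQ==
```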
-POST `https://{FHIR-SERVICE-NAME}/`
-content-type: `application/json`
+POST `https://{FHIR-SERVICE-HOST-NAME}/`<br/>
+Content-Type: `application/json`
``` {
- "resourceType": "Bundle"
- "id": "bundle-batch",
- "type": "batch"
- "entry": [
+ "resourceType": "Bundle",
+ "id": "bundle-batch",
+ "type": "batch",
+ "entry": [
{
- "fullUrl": "Patient/{PatientID}",
- "resource": {
- "resourceType": "Binary",
- "contentType": "application/json-patch+json",
- "data": "W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9nZW5kZXIiLCJ2YWx1ZSI6ImZlbWFsZSJ9XQ=="
+ "fullUrl": "Patient/{PatientID}",
+ "resource": {
+ "resourceType": "Binary",
+ "contentType": "application/json-patch+json",
+ "data": "W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9nZW5kZXIiLCJ2YWx1ZSI6ImZlbWFsZSJ9XQ=="
},
- "request": {
- "method": "PATCH",
- "url": "Patient/{PatientID}"
+ "request": {
+ "method": "PATCH",
+ "url": "Patient/{PatientID}"
} } ] }- ``` ## Next steps
healthcare-apis Iot Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-fhir-portal-quickstart.md
Title: 'Quickstart: Deploy Azure IoT Connector for FHIR (preview) using Azure portal' description: In this quickstart, you'll learn how to deploy, configure, and use the Azure IoT Connector for FHIR feature of Azure API for FHIR using the Azure portal. -+ Previously updated : 02/15/2022 Last updated : 04/11/2022
Device mapping template transforms device data into a normalized schema. On the
On the **Device mapping** page, add the following script to the JSON editor and select **Save**. ```json
-{
- "templateType": "CollectionContent",
- "template": [
- {
- "templateType": "IotJsonPathContent",
- "template": {
- "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@Body.telemetry.HeartRate)]",
- "patientIdExpression": "$.Properties.iotcentral-device-id",
- "values": [
- {
+{
+ "templateType": "CollectionContent",
+ "template": [
+ {
+ "templateType": "IotCentralJsonPathContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@telemetry.HeartRate)]",
+ "patientIdExpression": "$.deviceId",
+ "values": [
+ {
"required": "true",
- "valueExpression": "$.Body.telemetry.HeartRate",
+ "valueExpression": "$.telemetry.HeartRate",
"valueName": "hr"
- }
+ }
] } }
- ]
+ ]
} ```
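To see what this mapping selects, the `typeMatchExpression` and value expressions can be approximated in plain Python. This is a hypothetical sketch of the selection logic only; IoT Central evaluates real JSONPath expressions, not this simplified check:

```python
def matches_heartrate(message: dict) -> bool:
    """Approximate `$..[?(@telemetry.HeartRate)]`: the template applies
    only to messages whose telemetry object carries a HeartRate field."""
    return "HeartRate" in message.get("telemetry", {})

def extract_heartrate(message: dict) -> dict:
    """Approximate `$.deviceId` and `$.telemetry.HeartRate`: pull out
    the patient identifier and the `hr` value."""
    return {
        "patientId": message["deviceId"],
        "hr": message["telemetry"]["HeartRate"],
    }

# Hypothetical normalized message from IoT Central.
sample = {"deviceId": "device-001", "telemetry": {"HeartRate": 78}}
observation = extract_heartrate(sample) if matches_heartrate(sample) else None
```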
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Last updated 03/21/2022 -+ # Release notes: Azure API for FHIR
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
|Conditional patch |[#2163](https://github.com/microsoft/fhir-server/pull/2163) |
|Added conditional patch audit event. |[#2213](https://github.com/microsoft/fhir-server/pull/2213) |
-|Allow JSON patch in bundles | [JSON patch in bundles](././../azure-api-for-fhir/fhir-rest-api-capabilities.md#patch-in-bundles)|
+|Allow JSON patch in bundles | [JSON patch in bundles](././../azure-api-for-fhir/fhir-rest-api-capabilities.md#json-patch-in-bundles)|
| :-- | : |
|Allows for search history bundles with Patch requests. |[#2156](https://github.com/microsoft/fhir-server/pull/2156) |
|Enabled JSON patch in bundles using Binary resources. |[#2143](https://github.com/microsoft/fhir-server/pull/2143) |
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-features-supported.md
Last updated 03/01/2022-+ # Supported FHIR Features
Below is a summary of the supported RESTful capabilities. For more information o
| update | Yes | Yes | |
| update with optimistic locking | Yes | Yes |
| update (conditional) | Yes | Yes |
-| patch | Yes | Yes | Support for [JSON Patch](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md#patch-and-conditional-patch) only. |
-| patch (conditional) | Yes | Yes |
+| patch | Yes | Yes | Support for [JSON Patch and FHIRPath Patch](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md#patch-and-conditional-patch) only. |
+| patch (conditional) | Yes | Yes | Support for [JSON Patch and FHIRPath Patch](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md#patch-and-conditional-patch) only. |
| history | Yes | Yes |
| create | Yes | Yes | Support both POST/PUT |
| create (conditional) | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) |
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-rest-api-capabilities.md
Title: FHIR Rest API capabilities for Azure Health Data Services FHIR service
+ Title: FHIR REST API capabilities for Azure Health Data Services FHIR service
description: This article describes the RESTful interactions and capabilities for Azure Health Data Services FHIR service. Last updated 03/09/2022-+
-# FHIR Rest API capabilities for Azure Health Data Services FHIR service
+# FHIR REST API capabilities for Azure Health Data Services FHIR service
In this article, we'll cover some of the nuances of the RESTful interactions of Azure Health Data Services FHIR service (hereby called FHIR service).
After you've found the record you want to restore, use the `PUT` operation to re
## Patch and Conditional Patch
-Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three types of ways to Patch resources in FHIR: JSON Patch, XML Patch, and FHIR Path Patch. The FHIR service support JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on a search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http).
+Patch is a valuable RESTful operation when you need to update only a portion of a FHIR resource. Using patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three ways to patch resources: JSON Patch, XML Patch, and FHIRPath Patch. The FHIR service supports both JSON Patch and FHIRPath Patch, along with Conditional JSON Patch and Conditional FHIRPath Patch (which allow you to patch a resource based on search criteria instead of a resource ID). To walk through some examples, refer to the sample [FHIRPath Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http) and the [JSON Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/JsonPatchRequests.http) for each approach. For additional details, read the [HL7 documentation for patch operations with FHIR](https://www.hl7.org/fhir/http.html#patch).
> [!NOTE] > When using `PATCH` against STU3, and if you are requesting a History bundle, the patched resource's `Bundle.entry.request.method` is mapped to `PUT`. This is because STU3 doesn't contain a definition for the `PATCH` verb in the [HTTPVerb value set](http://hl7.org/fhir/STU3/valueset-http-verb.html).
-### Testing Patch
+### Patch with FHIRPath Patch
-Within Patch, there's a test operation that allows you to validate that a condition is true before doing the patch. For example, if you want to set a patient as deceased (only if they're not already marked as deceased) you can use the example below:
+This method of patch is the most powerful, as it leverages [FHIRPath](https://hl7.org/fhirpath/) to select the element to target. One common scenario is using FHIRPath Patch to update an element in a list without knowing the order of the list. For example, if you want to delete a patient's home telecom information without knowing the index, you can use the example below.
-PATCH `http://{FHIR-SERVICE-NAME}/Patient/{PatientID}`
+PATCH `http://{FHIR-SERVICE-HOST-NAME}/Patient/{PatientID}`<br/>
+Content-type: `application/fhir+json`
+
+```
+{
+ "resourceType": "Parameters",
+ "parameter": [
+ {
+ "name": "operation",
+ "part": [
+ {
+ "name": "type",
+ "valueCode": "delete"
+ },
+ {
+ "name": "path",
+ "valueString": "Patient.telecom.where(use = 'home')"
+ }
+ ]
+ }
+ ]
+}
+```
+
+Any FHIRPath Patch operation must have the `application/fhir+json` Content-Type header set. FHIRPath Patch supports add, insert, delete, remove, and move operations. FHIRPath Patch operations can also be easily integrated into Bundles. For more examples, look at the sample [FHIRPath Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http).
+
+### Patch with JSON Patch
+
+JSON Patch in the FHIR service conforms to the widely used [specification defined by the Internet Engineering Task Force](https://datatracker.ietf.org/doc/html/rfc6902). The payload format doesn't use FHIR resources; instead, it uses a JSON document that leverages JSON Pointers for element selection. JSON Patch is more compact and has a test operation that allows you to validate that a condition is true before doing the patch. For example, if you want to set a patient as deceased only if they're not already marked as deceased, you can use the example below.
+
+PATCH `http://{FHIR-SERVICE-HOST-NAME}/Patient/{PatientID}`<br/>
Content-type: `application/json-patch+json` ```
Content-type: `application/json-patch+json`
"value": true } ]- ```
-### Patch in Bundles
+Any JSON Patch operation must have the `application/json-patch+json` Content-Type header set. JSON Patch supports add, remove, replace, copy, move, and test operations. For more examples, look at the sample [JSON Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/JsonPatchRequests.http).
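To make the `test` semantics concrete, here is a minimal, hypothetical evaluator for the `test` and `replace` operations on single-level JSON Pointer paths. Real servers use a full RFC 6902 implementation; this sketch only illustrates why a patch is rejected when the patient is already marked as deceased:

```python
def apply_json_patch(resource: dict, patch: list) -> dict:
    """Apply a tiny subset of RFC 6902: `test` and `replace` on
    single-level paths such as "/deceasedBoolean".
    A hypothetical sketch, not the FHIR service's implementation."""
    doc = dict(resource)
    for op in patch:
        key = op["path"].lstrip("/")
        if op["op"] == "test":
            # A failed test aborts the whole patch.
            if doc.get(key) != op["value"]:
                raise ValueError(f"test failed for {op['path']}")
        elif op["op"] == "replace":
            doc[key] = op["value"]
        else:
            raise NotImplementedError(op["op"])
    return doc

patient = {"resourceType": "Patient", "deceasedBoolean": False}
patch = [
    {"op": "test", "path": "/deceasedBoolean", "value": False},
    {"op": "replace", "path": "/deceasedBoolean", "value": True},
]
updated = apply_json_patch(patient, patch)
# If the patient were already deceased, the `test` step would fail
# and the patch would be rejected.
```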
+
+#### JSON Patch in Bundles
-By default, JSON Patch isn't supported in Bundle resources. This is because a Bundle only supports with FHIR resources and JSON Patch isn't a FHIR resource. To work around this, we'll treat Binary resources with a content-type of `"application/json-patch+json"`as base64 encoding of JSON string when a Bundle is executed. For information about this workaround, log in to [Zulip](https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Transaction.20with.20PATCH.20request).
+By default, JSON Patch isn't supported in Bundle resources. This is because a Bundle only supports FHIR resources, and the JSON Patch payload isn't a FHIR resource. To work around this, we'll use Binary resources with a Content-Type of `"application/json-patch+json"` and the base64 encoding of the JSON payload inside of a Bundle. For information about this workaround, view this topic on the [FHIR Zulip chat](https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Transaction.20with.20PATCH.20request).
In the example below, we want to change the gender on the patient to female. We've taken the JSON patch `[{"op":"replace","path":"/gender","value":"female"}]` and encoded it to base64.
-POST `https://{FHIR-SERVICE-NAME}/`
-content-type: `application/json`
+POST `https://{FHIR-SERVICE-HOST-NAME}/`<br/>
+Content-Type: `application/json`
``` {
content-type: `application/json`
} ] }- ``` ## Next steps
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Title: Azure Health Data Services monthly releases description: This article provides details about the Azure Health Data Services monthly features and enhancements. -+ Last updated 03/21/2022-+ # Release notes: Azure Health Data Services
Azure Health Data Services is a set of managed API services based on open standa
|Conditional patch | [#2163](https://github.com/microsoft/fhir-server/pull/2163) |
|Added conditional patch audit event. | [#2213](https://github.com/microsoft/fhir-server/pull/2213) |
-|Allow JSON patch in bundles | [JSON patch in bundles](./././azure-api-for-fhir/fhir-rest-api-capabilities.md#patch-in-bundles)|
+|Allow JSON patch in bundles | [JSON patch in bundles](./././azure-api-for-fhir/fhir-rest-api-capabilities.md#json-patch-in-bundles)|
| :- | -:|
|Allows for search history bundles with Patch requests. |[#2156](https://github.com/microsoft/fhir-server/pull/2156) |
|Enabled JSON patch in bundles using Binary resources. |[#2143](https://github.com/microsoft/fhir-server/pull/2143) |
iot-central Howto Administer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-administer.md
This article describes how to manage your Azure IoT Central application: change the application name and URL, upload an image, and delete the application.
-To access and use the **Administration** section, you must be in the **Administrator** role for an Azure IoT Central application. If you create an Azure IoT Central application, you're automatically assigned to the **Administrator** role for that application.
+To access and use the **Settings > Application** and **Settings > Customization** sections, you must be in the **Administrator** role for an Azure IoT Central application. If you create an Azure IoT Central application, you're automatically assigned to the **Administrator** role for that application.
## Change application name and URL
-In the **Application Settings** page, you can change the name and URL of your application, then select **Save**.
+In the **Application > Management** page, you can change the name and URL of your application, then select **Save**.
-![Application settings page](media/howto-administer/image-a.png)
+![Application management page](media/howto-administer/image-a.png)
If your administrator creates a custom theme for your application, this page includes an option to hide the **Application Name** in the UI. This option is useful if the application logo in the custom theme includes the application name. For more information, see [Customize the Azure IoT Central UI](./howto-customize-ui.md).
iot-central Howto Build Iotc Device Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-build-iotc-device-bridge.md
If your IoT Central application recognizes the device ID in the forwarded messag
To deploy the device bridge to your subscription:
-1. In your IoT Central application, navigate to the **Administration > Device Connection** page.
+1. In your IoT Central application, navigate to the **Permissions > Device connection groups** page.
1. Make a note of the **ID Scope**. You use this value when you deploy the device bridge.
iot-central Howto Configure File Uploads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-file-uploads.md
To configure device file uploads:
1. Navigate to the **Application** section in your application.
-1. Select **Device file upload**.
+1. Select **Device file storage**.
1. Select the storage account and container to use. If the storage account is in a different Azure subscription from your application, enter a storage account connection string.
To configure device file uploads:
If you want to disable device file uploads to your IoT Central application:
-1. Navigate to the **Administration** section in your application.
+1. Navigate to the **Application** section in your application.
-1. Select **Device file upload**.
+1. Select **Device file storage**.
1. Select **Delete**.
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md
You can create a copy of any application, minus any device instances, device dat
Select **Copy**. In the dialog box, enter the details for the new application. Then select **Copy** to confirm that you want to continue. To learn more about the fields in the form, see [Create an application](howto-create-iot-central-application.md). :::image type="content" source="media/howto-create-iot-central-application/app-copy-2.png" alt-text="Screenshot that shows the Copy Application settings page.":::
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md
The following screenshot shows an organization hierarchy definition in IoT Centr
## Create a hierarchy
-To start using organizations, you need to define your organization hierarchy. Each organization in the hierarchy acts as a logical container where you place devices, save dashboards and device groups, and invite users. To create your organizations, go to the **Administration** section in your IoT Central application, select the **Organizations** tab, and select either **+ New** or use the context menu for an existing organization. To create one or many organizations at a time, select **+ Add another organization**:
+To start using organizations, you need to define your organization hierarchy. Each organization in the hierarchy acts as a logical container where you place devices, save dashboards and device groups, and invite users. To create your organizations, go to the **Permissions** section in your IoT Central application, select the **Organizations** tab, and select either **+ New** or use the context menu for an existing organization. To create one or many organizations at a time, select **+ Add another organization**:
:::image type="content" source="media/howto-create-organization/create-organizations-hierarchy.png" alt-text="Screenshot that shows the options for creating an organization hierarchy.":::
Then select the permissions for the role:
After you've created your organization hierarchy and assigned devices to organizations, invite users to your application and assign them to organizations.
-To invite a user, navigate to **Administration > Users**. Enter their email address, the organization they're assigned to, and the role or roles the user is a member of. The organization you select filters the list of available roles to make sure you assign the user to a valid role:
+To invite a user, navigate to **Permissions > Users**. Enter their email address, the organization they're assigned to, and the role or roles the user is a member of. The organization you select filters the list of available roles to make sure you assign the user to a valid role:
:::image type="content" source="media/howto-create-organization/assign-user-organization.png" alt-text="Screenshot that shows how to assign a user to an organization and role.":::
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
This article describes how you can add, edit, and delete users in your Azure IoT Central application. The article also describes how to manage roles in your application.
-To access and use the **Administration** section, you must be in the **App Administrator** role for an Azure IoT Central application or in a custom role that includes administration permissions. If you create an Azure IoT Central application, you're automatically added to the **App Administrator** role for that application.
+To access and use the **Permissions** section, you must be in the **App Administrator** role for an Azure IoT Central application or in a custom role that includes administration permissions. If you create an Azure IoT Central application, you're automatically added to the **App Administrator** role for that application.
## Add users
The user who creates an application is automatically assigned to the **App Admin
### App Builder
-Users in the **App Builder** role can manage every part of the app, but can't make changes on the Administration or Continuous Data Export tabs.
+Users in the **App Builder** role can manage every part of the app, but can't make changes on the **Application** or **Data Export** tabs.
### App Operator
Users in the **Org Viewer** role can view items such as devices and their data,
## Create a custom role
-If your solution requires finer-grained access controls, you can create roles with custom sets of permissions. To create a custom role, navigate to the **Roles** page in the **Administration** section of your application, and choose one of these options:
+If your solution requires finer-grained access controls, you can create roles with custom sets of permissions. To create a custom role, navigate to the **Roles** page in the **Permissions** section of your application, and choose one of these options:
- Select **+ New**, add a name and description for your role, and select **Application** or **Organization** as the role type. This option lets you create a role definition from scratch.
- Navigate to an existing role and select **Copy**. This option lets you start with an existing role definition that you can customize.
iot-central Howto Monitor Devices Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-monitor-devices-azure-cli.md
az login
```

### Get the Application ID of your IoT Central app
-In **Administration/Application Settings**, copy the **Application ID**. You use this value in later steps.
+In **Application > Management**, copy the **Application ID**. You use this value in later steps.
### Monitor messages

Monitor the messages that are being sent to your IoT Central app from your devices. The output includes all headers and annotations.
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
Before you set up this scenario, you need to get some connection settings from y
1. Sign in to your IoT Central application.
-1. Navigate to **Administration > Device connection**.
+1. Navigate to **Permissions > Device connection groups**.
1. Make a note of the **ID scope**. You use this value later.
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
To learn how to monitor your IoT Edge fleet remotely by using Azure Monitor and
## Tools
-Many of the tools you use as an administrator are available in the **Administration** section of each IoT Central application. You can also use the following tools to complete some administrative tasks:
+Many of the tools you use as an administrator are available in the **Security** and **Settings** sections of each IoT Central application. You can also use the following tools to complete some administrative tasks:
- [Azure Command-Line Interface (CLI) or PowerShell](howto-manage-iot-central-from-cli.md) - [Azure portal](howto-manage-iot-central-from-portal.md)
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
There are three ways to register a device in an IoT Central application:
Optionally, you can require an operator to approve the device before it starts sending data.

> [!TIP]
- > On the **Administration > Device connection** page, the **Auto approve** option controls whether an operator must manually approve the device before it can start sending data.
+ > On the **Permissions > Device connection groups** page, the **Auto approve** option controls whether an operator must manually approve the device before it can start sending data.
You only need to register a device once in your IoT Central application.
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-connected-waste-management.md
Here's how:
If you're not going to continue to use this application, delete your application with the following steps:
-1. From the left pane of your Azure IoT Central app, select **Administration**.
-1. Select **Application** > **Management** > **Delete**.
+1. From the left pane of your Azure IoT Central app, select **Application**.
+1. Select **Management > Delete**.
## Next steps
iot-central Tutorial Health Data Triage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/healthcare/tutorial-health-data-triage.md
If you're not going to continue to use this application, delete your resources w
1. From the Azure portal, you can delete the Event Hub and Logic Apps resources that you created.
-1. For your IoT Central application, go to the Administration tab and select **Delete**.
-
+1. For your IoT Central application, go to the **Application > Management** tab and select **Delete**.
iot-central Tutorial In Store Analytics Export Data Visualize Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights.md
You could add some addition graphics resources to further customize the dashboar
## Clean up resources
-If you've finished with your IoT Central application, you can delete it by signing in to the application and navigating to the **Application Settings** page in the **Administration** section.
+If you've finished with your IoT Central application, you can delete it by signing in to the application and navigating to the **Management** page in the **Application** section.
If you want to keep the application but reduce the costs associated with it, disable the data export that's sending telemetry to your event hub.
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
Use the sample rule as inspiration to define rules that are more appropriate for
If you're not going to continue to use this application, delete the application template. Go to **Application** > **Management**, and select **Delete**.

## Next steps
iot-develop Concepts Convention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-convention.md
Title: IoT Plug and Play conventions | Microsoft Docs
description: Description of the conventions IoT Plug and Play expects devices to use when they send telemetry and properties, and handle commands and property updates. Previously updated : 07/10/2020 Last updated : 04/06/2022
# IoT Plug and Play conventions
-IoT Plug and Play devices should follow a set of conventions when they exchange messages with an IoT hub. IoT Plug and Play devices use the MQTT protocol to communicate with IoT Hub.
+IoT Plug and Play devices should follow a set of conventions when they exchange messages with an IoT hub. IoT Plug and Play devices use the MQTT protocol to communicate with IoT Hub. AMQP is also supported by IoT Hub and is available in some device SDKs.
Devices can include [modules](../iot-hub/iot-hub-devguide-module-twins.md), or be implemented in an [IoT Edge module](../iot-edge/about-iot-edge.md) hosted by the IoT Edge runtime.
The device or module should confirm that it received the property by sending a r
- `av` - an acknowledgment version that refers to the `$version` of the desired property. You can find this value in the desired property JSON payload.
- `ad` - an optional acknowledgment description.
+### Acknowledgment responses
+
+When reporting writable properties, the device should compose the acknowledgment message using the four fields described above to indicate the actual device state, as described in the following table:
++
+|Status (ac)|Version (av)|Value (value)|Description (ad)|
+|:|:|:|:|
+|200|Desired version|Desired value|Desired property value accepted|
+|202|Desired version|Value accepted by the device|Desired property value accepted, update in progress (should finish with 200)|
+|203|0|Value set by the device|Property set from the device, not reflecting any desired property|
+|400|Desired version|Actual value used by the device|Desired property value not accepted|
+|500|Desired version|Actual value used by the device|Exception when applying the property|
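The acknowledgment statuses in the table above can be sketched with a small helper (a hypothetical illustration only; the function name and the use of Python are my own, not part of any IoT SDK):

```python
import json

def build_ack(name, value, ac, av, ad=None):
    # Compose a reported-property acknowledgment using the four fields above:
    # ac = status code (200, 202, 203, 400, 500), av = acknowledged desired $version,
    # ad = optional human-readable description.
    ack = {"value": value, "ac": ac, "av": av}
    if ad is not None:
        ack["ad"] = ad
    return {name: ack}

# Device-initialized value before any desired property arrives: status 203, version 0.
print(json.dumps({"reported": build_ack("targetTemperature", 20.0, 203, 0, "initialize")}))
```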
+ When a device starts up, it should request the device twin, and check for any writable property updates. If the version of a writable property increased while the device was offline, the device should send a reported property response to confirm that it received the update.
-When a device starts up for the first time, it can send an initial value for a reported property if it doesn't receive an initial desired property from the hub. In this case, the device should set `av` to `1`. For example:
+When a device starts up for the first time, it can send an initial value for a reported property if it doesn't receive an initial desired property from the hub. In this case, the device can send the default value with `av` set to `0` and `ac` set to `203`. For example:
```json
"reported": { "targetTemperature": { "value": 20.0,
- "ac": 200,
- "av": 1,
+ "ac": 203,
+ "av": 0,
"ad": "initialize" } }
When the device reaches the target temperature, it sends the following message:
"targetTemperature": { "value": 20.0, "ac": 200,
- "av": 3,
+ "av": 4,
"ad": "Reached target temperature" } }
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
Check to see which versions of IoT Edge are available.
If you want to update to the most recent version of IoT Edge, use the following command, which also updates the identity service to the latest version:

```bash
- sudo apt-get install aziot-edge
+ sudo apt-get install aziot-edge defender-iot-micro-agent-edge
```
+We recommend installing the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](/azure/defender-for-iot/device-builders/overview).
<!-- end 1.2 -->

:::moniker-end
When you're ready, follow these steps to update IoT Edge on your devices:
sudo apt-get remove iotedge
```
-1. Install the most recent version of IoT Edge, along with the IoT identity service.
+1. Install the most recent version of IoT Edge, along with the IoT identity service and the Microsoft Defender for IoT micro agent for Edge.
```bash
- sudo apt-get install aziot-edge
+ sudo apt-get install aziot-edge defender-iot-micro-agent-edge
```
+We recommend installing the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](/azure/defender-for-iot/device-builders/overview).
1. Import your old config.yaml file into its new format, and apply the configuration info.
If you're installing IoT Edge, rather than upgrading an existing installation, u
View the latest [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases).
-Stay up-to-date with recent updates and announcements in the [Internet of Things blog](https://azure.microsoft.com/blog/topics/internet-of-things/)
+Stay up-to-date with recent updates and announcements in the [Internet of Things blog](https://azure.microsoft.com/blog/topics/internet-of-things/)
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
Previously updated : 10/25/2021 Last updated : 03/28/2022
You can see the hierarchy of certificate depth represented in the screenshot:
## Next steps
-[Understand Azure IoT Edge modules](iot-edge-modules.md)
-
-[Configure an IoT Edge device to act as a transparent gateway](how-to-create-transparent-gateway.md)
+* For more information about how to install certificates on an IoT Edge device and reference them from the config file, see [Manage certificate on an IoT Edge device](how-to-manage-device-certificates.md).
+* [Understand Azure IoT Edge modules](iot-edge-modules.md)
+* [Configure an IoT Edge device to act as a transparent gateway](how-to-create-transparent-gateway.md)
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Date | Highlights |
| - | - | - | - |
-| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](/azure/defender-for-iot/device-builders/overview.md).
+| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](/azure/defender-for-iot/device-builders/overview).
| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | [Long-term support plan and supported systems updates](support.md) |
| [1.0.10](https://github.com/Azure/azure-iotedge/releases/tag/1.0.10) | Stable | October 2020 | [UploadSupportBundle direct method](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics)<br>[Upload runtime metrics](how-to-access-built-in-metrics.md)<br>[Route priority and time-to-live](module-composition.md#priority-and-time-to-live)<br>[Module startup order](module-composition.md#configure-modules)<br>[X.509 manual provisioning](how-to-provision-single-device-linux-x509.md) |
| [1.0.9](https://github.com/Azure/azure-iotedge/releases/tag/1.0.9) | Stable | March 2020 | X.509 auto-provisioning with DPS<br>[RestartModule direct method](how-to-edgeagent-direct-method.md#restart-module)<br>[support-bundle command](troubleshoot.md#gather-debug-information-with-support-bundle-command) |
iot-hub Iot Hub Devguide Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-pricing.md
Previously updated : 03/11/2019 Last updated : 04/07/2022
## Charges per operation
-| Operation | Billing information |
+Use the following table to help determine which operations are charged. All billable operations are charged in 4-KB blocks on basic and standard tier IoT hubs. Operations are metered in 0.5-KB chunks on free tier IoT hubs. Details for each category are provided in the **Billing information** column. This column includes the following information:
+
+- Details of how billable operations are metered on basic and standard tier IoT hubs. Not all operations are available in the basic tier.
+- The operations that result in charges, with either:
+ - A link to the REST API documentation if it exists.
+  - The operation endpoint if REST API documentation isn't available, or if the operation is only available over MQTT and/or AMQP. The endpoint value omits the leading reference to the target IoT hub, `{fully-qualified-iothubname}.azure-devices.net`.
+- One or more terms in *italics* following each operation (or endpoint). These terms represent billable operations that are charged against quota for your IoT hub. You may see these terms supplied as part of a quota usage insight when you initiate a support request on Azure portal. They may also be returned by customer support. You can use the table below to cross-reference these terms with the corresponding operation to help you understand quota usage and billing for your IoT solution. For more information, see [Example 4](#example-4).
+
+
+| Operation category | Billing information |
| | - |
-| Identity registry operations <br/> (create, retrieve, list, update, delete) | Not charged. |
-| Device-to-cloud messages | Successfully sent messages are charged in 4-KB chunks on ingress into IoT Hub. For example, a 6-KB message is charged 2 messages. |
-| Cloud-to-device messages | Successfully sent messages are charged in 4-KB chunks, for example a 6-KB message is charged 2 messages. |
-| File uploads | File transfer to Azure Storage is not metered by IoT Hub. File transfer initiation and completion messages are charged as messaged metered in 4-KB increments. For example, transferring a 10-MB file is charged as two messages in addition to the Azure Storage cost. |
-| Direct methods | Successful method requests are charged in 4-KB chunks, and responses are charged in 4-KB chunks as additional messages. Requests to disconnected devices are charged as messages in 4-KB chunks. For example, a method with a 4-KB body that results in a response with no body from the device is charged as two messages. A method with a 6-KB body that results in a 1-KB response from the device is charged as two messages for the request plus another message for the response. |
-| Device and module twin reads | Twin reads from the device or module and from the solution back end are charged as messages in 4-KB chunks. For example, reading a 8-KB twin is charged as 2 messages. |
-| Device and module twin updates (tags and properties) | Twin updates from the device or module and from the solution back end are charged as messages in 4-KB chunks. For example, reading a 12-KB twin is charged as 3 messages. |
-| Device and module twin queries | Queries are charged as messages depending on the result size in 4-KB chunks. |
-| Jobs operations <br/> (create, update, list, delete) | Not charged. |
-| Jobs per-device operations | Jobs operations (such as twin updates, and methods) are charged as normal. For example, a job resulting in 1000 method calls with 1-KB requests and empty-body responses is charged 1000 messages. |
-| Keep-alive messages | When using AMQP or MQTT protocols, messages exchanged to establish the connection and messages exchanged in the negotiation are not charged. |
+| Identity registry operations <br/> (create, update, get, list, delete, bulk update, statistics) | Not charged. |
+| Device-to-cloud messages | Successfully sent messages are charged in 4-KB chunks on ingress into IoT Hub. For example, a 100-byte message is charged as one message, and a 6-KB message is charged as two messages. <br/><br/> [Send Device Event](/rest/api/iothub/device/send-device-event), Either *Device to Cloud Telemetry* or *Device to Cloud Telemetry Routing* depending on whether the IoT hub has message routing features configured. In either case, messages are only charged on ingress into IoT Hub. |
+| Cloud-to-device messages | Successfully sent messages are charged in 4-KB chunks. For example, a 6-KB message is charged as 2 messages. <br/><br/> [Receive Device Bound Notification](/rest/api/iothub/device/receive-device-bound-notification), *Cloud To Device Command* |
+| File uploads | File transfer to Azure Storage is not metered by IoT Hub. File transfer initiation and completion messages are charged as messages metered in 4-KB increments. For example, transferring a 10-MB file is charged as two messages in addition to the Azure Storage cost. <br/><br/> [Create File Upload Sas Uri](/rest/api/iothub/device/create-file-upload-sas-uri), *Device To Cloud File Upload* <br/> [Update File Upload Status](/rest/api/iothub/device/update-file-upload-status), *Device To Cloud File Upload* |
+| Direct methods | Successful method requests are charged in 4-KB chunks, and responses are charged in 4-KB chunks as additional messages. Requests or responses with no payload are charged as one message. For example, a method with a 4-KB body that results in a response with no payload from the device is charged as two messages. A method with a 6-KB body that results in a 1-KB response from the device is charged as two messages for the request plus another message for the response. Requests to disconnected devices are charged as messages in 4-KB chunks plus one message for a response that indicates the device is not online. <br/><br/> [Device - Invoke Method](/rest/api/iothub/service/devices/invoke-method), *Device Direct Invoke Method*, <br/> [Module - Invoke Method](/rest/api/iothub/service/modules/invoke-method), *Module Direct Invoke Method* |
+| Device and module twin reads | Twin reads from the device or module and from the solution back end are charged as messages in 4-KB chunks. For example, reading an 8-KB twin is charged as 2 messages. <br/><br/> [Get Twin](/rest/api/iothub/service/devices/get-twin), *Get Twin* <br/> [Get Module Twin](/rest/api/iothub/service/modules/get-twin), *Get Module Twin* <br/><br/> Read device and module twins from a device: <br/> **Endpoint**: `/devices/{id}/twin` ([MQTT](iot-hub-mqtt-support.md#retrieving-a-device-twins-properties), AMQP only), *D2C Get Twin* <br/> **Endpoint**: `/devices/{deviceid}/modules/{moduleid}/twin` (MQTT, AMQP only), *Module D2C Get Twin* |
+| Device and module twin updates (tags and properties) | Twin updates from the device or module and from the solution back end are charged as messages in 4-KB chunks. For example, a 12-KB update to a twin is charged as 3 messages. <br/><br/> [Update Twin](/rest/api/iothub/service/devices/update-twin), *Update Twin* <br/> [Update Module Twin](/rest/api/iothub/service/modules/update-twin), *Update Module Twin* <br/> [Replace Twin](/rest/api/iothub/service/devices/replace-twin), *Replace Twin* <br/> [Replace Module Twin](/rest/api/iothub/service/modules/replace-twin), *Replace Module Twin* <br/><br/> Update device or module twin reported properties from a device: <br/> **Endpoint**: `/twin/PATCH/properties/reported/` ([MQTT](iot-hub-mqtt-support.md#update-device-twins-reported-properties), AMQP only), *D2 Patch ReportedProperties* or *Module D2 Patch ReportedProperties* <br/><br/> Receive desired properties update notifications on a device: <br/> **Endpoint**: `/twin/PATCH/properties/desired/` ([MQTT](iot-hub-mqtt-support.md#receiving-desired-properties-update-notifications), AMQP only), *D2C Notify DesiredProperties* or *Module D2C Notify DesiredProperties* |
+| Device and module twin queries | Queries are charged as messages depending on the result size in 4-KB chunks. <br/><br/> [Get Twins](/rest/api/iothub/service/query/get-twins) (query against **devices** or **devices.modules** collections), *Query Devices* <br/><br/> Queries against **jobs** collection are not charged. |
+| Digital twin reads | Digital twin reads from the solution back end are charged as messages in 4-KB chunks. For example, reading an 8-KB twin is charged as 2 messages. <br/><br/> [Get Digital Twin](/rest/api/iothub/service/digital-twin/get-digital-twin), *Get Digital Twin* |
+| Digital twin updates | Digital twin updates from the solution back end are charged as messages in 4-KB chunks. For example, a 12-KB update to a twin is charged as 3 messages. <br/><br/> [Update Digital Twin](/rest/api/iothub/service/digital-twin/update-digital-twin), *Patch Digital Twin* |
+| Digital twin commands | Successful commands are charged in 4-KB chunks, and responses are charged in 4-KB chunks as additional messages. Requests or responses with no body are charged as one message. For example, a command with a 4-KB body that results in a response with no body from the device is charged as two messages. A command with a 6-KB body that results in a 1-KB response from the device is charged as two messages for the command plus another message for the response. Commands to disconnected devices are charged as messages in 4-KB chunks plus one message for a response that indicates the device is not online. <br/><br/> [Invoke Component Command](/rest/api/iothub/service/digital-twin/invoke-component-command), *Digital Twin Component Command* <br/> [Invoke Root Level Command](/rest/api/iothub/service/digital-twin/invoke-root-level-command), *Digital Twin Root Command* |
+| Jobs operations <br/> (create, cancel, get, query) | Not charged. |
+| Jobs per-device operations | Jobs operations (such as twin updates, and methods) are charged as normal in 4-KB chunks. For example, a job resulting in 1000 method calls with 1-KB requests and empty-payload responses is charged 2000 messages (one message each for the request and response * 1000). <br/><br/> *Update Twin Device Job* <br/> *Invoke Method Device Job* |
+| Configuration operations <br/> (create, update, get, list, delete, test query) | Not charged.|
+| Configuration per-device operations | Configuration operations are charged as messages in 4-KB chunks. Responses are not charged. For example, an apply configuration operation with a 6-KB body is charged as two messages. <br/><br/> [Apply on Edge Device](/rest/api/iothub/service/configuration/apply-on-edge-device), *Configuration Service Apply*. |
+| Keep-alive messages | When using AMQP or MQTT protocols, messages exchanged to establish the connection and messages exchanged in the negotiation or to keep the connection open and alive are not charged. |
+| Device streams (preview) | Device streams is in preview and operations are not yet charged. <br/><br/> **Endpoint**: `/twins/{deviceId}/streams/{streamName}`, *Device Streams* <br/> **Endpoint**: `/twins/{deviceId}/modules/{moduleId}/streams/{streamName}`, *Device Streams Module* |
> [!NOTE]
> All sizes are computed considering the payload size in bytes (protocol framing is ignored). For messages, which have properties and body, the size is computed in a protocol-agnostic way. For more information, see [IoT Hub message format](iot-hub-devguide-messages-construct.md).
+>
+> Maximum message sizes differ for different types of operations. To learn more, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
+>
+> For some operations, you can use batching and compression strategies to reduce costs. For an example using device-to-cloud telemetry, see [Example #3](#example-3).
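As a rough sketch of the chunking rule described above (an illustration only, not an official billing calculator; the function name is made up):

```python
import math

def billable_messages(payload_bytes, chunk_bytes=4096):
    # Messages charged for one operation on a basic or standard tier hub:
    # the payload is metered in chunk_bytes blocks, with a minimum of one message.
    # Free tier hubs meter in 0.5-KB chunks: pass chunk_bytes=512.
    return max(1, math.ceil(payload_bytes / chunk_bytes))

print(billable_messages(100))        # 100-byte telemetry -> 1 message
print(billable_messages(6 * 1024))   # 6-KB message -> 2 messages
print(billable_messages(12 * 1024))  # 12-KB twin update -> 3 messages
```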
## Example #1
A device sends one 1-KB device-to-cloud message per minute to IoT Hub, which is
The device consumes:
-* One message * 60 minutes * 24 hours = 1440 messages per day for the device-to-cloud messages.
-* Two request plus response * 6 times per hour * 24 hours = 288 messages for the methods.
+- One message * 60 minutes * 24 hours = 1440 messages per day for the device-to-cloud messages.
+
+- Two messages (request plus response) * 6 times per hour * 24 hours = 288 messages for the methods.
This calculation gives a total of 1728 messages per day.
A device sends one 100-KB device-to-cloud message every hour. It also updates it
The device consumes:
-* 25 (100 KB / 4 KB) messages * 24 hours for device-to-cloud messages.
-* Two messages (1 KB / 0.5 KB) * six times per day for device twin updates.
+- 25 (100 KB / 4 KB) messages * 24 hours for device-to-cloud messages.
+
+- One message (1 KB / 4 KB) * six times per day for device twin updates.
+
+This calculation gives a total of 606 messages per day.
+
+The solution back end consumes 4 messages (14 KB / 4 KB) to read the device twin, plus one message (0.5 KB / 4 KB) to update it, for a total of 5 messages.
+
+In total, the device and the solution back end consume 611 messages per day.
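The arithmetic in this example can be double-checked with a short sketch (illustrative only; `chunks` is a made-up helper, not part of any Azure tooling):

```python
import math

def chunks(kb, chunk_kb=4):
    # Messages charged for one operation: payload metered in 4-KB chunks, minimum one.
    return max(1, math.ceil(kb / chunk_kb))

device = 24 * chunks(100) + 6 * chunks(1)   # hourly 100-KB telemetry + six 1-KB twin updates
backend = chunks(14) + chunks(0.5)          # one 14-KB twin read + one 0.5-KB twin update
print(device, backend, device + backend)    # 606 5 611
```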
+
+## Example #3
+
+Depending on your scenario, batching messages can reduce your quota usage.
+
+For example, consider a device that has a sensor that only generates 100 bytes of data each time it's read:
+
+- If the device batches 40 sensor reads into a single device-to-cloud message with a 4-KB payload (40 * 100 bytes), then only one message is charged against quota. If the device reads the sensor 40 times each hour and batches those reads into a single device-to-cloud message per hour, it would send 24 messages/day.
+
+- If the device sends a device-to-cloud message with a 100-byte payload for each sensor read, then it consumes 40 messages against quota for the same amount of data. If the device reads the sensor 40 times each hour and sends each message individually, it would send 960 messages/day (40 messages * 24).
+
+Your batching strategy will depend on your scenario and on how time-critical the data is. If you're sending large amounts of data, you can also consider implementing data compression to further reduce the impact on message quota.
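The batching arithmetic above can be sketched as follows (an illustration under the example's assumptions, not an official calculator):

```python
import math

READS_PER_HOUR = 40
BYTES_PER_READ = 100
CHUNK = 4096  # 4-KB metering block on basic/standard tier hubs

# One batched message per hour: 40 * 100 = 4000 bytes fits in a single 4-KB chunk.
batched_per_day = 24 * math.ceil(READS_PER_HOUR * BYTES_PER_READ / CHUNK)

# One message per read: each 100-byte payload is still charged as one 4-KB chunk.
unbatched_per_day = 24 * READS_PER_HOUR * math.ceil(BYTES_PER_READ / CHUNK)

print(batched_per_day)    # 24 messages/day
print(unbatched_per_day)  # 960 messages/day
```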
+
+## Example #4
+
+When you open a support request on Azure portal, diagnostics specific to your reported issue are run. The result is displayed as an insight on the **Solutions** tab of your request. One such insight reports quota usage for your IoT hub using the terms in italics in the table above. Whether this particular insight is returned will depend on the results of the diagnostics performed on your IoT hub for the problem you're reporting. If the quota usage insight is reported, you can use the table above to cross-reference the reported usage term or terms with the operation(s) that they refer to.
+
+For example, the following screenshot shows a support request initiated for a problem with device-to-cloud telemetry.
-This calculation gives a total of 612 messages per day.
-The solution back end consumes 28 messages (14 KB / 0.5 KB) to read the device twin, plus one message to update it, for a total of 29 messages.
+After selecting **Next Solutions**, the quota usage insight is returned by the diagnostics under **IoT Hub daily message quota breakdown**. It shows the breakdown for device to cloud messages sent to the IoT hub. In this case, message routing is enabled on the IoT hub, so the messages are shown as *Device to Cloud Telemetry Routing*. Be aware that the quota usage insight may not be returned for the same problem on a different IoT hub. What is returned will depend on the activity and state of that IoT hub.
-In total, the device and the solution back end consume 641 messages per day.
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-bicep.md
+
+ Title: Azure Quickstart - Create an Azure key vault and a secret using Bicep | Microsoft Docs
+description: Quickstart showing how to create Azure key vaults, and add secrets to the vaults using Bicep.
++
+tags: azure-resource-manager
++++ Last updated : 04/08/2022+
+#Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
++
+# Quickstart: Set and retrieve a secret from Azure Key Vault using Bicep
+
+[Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets, such as keys, passwords, certificates, and other secrets. This quickstart focuses on the process of deploying a Bicep file to create a key vault and a secret.
++
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+* Your Azure AD user object ID is needed by the template to configure permissions. The following procedure gets the object ID (GUID).
+
+    1. Run the following Azure PowerShell or Azure CLI command by selecting **Try it**, and then paste the script into the shell pane. To paste the script, right-click the shell, and then select **Paste**.
+
+ # [CLI](#tab/CLI)
+ ```azurecli-interactive
+ echo "Enter your email address that is used to sign in to Azure:" &&
+ read upn &&
+ az ad user show --id $upn --query "objectId" &&
+ echo "Press [ENTER] to continue ..."
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+ ```azurepowershell-interactive
+ $upn = Read-Host -Prompt "Enter your email address used to sign in to Azure"
+ (Get-AzADUser -UserPrincipalName $upn).Id
+ Write-Host "Press [ENTER] to continue..."
+ ```
+
+
+
+ 2. Write down the object ID. You need it in the next section of this quickstart.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/key-vault-create/).
++
+Two Azure resources are defined in the Bicep file:
+
+* [**Microsoft.KeyVault/vaults**](/azure/templates/microsoft.keyvault/vaults): create an Azure key vault.
+* [**Microsoft.KeyVault/vaults/secrets**](/azure/templates/microsoft.keyvault/vaults/secrets): create a key vault secret.
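+
+The Bicep file itself isn't reproduced inline here. As a rough sketch only (the API version, secret name, and secret value below are illustrative and not the exact quickstart template; the parameter names match the deployment commands in this article), a minimal **main.bicep** that creates both resources might look like this:
+
+```bicep
+// Sketch of a minimal main.bicep; the API version, secret name,
+// and secret value are illustrative placeholders.
+param keyVaultName string
+param objectID string
+param location string = resourceGroup().location
+
+resource keyVault 'Microsoft.KeyVault/vaults@2021-11-01-preview' = {
+  name: keyVaultName
+  location: location
+  properties: {
+    tenantId: subscription().tenantId
+    sku: {
+      family: 'A'
+      name: 'standard'
+    }
+    // Grant the caller (identified by object ID) read access to secrets.
+    accessPolicies: [
+      {
+        tenantId: subscription().tenantId
+        objectId: objectID
+        permissions: {
+          secrets: [
+            'get'
+            'list'
+          ]
+        }
+      }
+    ]
+  }
+}
+
+resource secret 'Microsoft.KeyVault/vaults/secrets@2021-11-01-preview' = {
+  parent: keyVault
+  name: 'exampleSecret'
+  properties: {
+    value: 'examplePlaceholderValue'
+  }
+}
+```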
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters keyVaultName=<vault-name> objectID=<object-id>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -keyVaultName "<vault-name>" -objectID "<object-id>"
+ ```
+
+
+
+ > [!NOTE]
+    > Replace **\<vault-name\>** with the name of the key vault. Replace **\<object-id\>** with the object ID of a user, service principal, or security group in the Azure Active Directory tenant for the vault. The object ID must be unique for the list of access policies. You can get it by using the Get-AzADUser or Get-AzADServicePrincipal cmdlets.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+You can either use the Azure portal to check the key vault and the secret, or use the following Azure CLI or Azure PowerShell script to list the secret created.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+echo "Enter your key vault name:" &&
+read keyVaultName &&
+az keyvault secret list --vault-name $keyVaultName &&
+echo "Press [ENTER] to continue ..."
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+$keyVaultName = Read-Host -Prompt "Enter your key vault name"
+Get-AzKeyVaultSecret -vaultName $keyVaultName
+Write-Host "Press [ENTER] to continue..."
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a key vault and a secret using Bicep and then validated the deployment. To learn more about Key Vault and Bicep, continue on to the articles below.
+
+- Read an [Overview of Azure Key Vault](../general/overview.md)
+- Learn more about [Bicep](../../azure-resource-manager/bicep/overview.md)
+- Review the [Key Vault security overview](../general/security-features.md)
load-balancer Ipv6 Configure Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/ipv6-configure-template-json.md
+
+ Title: Deploy an IPv6 dual stack application with Basic Load Balancer in Azure virtual network - Resource Manager template
+
+description: This article shows how to deploy an IPv6 dual stack application in Azure virtual network using Azure Resource Manager VM templates.
+
+documentationcenter: na
+++++ Last updated : 03/31/2020+++
+# Deploy an IPv6 dual stack application with Basic Load Balancer in Azure - Template
+
+This article provides a list of IPv6 configuration tasks with the portion of the Azure Resource Manager VM template that applies to each. Use the template described in this article to deploy a dual stack (IPv4 + IPv6) application with Basic Load Balancer that includes a dual stack virtual network with IPv4 and IPv6 subnets, a Basic Load Balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, a network security group, and public IPs.
+
+To deploy a dual stack (IPv4 + IPv6) application using Standard Load Balancer, see [Deploy an IPv6 dual stack application with Standard Load Balancer - Template](../ipv6-configure-standard-load-balancer-template-json.md).
+
+## Required configurations
+
+Search the template for the following sections to see where they should occur.
+
+### IPv6 addressSpace for the virtual network
+
+Template section to add:
+
+```JSON
+ "addressSpace": {
+ "addressPrefixes": [
+ "[variables('vnetv4AddressRange')]",
+ "[variables('vnetv6AddressRange')]"
+```
+
+### IPv6 subnet within the IPv6 virtual network addressSpace
+
+Template section to add:
+```JSON
+ {
+ "name": "V6Subnet",
+ "properties": {
+ "addressPrefix": "[variables('subnetv6AddressRange')]"
+ }
+
+```
+
+### IPv6 configuration for the NIC
+
+Template section to add:
+```JSON
+ {
+ "name": "ipconfig-v6",
+ "properties": {
+ "privateIPAllocationMethod": "Dynamic",
+ "privateIPAddressVersion":"IPv6",
+ "subnet": {
+ "id": "[variables('v6-subnet-id')]"
+ },
+ "loadBalancerBackendAddressPools": [
+ {
+ "id": "[concat(resourceId('Microsoft.Network/loadBalancers','loadBalancer'),'/backendAddressPools/LBBAP-v6')]"
+ }
+```
+
+### IPv6 network security group (NSG) rules
+
+```JSON
+ {
+ "name": "default-allow-rdp",
+ "properties": {
+ "description": "Allow RDP",
+ "protocol": "Tcp",
+ "sourcePortRange": "33819-33829",
+ "destinationPortRange": "5000-6000",
+ "sourceAddressPrefix": "fd00:db8:deca:deed::/64",
+ "destinationAddressPrefix": "fd00:db8:deca:deed::/64",
+ "access": "Allow",
+ "priority": 1003,
+ "direction": "Inbound"
+ }
+```
+
+## Conditional configuration
+
+If you're using a network virtual appliance, add IPv6 routes in the Route Table. Otherwise, this configuration is optional.
+
+```JSON
+ {
+ "type": "Microsoft.Network/routeTables",
+ "name": "v6route",
+ "apiVersion": "[variables('ApiVersion')]",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ "routes": [
+ {
+ "name": "v6route",
+ "properties": {
+ "addressPrefix": "fd00:db8:deca:deed::/64",
+ "nextHopType": "VirtualAppliance",
+ "nextHopIpAddress": "fd00:db8:ace:f00d::1"
+ }
+```
+
+## Optional configuration
+
+### IPv6 Internet access for the virtual network
+
+```JSON
+{
+ "name": "LBFE-v6",
+ "properties": {
+ "publicIPAddress": {
+ "id": "[resourceId('Microsoft.Network/publicIPAddresses','lbpublicip-v6')]"
+ }
+```
+
+### IPv6 Public IP addresses
+
+```JSON
+ {
+ "apiVersion": "[variables('ApiVersion')]",
+ "type": "Microsoft.Network/publicIPAddresses",
+ "name": "lbpublicip-v6",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ "publicIPAllocationMethod": "Dynamic",
+ "publicIPAddressVersion": "IPv6"
+ }
+```
+
+### IPv6 Front end for Load Balancer
+
+```JSON
+ {
+ "name": "LBFE-v6",
+ "properties": {
+ "publicIPAddress": {
+ "id": "[resourceId('Microsoft.Network/publicIPAddresses','lbpublicip-v6')]"
+ }
+```
+
+### IPv6 Back-end address pool for Load Balancer
+
+```JSON
+ "backendAddressPool": {
+ "id": "[concat(resourceId('Microsoft.Network/loadBalancers', 'loadBalancer'), '/backendAddressPools/LBBAP-v6')]"
+ },
+ "protocol": "Tcp",
+ "frontendPort": 8080,
+ "backendPort": 8080
+ },
+ "name": "lbrule-v6"
+```
+
+### IPv6 load balancer rules to associate incoming and outgoing ports
+
+```JSON
+ {
+ "name": "ipconfig-v6",
+ "properties": {
+ "privateIPAllocationMethod": "Dynamic",
+ "privateIPAddressVersion":"IPv6",
+ "subnet": {
+ "id": "[variables('v6-subnet-id')]"
+ },
+ "loadBalancerBackendAddressPools": [
+ {
+ "id": "[concat(resourceId('Microsoft.Network/loadBalancers','loadBalancer'),'/backendAddressPools/LBBAP-v6')]"
+ }
+```
+
+## Sample VM template JSON
+To deploy an IPv6 dual stack application with Basic Load Balancer in an Azure virtual network using an Azure Resource Manager template, see the [sample template](https://azure.microsoft.com/resources/templates/ipv6-in-vnet/).
+
+## Next steps
+
+You can find details about pricing for [public IP addresses](https://azure.microsoft.com/pricing/details/ip-addresses/), [network bandwidth](https://azure.microsoft.com/pricing/details/bandwidth/), or [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/).
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-cli.md
+
+ Title: Deploy IPv6 dual stack application - Basic Load Balancer - CLI
+
+description: Learn how to deploy a dual stack (IPv4 + IPv6) application with Basic Load Balancer using Azure CLI.
+++ Last updated : 03/31/2022+++
+# Deploy an IPv6 dual stack application using Basic Load Balancer - CLI
+
+This article shows you how to deploy a dual stack (IPv4 + IPv6) application with Basic Load Balancer using Azure CLI. The deployment includes a dual stack virtual network with a dual stack subnet, a Basic Load Balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs.
+
+To deploy a dual stack (IPv4 + IPv6) application using Standard Load Balancer, see [Deploy an IPv6 dual stack application with Standard Load Balancer using Azure CLI](../virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-cli.md).
+++
+- This article requires version 2.0.49 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Create a resource group
+
+Before you can create your dual-stack virtual network, you must create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *DsResourceGroup01* in the *eastus* location:
+
+```azurecli-interactive
+az group create \
+--name DsResourceGroup01 \
+--location eastus
+```
+
+## Create IPv4 and IPv6 public IP addresses for load balancer
+To access your IPv4 and IPv6 endpoints on the Internet, you need IPv4 and IPv6 public IP addresses for the load balancer. Create a public IP address with [az network public-ip create](/cli/azure/network/public-ip). The following example creates IPv4 and IPv6 public IP addresses named *dsPublicIP_v4* and *dsPublicIP_v6* in the *DsResourceGroup01* resource group:
+
+```azurecli-interactive
+# Create an IPv4 IP address
+az network public-ip create \
+--name dsPublicIP_v4 \
+--resource-group DsResourceGroup01 \
+--location eastus \
+--sku BASIC \
+--allocation-method dynamic \
+--version IPv4
+
+# Create an IPv6 IP address
+az network public-ip create \
+--name dsPublicIP_v6 \
+--resource-group DsResourceGroup01 \
+--location eastus \
+--sku BASIC \
+--allocation-method dynamic \
+--version IPv6
+
+```
+
+## Create public IP addresses for VMs
+
+To remotely access your VMs on the internet, you need IPv4 public IP addresses for the VMs. Create a public IP address with [az network public-ip create](/cli/azure/network/public-ip).
+
+```azurecli-interactive
+az network public-ip create \
+--name dsVM0_remote_access \
+--resource-group DsResourceGroup01 \
+--location eastus \
+--sku BASIC \
+--allocation-method dynamic \
+--version IPv4
+
+az network public-ip create \
+--name dsVM1_remote_access \
+--resource-group DsResourceGroup01 \
+--location eastus \
+--sku BASIC \
+--allocation-method dynamic \
+--version IPv4
+```
+
+## Create Basic Load Balancer
+
+In this section, you configure dual frontend IP (IPv4 and IPv6) and the back-end address pool for the load balancer and then create a Basic Load Balancer.
+
+### Create load balancer
+
+Create the Basic Load Balancer with [az network lb create](/cli/azure/network/lb). The following example creates a load balancer named **dsLB** that includes a frontend pool named **dsLbFrontEnd_v4** associated with the IPv4 public IP address **dsPublicIP_v4** that you created in the preceding step, and a backend pool named **dsLbBackEndPool_v4**.
+
+```azurecli-interactive
+az network lb create \
+--name dsLB \
+--resource-group DsResourceGroup01 \
+--sku Basic \
+--location eastus \
+--frontend-ip-name dsLbFrontEnd_v4 \
+--public-ip-address dsPublicIP_v4 \
+--backend-pool-name dsLbBackEndPool_v4
+```
+
+### Create IPv6 frontend
+
+Create an IPv6 frontend IP with [az network lb frontend-ip create](/cli/azure/network/lb/frontend-ip#az-network-lb-frontend-ip-create). The following example creates a frontend IP configuration named *dsLbFrontEnd_v6* and attaches the *dsPublicIP_v6* address:
+
+```azurecli-interactive
+az network lb frontend-ip create \
+--lb-name dsLB \
+--name dsLbFrontEnd_v6 \
+--resource-group DsResourceGroup01 \
+--public-ip-address dsPublicIP_v6
+
+```
+
+### Configure IPv6 back-end address pool
+
+Create an IPv6 back-end address pool with [az network lb address-pool create](/cli/azure/network/lb/address-pool#az-network-lb-address-pool-create). The following example creates a back-end address pool named *dsLbBackEndPool_v6* to include VMs with IPv6 NIC configurations:
+
+```azurecli-interactive
+az network lb address-pool create \
+--lb-name dsLB \
+--name dsLbBackEndPool_v6 \
+--resource-group DsResourceGroup01
+```
+
+### Create a health probe
+Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe) to monitor the health of the virtual machines.
+
+```azurecli-interactive
+az network lb probe create -g DsResourceGroup01 --lb-name dsLB -n dsProbe --protocol tcp --port 3389
+```
+
+### Create a load balancer rule
+
+A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic, along with the required source and destination port.
+
+Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create). The following example creates load balancer rules named *dsLBrule_v4* and *dsLBrule_v6* and balances traffic on *TCP* port *80* to the IPv4 and IPv6 frontend IP configurations:
+
+```azurecli-interactive
+az network lb rule create \
+--lb-name dsLB \
+--name dsLBrule_v4 \
+--resource-group DsResourceGroup01 \
+--frontend-ip-name dsLbFrontEnd_v4 \
+--protocol Tcp \
+--frontend-port 80 \
+--backend-port 80 \
+--probe-name dsProbe \
+--backend-pool-name dsLbBackEndPool_v4
++
+az network lb rule create \
+--lb-name dsLB \
+--name dsLBrule_v6 \
+--resource-group DsResourceGroup01 \
+--frontend-ip-name dsLbFrontEnd_v6 \
+--protocol Tcp \
+--frontend-port 80 \
+--backend-port 80 \
+--probe-name dsProbe \
+--backend-pool-name dsLbBackEndPool_v6
+
+```
+
+## Create network resources
+Before you deploy some VMs, you must create supporting network resources - availability set, network security group, virtual network, and virtual NICs.
+### Create an availability set
+To improve the availability of your app, place your VMs in an availability set.
+
+Create an availability set with [az vm availability-set create](/cli/azure/vm/availability-set). The following example creates an availability set named *dsAVset*:
+
+```azurecli-interactive
+az vm availability-set create \
+--name dsAVset \
+--resource-group DsResourceGroup01 \
+--location eastus \
+--platform-fault-domain-count 2 \
+--platform-update-domain-count 2
+```
+
+### Create network security group
+
+Create a network security group for the rules that will govern inbound and outbound communication in your virtual network.
+
+#### Create a network security group
+
+Create a network security group with [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create)
++
+```azurecli-interactive
+az network nsg create \
+--name dsNSG1 \
+--resource-group DsResourceGroup01 \
+--location eastus
+
+``