Updates from: 09/15/2022 01:13:37
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Ad External Identities Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-ad-external-identities-videos.md
Previously updated : 02/09/2021 Last updated : 09/13/2022
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
Previously updated : 03/23/2021 Last updated : 09/13/2022
active-directory-b2c Partner Cloudflare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-cloudflare.md
Previously updated : 04/24/2021 Last updated : 09/13/2022
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-datawiza.md
Previously updated : 7/07/2021 Last updated : 09/13/2022
active-directory-b2c Partner Experian https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-experian.md
Previously updated : 07/22/2020 Last updated : 09/13/2022
active-directory-b2c Partner Hypr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-hypr.md
Previously updated : 08/27/2020 Last updated : 09/13/2022
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-lexisnexis.md
Previously updated : 07/22/2020 Last updated : 09/13/2022
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md
Previously updated : 10/26/2020 Last updated : 09/13/2022
active-directory-b2c Partner Nevis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nevis.md
Previously updated : 11/23/2020 Last updated : 09/13/2022
active-directory-b2c Partner Ping Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-ping-identity.md
Previously updated : 01/20/2021 Last updated : 09/13/2022
active-directory-b2c Partner Strata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-strata.md
Previously updated : 10/25/2020 Last updated : 09/13/2022
active-directory-b2c Partner Whoiam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam.md
Previously updated : 08/20/2020 Last updated : 09/13/2022
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-zscaler.md
Previously updated : 12/09/2020 Last updated : 09/13/2022
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
Previously updated : 08/17/2022 Last updated : 09/13/2022
Here are some factors for you to consider when choosing Microsoft passwordless t
||**Windows Hello for Business**|**Passwordless sign-in with the Authenticator app**|**FIDO2 security keys**| |:-|:-|:-|:-|
-|**Pre-requisite**| Windows 10, version 1809 or later<br>Azure Active Directory| Authenticator app<br>Phone (iOS and Android devices running Android 6.0 or above.)|Windows 10, version 1903 or later<br>Azure Active Directory|
+|**Pre-requisite**| Windows 10, version 1809 or later<br>Azure Active Directory| Authenticator app<br>Phone (iOS and Android devices running Android 8.0 or above.)|Windows 10, version 1903 or later<br>Azure Active Directory|
|**Mode**|Platform|Software|Hardware| |**Systems and devices**|PC with a built-in Trusted Platform Module (TPM)<br>PIN and biometrics recognition |PIN and biometrics recognition on phone|FIDO2 security devices that are Microsoft compatible| |**User experience**|Sign in using a PIN or biometric recognition (facial, iris, or fingerprint) with Windows devices.<br>Windows Hello authentication is tied to the device; the user needs both the device and a sign-in component such as a PIN or biometric factor to access corporate resources.|Sign in using a mobile phone with fingerprint scan, facial or iris recognition, or PIN.<br>Users sign in to work or personal account from their PC or mobile phone.|Sign in using FIDO2 security device (biometrics, PIN, and NFC)<br>User can access device based on organization controls and authenticate based on PIN, biometrics using devices such as USB security keys and NFC-enabled smartcards, keys, or wearables.|
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 08/30/2022 Last updated : 09/14/2022
Depending on user activity, the data file can become outdated quickly. Any chang
### Install MFA Server update Run the new installer on the Primary MFA Server. Before you upgrade a server, remove it from load balancing or traffic sharing with other MFA Servers. You don't need to uninstall your current MFA Server before running the installer. The installer performs an in-place upgrade using the current installation path (for example, C:\Program Files\Multi-Factor Authentication Server). If you're prompted to install a Microsoft Visual C++ 2015 Redistributable update package, accept the prompt. Both the x86 and x64 versions of the package are installed. It isn't required to install updates for User portal, Web SDK, or AD FS Adapter.
-After the installation is complete, it can take several minutes for the datafile to be upgraded. During this time, the User portal may have issues connecting to the MFA Service. **Don't restart the MFA Service, or the MFA Server during this time.** This behavior is normal. Once the upgrade is complete, the primary server's main service will again be functional.
+After the installation is complete, it can take several minutes for the datafile to be upgraded. During this time, the User portal may have issues connecting to the MFA Service. **Don't restart the MFA Service, or the MFA Server during this time.** This behavior is normal. Once the upgrade is complete, the primary server's main service will again be functional.
-You can check \Program Files\Multi-Factor Authentication Server\Logs\MultiFactorAuthSvc.log to make sure the upgrade is complete. You should see **Completed performing tasks to upgrade from 23 to 24**.
+You can check \Program Files\Multi-Factor Authentication Server\Logs\MultiFactorAuthSvc.log to see progress and make sure the upgrade is complete. When the upgrade finishes, the log contains **Completed performing tasks to upgrade from 23 to 24**.
+
+If you have thousands of users, you might schedule the upgrade during a maintenance window and take the User portal offline during this time. To estimate how long the upgrade will take, plan on around 4 minutes per 10,000 users. You can minimize the time by cleaning up disabled or inactive users prior to the upgrade.
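+
+If you prefer to check from a console, a minimal PowerShell sketch that searches the log for the completion entry (assuming the default installation path shown above):
+
+```powershell
+# Hedged sketch: search the MFA Server log for the upgrade-completion entry.
+# Adjust the path if MFA Server is installed somewhere other than the default location.
+$log = 'C:\Program Files\Multi-Factor Authentication Server\Logs\MultiFactorAuthSvc.log'
+Select-String -Path $log -Pattern 'Completed performing tasks to upgrade' |
+    Select-Object -Last 1
+```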
>[!NOTE] >After you run the installer on your primary server, secondary servers may begin to log **Unhandled SB** entries. This is due to schema changes made on the primary server that will not be recognized by secondary servers. These errors are expected. In environments with 10,000 users or more, the amount of log entries can increase significantly. To mitigate this issue, you can increase the file size of your MFA Server logs, or upgrade your secondary servers.
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 07/19/2022 Last updated : 09/13/2022
The Azure AD accounts can be in the same tenant or different tenants. Guest acco
To use passwordless phone sign-in with Microsoft Authenticator, the following prerequisites must be met: - Recommended: Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help the Authenticator app to prevent unauthorized access to accounts and stop fraudulent transactions. The Authenticator app automatically generates codes when set up to do push notifications so a user has a backup sign-in method even if their device doesn't have connectivity. -- Latest version of Microsoft Authenticator installed on devices running iOS 12.0 or greater, or Android 6.0 or greater.
+- Latest version of Microsoft Authenticator installed on devices running iOS 12.0 or greater, or Android 8.0 or greater.
- For Android, the device that runs Microsoft Authenticator must be registered to an individual user. We're actively working to enable multiple accounts on Android. - For iOS, the device must be registered with each tenant where it's used to sign in. For example, the following device must be registered with Contoso and Wingtiptoys to allow all accounts to sign in: - balas@contoso.com
To learn about Azure AD authentication and passwordless methods, see the followi
- [Learn how passwordless authentication works](concept-authentication-passwordless.md) - [Learn about device registration](../devices/overview.md)-- [Learn about Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
+- [Learn about Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
active-directory 3 Secure Access Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/3-secure-access-plan.md
Previously updated : 12/18/2020 Last updated : 09/13/2022
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
Previously updated : 12/18/2020 Last updated : 09/13/2022
See the following articles on securing external access to resources. We recommen
8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
active-directory Active Directory Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-whatis.md
To enhance your Azure AD implementation, you can also add paid capabilities by u
>[!Note] >For the pricing options of these licenses, see [Azure Active Directory Pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). >
->Azure Active Directory Premium P1 and Premium P2 are not currently supported in China. For more information about Azure AD pricing, contact the [Azure Active Directory Forum](https://azure.microsoft.com/support/community/?product=active-directory).
+>For more information about Azure AD pricing, contact the [Azure Active Directory Forum](https://azure.microsoft.com/support/community/?product=active-directory).
- **Azure Active Directory Free.** Provides user and group management, on-premises directory synchronization, basic reports, self-service password change for cloud users, and single sign-on across Azure, Microsoft 365, and many popular SaaS apps.
active-directory Resilience B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-b2c.md
Previously updated : 11/30/2020 Last updated : 09/13/2022
active-directory Secure External Access Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md
Previously updated : 12/18/2020 Last updated : 09/13/2022
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Previously updated : 03/13/2020 Last updated : 09/13/2022
The user will need to [re-enroll](/windows/security/identity-protection/hello-fo
Windows 7 and 8.1 devices are not affected by this issue after UPN changes.
+## Mobile Application Management (MAM) app protection policies known issues and workarounds
+
+**Known Issues**
+
+Your organization may use [MAM app protection policies](https://docs.microsoft.com/mem/intune/apps/app-protection-policy) to protect corporate data in apps on end users' devices.
+MAM app protection policies are currently not resilient to UPN changes. UPN changes can break the connection between existing MAM enrollments and active users in MAM-integrated applications, resulting in undefined behavior. This could leave data in an unprotected state.
+
+**Workaround**
+
+IT admins should [issue a selective wipe](https://docs.microsoft.com/mem/intune/apps/apps-selective-wipe) to impacted users following UPN changes. This will force impacted end users to reauthenticate and reenroll with their new UPNs.
+ ## Microsoft Authenticator known issues and workarounds Your organization might require the use of the [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) to sign in and access organizational applications and data. Although a username might appear in the app, the account isn't set up to function as a verification method until the user completes the registration process.
active-directory Reference Connect Health Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-version-history.md
The Azure Active Directory team regularly updates Azure AD Connect Health with new features and functionality. This article lists the versions and features that have been released. > [!NOTE]
-> Connect Health agents are updated automatically when new version is released. Please ensure the auto-upgrade settings is enabled from Azure portal.
+> Azure AD Connect Health agents are updated automatically when a new version is released.
> Azure AD Connect Health for Sync is integrated with Azure AD Connect installation. Read more about [Azure AD Connect release history](./reference-connect-version-history.md)
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
We detect risk on workload identities across sign-in behavior and offline indica
| | | | | Azure AD threat intelligence | Offline | This risk detection indicates some activity that is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. | | Suspicious Sign-ins | Offline | This risk detection indicates sign-in properties or patterns that are unusual for this service principal. <br><br> The detection learns the baseline sign-in behavior for workload identities in your tenant between 2 and 60 days, and fires if one or more of the following unfamiliar properties appear during a later sign-in: IP address / ASN, target resource, user agent, hosting/non-hosting IP change, IP country, credential type. <br><br> Because of the programmatic nature of workload identity sign-ins, we provide a timestamp for the suspicious activity instead of flagging a specific sign-in event. <br><br> Sign-ins that are initiated after an authorized configuration change may trigger this detection. |
-| Unusual addition of credentials to an OAuth app | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-addition-of-credentials-to-an-oauth-app). This detection identifies the suspicious addition of privileged credentials to an OAuth app. This can indicate that an attacker has compromised the app, and is using it for malicious activity. |
+| Suspicious Sign-ins | Offline | This risk detection indicates sign-in properties or patterns that are unusual for this service principal. <br><br> The detection learns the baseline sign-in behavior for workload identities in your tenant between 2 and 60 days, and fires if one or more of the following unfamiliar properties appear during a later sign-in: IP address / ASN, target resource, user agent, hosting/non-hosting IP change, IP country, credential type. <br><br> Because of the programmatic nature of workload identity sign-ins, we provide a timestamp for the suspicious activity instead of flagging a specific sign-in event. <br><br> Sign-ins that are initiated after an authorized configuration change may trigger this detection. |
| Admin confirmed account compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the account's risk history (via UI or API). |
-| Leaked Credentials (public preview) | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks in the credentials in public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. |
+| Leaked Credentials | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks in the credentials in a public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. |
+| Malicious application | Offline | This detection indicates that Microsoft has disabled an application for violating our terms of service. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
+| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that might be violating our terms of service, but hasn't disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
## Identify risky workload identities
Some of the key questions to answer during your investigation include:
The [Azure Active Directory security operations guide for Applications](../fundamentals/security-operations-applications.md) provides detailed guidance on the above investigation areas.
-Once you determine if the workload identity was compromised, dismiss the account's risk or confirm the account as compromised in the Risky workload identities (preview) report. You can also select "Disable service principal" if you want to block the account from further sign-ins.
+Once you determine if the workload identity was compromised, dismiss the account's risk, or confirm the account as compromised in the Risky workload identities (preview) report. You can also select "Disable service principal" if you want to block the account from further sign-ins.
:::image type="content" source="media/concept-workload-identity-risk/confirm-compromise-or-dismiss-risk.png" alt-text="Confirm workload identity compromise or dismiss the risk in the Azure portal." lightbox="media/concept-workload-identity-risk/confirm-compromise-or-dismiss-risk.png":::
active-directory Howto Identity Protection Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-graph-api.md
Previously updated : 08/23/2022 Last updated : 09/13/2022
Microsoft Graph is the Microsoft unified API endpoint and the home of [Azure Act
To successfully complete this tutorial, make sure you have the required prerequisites: -- Microsoft Graph PowerShell SDK is installed. Follow the [installation guide](/powershell/microsoftgraph/installation?view=graph-powershell-1.0) for more info on how to do this.
+- The Microsoft Graph PowerShell SDK is installed (a minimal install sketch follows these prerequisites). For more information, see the article [Install the Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-1.0&preserve-view=true).
- Identity Protection is available in the beta version of Microsoft Graph PowerShell. Run the following command to set your profile to beta.+ ```powershell # Connect to Graph beta Endpoint Select-MgProfile -Name 'beta' ```+ - Microsoft Graph PowerShell using a global administrator role and the appropriate permissions. The IdentityRiskEvent.Read.All and IdentityRiskyUser.ReadWrite.All delegated permissions are required. To set the permissions to IdentityRiskEvent.Read.All and IdentityRiskyUser.ReadWrite.All, run:+ ```powershell Connect-MgGraph -Scopes "IdentityRiskEvent.Read.All","IdentityRiskyUser.ReadWrite.All" ```
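
If the SDK isn't installed yet, a minimal install sketch (assuming the module is pulled from the PowerShell Gallery):

```powershell
# Hedged sketch: install the Microsoft Graph PowerShell SDK for the current user
Install-Module Microsoft.Graph -Scope CurrentUser
```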
-Or, if you use app-only authentication, you may follow this [guide](/powershell/microsoftgraph/app-only?view=graph-powershell-1.0&tabs=azure-portal). To register an application with the required application permissions, prepare a certificate and run:
+If you use app-only authentication, see the article [Use app-only authentication with the Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/app-only?view=graph-powershell-1.0&tabs=azure-portal&preserve-view=true). To register an application with the required application permissions, prepare a certificate and run:
+ ```powershell Connect-MgGraph -ClientID YOUR_APP_ID -TenantId YOUR_TENANT_ID -CertificateName YOUR_CERT_SUBJECT ## Or -CertificateThumbprint instead of -CertificateName ``` ## List risky detections using PowerShell+ You can retrieve the risk detections by the properties of a risk detection in Identity Protection.+ ```powershell # List all anonymizedIPAddress risk detections Get-MgRiskDetection -Filter "RiskType eq 'anonymizedIPAddress'" | Format-Table UserDisplayName, RiskType, RiskLevel, DetectedDateTime
Get-MgRiskDetection -Filter "RiskType eq 'anonymizedIPAddress'" | Format-Table U
Get-MgRiskDetection -Filter "UserDisplayName eq 'User01' and Risklevel eq 'high'" | Format-Table UserDisplayName, RiskType, RiskLevel, DetectedDateTime ```+ ## List risky users using PowerShell+ You can retrieve the risky users and their risky histories in Identity Protection. + ```powershell # List all high risk users Get-MgRiskyUser -Filter "RiskLevel eq 'high'" | Format-Table UserDisplayName, RiskDetail, RiskLevel, RiskLastUpdatedDateTime
Get-MgRiskyUser -Filter "RiskLevel eq 'high'" | Format-Table UserDisplayName, Ri
Get-MgRiskyUserHistory -RiskyUserId 375844b0-2026-4265-b9f1-ee1708491e05| Format-Table RiskDetail, RiskLastUpdatedDateTime, @{N="RiskDetection";E={($_). Activity.RiskEventTypes}}, RiskState, UserDisplayName ```
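
As an illustrative extension, a hedged sketch (assuming the same beta profile and permissions as above) that exports the high-risk users to a CSV file for offline review:

```powershell
# Hedged sketch: export high-risk users to CSV for offline review
Get-MgRiskyUser -Filter "RiskLevel eq 'high'" |
    Select-Object UserDisplayName, RiskLevel, RiskState, RiskLastUpdatedDateTime |
    Export-Csv -Path .\high-risk-users.csv -NoTypeInformation
```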
-## Confirm users compromised using Powershell
+
+## Confirm users compromised using PowerShell
+ You can confirm users compromised and flag them as high risky users in Identity Protection.+ ```powershell # Confirm Compromised on two users Confirm-MgRiskyUserCompromised -UserIds "577e09c1-5f26-4870-81ab-6d18194cbb51","bf8ba085-af24-418a-b5b2-3fc71f969bf3" ```
-## Dimiss risky users using Powershell
+
+## Dismiss risky users using PowerShell
+ You can bulk dismiss risky users in Identity Protection.+ ```powershell # Get a list of high risky users which are more than 90 days old $riskyUsers= Get-MgRiskyUser -Filter "RiskLevel eq 'high'" | where RiskLastUpdatedDateTime -LT (Get-Date).AddDays(-90) # bulk dimmiss the risky users Invoke-MgDismissRiskyUser -UserIds $riskyUsers.Id ```+ ## Next steps - [Get started with the Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/get-started)
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-integration.md
Previously updated : 11/12/2020 Last updated : 09/13/2022
active-directory F5 Aad Password Less Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
Previously updated : 10/12/2020 Last updated : 09/13/2022
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
Previously updated : 10/12/2020 Last updated : 09/13/2022
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
Previously updated : 04/20/2021 Last updated : 09/13/2022
active-directory How To View Applied Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/how-to-view-applied-conditional-access-policies.md
++
+ Title: How to view applied conditional access policies in the Azure AD sign-in logs | Microsoft Docs
+description: Learn how to view applied conditional access policies in the Azure AD sign-in logs
+
+documentationcenter: ''
++
+editor: ''
+++++ Last updated : 09/14/2022++++++
+# How to: View applied conditional access policies in the Azure AD sign-in logs
+
+With conditional access policies, you can control how your users get access to the resources of your Azure tenant. As a tenant admin, you need to be able to determine what impact your conditional access policies have on sign-ins to your tenant, so that you can take action if necessary. The sign-in logs in Azure AD provide you with the information you need to assess the impact of your policies.
+
+
+This article explains how you can get access to the information about applied conditional access policies.
++
+## What you should know
+
+As an Azure AD administrator, you can use the sign-in logs to:
+
+- Troubleshoot sign-in problems
+- Check on feature performance
+- Evaluate the security of a tenant
+
+Some scenarios require you to get an understanding of how your conditional access policies were applied to a sign-in event. Common examples include:
+
+- **Helpdesk administrators** who need to look at applied conditional access policies to understand if a policy is the root cause of a ticket opened by a user.
+
+- **Tenant administrators** who need to verify that conditional access policies have the intended impact on the users of a tenant.
++
+You can access the sign-in logs using the Azure portal, Microsoft Graph, and PowerShell.
+++
+## Required administrator roles
++
+To see applied conditional access policies in the sign-in logs, administrators must have permissions to:
+
+- View sign-in logs
+- View conditional access policies
+
+The least privileged built-in role that grants both permissions is the **Security Reader**. As a best practice, your global administrator should assign the **Security Reader** role to the related administrator accounts.
++
+The following built-in roles grant permissions to read conditional access policies:
+
+- Global Administrator
+
+- Global Reader
+
+- Security Administrator
+
+- Security Reader
+
+- Conditional Access Administrator
++
+The following built-in roles grant permission to view sign-in logs:
+
+- Global Administrator
+
+- Security Administrator
+
+- Security Reader
+
+- Global Reader
+
+- Reports Reader
++
+## Permissions for client apps
+
+If you use a client app to pull sign-in logs from Graph, your app needs permissions to receive the **appliedConditionalAccessPolicy** resource from Graph. As a best practice, assign **Policy.Read.ConditionalAccess** because it's the least privileged permission. Any of the following permissions is sufficient for a client app to access applied CA policies in sign-in logs through Graph (a short sketch follows this list):
+
+- Policy.Read.ConditionalAccess
+
+- Policy.ReadWrite.ConditionalAccess
+
+- Policy.Read.All
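+
+As an illustration, here is a hedged sketch of pulling the applied policies through Microsoft Graph from PowerShell. It assumes the connection also holds **AuditLog.Read.All** so the sign-in records themselves can be read; the property names follow the Graph **signIn** resource.
+
+```powershell
+# Hedged sketch: query recent sign-ins through Microsoft Graph and keep only the
+# applied conditional access policy details.
+$uri = 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=5'
+$response = Invoke-MgGraphRequest -Method GET -Uri $uri
+$response.value | ForEach-Object { $_.appliedConditionalAccessPolicies }
+```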
+
+
+
+## Permissions for PowerShell
+
+Like any other client app, the Microsoft Graph PowerShell module needs client permissions to access applied conditional access policies in the sign-in logs. To successfully pull applied conditional access in the sign-in logs, you must consent to the necessary permissions with your administrator account for MS Graph PowerShell. As a best practice, consent to:
+
+- Policy.Read.ConditionalAccess
+- AuditLog.Read.All
+- Directory.Read.All
+
+These permissions are the least privileged permissions with the necessary access.
+
+To consent to the necessary permissions, use:
+
+`Connect-MgGraph -Scopes Policy.Read.ConditionalAccess, AuditLog.Read.All, Directory.Read.All`
+
+To view the sign-in logs, use:
+
+`Get-MgAuditLogSignIn`
+
+The output of this cmdlet contains an **AppliedConditionalAccessPolicies** property that shows all the conditional access policies applied to the sign-in.
+
+For more information about this cmdlet, see [Get-MgAuditLogSignIn](https://docs.microsoft.com/powershell/module/microsoft.graph.reports/get-mgauditlogsignin?view=graph-powershell-1.0).
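+
+For example, a hedged sketch that lists which policies applied to the most recent sign-ins (adjust `-Top` as needed; property names follow the Graph **signIn** resource):
+
+```powershell
+# Hedged sketch: show the conditional access policies applied to the 10 most recent sign-ins
+Get-MgAuditLogSignIn -Top 10 | ForEach-Object {
+    [PSCustomObject]@{
+        User     = $_.UserPrincipalName
+        Time     = $_.CreatedDateTime
+        Policies = ($_.AppliedConditionalAccessPolicies |
+                    ForEach-Object { "$($_.DisplayName): $($_.Result)" }) -join '; '
+    }
+}
+```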
+
+The AzureAD Graph PowerShell module doesn't support viewing applied conditional access policies; only the Microsoft Graph PowerShell module returns applied conditional access policies.
+
+## Confirming access
+
+In the **Conditional Access** tab, you see a list of conditional access policies applied to that sign-in event.
++
+To confirm that you have admin access to view applied conditional access policies in the sign-in logs, follow these steps:
+
+1. Navigate to the Azure portal.
+
+2. In the top-right corner, select your directory, and then select **Azure Active Directory** in the left navigation pane.
+
+3. In the **Monitoring** section, select **Sign-in logs**.
+
+4. Select a row in the sign-ins table to open the **Activity Details: Sign-ins** context pane.
+
+5. Select the **Conditional Access** tab in the context pane. If your screen is small, you may need to select the ellipsis […] to see all context pane tabs.
++++
+## Next steps
+
+* [Sign-ins error codes reference](./concept-sign-ins.md)
+* [Sign-ins report overview](concept-sign-ins.md)
active-directory Arena Eu Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arena-eu-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Arena EU'
+description: Learn how to configure single sign-on between Azure Active Directory and Arena EU.
++++++++ Last updated : 09/06/2022++++
+# Tutorial: Azure AD SSO integration with Arena EU
+
+In this tutorial, you'll learn how to integrate Arena EU with Azure Active Directory (Azure AD). When you integrate Arena EU with Azure AD, you can:
+
+* Control in Azure AD who has access to Arena EU.
+* Enable your users to be automatically signed-in to Arena EU with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Arena EU single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Arena EU supports **SP** and **IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Arena EU from the gallery
+
+To configure the integration of Arena EU into Azure AD, you need to add Arena EU from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Arena EU** in the search box.
+1. Select **Arena EU** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about Microsoft 365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
+
+## Configure and test Azure AD SSO for Arena EU
+
+Configure and test Azure AD SSO with Arena EU using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at Arena EU.
+
+To configure and test Azure AD SSO with Arena EU, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Arena EU SSO](#configure-arena-eu-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Arena EU test user](#create-arena-eu-test-user)** - to have a counterpart of B.Simon in Arena EU that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Arena EU** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.europe.arenaplm.com/`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Arena EU** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy the appropriate configuration URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Arena EU.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Arena EU**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Arena EU SSO
+
+To configure single sign-on on the **Arena EU** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Arena EU support team](mailto:arena-support@ptc.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create Arena EU test user
+
+In this section, you create a user called Britta Simon at Arena EU. Work with [Arena EU support team](mailto:arena-support@ptc.com) to add the users in the Arena EU platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Arena EU Sign-on URL where you can initiate the login flow.
+
+* Go to Arena EU Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Arena EU for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Arena EU tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Arena EU for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Arena EU you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Concur Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/concur-tutorial.md
Previously updated : 08/26/2021 Last updated : 09/13/2022
In this tutorial, you'll learn how to integrate Concur with Azure Active Directo
* Enable your users to be automatically signed-in to Concur with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
+> [!NOTE]
+> The guidance provided in this article does not cover the new **Manage Single Sign-On** offering that is available from SAP Concur as of mid-2019.
+> This new self-service SSO offering relies on **IdP initiated** sign-in, which the current gallery app does not allow because the **Sign on URL** is not optional.
+> The **Sign on URL** must be empty for IdP initiated sign-in via the My Apps portal to work as intended.
+> For this reason, you must start with a custom non-gallery application to set up SSO when using the **Manage Single Sign-On** feature in SAP Concur.
+ ## Prerequisites To get started, you need the following items:
active-directory Zola Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zola-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Zola'
+description: Learn how to configure single sign-on between Azure Active Directory and Zola.
++++++++ Last updated : 09/06/2022++++
+# Tutorial: Azure AD SSO integration with Zola
+
+In this tutorial, you'll learn how to integrate Zola with Azure Active Directory (Azure AD). When you integrate Zola with Azure AD, you can:
+
+* Control in Azure AD who has access to Zola.
+* Enable your users to be automatically signed-in to Zola with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Zola single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Zola supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Zola from the gallery
+
+To configure the integration of Zola into Azure AD, you need to add Zola from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zola** in the search box.
+1. Select **Zola** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about Microsoft 365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
+
+## Configure and test Azure AD SSO for Zola
+
+Configure and test Azure AD SSO with Zola using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at Zola.
+
+To configure and test Azure AD SSO with Zola, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zola SSO](#configure-zola-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Zola test user](#create-zola-test-user)** - to have a counterpart of B.Simon in Zola that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Zola** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Reply URL** textbox, type the URL:
+ `https://zola-prod.auth.eu-west-3.amazoncognito.com/saml2/idpresponse`
+
+ b. In the **Sign-on URL** textbox, type the URL:
+ `https://app.zola.fr`
+
+ c. In the **Relay State** textbox, type the URL:
+ `https://app.zola.fr/version-test/dashboard-v2`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Zola** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy the appropriate configuration URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zola.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zola**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Zola SSO
+
+To configure single sign-on on the **Zola** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Zola support team](mailto:tech@zola.fr). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create Zola test user
+
+In this section, you create a user called Britta Simon at Zola. Work with [Zola support team](mailto:tech@zola.fr) to add the users in the Zola platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Zola Sign-on URL where you can initiate the login flow.
+
+* Go to Zola Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Zola tile in the My Apps, this will redirect to Zola Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Zola you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Standards Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/standards-overview.md
Previously updated : 4/26/2021 Last updated : 09/13/2022
aks Api Server Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md
Title: API Server VNet Integration in Azure Kubernetes Service (AKS)
description: Learn how to create an Azure Kubernetes Service (AKS) cluster with API Server VNet Integration Previously updated : 06/27/2022 Last updated : 09/09/2022
An Azure Kubernetes Service (AKS) cluster with API Server VNet Integration configured projects the API server endpoint directly into a delegated subnet in the VNet where AKS is deployed. This enables network communication between the API server and the cluster nodes without any required private link or tunnel. The API server will be available behind an Internal Load Balancer VIP in the delegated subnet, which the nodes will be configured to utilize. By using API Server VNet Integration, you can ensure network traffic between your API server and your node pools remains on the private network only. -- [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## API server connectivity The control plane or API server is in an Azure Kubernetes Service (AKS)-managed Azure subscription. A customer's cluster or node pool is in the customer's subscription. The server and the virtual machines that make up the cluster nodes can communicate with each other through the API server VIP and pod IPs that are projected into the delegated subnet.
-At this time, API Server VNet integration is only supported for private clusters. Unlike standard public clusters, the agent nodes communicate directly with the private IP address of the ILB VIP for communication to the API server without using DNS. External clients needing to communicate with the cluster should follow the same private DNS setup methodology as standard [private clusters](private-clusters.md).
+API Server VNet Integration is supported for public or private clusters, and public access can be added or removed after cluster provisioning. Unlike non-VNet integrated clusters, the agent nodes always communicate directly with the private IP address of the API Server Internal Load Balancer (ILB) IP without using DNS. All node to API server traffic is kept on private networking and no tunnel is required for API server to node connectivity. Out-of-cluster clients needing to communicate with the API server can do so normally if public network access is enabled. If public network access is disabled, they should follow the same private DNS setup methodology as standard [private clusters](private-clusters.md).
## Region availability API Server VNet Integration is available in the following regions at this time: -- canary regions - eastus2 - northcentralus - westcentralus
API Server VNet Integration is available in the following regions at this time:
## Prerequisites
-* Azure CLI with aks-preview extension 0.5.67 or later.
+* Azure CLI with aks-preview extension 0.5.97 or later.
* If using ARM or the REST API, the AKS API version must be 2022-04-02-preview or later. ### Install the aks-preview CLI extension
When the feature has been registered, refresh the registration of the *Microsoft
az provider register --namespace Microsoft.ContainerService ```
-## Create an AKS Private cluster with API Server VNet Integration using Managed VNet
+## Create an AKS cluster with API Server VNet Integration using Managed VNet
-AKS clusters with API Server VNet Integration can be configured in either managed VNet or bring-your-own VNet mode.
+AKS clusters with API Server VNet Integration can be configured in either managed VNet or bring-your-own VNet mode. They can be created as either public clusters (with API server access available via a public IP) or private clusters (where the API server is only accessible via private VNet connectivity), and can be toggled between these two states without redeploying.
### Create a resource group
Create a resource group or use an existing resource group for your AKS cluster.
az group create -l westus2 -n <resource-group> ```
-### Deploy the cluster
+### Deploy a public cluster
+
+```azurecli-interactive
+az aks create -n <cluster-name> \
+ -g <resource-group> \
+ -l <location> \
+ --network-plugin azure \
+ --enable-apiserver-vnet-integration
+```
+
+The `--enable-apiserver-vnet-integration` flag configures API Server VNet integration for Managed VNet mode.
+
+### Deploy a private cluster
```azurecli-interactive az aks create -n <cluster-name> \
az aks create -n <cluster-name> \
--enable-apiserver-vnet-integration ```
-Where `--enable-private-cluster` is a mandatory flag for a private cluster, and `--enable-apiserver-vnet-integration` configures API Server VNet integration for Managed VNet mode.
+The `--enable-private-cluster` flag is mandatory for a private cluster, and `--enable-apiserver-vnet-integration` configures API Server VNet integration for Managed VNet mode.
## Create an AKS Private cluster with API Server VNet Integration using bring-your-own VNet
az role assignment create --scope <cluster-subnet-resource-id> \
--assignee <managed-identity-client-id> ```
-### Create the AKS cluster
+### Deploy a public cluster
+
+```azurecli-interactive
+az aks create -n <cluster-name> \
+ -g <resource-group> \
+ -l <location> \
+ --network-plugin azure \
+ --enable-apiserver-vnet-integration \
+ --vnet-subnet-id <cluster-subnet-resource-id> \
+ --apiserver-subnet-id <apiserver-subnet-resource-id> \
+ --assign-identity <managed-identity-resource-id>
+```
+
+### Deploy a private cluster
```azurecli-interactive az aks create -n <cluster-name> \
az aks create -n <cluster-name> \
--assign-identity <managed-identity-resource-id> ```
-## Limitations
-* Existing AKS clusters cannot be converted to API Server VNet Integration clusters at this time.
-* Only [private clusters](private-clusters.md) are supported at this time.
+## Convert an existing AKS cluster to API Server VNet Integration
+
+Existing AKS public clusters can be converted to API Server VNet Integration clusters by supplying an API server subnet that meets the requirements above (in the same VNet as the cluster nodes, permissions granted for the AKS cluster identity, and size of at least /28). This is a one-way migration; clusters cannot have API Server VNet Integration disabled after it has been enabled.
+
+This upgrade performs a node-image version upgrade on all node pools. All workloads are restarted because all nodes undergo a rolling image upgrade.
+
+> [!WARNING]
+> Converting a cluster to API Server VNet Integration will result in a change of the API Server IP address, though the hostname will remain the same. If the IP address of the API server has been configured in any firewalls or network security group rules, those rules may need to be updated.
+
+```azurecli-interactive
+az aks update -n <cluster-name> \
+ -g <resource-group> \
+ --enable-apiserver-vnet-integration \
+ --apiserver-subnet-id <apiserver-subnet-resource-id>
+```
+
+## Enable or disable private cluster mode on an existing cluster with API Server VNet Integration
+
+AKS clusters configured with API Server VNet Integration can have public network access/private cluster mode enabled or disabled without redeploying the cluster. The API server hostname will not change, but public DNS entries will be modified or removed as appropriate.
+
+### Enable private cluster mode
+
+```azurecli-interactive
+az aks update -n <cluster-name> \
+ -g <resource-group> \
+ --enable-private-cluster
+```
+
+### Disable private cluster mode
+
+```azurecli-interactive
+az aks update -n <cluster-name> \
+ -g <resource-group> \
+ --disable-private-cluster
+```
+
+## Limitations
+
+* Existing AKS private clusters cannot be converted to API Server VNet Integration clusters at this time.
* [Private Link Service][private-link-service] will not work if deployed against the API Server injected addresses at this time, so the API server cannot be exposed to other virtual networks via private link. To access the API server from outside the cluster network, utilize either [VNet peering][virtual-network-peering] or [AKS run command][command-invoke]. <!-- LINKS - internal -->
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Two types of resources are reserved:
- 6% of the next 112 GB of memory (up to 128 GB)
- 2% of any memory above 128 GB
+>[!NOTE]
+> AKS reserves an additional 2 GB for system processes on Windows nodes. This memory is not part of the calculated allocatable memory.
+
 Memory and CPU allocation rules:
 * Keep agent nodes healthy, including some hosting system pods critical to cluster health.
 * Cause the node to report less allocatable memory and CPU than it would if it were not part of a Kubernetes cluster.
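As an illustrative worked example of just the two tiers listed above (the tiers covering the first 16 GB aren't shown in this excerpt), a hypothetical 192 GB Linux node would have roughly 6.72 GB + 1.28 GB reserved from these two tiers alone:

```bash
# Illustrative arithmetic only: contribution of the two tiers above on a hypothetical 192 GB node
awk 'BEGIN { printf "Reserved by these two tiers: %.2f GB\n", 0.06*112 + 0.02*(192-128) }'
```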
aks Gpu Multi Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-multi-instance.md
Title: Multi-instance GPU Node pool (preview)
+ Title: Multi-instance GPU Node pool
description: Learn how to create a Multi-instance GPU Node pool and schedule tasks on it
Nvidia's A100 GPU can be divided into up to seven independent instances. Each inst
This article will walk you through how to create a multi-instance GPU node pool on Azure Kubernetes Service clusters and schedule tasks.

## GPU Instance Profile

GPU Instance Profiles define how a GPU will be partitioned. The following table shows the available GPU Instance Profile for the `Standard_ND96asr_v4`, the only instance type that supports the A100 GPU at this time.
az aks nodepool add \
    --name mignode \
    --resource-group myresourcegroup \
    --cluster-name migcluster \
- --node-size Standard_ND96asr_v4 \
+ --node-vm-size Standard_ND96asr_v4 \
    --gpu-instance-profile MIG1g
```
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
Title: Web Application Routing add-on on Azure Kubernetes Service (AKS) (Preview)
-description: Use the Web Application Routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS).
+ Title: Web Application Routing add-on on Azure Kubernetes Service (AKS) (Preview)
+description: Use the Web Application Routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS).
# Web Application Routing (Preview)
-The Web Application Routing solution makes it easy to access applications that are deployed to your Azure Kubernetes Service (AKS) cluster. When the solution's enabled, it configures an [Ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your AKS cluster, SSL termination, and Open Service Mesh (OSM) for E2E encryption of inter cluster communication. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints.
+The Web Application Routing add-on configures an [Ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your Azure Kubernetes Service (AKS) cluster with SSL termination through certificates stored in Azure Key Vault. Optionally, it also integrates with Open Service Mesh (OSM) for end-to-end encryption of inter cluster communication using mutual TLS (mTLS). As applications are deployed, the add-on creates publicly accessible DNS names for endpoints.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
The Web Application Routing solution makes it easy to access applications that a
- Web Application Routing currently doesn't support named ports in ingress backend.
-## Web Application Routing solution overview
+## Web Application Routing add-on overview
-The add-on deploys two components: an [nginx ingress controller][nginx], and [External-DNS][external-dns] controller.
+The add-on deploys the following components:
-- **Nginx ingress Controller**: The ingress controller exposed to the internet.-- **External-DNS controller**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone.
+- **[nginx ingress controller][nginx]**: The ingress controller exposed to the internet.
+- **[external-dns controller][external-dns]**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone. Note that this is only deployed when you pass in the `--dns-zone-resource-id` argument.
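As a quick sanity check once the add-on is enabled later in this article, you can list the components it deploys. The `app-routing-system` namespace is the one referenced later in this article; the exact deployment names may vary between add-on versions.

```bash
# List the components deployed by the add-on (names may vary by add-on version)
kubectl get deployments -n app-routing-system
```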
## Prerequisites

- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
- [Azure CLI installed](/cli/azure/install-azure-cli).
-- An Azure Key Vault containing any application certificates.
-- A DNS solution.
+- An Azure Key Vault to store certificates.
+- A DNS solution, such as [Azure DNS](/azure/dns/dns-getstarted-portal).
### Install the `aks-preview` Azure CLI extension
az extension add --name aks-preview
az extension update --name aks-preview
```
-### Install the `osm` CLI
+### Create and export a self-signed SSL certificate (if you don't already own one)
-Since Web Application Routing uses OSM internally to secure intranet communication, we need to set up the `osm` CLI. This command-line tool contains everything needed to configure and manage Open Service Mesh. The latest binaries are available on the [OSM GitHub releases page][osm-release].
-
-### Import certificate to Azure Keyvault
+If you already have an SSL certificate, you can skip this step. Otherwise, use the following commands to create a self-signed SSL certificate to use with the Ingress. Replace *`<Hostname>`* with the DNS name that you will be using.
```bash
+# Create a self-signed SSL certificate
+openssl req -new -x509 -nodes -out aks-ingress-tls.crt -keyout aks-ingress-tls.key -subj "/CN=<Hostname>" -addext "subjectAltName=DNS:<Hostname>"
+
+# Export the SSL certificate, skipping the password prompt
openssl pkcs12 -export -in aks-ingress-tls.crt -inkey aks-ingress-tls.key -out aks-ingress-tls.pfx
-# skip Password prompt
```
-```azurecli
-az keyvault certificate import --vault-name <MY_KEYVAULT> -n <KEYVAULT-CERTIFICATE-NAME> -f aks-ingress-tls.pfx
+### Create an Azure Key Vault to store the certificate
+
+If you don't already have an Azure Key Vault, use this command to create one. Azure Key Vault is used to securely store the SSL certificates that will be loaded into the Ingress.
+
+```azurecli-interactive
+az keyvault create -g <ResourceGroupName> -l <Location> -n <KeyVaultName>
```
-## Deploy Web Application Routing with the Azure CLI
+### Import certificate to Azure Key Vault
-The Web Application Routing routing add-on can be enabled with the Azure CLI when deploying an AKS cluster. To do so, use the [az aks create][az-aks-create] command with the `--enable-addons` argument. However, since Web Application routing depends on the OSM addon to secure intranet communication and the Azure Keyvault Secret Provider to retrieve certificates, we must enable them at the same time.
+Import the SSL certificate into Azure Key Vault.
-```azurecli
-az aks create --resource-group myResourceGroup --name myAKSCluster --enable-addons azure-keyvault-secrets-provider,open-service-mesh,web_application_routing --generate-ssh-keys
+```azurecli-interactive
+az keyvault certificate import --vault-name <KeyVaultName> -n <KeyVaultCertificateName> -f aks-ingress-tls.pfx
```
-You can also enable Web Application Routing on an existing AKS cluster using the [az aks enable-addons][az-aks-enable-addons] command. To enable Web Application Routing on an existing cluster, add the `--addons` parameter and specify *web_application_routing* as shown in the following example:
+### Create an Azure DNS zone
-```azurecli
-az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons azure-keyvault-secrets-provider,open-service-mesh,web_application_routing
+If you want the add-on to automatically manage DNS hostnames via Azure DNS, you need to [create an Azure DNS zone](/azure/dns/dns-getstarted-cli) if you don't have one already.
+
+```azurecli-interactive
+# Create a DNS zone
+az network dns zone create -g <ResourceGroupName> -n <ZoneName>
+```
+
+## Enable Web Application Routing via the Azure CLI
+
+The Web Application Routing add-on can be enabled with the Azure CLI when deploying an AKS cluster. To do so, use the [az aks create][az-aks-create] command with the `--enable-addons` argument. You can also enable Web Application Routing on an existing AKS cluster using the [az aks enable-addons][az-aks-enable-addons] command.
+
+# [With Open Service Mesh (OSM)](#tab/with-osm)
+
+The following additional add-ons are required:
+* **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.
+* **open-service-mesh**: If you require encrypted intra-cluster traffic (recommended) between the nginx ingress and your services, the Open Service Mesh add-on, which provides mutual TLS (mTLS), is required.
+
+> [!IMPORTANT]
+> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](/azure/aks/csi-secrets-store-driver#enable-and-disable-autorotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by periodically polling for changes, based on the rotation poll interval you define. The default rotation poll interval is 2 minutes.
+
+```azurecli-interactive
+az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,open-service-mesh,web_application_routing --generate-ssh-keys --enable-secret-rotation
+```
+
+To enable Web Application Routing on an existing cluster, add the `--addons` parameter and specify *web_application_routing* as shown in the following example:
+
+```azurecli-interactive
+az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,open-service-mesh,web_application_routing --enable-secret-rotation
+```
+
+> [!NOTE]
+> To use the add-on with Open Service Mesh, you should install the `osm` command-line tool. This command-line tool contains everything needed to configure and manage Open Service Mesh. The latest binaries are available on the [OSM GitHub releases page][osm-release].
++
+# [Without Open Service Mesh (OSM)](#tab/without-osm)
+
+The following additional add-on is required:
+* **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.
+
+> [!IMPORTANT]
+> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](/azure/aks/csi-secrets-store-driver#enable-and-disable-autorotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by periodically polling for changes, based on the rotation poll interval you define. The default rotation poll interval is 2 minutes.
+
+```azurecli-interactive
+az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,web_application_routing --generate-ssh-keys --enable-secret-rotation
+```
+
+To enable Web Application Routing on an existing cluster, add the `--addons` parameter and specify *web_application_routing* as shown in the following example:
+
+```azurecli-interactive
+az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,web_application_routing --enable-secret-rotation
+```
+++
+## Retrieve the add-on's managed identity object ID
+
+Retrieve the add-on's user-assigned managed identity object ID. It's used in the next steps to grant permissions on the Azure DNS zone and the Azure Key Vault. Provide your *`<ResourceGroupName>`*, *`<ClusterName>`*, and *`<Location>`* in the script below, which retrieves the managed identity's object ID.
+
+```azurecli-interactive
+# Provide values for your environment
+RGNAME=<ResourceGroupName>
+CLUSTERNAME=<ClusterName>
+LOCATION=<Location>
+
+# Retrieve user managed identity object ID for the add-on
+SUBSCRIPTION_ID=$(az account show --query id --output tsv)
+MANAGEDIDENTITYNAME="webapprouting-${CLUSTERNAME}"
+MCRGNAME=$(az aks show -g ${RGNAME} -n ${CLUSTERNAME} --query nodeResourceGroup -o tsv)
+USERMANAGEDIDENTITY_RESOURCEID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${MCRGNAME}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/${MANAGEDIDENTITYNAME}"
+MANAGEDIDENTITY_OBJECTID=$(az resource show --id $USERMANAGEDIDENTITY_RESOURCEID --query "properties.principalId" -o tsv | tr -d '[:space:]')
+```
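As an alternative sketch, the same object ID can usually be read directly with `az identity show` once the node resource group and identity name are known; this reuses the variables set in the script above and is an equivalent lookup, not an extra requirement.

```azurecli-interactive
# Equivalent lookup using az identity show (reuses MCRGNAME and MANAGEDIDENTITYNAME from the script above)
MANAGEDIDENTITY_OBJECTID=$(az identity show -g ${MCRGNAME} -n ${MANAGEDIDENTITYNAME} --query principalId -o tsv)
```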
+
+## Configure the add-on to use Azure DNS to manage DNS records
+
+If you're going to use Azure DNS, update the add-on to pass in the `--dns-zone-resource-id` argument.
+
+Retrieve the resource ID for the DNS zone.
+
+```azurecli-interactive
+ZONEID=$(az network dns zone show -g <ResourceGroupName> -n <ZoneName> --query "id" --output tsv)
+```
+
+Grant **DNS Zone Contributor** permissions on the DNS zone to the add-on's managed identity.
+
+```azurecli-interactive
+az role assignment create --role "DNS Zone Contributor" --assignee $MANAGEDIDENTITY_OBJECTID --scope $ZONEID
+```
+
+Update the add-on to enable the integration with Azure DNS. This will create the **external-dns** controller.
+
+```azurecli-interactive
+az aks addon update -g <ResourceGroupName> -n <ClusterName> --addon web_application_routing --dns-zone-resource-id=$ZONEID
+```
+++
+## Grant the add-on permissions to retrieve certificates from Azure Key Vault
+The Web Application Routing add-on creates a user-assigned managed identity in the cluster resource group. This managed identity needs to be granted permissions to retrieve SSL certificates from the Azure Key Vault.
+
+Grant `GET` permissions for the Web Application Routing add-on to retrieve certificates from Azure Key Vault:
+```azurecli-interactive
+az keyvault set-policy --name <KeyVaultName> --object-id $MANAGEDIDENTITY_OBJECTID --secret-permissions get --certificate-permissions get
```

## Connect to your AKS cluster
To connect to the Kubernetes cluster from your local computer, you use [kubectl]
If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the `az aks install-cli` command:
-```azurecli
+```azurecli-interactive
az aks install-cli
```
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. The following example gets credentials for the AKS cluster named *myAKSCluster* in *myResourceGroup*:
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command.
-```azurecli
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+```azurecli-interactive
+az aks get-credentials -g <ResourceGroupName> -n <ClusterName>
```
+## Deploy an application
+
+Web Application Routing uses annotations on Kubernetes Ingress objects to create the appropriate resources, create records on Azure DNS (when configured), and retrieve the SSL certificates from Azure Key Vault.
-## Create the application namespace
+# [With Open Service Mesh (OSM)](#tab/with-osm)
+
+### Create the application namespace
For the sample application environment, let's first create a namespace called `hello-web-app-routing` to run the example pods:
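(The namespace-creation command itself is elided from this diff; it matches the one shown in the without-OSM tab later in this article.)

```bash
kubectl create namespace hello-web-app-routing
```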
We also need to add the application namespace to the OSM control plane:
osm namespace add hello-web-app-routing
```
-## Grant permissions for Web Application Routing
+### Create the deployment
+
+Create a file named **deployment.yaml** and copy in the following YAML.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: aks-helloworld
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aks-helloworld
+ template:
+ metadata:
+ labels:
+ app: aks-helloworld
+ spec:
+ containers:
+ - name: aks-helloworld
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "Welcome to Azure Kubernetes Service (AKS)"
+```
+
+### Create the service
-Identify the Web Application Routing-associated managed identity within the cluster resource group `webapprouting-<CLUSTER_NAME>`. In this walkthrough, the identity is named `webapprouting-myakscluster`.
+Create a file named **service.yaml** and copy in the following YAML.
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: aks-helloworld
+spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ selector:
+ app: aks-helloworld
+```
-Copy the identity's object ID:
+### Create the ingress
+The Web Application Routing add-on creates an Ingress class on the cluster called `webapprouting.kubernetes.azure.com`. Creating an ingress object with this class activates the add-on. To obtain the certificate URI to use in the Ingress from Azure Key Vault, run the following command.
-### Grant access to Azure Key Vault
+```azurecli-interactive
+az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
+```
-Grant `GET` permissions for Web Application Routing to retrieve certificates from Azure Key Vault:
+Create a file named **ingress.yaml** and copy in the following YAML.
-```azurecli
-az keyvault set-policy --name myapp-contoso --object-id <WEB_APP_ROUTING_MSI_OBJECT_ID> --secret-permissions get --certificate-permissions get
+> [!NOTE]
+> Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. `secretName` is the name of the secret that's going to be generated to store the certificate. This is the certificate that's going to be presented in the browser.
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ annotations:
+ kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
+ kubernetes.azure.com/use-osm-mtls: "true"
+ nginx.ingress.kubernetes.io/backend-protocol: HTTPS
+ nginx.ingress.kubernetes.io/configuration-snippet: |2-
+
+ proxy_ssl_name "default.hello-web-app-routing.cluster.local";
+ nginx.ingress.kubernetes.io/proxy-ssl-secret: kube-system/osm-ingress-client-cert
+ nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+spec:
+ ingressClassName: webapprouting.kubernetes.azure.com
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+ tls:
+ - hosts:
+ - <Hostname>
+ secretName: keyvault-aks-helloworld
```
-## Use Web Application Routing
+### Create the ingress backend
-The Web Application Routing solution may only be triggered on service resources that are annotated as follows:
+Open Service Mesh (OSM) leverages its [IngressBackend API](https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/#ingressbackend-api) to configure a backend service to accept ingress traffic from trusted sources. To proxy connections to HTTPS backends, we will configure the Ingress and IngressBackend configurations to use https as the backend protocol, and have OSM issue a certificate that Nginx will use as the client certificate to proxy HTTPS connections to TLS backends. The client certificate and CA certificate will be stored in a Kubernetes secret that Nginx will use to authenticate service mesh backends. For more information, refer to [Open Service Mesh: Ingress with Kubernetes Nginx Ingress Controller](https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/).
+
+Create a file named **ingressbackend.yaml** and copy in the following YAML.
```yaml
-annotations:
- kubernetes.azure.com/ingress-host: myapp.contoso.com
- kubernetes.azure.com/tls-cert-keyvault-uri: https://<MY-KEYVAULT>.vault.azure.net/certificates/<KEYVAULT-CERTIFICATE-NAME>/<KEYVAULT-CERTIFICATE-REVISION>
+apiVersion: policy.openservicemesh.io/v1alpha1
+kind: IngressBackend
+metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+spec:
+ backends:
+ - name: aks-helloworld
+ port:
+ number: 80
+ protocol: https
+ tls:
+ skipClientCertValidation: false
+ sources:
+ - kind: Service
+ name: nginx
+ namespace: app-routing-system
+ - kind: AuthenticatedPrincipal
+ name: ingress-nginx.ingress.cluster.local
```
-These annotations in the service manifest would direct Web Application Routing to create an ingress servicing `myapp.contoso.com` connected to the keyvault `<MY-KEYVAULT>` and will retrieve the `<KEYVAULT-CERTIFICATE-NAME>` with `<KEYVAULT-CERTIFICATE-REVISION>`. To obtain the certificate URI within your keyvault run:
+### Create the resources on the cluster
-```azurecli
-az keyvault certificate show --vault-name <MY_KEYVAULT> --name <KEYVAULT-CERTIFICATE-NAME> -o jsonc | jq .id
+Use the [kubectl apply][kubectl-apply] command to create the resources.
+
+```bash
+kubectl apply -f deployment.yaml -n hello-web-app-routing
+kubectl apply -f service.yaml -n hello-web-app-routing
+kubectl apply -f ingress.yaml -n hello-web-app-routing
+kubectl apply -f ingressbackend.yaml -n hello-web-app-routing
```
-Create a file named **samples-web-app-routing.yaml** and copy in the following YAML. On line 29-31, update `<MY_HOSTNAME>` with your DNS host name and `<MY_KEYVAULT_CERTIFICATE_URI>` with the ID returned from keyvault.
+The following example output shows the created resources:
+
+```bash
+deployment.apps/aks-helloworld created
+service/aks-helloworld created
+ingress.networking.k8s.io/aks-helloworld created
+ingressbackend.policy.openservicemesh.io/aks-helloworld created
+```
+
+# [Without Open Service Mesh (OSM)](#tab/without-osm)
+
+### Create the application namespace
+
+For the sample application environment, let's first create a namespace called `hello-web-app-routing` to run the example pods:
+
+```bash
+kubectl create namespace hello-web-app-routing
+```
+
+### Create the deployment
+
+Create a file named **deployment.yaml** and copy in the following YAML.
```yaml
apiVersion: apps/v1
spec:
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
+```
+
+### Create the service
+
+Create a file named **service.yaml** and copy in the following YAML.
+
+```yaml
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
- annotations:
- kubernetes.azure.com/ingress-host: <MY_HOSTNAME>
- kubernetes.azure.com/tls-cert-keyvault-uri: <MY_KEYVAULT_CERTIFICATE_URI>
spec:
  type: ClusterIP
  ports:
spec:
    app: aks-helloworld
```
+### Create the ingress
+
+The Web Application Routing add-on creates an Ingress class on the cluster called `webapprouting.kubernetes.azure.com`. Creating an ingress object with this class activates the add-on. To obtain the certificate URI to use in the Ingress from Azure Key Vault, run the following command.
+
+```azurecli-interactive
+az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
+```
+
+Create a file named **ingress.yaml** and copy in the following YAML.
+
+> [!NOTE]
+> Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. `secretName` is the name of the secret that's going to be generated to store the certificate. This is the certificate that's going to be presented in the browser.
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ annotations:
+ kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+spec:
+ ingressClassName: webapprouting.kubernetes.azure.com
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+ tls:
+ - hosts:
+ - <Hostname>
+ secretName: keyvault-aks-helloworld
+```
+
+### Create the resources on the cluster
+
Use the [kubectl apply][kubectl-apply] command to create the resources.

```bash
-kubectl apply -f samples-web-app-routing.yaml -n hello-web-app-routing
+kubectl apply -f deployment.yaml -n hello-web-app-routing
+kubectl apply -f service.yaml -n hello-web-app-routing
+kubectl apply -f ingress.yaml -n hello-web-app-routing
```
The following example output shows the created resources:
```bash
deployment.apps/aks-helloworld created
service/aks-helloworld created
+ingress.networking.k8s.io/aks-helloworld created
```

+++

## Verify the managed ingress was created

```bash
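# The command itself is elided from this diff; a typical check, assuming the sample namespace used earlier in this article:
kubectl get ingress -n hello-web-app-routing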
NAME CLASS HOSTS ADDRES
aks-helloworld   webapprouting.kubernetes.azure.com   myapp.contoso.com   20.51.92.19   80, 443   4m
```
-## Configure external DNS to point to cluster
+## Accessing the endpoint over a DNS hostname
-Now that Web Application Routing is configured within our cluster and we have the external IP address, we can configure our DNS servers to reflect this. As soon as the DNS updates have propagated, open a web browser to *<MY_HOSTNAME>*, for example *myapp.contoso.com* and verify you see the demo application. The application may take a few minutes to appear.
+If you haven't configured Azure DNS integration, you need to configure your own DNS provider with an **A record** for the host name you configured on the ingress (for example, *myapp.contoso.com*) that points to the ingress IP address.
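If your zone happens to be hosted in Azure DNS but you're managing records manually, a minimal sketch of creating that A record follows; the record name, zone name, and IP address are placeholders for your own values.

```azurecli-interactive
# Create an A record for the ingress (placeholder values; adjust to your environment)
az network dns record-set a add-record \
    -g <ResourceGroupName> \
    -z <ZoneName> \
    -n <RecordName> \
    --ipv4-address <IngressIpAddress>
```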
+
## Remove Web Application Routing
First, remove the associated namespace:
kubectl delete namespace hello-web-app-routing ```
-The Web Application Routing add-on can be removed using the Azure CLI. To do so run the following command, substituting your AKS cluster and resource group name.
+The Web Application Routing add-on can be removed using the Azure CLI. To do so, run the following command, substituting your AKS cluster and resource group name. If the other add-ons (open-service-mesh or azure-keyvault-secrets-provider) are still needed on your cluster, be careful not to include them in the `--addons` list so that you don't accidentally disable them.
```azurecli
-az aks disable-addons --addons azure-keyvault-secrets-provider,open-service-mesh,web_application_routing --name myAKSCluster --resource-group myResourceGroup
+az aks disable-addons --addons web_application_routing --name myAKSCluster --resource-group myResourceGroup
```

When the Web Application Routing add-on is disabled, some Kubernetes resources may remain in the cluster. These resources include *configMaps* and *secrets*, and are created in the *app-routing-system* namespace. To maintain a clean cluster, you may want to remove these resources.
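A hedged way to review that leftover state, and remove it only if you're certain nothing else of yours lives in that namespace:

```bash
# Review leftover resources created by the add-on
kubectl get configmaps,secrets -n app-routing-system

# Remove them only if nothing else depends on this namespace
kubectl delete namespace app-routing-system
```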
-## Clean up
-
-Remove the associated Kubernetes objects created in this article using `kubectl delete`.
-
-```bash
-kubectl delete -f samples-web-app-routing.yaml
-```
-
-The example output shows Kubernetes objects have been removed.
-
-```bash
-$ kubectl delete -f samples-web-app-routing.yaml
-
-deployment "aks-helloworld" deleted
-service "aks-helloworld" deleted
-```
- <!-- LINKS - internal --> [az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-show]: /cli/azure/aks#az-aks-show
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
Previously updated : 12/16/2021 Last updated : 09/13/2022
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 445 | Outbound | TCP | VirtualNetwork / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
| * / 443, 12000 | Outbound | TCP | VirtualNetwork / AzureCloud | Health and Monitoring Extension (optional) | External & Internal |
| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
-| * / 25, 587, 25028 | Outbound | TCP | VirtualNetwork / Internet | Connect to SMTP Relay for sending e-mail (optional) | External & Internal |
| * / 6380 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access external Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
| * / 6381 - 6383 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access internal Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
| * / 4290 | Inbound & Outbound | UDP | VirtualNetwork / VirtualNetwork | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 445 | Outbound | TCP | VirtualNetwork / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
| * / 443, 12000 | Outbound | TCP | VirtualNetwork / AzureCloud | Health and Monitoring Extension & Dependency on Event Grid (if events notification activated) (optional) | External & Internal |
| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
-| * / 25, 587, 25028 | Outbound | TCP | VirtualNetwork / Internet | Connect to SMTP Relay for sending e-mail (optional) | External & Internal |
-| * / 6381 - 6383 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access Redis Service for [Cache](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
+| * / 6380 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access external Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
+| * / 6381 - 6383 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access internal Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
| * / 4290 | Inbound & Outbound | UDP | VirtualNetwork / VirtualNetwork | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
| * / * | Inbound | TCP | AzureLoadBalancer / VirtualNetwork | **Azure Infrastructure Load Balancer** (required for Premium SKU, optional for other SKUs) | External & Internal |
Outbound network connectivity to Azure Monitoring endpoints, which resolve under
| Azure Public | <ul><li>gcs.prod.monitoring.core.windows.net</li><li>global.prod.microsoftmetrics.com</li><li>shoebox2.prod.microsoftmetrics.com</li><li>shoebox2-red.prod.microsoftmetrics.com</li><li>shoebox2-black.prod.microsoftmetrics.com</li><li>prod3.prod.microsoftmetrics.com</li><li>prod3-black.prod.microsoftmetrics.com</li><li>prod3-red.prod.microsoftmetrics.com</li><li>gcs.prod.warm.ingestion.monitoring.azure.com</li></ul> |
| Azure Government | <ul><li>fairfax.warmpath.usgovcloudapi.net</li><li>global.prod.microsoftmetrics.com</li><li>shoebox2.prod.microsoftmetrics.com</li><li>shoebox2-red.prod.microsoftmetrics.com</li><li>shoebox2-black.prod.microsoftmetrics.com</li><li>prod3.prod.microsoftmetrics.com</li><li>prod3-black.prod.microsoftmetrics.com</li><li>prod3-red.prod.microsoftmetrics.com</li><li>prod5.prod.microsoftmetrics.com</li><li>prod5-black.prod.microsoftmetrics.com</li><li>prod5-red.prod.microsoftmetrics.com</li><li>gcs.prod.warm.ingestion.monitoring.azure.us</li></ul> |
| Azure China 21Vianet | <ul><li>mooncake.warmpath.chinacloudapi.cn</li><li>global.prod.microsoftmetrics.com</li><li>shoebox2.prod.microsoftmetrics.com</li><li>shoebox2-red.prod.microsoftmetrics.com</li><li>shoebox2-black.prod.microsoftmetrics.com</li><li>prod3.prod.microsoftmetrics.com</li><li>prod3-red.prod.microsoftmetrics.com</li><li>prod5.prod.microsoftmetrics.com</li><li>prod5-black.prod.microsoftmetrics.com</li><li>prod5-red.prod.microsoftmetrics.com</li><li>gcs.prod.warm.ingestion.monitoring.azure.cn</li></ul> |
-## SMTP relay
-
-Allow outbound network connectivity for the SMTP relay, which resolves under the host `smtpi-co1.msn.com`, `smtpi-ch1.msn.com`, `smtpi-db3.msn.com`, `smtpi-sin.msn.com`, and `ies.global.microsoft.com`
-
-> [!NOTE]
-> Only the SMTP relay provided in API Management may be used to send email from your instance.
## Developer portal CAPTCHA

Allow outbound network connectivity for the developer portal's CAPTCHA, which resolves under the hosts `client.hip.live.com` and `partner.hip.live.com`.
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
It is recommended to write data to `/home` or a [mounted azure storage path](con
::: zone-end
-By default, persistent storage is disabled on custom containers and the setting is exposed in the app settings. To enable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `true` via the [Cloud Shell](https://shell.azure.com). In Bash:
+By default, persistent storage is **enabled** on custom containers and can be disabled through app settings. To disable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `false` via the [Cloud Shell](https://shell.azure.com). In Bash:
```azurecli-interactive
-az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
+az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=false
```

In PowerShell:

```azurepowershell-interactive
-Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITES_ENABLE_APP_SERVICE_STORAGE"=true}
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITES_ENABLE_APP_SERVICE_STORAGE"=false}
```

> [!NOTE]
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
After providing your application's Health check path, you can monitor the health
If your app is only scaled to one instance and becomes unhealthy, it will not be removed from the load balancer because that would take down your application entirely. Scale out to two or more instances to get the rerouting benefit of Health check. If your app is running on a single instance, you can still use Health check's [monitoring](#monitoring) feature to keep track of your application's health.
-### Why are the Health check request not showing in my web server logs?
+### Why are the Health check requests not showing in my web server logs?
-The Health check request is sent to your site internally, so the request won't show in [the web server logs](troubleshoot-diagnostic-logs.md#enable-web-server-logging). This also means the request will have an origin of `127.0.0.1` since the request is being sent internally. You can add log statements in your Health check code to keep logs of when your Health check path is pinged.
+The Health check requests are sent to your site internally, so the request won't show in [the web server logs](troubleshoot-diagnostic-logs.md#enable-web-server-logging). This also means the request will have an origin of `127.0.0.1` since the request is being sent internally. You can add log statements in your Health check code to keep logs of when your Health check path is pinged.
### Are the Health check requests sent over HTTP or HTTPS?
app-service Overview Hosting Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-hosting-plans.md
Isolate your app into a new App Service plan when:
- You want to scale the app independently from the other apps in the existing plan.
- The app needs resources in a different geographical region.
+> [!NOTE]
+> An active slot is also classified as an active app because it, too, competes for resources on the same App Service plan.
+
This way you can allocate a new set of resources for your app and gain greater control of your apps.

## Manage an App Service plan
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
The following table shows the supported log types and descriptions:
| AppServiceEnvironmentPlatformLogs | Yes | N/A | Yes | Yes | App Service Environment: scaling, configuration changes, and status logs |
| AppServiceAuditLogs | Yes | Yes | Yes | Yes | Login activity via FTP and Kudu |
| AppServiceFileAuditLogs | Yes | Yes | TBA | TBA | File changes made to the site content; **only available for Premium tier and above** |
-| AppServiceAppLogs | ASP.NET & Tomcat <sup>1</sup> | ASP.NET & Tomcat <sup>1</sup> | Java SE & Tomcat Blessed Images <sup>2</sup> | Java SE & Tomcat Blessed Images <sup>2</sup> | Application logs |
+| AppServiceAppLogs | ASP.NET, .NET Core, & Tomcat <sup>1</sup> | ASP.NET & Tomcat <sup>1</sup> | .NET Core, Java SE & Tomcat Blessed Images <sup>2</sup> | Java SE & Tomcat Blessed Images <sup>2</sup> | Application logs |
| AppServiceIPSecAuditLogs | Yes | Yes | Yes | Yes | Requests from IP Rules |
| AppServicePlatformLogs | TBA | Yes | Yes | Yes | Container operation logs |
| AppServiceAntivirusScanAuditLogs <sup>3</sup> | Yes | Yes | Yes | Yes | [Anti-virus scan logs](https://azure.github.io/AppService/2020/12/09/AzMon-AppServiceAntivirusScanAuditLogs.html) using Microsoft Defender for Cloud; **only available for Premium tier** |
If you secure your Azure Storage account by [only allowing selected networks](..
* [How to Monitor Azure App Service](web-sites-monitor.md)
* [Troubleshooting Azure App Service in Visual Studio](troubleshoot-dotnet-visual-studio.md)
* [Analyze app Logs in HDInsight](https://gallery.technet.microsoft.com/scriptcenter/Analyses-Windows-Azure-web-0b27d413)
-* [Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
+* [Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
availability-zones Migrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-configuration.md
+
+ Title: Migrate App Configuration to a region with availability zone support
+description: Learn how to migrate Azure App Configuration to availability zone support.
+++ Last updated : 09/10/2022++++
+# Migrate App Configuration to a region with availability zone support
+
+Azure App Configuration supports Azure availability zones. This guide describes how to migrate an App Configuration store from non-availability zone support to a region with availability zone support.
+
+## Availability zone support in Azure App Configuration
+
+Azure App Configuration supports Azure availability zones to protect your application and data from single datacenter failures. All availability zone-enabled regions have a minimum of three availability zones, and each availability zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. In regions where App Configuration supports availability zones, all stores have availability zones enabled by default.
++
+For more information about availability zones, see [Regions and availability zones in Azure](../availability-zones/az-overview.md).
+
+## App Configuration store migration
+
+### If App Configuration starts supporting availability zones in your region
+
+#### Prerequisites
+
+None
+
+#### Downtime requirements
+
+None
+
+#### Process
+
+If you created a store in a region where App Configuration didn't have availability zone support at the time and it started supporting it later, you don't need to do anything to start benefiting from the availability zone support. Your store will benefit from the availability zone support that has become available for App Configuration stores in the region.
+
+### If App Configuration doesn't support availability zones in your region
+
+#### Prerequisites
+
+- An Azure subscription with the Owner or Contributor role to create a new App Configuration store
+- Owner, Contributor, or App Configuration Data Owner permissions on the App Configuration store with no availability zone support.
+
+#### Downtime requirements
+
+None
+
+#### Process
+
+If App Configuration doesn't support availability zones in your region, you'll need to move your App Configuration data from this store to another store in a region where App Configuration has availability zone support.
+
+App Configuration stores are region-specific and can't be migrated across regions. To move a store to a region where App Configuration has availability zone support, you must create a new App Configuration store in the target region, then move your App Configuration data from the source store to the new target store.
+
+The following steps walk you through the process of creating a new target store and using the import/export functionality to move the configuration data from your current store to the newly created store.
+
+1. Create a target configuration store in a [region where App Configuration has availability zone support](#availability-zone-support-in-azure-app-configuration).
+1. Transfer your configuration data using the [import function](../azure-app-configuration/howto-import-export-data.md) in your target configuration store.
+1. Optionally, delete your source configuration store if you have no use for it.
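A minimal Azure CLI sketch of steps 1 and 2, assuming placeholder store names and that the `appconfig` import source is available in your CLI version:

```azurecli-interactive
# Step 1: create the target store in a region with availability zone support
az appconfig create -g <ResourceGroupName> -n <TargetStoreName> -l <TargetRegion> --sku Standard

# Step 2: import key-values from the source store into the target store
az appconfig kv import -n <TargetStoreName> -s appconfig --src-name <SourceStoreName> --yes
```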
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Resiliency and disaster recovery](../azure-app-configuration/concept-geo-replication.md)
azure-app-configuration Enable Dynamic Configuration Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-azure-functions-csharp.md
Azure Functions support running [in-process](../azure-functions/functions-dotnet
> [!TIP]
> When you are updating multiple key-values in App Configuration, you normally don't want your application to reload configuration before all changes are made. You can register a *sentinel key* and update it only when all other configuration changes are completed. This helps to ensure the consistency of configuration in your application.
+ >
+ > You may also do following to minimize the risk of inconsistencies:
+ >
+ > * Design your application to be tolerant of transient configuration inconsistency
+ > * Warm up your application before bringing it online (serving requests)
+ > * Carry default configuration in your application and use it when configuration validation fails
+ > * Choose a configuration update strategy that minimizes the impact to your application, for example, updating during a low-traffic period.
+ ### [In-process](#tab/in-process)
Azure Functions support running [in-process](../azure-functions/functions-dotnet
In this tutorial, you enabled your Azure Functions app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Access App Configuration using managed identity](./howto-integrate-azure-managed-service-identity.md)
+> [Access App Configuration using managed identity](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Howto Backup Config Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-backup-config-store.md
ms.devlang: csharp Previously updated : 04/27/2020 Last updated : 08/24/2022
# Back up App Configuration stores automatically
-In this article, you'll learn how to set up an automatic backup of key-values from a primary Azure App Configuration store to a secondary store. The automatic backup uses the integration of Azure Event Grid with App Configuration.
+In this article, you'll learn how to set up an automatic backup of key-values from a primary Azure App Configuration store to a secondary store. The automatic backup uses the integration of Azure Event Grid with App Configuration.
-After you set up the automatic backup, App Configuration will publish events to Azure Event Grid for any changes made to key-values in a configuration store. Event Grid supports a variety of Azure services from which users can subscribe to the events emitted whenever key-values are created, updated, or deleted.
+After you set up the automatic backup, App Configuration will publish events to Azure Event Grid for any changes made to key-values in a configuration store. Event Grid supports various Azure services from which users can subscribe to the events emitted whenever key-values are created, updated, or deleted.
+
+> [!IMPORTANT]
+> Azure App Configuration added [geo-replication](./concept-geo-replication.md) support recently. You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. You can also leverage App Configuration provider libraries in your applications for [automatic failover](./howto-geo-replication.md#use-replicas). The geo-replication feature is currently under preview. It will be the recommended solution for high availability when the feature is generally available.
## Overview
-In this article, you'll use Azure Queue storage to receive events from Event Grid and use a timer-trigger of Azure Functions to process events in the queue in batches.
+In this article, you'll use Azure Queue storage to receive events from Event Grid and use a timer-trigger of Azure Functions to process events in the queue in batches.
When a function is triggered, based on the events, it will fetch the latest values of the keys that have changed from the primary App Configuration store and update the secondary store accordingly. This setup helps combine multiple changes that occur in a short period in one backup operation, which avoids excessive requests made to your App Configuration stores.
In this tutorial, you'll create a secondary store in the `centralus` region and
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)].
-## Prerequisites
+## Prerequisites
- [Visual Studio 2019](https://visualstudio.microsoft.com/vs) with the Azure development workload.
az eventgrid event-subscription create \
### Set up with ready-to-use functions

In this article, you'll work with C# functions that have the following properties:
+
- Runtime stack .NET Core 3.1
- Azure Functions runtime version 3.x
- Function triggered by timer every 10 minutes
To make it easier for you to start backing up your data, we've [tested and publi
### Build your own function

If the sample code provided earlier doesn't meet your requirements, you can also create your own function. Your function must be able to perform the following tasks in order to complete the backup:
+
- Periodically read contents of your queue to see if it contains any notifications from Event Grid. Refer to the [Storage Queue SDK](../storage/queues/storage-quickstart-queues-dotnet.md) for implementation details.
- If your queue contains [event notifications from Event Grid](./concept-app-configuration-event.md#event-schema), extract all the unique `<key, label>` information from event messages. The combination of key and label is the unique identifier for key-value changes in the primary store.
- Read all settings from the primary store. Update only those settings in the secondary store that have a corresponding event in the queue. Delete all settings from the secondary store that were present in the queue but not in the primary store. You can use the [App Configuration SDK](https://github.com/Azure/AppConfiguration#sdks) to access your configuration stores programmatically.
If the sample code provided earlier doesn't meet your requirements, you can also
To learn more about creating a function, see: [Create a function in Azure that is triggered by a timer](../azure-functions/functions-create-scheduled-function.md) and [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md).

> [!IMPORTANT]
> Use your best judgement to choose the timer schedule based on how often you make changes to your primary configuration store. Running the function too often might end up throttling requests for your store.
>

## Create function app settings

If you're using a function that we've provided, you need the following app settings in your function app:
+
- `PrimaryStoreEndpoint`: Endpoint for the primary App Configuration store. An example is `https://{primary_appconfig_name}.azconfig.io`.
- `SecondaryStoreEndpoint`: Endpoint for the secondary App Configuration store. An example is `https://{secondary_appconfig_name}.azconfig.io`.
- `StorageQueueUri`: Queue URI. An example is `https://{unique_storage_name}.queue.core.windows.net/{queue_name}`.
storageQueueUri="https://$storageName.queue.core.windows.net/$queueName"
az functionapp config appsettings set --name $functionAppName --resource-group $resourceGroupName --settings StorageQueueUri=$storageQueueUri PrimaryStoreEndpoint=$primaryStoreEndpoint SecondaryStoreEndpoint=$secondaryStoreEndpoint
```

## Grant access to the managed identity of the function app

Use the following command or the [Azure portal](../app-service/overview-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned managed identity for your function app.
az functionapp identity assign --name $functionAppName --resource-group $resourc
> To perform the required resource creation and role management, your account needs `Owner` permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, learn [how to assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).

Use the following commands or the [Azure portal](./howto-integrate-azure-managed-service-identity.md#grant-access-to-app-configuration) to grant the managed identity of your function app access to your App Configuration stores. Use these roles:
+
- Assign the `App Configuration Data Reader` role in the primary App Configuration store.
- Assign the `App Configuration Data Owner` role in the secondary App Configuration store.
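As a sketch of those two assignments with the Azure CLI, the role names come from the list above; the principal ID and store resource IDs are placeholders you'd retrieve from your own resources (for example, with `az appconfig show --query id`).

```azurecli-interactive
# Grant read access on the primary store and write access on the secondary store (placeholder IDs)
az role assignment create --role "App Configuration Data Reader" \
    --assignee <function-app-principal-id> --scope <primary-store-resource-id>

az role assignment create --role "App Configuration Data Owner" \
    --assignee <function-app-principal-id> --scope <secondary-store-resource-id>
```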
If you don't see the new setting in your secondary store:
- It's possible that Event Grid couldn't send the event notification to the queue in time. Check if your queue still contains the event notification from your primary store. If it does, trigger the backup function again.
- Check [Azure Functions logs](../azure-functions/functions-create-scheduled-function.md#test-the-function) for any errors or warnings.
- Use the [Azure portal](../azure-functions/functions-how-to-use-azure-function-app-settings.md#get-started-in-the-azure-portal) to ensure that the Azure function app contains correct values for the application settings that Azure Functions is trying to read.
-- You can also set up monitoring and alerting for Azure Functions by using [Azure Application Insights](../azure-functions/functions-monitoring.md?tabs=cmd).
+- You can also set up monitoring and alerting for Azure Functions by using [Azure Application Insights](../azure-functions/functions-monitoring.md?tabs=cmd).
## Clean up resources
-If you plan to continue working with this App Configuration and event subscription, don't clean up the resources created in this article. If you don't plan to continue, use the following command to delete the resources created in this article.
+
+If you plan to continue working with this App Configuration and event subscription, you might want to leave these resources in place. If you don't plan to continue, use the [az group delete](/cli/azure/group#az-group-delete) command, which deletes the resource group and the resources in it.
```azurecli-interactive
az group delete --name $resourceGroupName
Now that you know how to set up automatic backup of your key-values, learn more about how you can increase the geo-resiliency of your application:

-- [Resiliency and disaster recovery](concept-disaster-recovery.md)
+> [!div class="nextstepaction"]
+> [Resiliency and disaster recovery](concept-disaster-recovery.md)
azure-arc Automated Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md
In this tutorial, you learn how to:
> [!div class="checklist"] > * Deploy `arc-ci-launcher` using `kubectl`
-> * Examine integration test results in your Azure Blob Storage account
+> * Examine validation test results in your Azure Blob Storage account
## Prerequisites
git clone https://github.com/microsoft/azure_arc.git
└── overlays                         <- Overlays for specific Kubernetes Clusters
    ├── aks
    │   ├── configs
-   │   │   ├── patch.json.tmpl      <- To be converted into patch.json, patch for Data Controller control.json
+   │   │   └── patch.json.tmpl      <- To be converted into patch.json, patch for Data Controller control.json
    │   └── kustomization.yaml
    ├── kubeadm
    │   ├── configs
There are two files that need to be generated to localize the launcher to run in
A filled-out sample of the `.test.env` file, generated based on `.test.env.tmpl` is shared below with inline commentary. > [!IMPORTANT]
-> The `export VAR="value"` syntax below is not meant to be run locally to source environment variables from your machine - but is there for the launcher. The launcher mounts this `.test.env` file **as-is** as a Kubernetes `secret` using Kustomize's [`secretGenerator`](https://github.com/kubernetes-sigs/kustomize/blob/master/examples/secretGeneratorPlugin.md#secret-values-from-local-files) (Kustomize takes a file, and turns it into a Kubernetes secret). During initialization, the launcher runs bash's [`source`](https://ss64.com/bash/source.html) command, which imports the environment variables from the as-is mounted `.test.env` file into the launcher's environment.
+> The `export VAR="value"` syntax below is not meant to be run locally to source environment variables from your machine - but is there for the launcher. The launcher mounts this `.test.env` file **as-is** as a Kubernetes `secret` using Kustomize's [`secretGenerator`](https://github.com/kubernetes-sigs/kustomize/blob/master/examples/secretGeneratorPlugin.md#secret-values-from-local-files) (Kustomize takes a file, base64 encodes the entire file's content, and turns it into a Kubernetes secret). During initialization, the launcher runs bash's [`source`](https://ss64.com/bash/source.html) command, which imports the environment variables from the as-is mounted `.test.env` file into the launcher's environment.
In other words, after copy-pasting `.test.env.tmpl` and editing to create `.test.env`, the generated file should look similar to the sample below. The process to fill out the `.test.env` file is identical across operating systems and terminals.
export DOCKER_REGISTRY="mcr.microsoft.com"
export DOCKER_REPOSITORY="arcdata"
export DOCKER_TAG="v1.11.0_2022-09-13"
-# Arcdata extension version override - see detailed explanation below [2]
+# "arcdata" Azure CLI extension version override - see detailed explanation below [2]
export ARC_DATASERVICES_WHL_OVERRIDE="" # ================
The extension version to release-train (`ARC_DATASERVICES_EXTENSION_RELEASE_TRAI
> Optional: leave this empty in `.test.env` to use the pre-packaged default.
-The launcher image is pre-packaged with the latest arcdata CLI version at the time of each container image release. However, to work with older releases, it may be necessary to provide the launcher with Azure CLI Blob URL download link, to override the pre-packaged version; e.g to instruct the launcher to install version **1.4.3**, fill in:
+The launcher image is pre-packaged with the latest arcdata CLI version at the time of each container image release. However, to work with older releases and upgrade testing, it may be necessary to provide the launcher with an Azure CLI Blob URL download link to override the pre-packaged version; for example, to instruct the launcher to install version **1.4.3**, fill in:
```bash export ARC_DATASERVICES_WHL_OVERRIDE="https://azurearcdatacli.blob.core.windows.net/cli-extensions/arcdata-1.4.3-py2.py3-none-any.whl"
The CLI version to Blob URL mapping can be found [here](https://azcliextensionsy
> Mandatory: this is required for Connected Cluster Custom Location creation.
-The following steps are sourced from [Enable custom locations on your cluster](../kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster) to retrieve the Custom Location OID for your Azure AD tenant.
+The following steps are sourced from [Enable custom locations on your cluster](../kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster) to retrieve the unique Custom Location Object ID for your Azure AD tenant.
There are two approaches to obtaining the `CUSTOM_LOCATION_OID` for your Azure AD tenant.
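For example, a minimal Azure CLI sketch of the service principal approach - the application ID below is the well-known Custom Locations RP app ID documented in the linked custom-locations article (verify it there), and older CLI versions return the value under `objectId` instead of `id`:

```bash
# Look up the Custom Locations resource provider's service principal in your tenant and print its object ID
az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv
```

Paste the returned GUID into `CUSTOM_LOCATION_OID` in `.test.env`.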
To use the Azure CLI instead, see [`az storage account generate-sas`](/cli/azure
> Optional: leave this empty in `.test.env` to run all stages (equivalent to `0` or blank)
-The launcher exposes `SKIP_*` variables, to run and skip specific stages.
+The launcher exposes `SKIP_*` variables to run or skip specific stages - for example, to perform a "cleanup only" run.
-For example, a "cleanup only" run. Although the launcher is designed to clean up both in the beginning and the end of each run, it's possible for launch and/or test-failures to leave residue resources behind. To run the launcher in "cleanup only" mode, set the following variables in `.test.env`:
+Although the launcher is designed to clean up both at the beginning and the end of each run, it's possible for launch or test failures to leave residual resources behind. To run the launcher in "cleanup only" mode, set the following variables in `.test.env`:
```bash export SKIP_PRECLEAN="0" # Run cleanup
Finished sample of `patch.json`:
"op": "add", "path": "spec.storage.logs.className", "value": "default"
- },
- {
- "op": "add",
- "path": "spec.monitoring",
- "value": {
- "enableOpenTelemetry": true
- }
- }
+ }
] } ```
images:
``` > [!TIP]
-> At this point - there are **3** places we specified `imageTag`s, for clarity, here's an explanation of the different uses of each. Typically - when testing a given release, all 3 values would be the same:
-> | # | Filename | Variable name | Why? | Used by? |
-> | | | - | -- | |
-> | 1 | **`.test.env`** | `DOCKER_TAG` | Sourcing the [Bootstrapper image](https://mcr.microsoft.com/v2/arcdata/arc-bootstrapper/tags/list) as part of [extension install](https://mcr.microsoft.com/v2/arcdata/arcdataservices-extension/tags/list) | [`az k8s-extension create`](/cli/azure/k8s-extension?view=azure-cli-latest&preserve-view=true#az-k8s-extension-create) in the launcher |
-> | 2 | **`patch.json`** | `value.imageTag` | Sourcing the [Data Controller image](https://mcr.microsoft.com/v2/arcdata/arc-controller/tags/list) | [`az arcdata dc create`](/cli/azure/arcdata/dc?view=azure-cli-latest&preserve-view=true#az-arcdata-dc-create) in the launcher |
-> | 3 | **`kustomization.yaml`** | `images.newTag` | Sourcing the [launcher's image](https://mcr.microsoft.com/v2/arcdata/arc-ci-launcher/tags/list) | `kubectl apply`ing the launcher |
-
+> To recap, at this point - there are **3** places we specified `imageTag`s; for clarity, here's an explanation of the different uses of each. Typically - when testing a given release, all 3 values would be the same (aligning to a given release):
+>
+>| # | Filename | Variable name | Why? | Used by? |
+>| --- | --- | --- | --- | --- |
+>| 1 | **`.test.env`** | `DOCKER_TAG` | Sourcing the [Bootstrapper image](https://mcr.microsoft.com/v2/arcdata/arc-bootstrapper/tags/list) as part of [extension install](https://mcr.microsoft.com/v2/arcdata/arcdataservices-extension/tags/list) | [`az k8s-extension create`](/cli/azure/k8s-extension?view=azure-cli-latest&preserve-view=true#az-k8s-extension-create) in the launcher |
+>| 2 | **`patch.json`** | `value.imageTag` | Sourcing the [Data Controller image](https://mcr.microsoft.com/v2/arcdata/arc-controller/tags/list) | [`az arcdata dc create`](/cli/azure/arcdata/dc?view=azure-cli-latest&preserve-view=true#az-arcdata-dc-create) in the launcher |
+>| 3 | **`kustomization.yaml`** | `images.newTag` | Sourcing the [launcher's image](https://mcr.microsoft.com/v2/arcdata/arc-ci-launcher/tags/list) | `kubectl apply`ing the launcher |
### `kubectl apply`
Although it's best to deploy the launcher in a cluster with no pre-existing Arc
![A screenshot of the console terminal discovering Kubernetes and other resources.](media/automated-integration-testing/launcher-pre-flight.png)
-This same metadata-discovery and cleanup process is also run upon launcher exit, to leave the cluster in its pre-existing state before the launch.
+This same metadata-discovery and cleanup process is also run upon launcher exit, to leave the cluster as close as possible to its pre-existing state before the launch.
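For reference, a minimal sketch of applying the launcher with Kustomize - the overlay path, namespace, and pod name below are placeholders for whatever your customized overlay produces:

```bash
# Deploy the launcher from your customized Kustomize overlay (this also generates the .test.env secret)
kubectl apply -k <path-to-launcher-overlay>

# Locate the launcher pod, then stream its logs to follow the pre-flight checks and test stages
kubectl get pods -A | grep -i launcher
kubectl logs -n <launcher-namespace> <launcher-pod> --follow
```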
## Steps performed by launcher
At a high-level, the launcher performs the following sequence of steps:
5. Generate a unique set of environment variables based on a timestamp for the Arc Cluster name, Data Controller, and Custom Location/Namespace. Print out the environment variables, obfuscating sensitive values (e.g., Service Principal password). 6. a. For Direct Mode - onboard the Cluster to Azure Arc, then deploy the Controller via the [unified experience](/create-data-controller-direct-cli?tabs=linux#deployunified-experience) b. For Indirect Mode: deploy the Data Controller
-7. Once Data Controller is `Ready`, generate a set of Azure CLI ([`az arcdata dc debug`](/cli/azure/arcdata/dc/debug?view=azure-cli-latest&preserve-view=true)) logs and stores locally, labeled as `setup-complete` - as a baseline.
-8. Use the `TESTS_DIRECT/INDIRECT` environment variable from `.test.env` to launch a set of parallelized Sonobuoy test runs based on a space-separated array. These runs execute in a new `sonobuoy` namespace, using `arc-sb-plugin` pod that contains the integration tests.
-9. [Sonobuoy aggregator](https://sonobuoy.io/docs/v0.56.0/plugins/) accumulate the [`junit` test results](https://sonobuoy.io/docs/v0.56.0/results/) and logs per `arc-sb-plugin` test run, which are exported into the launcher
+7. Once Data Controller is `Ready`, generate a set of Azure CLI ([`az arcdata dc debug`](/cli/azure/arcdata/dc/debug?view=azure-cli-latest&preserve-view=true)) logs and store locally, labeled as `setup-complete` - as a baseline.
+8. Use the `TESTS_DIRECT/INDIRECT` environment variable from `.test.env` to launch a set of parallelized Sonobuoy test runs based on a space-separated array (`TESTS_(IN)DIRECT`). These runs execute in a new `sonobuoy` namespace, using an `arc-sb-plugin` pod that contains the Pytest validation tests.
+9. The [Sonobuoy aggregator](https://sonobuoy.io/docs/v0.56.0/plugins/) accumulates the [`junit` test results](https://sonobuoy.io/docs/v0.56.0/results/) and logs per `arc-sb-plugin` test run, which are exported into the launcher pod.
10. Return the exit code of the tests, and generate another set of debug logs - Azure CLI and `sonobuoy` - stored locally, labeled as `test-complete`.
-11. Perform a CRD metadata scan, similar to Step 3, to discover existing Arc and Arc Data Services Custom Resources. It then proceeds to destroy all Arc and Arc Data resources in reverse order from deployment, as well as CRDs, Role/ClusterRoles, PV/PVCs etc.
-12. Attempt to use the SAS token `LOGS_STORAGE_ACCOUNT_SAS` provided to create a new Storage Account container named based on `LOGS_STORAGE_CONTAINER`, in the **pre-existing** Storage Account `LOGS_STORAGE_ACCOUNT`. It uploads all local test results and logs to this storage account as a tarball (see below).
+11. Perform a CRD metadata scan, similar to Step 3, to discover existing Arc and Arc Data Services Custom Resources. Then, proceed to destroy all Arc and Arc Data resources in reverse order from deployment, as well as CRDs, Role/ClusterRoles, PV/PVCs etc.
+12. Attempt to use the provided SAS token `LOGS_STORAGE_ACCOUNT_SAS` to create a new Storage Account container named according to `LOGS_STORAGE_CONTAINER`, in the **pre-existing** Storage Account `LOGS_STORAGE_ACCOUNT`. If the container already exists, use it. Upload all local test results and logs to this storage container as a tarball (see below).
13. Exit. ## Examining Test Results
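For example, a hedged sketch of pulling the uploaded results down for local inspection, assuming the same storage values you set in `.test.env`:

```bash
# Download everything the launcher uploaded to the results container
mkdir -p ./launcher-results
az storage blob download-batch \
  --account-name "<LOGS_STORAGE_ACCOUNT>" \
  --sas-token "<LOGS_STORAGE_ACCOUNT_SAS>" \
  --source "<LOGS_STORAGE_CONTAINER>" \
  --destination ./launcher-results

# Unpack the tarball(s) to browse the junit results and debug logs (adjust the pattern to match the uploaded file names)
find ./launcher-results -name '*.tar*' -exec tar -xzf {} -C ./launcher-results \;
```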
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
New for this release:
- Arc-enabled PostgreSQL server - Removed Hyperscale/Citus scale-out capabilities. Focus will be on providing a single-node Postgres server service. Terms and concepts like `Hyperscale`, `server groups`, `worker nodes`, `coordinator nodes`, and so forth have been removed from all user experiences. **BREAKING CHANGE** - The postgresql container image is based on the [CBL-Mariner](https://github.com/microsoft/CBL-Mariner) base OS image.
- - Only PostgreSQL version 14 is supported for now. Versions 11 and 12 have been removed. Two new images are introduced: `arc-postgres-14` and `arc-postgresql-agent`. The `arc-postgres-11` and `arc-postgres-12` container images are removed going forward. If you use the container image sync script, get the latest image once this [pull request](https://github.com/microsoft/azure_arc/pull/1340) has merged.
+ - Only PostgreSQL version 14 is supported for now. Versions 11 and 12 have been removed. Two new images are introduced: `arc-postgres-14` and `arc-postgresql-agent`. The `arc-postgres-11` and `arc-postgres-12` container images are removed going forward.
- The postgresql CRD version has been updated to v1beta3. Some properties such as `workers` have been removed or changed. Update any scripts or automation you have as needed to align to the new CRD schema. **BREAKING CHANGE** - `arcdata` Azure CLI extension
azure-arc Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/support-policy.md
+
+description: "Explains the support policy for Azure Arc-enabled data services"
+ Title: "Azure Arc-enabled data services support policy"
Last updated : "08/08/2022"++++++++
+# Azure Arc-enabled data services support policy
+
+This article describes the support policies and troubleshooting boundaries for Azure Arc-enabled data services. This article specifically explains support for Azure Arc data controller and Azure Arc-enabled SQL Managed Instance.
+
+## Support policy
+- Azure Arc-enabled data services follow [Microsoft Modern Lifecycle Policy](https://support.microsoft.com/help/30881/modern-lifecycle-policy).
+- Read the original [Modern Lifecycle Policy announcement](https://support.microsoft.com/help/447912/announcing-microsoft-modern-lifecycle-policy).
+- For additional information, see [Modern Policy FAQs](https://support.microsoft.com/help/30882/modern-lifecycle-policy-faq).
+
+## Support versions
+
+Microsoft supports Azure Arc-enabled data services for one year from the date of the release of that specific version. This support applies to the data controller, and any supported data services. For example, this support also applies to Azure Arc-enabled SQL Managed Instance.
+
+For descriptions, and instructions on how to identify a version release date, see [Supported versions](upgrade-overview.md#supported-versions).
+
+Microsoft releases new versions periodically. [Version log](version-log.md) shows the history of releases.
+
+To plan updates, see [Upgrade Azure Arc-enabled data services](upgrade-overview.md).
+
+## Support by components
+
+Microsoft supports Azure Arc-enabled data services, including the data controller and the data services (like Azure Arc-enabled SQL Managed Instance) that we provide. Arc-enabled data services require a Kubernetes distribution deployed in a customer-operated environment. Microsoft does not provide support for the Kubernetes distribution. Support for the environment and hardware that hosts Kubernetes is provided by the operator of the environment and hardware.
+
+Microsoft has worked with industry partners to validate specific distributions for Azure Arc-enabled data services. You can see a list of partners and validated solutions in [Azure Arc-enabled data services Kubernetes validation](validation-program.md).
+
+Microsoft recommends that you run Azure Arc-enabled data services on a validated solution.
+
+## See also
+
+[SQL Server running in Linux containers](/troubleshoot/sql/general/support-policy-sql-server)
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
This section shows how to upgrade a directly connected data controller.
> To upgrade, delete all non-GA database instances. You can find the list of generally available > and preview services in the [Release Notes](./release-notes.md). -
+For supported upgrade paths, see [Upgrade Azure Arc-enabled data services](upgrade-overview.md).
### Authenticate
azure-arc Upgrade Data Controller Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-portal.md
This section shows how to upgrade a directly connected data controller.
> To upgrade, delete all non-GA database instances. You can find the list of generally available > and preview services in the [Release Notes](./release-notes.md).
+For supported upgrade paths, see [Upgrade Azure Arc-enabled data services](upgrade-overview.md).
### Upgrade
azure-arc Upgrade Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-cli.md
This section shows how to upgrade an indirectly connected data controller.
> To upgrade, delete all non-GA database instances. You can find the list of generally available > and preview services in the [Release Notes](./release-notes.md).
+For supported upgrade paths, see [Upgrade Azure Arc-enabled data services](upgrade-overview.md).
### Upgrade
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
This section shows how to upgrade an indirectly connected data controller.
> To upgrade, delete all non-GA database instances. You can find the list of generally available > and preview services in the [Release Notes](./release-notes.md).
+For supported upgrade paths, see [Upgrade Azure Arc-enabled data services](upgrade-overview.md).
### Upgrade
azure-arc Upgrade Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-overview.md
+
+ Title: Overview - upgrade Azure Arc-enabled data services
+description: Explains how to upgrade Azure Arc-enabled data controller, and other data services.
++++++ Last updated : 08/15/2022+++
+# Upgrade Azure Arc-enabled data services
+
+This article describes the paths and options to upgrade Azure Arc-enabled data controller and data services.
+
+## Supported versions
+
+Each release contains an image tag. Use the image tag to identify when Microsoft released the component. Microsoft supports the component for one full year after the release.
+
+Identify your current version by image tag. The image tag version scheme is:
+- `<Major>.<Minor>.<optional:revision>_<date>`.
+- `<date>` identifies the year, month, and day of the release. The pattern is: YYYY-MM-DD.
+
+For example, a complete image tag for the release in June 2022 is: `v1.8.0_2022-06-06`.
+
+The example image was released on June 6, 2022.
+
+Microsoft supports this release through June 5, 2023.
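+To check which version is currently deployed and which versions are available to move to, a minimal sketch with the `arcdata` Azure CLI extension (indirect mode shown; the namespace is a placeholder):
+
+```azurecli
+az arcdata dc list-upgrades --k8s-namespace <namespace> --use-k8s
+```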
+
+> [!NOTE]
+> The latest current branch version is always in the **Full Support** servicing phase. This support statement means that if you encounter a code defect that warrants a critical update, you must have the latest current branch version installed in order to receive a fix.
+
+## Upgrade path
+
+Upgrades are limited to the next incremental minor or major version. For example:
+
+- Supported version upgrades:
+ - 1.1 -> 1.2
+ - 1.3 -> 2.0
+- Unsupported version upgrades:
+ - 1.1 -> 1.4 Not supported because one or more minor versions are skipped.
+
+## Upgrade order
+
+Upgrade the data controller before you upgrade any data service. Azure Arc-enabled SQL Managed Instance is an example of a data service.
+
+A data controller may be up to one version ahead of a data service. A data service may not be ahead of the data controller, or more than one version behind it.
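+For example, a hedged sketch of that order with the `arcdata` Azure CLI extension - indirect mode shown, and the namespace, instance name, and image tag are placeholders:
+
+```azurecli
+# 1. Upgrade the data controller first
+az arcdata dc upgrade --desired-version <image-tag> --k8s-namespace <namespace> --use-k8s
+
+# 2. Then upgrade each data service, for example an Azure Arc-enabled SQL Managed Instance
+az sql mi-arc upgrade --name <instance-name> --desired-version <image-tag> --k8s-namespace <namespace> --use-k8s
+```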
+
+The following list displays supported and unsupported configurations, based on image tag.
+
+- Supported configurations:
+ - Data controller and data service at same version:
+ - Data controller: `v1.9.0_2022-07-12`
+ - Data service: `v1.9.0_2022-07-12`
+ - Data controller ahead of data service by one version:
+ - Data controller: `v1.9.0_2022-07-12`
+ - Data service: `v1.8.0_2022-06-14`
+
+- Unsupported configurations:
+ - Data controller behind data service:
+ - Data controller: `v1.8.0_2022-06-14`
+ - Data service: `v1.9.0_2022-07-12`
+ - Data controller ahead of data service by more than one version:
+ - Data controller: `v1.9.0_2022-07-12`
+ - Data service: `v1.6.0_2022-05-02`
+
+## Schedule maintenance
+
+The upgrade will cause a service interruption (downtime).
+
+The data controller upgrade does not cause application downtime.
+
+The amount of time to upgrade a data service depends on its service tier:
+
+- General Purpose: A single replica is not available during the upgrade.
+- Business Critical: A SQL managed instance incurs a brief service interruption (downtime) once during an upgrade. After the data controller upgrades a secondary replica, the service fails over to an upgraded replica. The controller then upgrades the previous primary replica.
+
+> [!TIP]
+> Upgrade the data services during scheduled maintenance time.
+
+### Automatic upgrades
+
+When a SQL managed instance `desiredVersion` is set to `auto`, the data controller will automatically upgrade the managed instance.
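+As an illustration only - the exact CRD field can vary by release, so treat the path below as a hypothetical placeholder and confirm it against your cluster's `sqlmanagedinstances` schema first - setting the desired version to `auto` could look like:
+
+```bash
+# Confirm the field path for your release before patching (sqlmi is the commonly used short name for the CRD)
+kubectl explain sqlmi.spec --recursive | less
+
+# Hypothetical field path shown for illustration
+kubectl patch sqlmi <instance-name> -n <namespace> --type merge -p '{"spec":{"update":{"desiredVersion":"auto"}}}'
+```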
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|
-|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2|1.0.0_2021-07-30 |15.0.2148.140|Not validated|
+|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2|v1.10.0_2022-08-09 |16.0.312.4243|Not validated|
### Nutanix
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 09/13/2022 Last updated : 09/14/2022
This page is updated monthly, so revisit it regularly. If you're looking for ite
- The default login flow for Windows computers now loads the local web browser to authenticate with Azure Active Directory instead of providing a device code. You can use the `--use-device-code` flag to return to the old behavior or [provide service principal credentials](onboard-service-principal.md) for a non-interactive authentication experience. - If the resource group provided to `azcmagent connect` does not exist, the agent will try to create it and continue connecting the server to Azure.-- Added support for Ubuntu 22.04 - Added `--no-color` flag for all azcmagent commands to suppress the use of colors in terminals that do not support ANSI codes. ### Fixed
azure-arc Concept Log Analytics Extension Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/concept-log-analytics-extension-deployment.md
Title: Deploy Log Analytics agent on Arc-enabled servers
-description: This article reviews the different methods to deploy the Log Analytics agent on Windows and Linux-based machines registered with Azure Arc-enabled servers in your local datacenter or other cloud environment.
Previously updated : 3/18/2022
+ Title: Deploy Azure Monitor agent on Arc-enabled servers
+description: This article reviews the different methods to deploy the Azure Monitor agent on Windows and Linux-based machines registered with Azure Arc-enabled servers in your local datacenter or other cloud environment.
Last updated : 09/14/2022
-# Understand deployment options for the Log Analytics agent on Azure Arc-enabled servers
+# Understand deployment options for the Azure Monitor agent on Azure Arc-enabled servers
-Azure Monitor supports multiple methods to install the Log Analytics agent and connect your machine or server registered with Azure Arc-enabled servers to the service. Azure Arc-enabled servers support the Azure VM extension framework, which provides post-deployment configuration and automation tasks, enabling you to simplify management of your hybrid machines like you can with Azure VMs.
+Azure Monitor supports multiple methods to install the Azure Monitor agent and connect your machine or server registered with Azure Arc-enabled servers to the service. Azure Arc-enabled servers support the Azure VM extension framework, which provides post-deployment configuration and automation tasks, enabling you to simplify management of your hybrid machines like you can with Azure VMs.
-The Log Analytics agent is required if you want to:
+The Azure Monitor agent is required if you want to:
* Monitor the operating system and any workloads running on the machine or server using [VM insights](../../azure-monitor/vm/vminsights-overview.md). * Analyze and alert using [Azure Monitor](../../azure-monitor/overview.md). * Perform security monitoring in Azure by using [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md).
-* Manage operating system updates by using [Azure Automation Update Management](../../automation/update-management/overview.md).
* Collect inventory and track changes by using [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md).
-* Run Automation runbooks directly on the machine and against resources in the environment by using an [Azure Automation Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md).
-This article reviews the deployment methods for the Log Analytics agent VM extension, across multiple production physical servers or virtual machines in your environment, to help you determine which works best for your organization. If you are interested in the new Azure Monitor agent and want to see a detailed comparison, see [Azure Monitor agents overview](../../azure-monitor/agents/agents-overview.md).
+This article reviews the deployment methods for the Azure Monitor agent VM extension, across multiple production physical servers or virtual machines in your environment, to help you determine which works best for your organization. For a detailed comparison of the available Azure Monitor agents, see [Azure Monitor agents overview](../../azure-monitor/agents/agents-overview.md).
## Installation options
This method supports managing the installation, management, and removal of VM ex
### Use Azure Policy
-You can use Azure Policy to deploy the Log Analytics agent VM extension at-scale to machines in your environment, and maintain configuration compliance. This is accomplished by using either the **Configure Log Analytics extension on Azure Arc enabled Linux servers** / **Configure Log Analytics extension on Azure Arc enabled Windows servers** policy definition, or the **Enable Azure Monitor for VMs** policy initiative.
+You can use Azure Policy to deploy the Azure Monitor agent VM extension at-scale to machines in your environment, and maintain configuration compliance. This is accomplished by using either the [**Configure Linux Arc-enabled machines to run Azure Monitor Agent**](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F845857af-0333-4c5d-bbbc-6076697da122) or the [**Configure Windows Arc-enabled machines to run Azure Monitor Agent**](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94f686d6-9a24-4e19-91f1-de937dc171a4) policy definition.
Azure Policy includes several prebuilt definitions related to Azure Monitor. For a complete list of the built-in policies in the **Monitoring** category, see [Azure Policy built-in definitions for Azure Monitor](../../azure-monitor/policy-reference.md).
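For example, a hedged sketch of assigning the built-in Linux definition at resource group scope with the Azure CLI - the definition GUID is the one from the link above, the scope and region are placeholders, and identity flags vary by CLI version (older releases use `--assign-identity` instead of `--mi-system-assigned`):

```azurecli
az policy assignment create \
  --name "deploy-ama-linux-arc" \
  --display-name "Configure Linux Arc-enabled machines to run Azure Monitor Agent" \
  --policy "845857af-0333-4c5d-bbbc-6076697da122" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
  --mi-system-assigned \
  --location "<region>"
```

A managed identity is included because the definition is a DeployIfNotExists policy that installs the extension on non-compliant machines.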
Azure Policy includes several prebuilt definitions related to Azure Monitor. For
* If the VM extension is removed, the next policy evaluation reinstalls it. * Identifies and installs the VM extension when a new Azure Arc-enabled server is registered with Azure.
-* Only supports specifying a single workspace to report to. Requires using PowerShell or the Azure CLI to configure the Log Analytics Windows agent VM extension to report to up to four workspaces.
#### Disadvantages
-* The **Configure Log Analytics extension on Azure Arc enabled** *operating system* **servers** policy only installs the Log Analytics VM extension and configures the agent to report to a specified Log Analytics workspace. If you want VM insights to monitor the operating system performance, and map running processes and dependencies on other resources, apply the policy initiative **Enable Azure Monitor for VMs**. It installs and configures both the Log Analytics VM extension and the Dependency agent VM extension, which are required.
+* The **Configure** *operating system* **Arc-enabled machines to run Azure Monitor Agent** policy only installs the Azure Monitor agent extension and configures the agent to report to a specified Log Analytics workspace.
* Standard compliance evaluation cycle is once every 24 hours. An evaluation scan for a subscription or a resource group can be started with Azure CLI, Azure PowerShell, a call to the REST API, or by using the Azure Policy Compliance Scan GitHub Action. For more information, see [Evaluation triggers](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers). ### Use Azure Automation
-The process automation operating environment in Azure Automation and its support for PowerShell and Python runbooks can help you automate the deployment of the Log Analytics agent VM extension at scale to machines in your environment.
+The process automation operating environment in Azure Automation and its support for PowerShell and Python runbooks can help you automate the deployment of the Azure Monitor agent VM extension at scale to machines in your environment.
#### Advantages
The process automation operating environment in Azure Automation and its support
## Next steps
-* To manage operating system updates using Azure Automation Update Management, see [Enable from an Automation account](../../automation/update-management/enable-from-automation-account.md) and then follow the steps to enable machines reporting to the workspace.
-
-* To track changes using Azure Automation Change Tracking and Inventory, see [Enable from an Automation account](../../automation/change-tracking/enable-from-automation-account.md) and then follow the steps to enable machines reporting to the workspace.
-
-* Use the Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on servers or machines registered with Arc-enabled servers. See the [Deploy Hybrid Runbook Worker VM extension](../../automation/extension-based-hybrid-runbook-worker-install.md) article.
- * To start collecting security-related events with Microsoft Sentinel, see [onboard to Microsoft Sentinel](scenario-onboard-azure-sentinel.md), or to collect with Microsoft Defender for Cloud, see [onboard to Microsoft Defender for Cloud](../../security-center/quickstart-onboard-machines.md). * Read the VM insights [Monitor performance](../../azure-monitor/vm/vminsights-performance.md) and [Map dependencies](../../azure-monitor/vm/vminsights-maps.md) articles to see how well your machine is performing and view discovered application components.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
Azure Arc-enabled servers lets you manage Windows and Linux physical servers and
When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID enabling the machine to be included in a resource group.
-To connect hybrid machines, you install the [Azure Connected Machine agent](agent-overview.md) on each machine. This agent does not deliver any other functionality, and it doesn't replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) / [Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-overview.md). The Log Analytics agent or Azure Monitor Agent for Windows and Linux is required in order to:
+To connect hybrid machines to Azure, you install the [Azure Connected Machine agent](agent-overview.md) on each machine. This agent does not replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) / [Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-overview.md). The Log Analytics agent or Azure Monitor Agent for Windows and Linux is required in order to:
* Proactively monitor the OS and workloads running on the machine * Manage it using Automation runbooks or solutions like Update Management
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 09/09/2022 Last updated : 09/14/2022
The following versions of the Windows and Linux operating system are officially
* Azure Editions are supported when running as a virtual machine on Azure Stack HCI * Windows IoT Enterprise * Azure Stack HCI
-* Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS
+* Ubuntu 16.04, 18.04, and 20.04 LTS
* Debian 10 * CentOS Linux 7 and 8 * SUSE Linux Enterprise Server (SLES) 12 and 15
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
description: In this QuickStart, you will learn how to use the helper script to
Previously updated : 05/25/2022 Last updated : 09/14/2022
This QuickStart shows you how to connect your SCVMM management server to Azure A
1. Under **Region**, select an Azure location where you want to store the resource metadata. The currently supported regions are **East US** and **West Europe**. 1. Provide a name for **Custom location**. This is the name that you'll see when you deploy virtual machines. Name it for the datacenter or the physical location of your datacenter. For example: *contoso-nyc-dc.*+
+ >[!Note]
+ >If you are using an existing resource bridge created for a different provider (HCI/VMware), ensure that you create a separate custom location for each provider.
+ 1. Leave the option **Use the same subscription and resource group as your resource bridge** selected. 1. Provide a name for your **SCVMM management server instance** in Azure. For example: *contoso-nyc-scvmm.* 1. Select **Next: Download and run script**.
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 9/12/2022 Last updated : 9/16/2022
In addition to the generally available data collection listed above, Azure Monit
| Azure service | Current support | Other extensions installed | More information | | : | : | : | : |
-| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](/azure/defender-for-cloud/release-notes#auto-deployment-of-azure-monitor-agent-preview) |
+| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](/azure/defender-for-cloud/auto-deploy-azure-monitoring-agent) |
| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](/azure/sentinel/connect-dns-ama)</li><li>Linux Syslog CEF: Preview</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | <ul><li>[Sign-up link for Linux Syslog CEF](https://aka.ms/amadcr-privatepreviews)</li><li>No sign-up needed for Windows Forwarding Event (WEF), Windows Security Events and Windows DNS events</li></ul> | | [Change Tracking](../../automation/change-tracking/overview.md) | Change Tracking: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) | | [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
azure-monitor Resource Manager Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-agent.md
param vmName string
param location string resource linuxAgent 'Microsoft.HybridCompute/machines/extensions@2021-12-10-preview'= {
- name: '${vmName}/AzureMonitorWindowsAgent'
+ name: '${vmName}/AzureMonitorLinuxAgent'
location: location properties: { publisher: 'Microsoft.Azure.Monitor'
- type: 'AzureMonitorWindowsAgent'
+ type: 'AzureMonitorLinuxAgent'
autoUpgradeMinorVersion: true } }
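A hedged sketch of deploying the Bicep sample above to the resource group that contains the Arc-enabled machine - the file name and parameter values are placeholders:

```azurecli
az deployment group create \
  --resource-group <resource-group> \
  --template-file azure-monitor-linux-agent.bicep \
  --parameters vmName=<arc-machine-name> location=<region>
```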
resource linuxAgent 'Microsoft.HybridCompute/machines/extensions@2021-12-10-prev
{ "type": "Microsoft.HybridCompute/machines/extensions", "apiVersion": "2021-12-10-preview",
- "name": "[format('{0}/AzureMonitorWindowsAgent', parameters('vmName'))]",
+ "name": "[format('{0}/AzureMonitorLinuxAgent', parameters('vmName'))]",
"location": "[parameters('location')]", "properties": { "publisher": "Microsoft.Azure.Monitor",
- "type": "AzureMonitorWindowsAgent",
+ "type": "AzureMonitorLinuxAgent",
"autoUpgradeMinorVersion": true } }
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
And then defining these elements for the resulting alert actions using:
|Field |Description | ||| |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|
- |Automatically resolve alerts (preview) |Select to resolve the alert when the condition isn't met anymore.|
+ |Automatically resolve alerts (preview) |Select to make the alert stateful. The alert is resolved when the condition isn't met anymore.|
1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to include additional information in the alert payload. In the **Custom properties** section, add the property **Name** and **Value** for each custom property you want included in the payload.
And then defining these elements for the resulting alert actions using:
|Field |Description | ||| |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|
- |Automatically resolve alerts (preview) |Select to resolve the alert when the condition isn't met anymore.|
+ |Automatically resolve alerts (preview) |Select to make the alert stateful. The alert is resolved when the condition isn't met anymore.|
|Mute actions |Select to set a period of time to wait before alert actions are triggered again. If you select this checkbox, the **Mute actions for** field appears to select the amount of time to wait after an alert is fired before triggering actions again.| |Check workspace linked storage|Select if logs workspace linked storage for alerts is configured. If no linked storage is configured, the rule isn't created.|
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Previously updated : 5/18/2022 Last updated : 9/14/2022 ms.reviwer: harelbr
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.DataShare/accounts | Yes | No | [Data Shares](../essentials/metrics-supported.md#microsoftdatashareaccounts) | |Microsoft.DBforMariaDB/servers | No | No | [DB for MariaDB](../essentials/metrics-supported.md#microsoftdbformariadbservers) | |Microsoft.DBforMySQL/servers | No | No |[DB for MySQL](../essentials/metrics-supported.md#microsoftdbformysqlservers)|
-|Microsoft.DBforPostgreSQL/flexibleServers | Yes | No | [DB for PostgreSQL (flexible servers)](../essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers)|
+|Microsoft.DBforPostgreSQL/flexibleServers | Yes | Yes | [DB for PostgreSQL (flexible servers)](../essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers)|
|Microsoft.DBforPostgreSQL/serverGroupsv2 | Yes | No | DB for PostgreSQL (hyperscale) | |Microsoft.DBforPostgreSQL/servers | No | No | [DB for PostgreSQL](../essentials/metrics-supported.md#microsoftdbforpostgresqlservers)| |Microsoft.DBforPostgreSQL/serversv2 | No | No | [DB for PostgreSQL V2](../essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2)|
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
description: This article explains the different types of Azure Monitor alerts a
Previously updated : 04/26/2022 Last updated : 09/14/2022
The platform metrics for these services in the following Azure clouds are suppor
| Azure Cache for Redis | Yes | Yes | Yes | | Azure Stack Edge devices | Yes | Yes | Yes | | Recovery Services vaults | Yes | No | No |
+| Azure Database for PostgreSQL - Flexible Servers | Yes | Yes | Yes |
> [!NOTE] > Multi-resource metric alerts are not supported for the following scenarios:
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
public void Initialize(ITelemetry telemetry)
} } ```+
+#### Control the client IP address used for geolocation mappings
+
+The following sample initializer sets the client IP address that is used for geolocation mapping during telemetry ingestion, instead of the client socket IP address.
+
+```csharp
+public void Initialize(ITelemetry telemetry)
+{
+ var request = telemetry as RequestTelemetry;
+ // Initialize returns void; skip telemetry items that aren't requests.
+ if (request == null) return;
+ request.Context.Location.Ip = "{client ip address}"; // Could utilize System.Web.HttpContext.Current.Request.UserHostAddress;
+}
+```
+ ## ITelemetryProcessor and ITelemetryInitializer What's the difference between telemetry processors and telemetry initializers?
azure-monitor Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/service-map.md
Service Map automatically discovers application components on Windows and Linux systems and maps the communication between services. With Service Map, you can view your servers in the way that you think of them: as interconnected systems that deliver critical services. Service Map shows connections between servers, processes, inbound and outbound connection latency, and ports across any TCP-connected architecture, with no configuration required other than the installation of an agent.
+> [!IMPORTANT]
+> Service Map will be retired on 30 September 2025. To monitor connections between servers, processes, inbound and outbound connection latency, and ports across any TCP-connected architecture, make sure to [migrate to Azure Monitor VM insights](../vm/vminsights-migrate-from-service-map.md) before this date.
+ This article describes the details of onboarding and using Service Map. The prerequisites of the solution are the following: * A Log Analytics workspace in a [supported region](vminsights-configure-workspace.md#supported-regions).
azure-monitor Vminsights Migrate From Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-migrate-from-service-map.md
+
+ Title: Migrate from Service Map to Azure Monitor VM insights
+description: Migrate from Service Map to Azure Monitor VM insights to monitor the performance and health of virtual machines and scale sets, including their running processes and dependencies on other resources.
+++ Last updated : 09/13/2022++++
+# Migrate from Service Map to Azure Monitor VM insights
+
+[Azure Monitor VM insights](../vm/vminsights-overview.md) monitors the performance and health of your virtual machines and virtual machine scale sets, including their running processes and dependencies on other resources. This article explains how to migrate from [Service Map](../vm/service-map.md) to Azure Monitor VM insights, which provides a map feature similar to Service Map, along with other benefits.
+
+> [!NOTE]
+> Service Map will be retired on 30 September 2025. Be sure to migrate to VM insights before this date to continue monitoring the communication between services.
+
+The map feature of VM insights visualizes virtual machine dependencies by discovering running processes that have active network connections between servers, inbound and outbound connection latency, or ports across any TCP-connected architecture over a specified time range. For more information about the benefits of the VM insights map feature over Service Map, see [How is VM insights Map feature different from Service Map?](/azure/azure-monitor/faq#how-is-vm-insights-map-feature-different-from-service-map-).
+
+## Enable VM insights using Azure Monitor Agent
+
+VM insights uses [Azure Monitor Agent](../agents/agents-overview.md), which replaces the Log Analytics agent used by Service map. For more information about how to enable VM insights for Azure virtual machines and on-premises machines, see [How to enable VM insights using Azure Monitor Agent for Azure virtual machines](../vm/vminsights-enable-overview.md#agents).
+
+If you have on-premises machines, we recommend enabling [Azure Arc for servers](../../azure-arc/servers/overview.md) so that you can enable the machines for VM insights by using processes similar to those for Azure virtual machines.
+
+VM insights also collects per-machine performance counters, which provide visibility into the health of your virtual machines. Azure Monitor Logs ingests these performance counters every minute, which slightly increases monitoring costs per machine. [Learn more about the pricing](../vm/vminsights-overview.md#pricing).
++
+## Remove the Service Map solution from the workspace
+
+Once you migrate to VM insights, remove the Service Map solution from the workspace to avoid data duplication and incurring extra costs:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. In the search bar, type *Log Analytics workspaces*. As you begin typing, the list filters suggestions based on your input.
+1. Select **Log Analytics workspaces**.
+1. From your list of Log Analytics workspaces, select the workspace you chose when you enabled Service Map.
+1. On the left, select **Solutions**.
+1. From the list of solutions, select **ServiceMap(workspace name)**.
+1. On the **Overview** page for the solution, select **Delete**.
+1. When prompted to confirm, select **Yes**.
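+Alternatively, a hedged sketch of removing the solution with the Azure CLI - this assumes the `log-analytics-solution` CLI extension is installed, and the workspace and resource group names are placeholders:
+
+```azurecli
+az monitor log-analytics solution delete \
+  --name "ServiceMap(<workspace-name>)" \
+  --resource-group <workspace-resource-group>
+```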
+
+> [!IMPORTANT]
+> You won't be able to onboard new subscriptions to service map after 31 August 2024. The Service Map UI won't be available after 30 September 2025.
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references to SAP on Azure solutions.
### SAP AnyDB * [SAP System on Oracle Database on Azure - Azure Architecture Center](/azure/architecture/example-scenario/apps/sap-production)
-* [Oracle Azure Virtual Machines DBMS deployment for SAP workload - Azure Virtual Machines](../virtual-machines/workloads/sap/dbms_guide_oracle.md#oracle-configuration-guidelines-for-sap-installations-in-azure-vms-on-linux)
+* [Oracle Azure Virtual Machines DBMS deployment for SAP workload - Azure Virtual Machines](../virtual-machines/workloads/sap/dbms_guide_oracle.md)
* [Deploy SAP AnyDB (Oracle 19c) with Azure NetApp Files](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-anydb-oracle-19c-with-azure-netapp-files/ba-p/2064043) * [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408) * [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload using Azure NetApp Files](../virtual-machines/workloads/sap/dbms_guide_ibm.md#using-azure-netapp-files)
azure-netapp-files Backup Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-search.md
You can display and search backups at the volume level:
2. Navigate to **Backups** to display backups for the volume. The **Type** column shows whether the backup is generated by a *Scheduled* (policy-based) or a *manual* backup.
-3. In the **Search Backups** box, enter the backup name that you want to search for.
+3. In the **Search Backups** field, enter the backup name that you want to search for.
A partial search is supported; you don't have to specify the entire backup name. The search filters the backups based on the search string.
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
na Previously updated : 08/02/2022 Last updated : 09/14/2022 # Cross-region replication of Azure NetApp Files volumes
-The Azure NetApp Files replication functionality provides data protection through cross-region volume replication. You can asynchronously replicate data from an Azure NetApp Files volume (source) in one region to another Azure NetApp Files volume (destination) in another region. This capability enables you to fail over your critical application if a region-wide outage or disaster happens.
+The Azure NetApp Files replication functionality provides data protection through cross-region volume replication. You can asynchronously replicate data from an Azure NetApp Files volume (source) in one region to another Azure NetApp Files volume (destination) in another region. This capability enables you to fail over your critical application if a region-wide outage or disaster happens.
## <a name="supported-region-pairs"></a>Supported cross-region replication pairs
Azure NetApp Files volume replication is supported between various [Azure region
| North America | West US 2 | East US | | US Government | US Gov Arizona | US Gov Virginia |
+>[!NOTE]
+>There may be a discrepancy in the size of snapshots between source and destination. This discrepancy is expected. To learn more about snapshots, refer to [How Azure NetApp Files snapshots work](snapshots-introduction.md).
+ ## Service-level objectives Recovery Point Objective (RPO) indicates the point in time to which data can be recovered. The RPO target is typically less than twice the replication schedule, but it can vary. In some cases, it can go beyond the target RPO based on factors such as the total dataset size, the change rate, the percentage of data overwrites, and the replication bandwidth available for transfer.
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 07/18/2022 Last updated : 09/14/2022 # Bicep CLI commands
The command creates a file named _main.bicep_ in the same directory as _main.jso
For more information about using this command, see [Decompiling ARM template JSON to Bicep](decompile.md).
+## generate-params
+
+The `generate-params` command builds a *.parameters.json* file from the given Bicep file, and updates it if a *parameters.json* file already exists.
+
+```azurecli
+az bicep generate-params --file main.bicep
+```
+
+The command creates a parameter file named _main.parameters.json_. The parameter file only contains the parameters without default values configured in the Bicep file.
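+After you fill in the generated values, the file can be passed to a deployment. For example, a minimal sketch with a placeholder resource group:
+
+```azurecli
+az deployment group create \
+  --resource-group <resource-group> \
+  --template-file main.bicep \
+  --parameters @main.parameters.json
+```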
+ ## install The `install` command adds the Bicep CLI to your local environment. For more information, see [Install Bicep tools](install.md). This command is only available through Azure CLI.
azure-resource-manager Quickstart Create Bicep Use Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio.md
This quickstart guides you through the steps to create a [Bicep file](overview.m
- Azure Subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. - Visual Studio version 17.3.0 preview 3 or newer. See [Visual Studio Preview](https://visualstudio.microsoft.com/vs/preview/).-- Visual Studio Bicep extension. See [Visual Studio Marketplace](https://marketplace.visualstudio.com/).
+- Visual Studio Bicep extension. See [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.visualstudiobicep).
- Bicep file deployment requires either the latest [Azure CLI](/cli/azure/) or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az). ## Add resource snippet
azure-resource-manager Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md
Title: Create Bicep files by using Visual Studio Code description: Describes how to create Bicep files by using Visual Studio Code Previously updated : 06/30/2022 Last updated : 09/14/2022 # Create Bicep files by using Visual Studio Code
Open or create a Bicep file in VS Code, select the **View** menu and then select
These commands include: -- [Build Bicep File](#build-bicep-file)
+- [Build ARM Template](#build-arm-template)
- [Create Bicep Configuration File](#create-bicep-configuration-file) - [Deploy Bicep File](#deploy-bicep-file) - [Generate Parameters File](#generate-parameters-file) - [Insert Resource](#insert-resource) - [Open Bicep Visualizer](#open-bicep-visualizer) - [Open Bicep Visualizer to the side](#open-bicep-visualizer)-- [Restore Bicep File (Force)](#restore-bicep-file)
+- [Restore Bicep Modules (Force)](#restore-bicep-modules)
These commands are also shown in the context menu when you right-click a Bicep file: :::image type="content" source="./media/visual-studio-code/visual-studio-code-bicep-context-menu.png" alt-text="Screenshot of Visual Studio Code Bicep commands in the context menu.":::
-### Build Bicep file
+### Build ARM template
The `build` command converts a Bicep file to an Azure Resource Manager template (ARM template). The new JSON template is stored in the same folder with the same file name. If a file with the same file name exists, it overwrites the old file. For more information, see [Bicep CLI commands](./bicep-cli.md#bicep-cli-commands). ### Create Bicep configuration file
-The [Bicep configuration file (bicepconfig.json)](./bicep-config.md) can be used to customize your Bicep development experience. You can add `bicepconfig.json` in multiple directories. The configuration file closest to the bicep file in the directory hierarchy is used. When you select this command, the extension opens a dialog for you to select a folder. The default folder is where you store the Bicep file. If a `bicepconfig.json` file already exists in the folder, you have the option to overwrite the existing file.
+The [Bicep configuration file (bicepconfig.json)](./bicep-config.md) can be used to customize your Bicep development experience. You can add `bicepconfig.json` in multiple directories. The configuration file closest to the bicep file in the directory hierarchy is used. When you select this command, the extension opens a dialog for you to select a folder. The default folder is where you store the Bicep file. If a `bicepconfig.json` file already exists in the folder, you can overwrite the existing file.
### Deploy Bicep file
The visualizer shows the resources defined in the Bicep file with the resource d
[![Visual Studio Code Bicep visualizer](./media/visual-studio-code/visual-studio-code-bicep-visualizer.png)](./media/visual-studio-code/visual-studio-code-bicep-visualizer-expanded.png#lightbox)
-You have the option to open the visualizer side-by-side with the Bicep file.
+You can also open the visualizer side-by-side with the Bicep file.
-### Restore Bicep file
+### Restore Bicep modules
When your Bicep file uses modules that are published to a registry, the restore command gets copies of all the required modules from the registry. It stores those copies in a local cache. For more information, see [restore](./bicep-cli.md#restore). ## View type document
-From Visual Studio Code, you can easily open the template reference for the resource type you are working on. To do so, hover your cursor over the resource symbolic name, and then select **View type document**.
+From Visual Studio Code, you can easily open the template reference for the resource type you're working on. To do so, hover your cursor over the resource symbolic name, and then select **View type document**.
:::image type="content" source="./media/visual-studio-code/visual-studio-code-bicep-view-type-document.png" alt-text="Screenshot of Visual Studio Code Bicep view type document.":::
azure-resource-manager Template Tutorial Add Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-parameters.md
This way of handling updates means your template can include all of the resource
## Customize by environment
-Parameters let you customize the deployment by providing values that tailored for a particular environment. You can pass different values, for example, based on whether you're deploying to a development, testing, or production environment.
+Parameters let you customize the deployment by providing values that are tailored for a particular environment. You can pass different values, for example, based on whether you're deploying to a development, testing, or production environment.
The previous template always deploys a standard locally redundant storage (LRS) **Standard_LRS** account. You might want the flexibility to deploy different stock keeping units (SKUs) depending on the environment. The following example shows the changes to add a parameter for SKU. Copy the whole file and paste it over your template.
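With the SKU parameter in place, a hedged sketch of overriding it per environment at deployment time - the parameter name `storageSKU` is an assumption here and should match whatever name you used in the template:

```azurecli
az deployment group create \
  --resource-group <resource-group> \
  --template-file azuredeploy.json \
  --parameters storageSKU=Standard_GRS
```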
If you're stopping now, you might want to clean up your deployed resources by de
You improved the template you created in the [first tutorial](template-tutorial-create-first-template.md) by adding parameters. In the next tutorial, you learn about template functions. > [!div class="nextstepaction"]
-> [Add template functions](template-tutorial-add-functions.md)
+> [Add template functions](template-tutorial-add-functions.md)
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 03/23/2022 Last updated : 09/07/2022
Azure Backup provides several ways to restore a VM.
**Create a new VM** | Quickly creates and gets a basic VM up and running from a restore point.<br/><br/> You can specify a name for the VM, select the resource group and virtual network (VNet) in which it will be placed, and specify a storage account for the restored VM. The new VM must be created in the same region as the source VM.<br><br>If a VM restore fails because an Azure VM SKU wasn't available in the specified region of Azure, or because of any other issues, Azure Backup still restores the disks in the specified resource group. **Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md).
-**Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br> <li> [Create a VM](#create-a-vm) <br> <li> [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
+**Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
+**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is currently enabled only in [standard policy](backup-during-vm-creation.md#create-a-vm-with-backup-configured) from Vault tier. It's also supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+ >[!Tip]
->To receive alerts/notifications when a restore operation fails, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup-preview). This helps you to monitor such failures and take necessary actions to remediate the issues.
+>To receive alerts/notifications when a restore operation fails, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup). This helps you to monitor such failures and take necessary actions to remediate the issues.
>[!NOTE] >You can also recover specific files and folders on an Azure VM. [Learn more](backup-azure-restore-files-from-vm.md).
Some details about storage accounts:
- **Storage type**: Blob storage isn't supported.
- **Storage redundancy**: Zone redundant storage (ZRS) isn't supported. The replication and redundancy information for the account is shown in parentheses after the account name.
- **Premium storage**:
- - When restoring non-premium VMs, premium storage accounts aren't supported.
- - When restoring managed VMs, premium storage accounts configured with network rules aren't supported.
+ - When you restore non-premium VMs, premium storage accounts aren't supported.
+ - When you restore managed VMs, premium storage accounts configured with network rules aren't supported.
## Before you start
As one of the [restore options](#restore-options), you can create a VM quickly w
![Restore configuration wizard - choose restore options](./media/backup-azure-arm-restore-vms/recovery-configuration-wizard1.png)
+1. Choose the required subscription from the **Subscription** drop-down list to restore an Azure VM to a different subscription.
+
+ Azure Backup now supports Cross Subscription Restore (CSR), which lets you restore an Azure VM using a recovery point from the default subscription to another one. The default subscription is the subscription where the recovery point is available.
+
+ The following screenshot lists all subscriptions under the tenant where you have permissions, which enables you to restore the Azure VM to another subscription.
+
+ :::image type="content" source="./media/backup-azure-arm-restore-vms/backup-azure-cross-subscription-restore.png" alt-text="Screenshot showing the list of all subscriptions under the tenant where you have permissions.":::
+ 1. Select **Restore** to trigger the restore operation. >[!Note]
As one of the [restore options](#restore-options), you can create a disk from a
:::image type="content" source="./media/backup-azure-arm-restore-vms/trigger-restore-operation-disks.png" alt-text="Screenshot showing to select Resource disks.":::
+1. Choose the required subscription from the **Subscription** drop-down list to restore the VM disks to a different subscription.
+
+ Azure Backup now supports Cross Subscription Restore (CSR). As with Azure VMs, you can now restore Azure VM disks using a recovery point from the default subscription to another one. The default subscription is the subscription where the recovery point is available.
+ 1. Select **Restore** to trigger the restore operation. When your virtual machine uses managed disks and you select the **Create virtual machine** option, Azure Backup doesn't use the specified storage account. In the case of **Restore disks** and **Instant Restore**, the storage account is used only for storing the template. Managed disks are created in the specified resource group. When your virtual machine uses unmanaged disks, they're restored as blobs to the storage account.
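For reference, a comparable **Restore disks** operation can also be scripted with the Azure CLI. The following is a hedged sketch that restores within the current default subscription; every resource name is an assumed placeholder, and parameter availability can vary by CLI version.

```bash
# Hedged sketch: trigger a "Restore disks" operation for an Azure VM backup item.
# Vault, container, item, staging storage account, and target resource group are placeholders.
RP_NAME=$(az backup recoverypoint list \
  --resource-group myRG --vault-name myVault \
  --container-name myVMContainer --item-name myVM \
  --query "[0].name" --output tsv)

az backup restore restore-disks \
  --resource-group myRG --vault-name myVault \
  --container-name myVMContainer --item-name myVM \
  --rp-name "$RP_NAME" \
  --storage-account mystagingstorage \
  --target-resource-group myTargetRG
```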
In summary, the **Availability Zone** will only appear when
## Restoring unmanaged VMs and disks as managed
-You're provided with an option to restore [unmanaged disks](../storage/common/storage-disaster-recovery-guidance.md#azure-unmanaged-disks) as [managed disks](../virtual-machines/managed-disks-overview.md) during restore. By default, the unmanaged VMs / disks are restored as unmanaged VMs / disks. However, if you choose to restore as managed VMs / disks, it's now possible to do so. These restores aren't triggered from the snapshot phase but only from the vault phase. This feature isn't available for unmanaged encrypted VMs.
+You're provided with an option to restore [unmanaged disks](../storage/common/storage-disaster-recovery-guidance.md#azure-unmanaged-disks) as [managed disks](../virtual-machines/managed-disks-overview.md) during restore. By default, the unmanaged VMs / disks are restored as unmanaged VMs / disks. However, if you choose to restore as managed VMs / disks, it's now possible to do so. These restore operations aren't triggered from the snapshot phase but only from the vault phase. This feature isn't available for unmanaged encrypted VMs.
![Restore as managed disks](./media/backup-azure-arm-restore-vms/restore-as-managed-disks.png)
There are many common scenarios in which you might need to restore VMs.
**Restore VMs with special network configurations** | Special network configurations include VMs using internal or external load balancing, using multiple NICS, or multiple reserved IP addresses. You restore these VMs by using the [restore disk option](#restore-disks). This option makes a copy of the VHDs into the specified storage account, and you can then create a VM with an [internal](../load-balancer/quickstart-load-balancer-standard-internal-powershell.md) or [external](../load-balancer/quickstart-load-balancer-standard-public-powershell.md) load balancer, [multiple NICS](../virtual-machines/windows/multiple-nics.md), or [multiple reserved IP addresses](../virtual-network/ip-services/virtual-network-multiple-ip-addresses-powershell.md), in accordance with your configuration. **Network Security Group (NSG) on NIC/Subnet** | Azure VM backup supports Backup and Restore of NSG information at vnet, subnet, and NIC level. **Zone Pinned VMs** | If you back up an Azure VM that's pinned to a zone (with Azure Backup), then you can restore it in the same zone where it was pinned. [Learn more](../availability-zones/az-overview.md)
-**Restore VM in any availability set** | When restoring a VM from the portal, there's no option to choose an availability set. A restored VM doesn't have an availability set. If you use the restore disk option, then you can [specify an availability set](../virtual-machines/windows/tutorial-availability-sets.md) when you create a VM from the disk using the provided template or PowerShell.
+**Restore VM in any availability set** | When you restore a VM from the portal, there's no option to choose an availability set. A restored VM doesn't have an availability set. If you use the restore disk option, then you can [specify an availability set](../virtual-machines/windows/tutorial-availability-sets.md) when you create a VM from the disk using the provided template or PowerShell.
**Restore special VMs such as SQL VMs** | If you're backing up a SQL VM using Azure VM backup and then use the restore VM option or create a VM after restoring disks, then the newly created VM must be registered with the SQL provider as mentioned [here](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm?tabs=azure-cli%2cbash). This will convert the restored VM into a SQL VM. ### Restore domain controller VMs
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-built-in-monitor.md
Title: Monitor Azure Backup protected workloads description: In this article, learn about the monitoring and notification capabilities for Azure Backup workloads using the Azure portal. Previously updated : 07/19/2022 Last updated : 09/14/2022 ms.assetid: 86ebeb03-f5fa-4794-8a5f-aa5cbbf68a81
Jobs from System Center Data Protection Manager (SC-DPM), Microsoft Azure Backup
>- Azure workloads such as SQL and SAP HANA backups within Azure VMs have a huge number of backup jobs. For example, log backups can run every 15 minutes. So for such DB workloads, only user-triggered operations are displayed. Scheduled backup operations aren't displayed. >- In Backup center, you can view jobs for up to the last 14 days. If you want to view jobs for a longer duration, go to the individual Recovery Services vaults and select the **Backup Jobs** tab. For jobs older than 6 months, we recommend that you use Log Analytics and/or [Backup Reports](configure-reports.md) to reliably and efficiently query older jobs.
-## Azure Monitor alerts for Azure Backup (preview)
+## Azure Monitor alerts for Azure Backup
Azure Backup also provides alerts via Azure Monitor that enable you to have a consistent experience for alert management across different Azure services, including Azure Backup. With Azure Monitor alerts, you can route alerts to any notification channel supported by Azure Monitor, such as email, ITSM, Webhook, Logic App, and so on.
Currently, Azure Backup provides two main types of built-in alerts:
* **Security Alerts**: For scenarios such as deletion of backup data or disabling of the soft-delete functionality for a vault, security alerts (of severity Sev 0) are fired and displayed in the Azure portal, or consumed via other clients (PowerShell, CLI, and REST API). Security alerts are generated by default and can't be turned off. However, you can control the scenarios for which the notifications (for example, emails) should be fired. For more information on how to configure notifications, see [Action rules](../azure-monitor/alerts/alerts-action-rules.md).
* **Job Failure Alerts**: For scenarios such as backup failure and restore failure, Azure Backup provides built-in alerts via Azure Monitor (of severity Sev 1). Unlike security alerts, you can choose to turn off Azure Monitor alerts for job failure scenarios. For example, you might have already configured custom alert rules for job failures via Log Analytics and don't need built-in alerts to be fired for every job failure. By default, alerts for job failures are turned off. For more information, see the [section on turning on alerts for these scenarios](#turning-on-azure-monitor-alerts-for-job-failure-scenarios).
-The following table summarizes the different backup alerts currently available (in preview) via Azure Monitor and the supported workload/vault types:
+The following table summarizes the different backup alerts currently available via Azure Monitor and the supported workload/vault types:
| **Alert Category** | **Alert Name** | **Supported workload types / vault types** | **Description** |
| --- | --- | --- | --- |
-| Security | Delete Backup Data | - Microsoft Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - DPM <br><br> - Azure Backup Server <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | This alert is fired when you stop backup and deletes backup data. <br><br> **Note** <br> If you disable the soft-delete feature for the vault, Delete Backup Data alert is not received. |
-| Security | Upcoming Purge | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM | For all workloads that support soft-delete, this alert is fired when the backup data for an item is 2 days away from being permanently purged by the Azure Backup service. |
-| Security | Purge Complete | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM | Delete Backup Data |
+| Security | Delete Backup Data | - Microsoft Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - DPM <br><br> - Azure Backup Server <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | This alert is fired when you stop backup and delete backup data. <br><br> **Note** <br> If you disable the soft-delete feature for the vault, the Delete Backup Data alert isn't received. |
+| Security | Upcoming Purge | - Azure Virtual Machine <br><br> - SQL in Azure VM <br><br> - SAP HANA in Azure VM | For all workloads that support soft-delete, this alert is fired when the backup data for an item is 2 days away from being permanently purged by the Azure Backup service. |
+| Security | Purge Complete | - Azure Virtual Machine <br><br> - SQL in Azure VM <br><br> - SAP HANA in Azure VM | This alert is fired when the soft-deleted backup data for an item is permanently purged by the Azure Backup service. |
| Security | Soft Delete Disabled for Vault | Recovery Services vaults | This alert is fired when the soft-delete functionality is disabled for a vault. |
+| Security | Modify Policy with Shorter Retention | - Azure Virtual Machine <br><br> - SQL in Azure VM <br><br> - SAP HANA in Azure VM <br><br> - Azure Files | This alert is fired when a backup policy is modified to use a shorter retention. |
+| Security | Modify Protection with Shorter Retention | - Azure Virtual Machine <br><br> - SQL in Azure VM <br><br> - SAP HANA in Azure VM <br><br> - Azure Files | This alert is fired when a backup instance is assigned to a different policy with a shorter retention. |
+| Security | MUA Disabled | Recovery Services vaults | This alert is fired when a user disables the MUA functionality for a vault. |
+| Security | Disable hybrid security features | Recovery Services vaults | This alert is fired when hybrid security settings are disabled for a vault. |
| Jobs | Backup Failure | - Azure Virtual Machine <br><br> - SQL in Azure VM <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - Azure Files <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Managed Disks | This alert is fired when a backup job failure has occurred. By default, alerts for backup failures are turned on. For more information, see the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios). | | Jobs | Restore Failure | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - Azure Files <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | This alert is fired when a restore job failure has occurred. By default, alerts for restore failures are turned on. For more information, see the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios). | | Jobs | Unsupported backup type | - SQL in Azure VM <br><br> - SAP HANA in Azure VM | This alert is fired when the current settings for a database don't support certain backup types present in the policy. By default, alerts for unsupported backup type scenario are turned on. For more information, see the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios). | | Jobs | Workload extension unhealthy | - SQL in Azure VM <br><br> - SAP HANA in Azure VM | This alert is fired when the Azure Backup workload extension for database backups is in an unhealthy state that might prevent future backups from succeeding. By default, alerts for workload extension unhealthy scenario are turned on. For more information, see the [section on turning on alerts for this scenario](#turning-on-azure-monitor-alerts-for-job-failure-scenarios). |
+> [!NOTE]
+>- For Azure VM backup, backup failure alerts aren't sent when the underlying VM is deleted or when another backup job is already in progress (which causes the later backup job to fail). In both cases, the backup is expected to fail by design, so alerts aren't generated.
+ ### Turning on Azure Monitor alerts for job failure scenarios To opt in to Azure Monitor alerts for backup failure and restore failure scenarios, follow these steps:
To manage monitoring settings for a Backup vault, follow these steps:
1. We also recommend that you select the **Use only Azure Monitor alerts** checkbox.
- By selecting this option, you are consenting to receive backup alerts only via Azure Monitor and you will stop receiving alerts from the older classic alerts solution. [Review the key differences between classic alerts and built-in Azure Monitor alerts](./move-to-azure-monitor-alerts.md).
+ By selecting this option, you're consenting to receive backup alerts only via Azure Monitor and you'll stop receiving alerts from the older classic alerts solution. [Review the key differences between classic alerts and built-in Azure Monitor alerts](./move-to-azure-monitor-alerts.md).
:::image type="content" source="./media/backup-azure-monitoring-laworkspace/recovery-services-vault-opt-out-classic-alerts.png" alt-text="Screenshot showing the option to enable receiving backup alerts.":::
To manage monitoring settings for a Backup vault, follow these steps:
### Viewing fired alerts in the Azure portal
-Once an alert is fired for a vault, you can go to Backup center to view the alert in the Azure portal. On the **Overview** tab, you can see a summary of active alerts split by severity. There're two types of alerts displayed:
+Once an alert is fired for a vault, you can go to **Backup center** to view the alert in the Azure portal. On the **Overview** tab, you can see a summary of active alerts split by severity. There are two types of alerts displayed:
* **Datasource Alerts**: Alerts that are tied to a specific datasource being backed up (for example, backup or restore failure for a VM, deleting backup data for a database, and so on) appear under the **Datasource Alerts** section.
* **Global Alerts**: Alerts that aren't tied to a specific datasource (for example, disabling soft-delete functionality for a vault) appear under the **Global Alerts** section.
-Each of the above types of alerts is further split into **Security** and **Configured** alerts. Currently, Security alerts include the scenarios of deleting backup data, or disabling soft-delete for vault (for the applicable workloads as detailed in the above section). Configured alerts include backup failure and restore failure because these alerts are only fired after registering the feature in the preview portal.
+Each of the above types of alerts is further split into **Security** and **Configured** alerts. Currently, Security alerts include the scenarios of deleting backup data, or disabling soft-delete for vault (for the applicable workloads as detailed in the above section). Configured alerts include backup failure and restore failure, because these alerts are fired only when alerts aren't disabled for these scenarios.
:::image type="content" source="media/backup-azure-monitoring-laworkspace/backup-center-azure-monitor-alerts.png" alt-text="Screenshot for viewing alerts in Backup center.":::
To configure notifications for Azure Monitor alerts, create an [alert processing
1. Go to **Backup center** in the Azure portal.
-1. Select **Alerts (Preview)** from the menu and select **Alert processing rules (preview)**.
+1. Select **Alerts** from the menu and select **Alert processing rules**.
:::image type="content" source="./media/backup-azure-monitoring-laworkspace/backup-center-manage-alert-processing-rules-inline.png" alt-text="Screenshot for Manage Actions in Backup center." lightbox="./media/backup-azure-monitoring-laworkspace/backup-center-manage-alert-processing-rules-expanded.png":::
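The same notification wiring can also be scripted. The following Azure CLI sketch is hedged: the resource names, email address, and subscription scope are placeholders, and the `alert-processing-rule` parameters shown are assumptions to verify against your CLI version.

```bash
# Hedged sketch: create an action group, then an alert processing rule that routes
# fired alerts in the chosen scope to that action group. All names and IDs are placeholders.
az monitor action-group create \
  --resource-group myRG \
  --name BackupAlertsAG \
  --short-name bkpag \
  --action email oncall oncall@contoso.com

az monitor alert-processing-rule create \
  --resource-group myRG \
  --name RouteBackupAlerts \
  --scopes "/subscriptions/<subscription-id>" \
  --rule-type AddActionGroups \
  --action-groups "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/microsoft.insights/actionGroups/BackupAlertsAG"
```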
On-demand backup jobs aren't consolidated.
### Exceptions when an alert isn't raised
-There're a few exceptions when an alert isn't raised on a failure. They are:
+There are a few exceptions when an alert isn't raised on a failure. They are:
- You've explicitly canceled the running job.
- The job fails because another backup job is in progress (no action is needed, as you have to wait for the previous job to finish).
backup Backup Azure Security Feature Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-security-feature-cloud.md
To disable soft delete on a vault, you must have the Backup Contributor role for
It's important to remember that once soft delete is disabled, the feature is disabled for all the types of workloads. For example, it's not possible to disable soft delete only for SQL server or SAP HANA DBs while keeping it enabled for virtual machines in the same vault. You can create separate vaults for granular control. >[!Tip]
->To receive alerts/notifications when a user in the organization disables soft-delete for a vault, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup-preview). As the disable of soft-delete is a potential destructive operation, we recommend you to use alert system for this scenario to monitor all such operations and take actions on any unintended operations.
+>To receive alerts/notifications when a user in the organization disables soft-delete for a vault, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup). Because disabling soft-delete is a potentially destructive operation, we recommend that you use the alert system for this scenario to monitor all such operations and take action on any unintended ones.
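Soft delete can also be toggled outside the portal. A hedged Azure CLI sketch follows; the vault and resource group names are placeholders.

```bash
# Hedged sketch: disable soft delete for a Recovery Services vault.
# Pair this with the alerting guidance above, since disabling soft delete weakens protection.
az backup vault backup-properties set \
  --resource-group myRG \
  --name myVault \
  --soft-delete-feature-state Disable

# To re-enable it later:
# az backup vault backup-properties set --resource-group myRG --name myVault --soft-delete-feature-state Enable
```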
### Disabling soft delete using Azure portal
If items were deleted before soft-delete was disabled, then they'll be in a soft
### Do I need to enable the soft-delete feature on every vault?
-No, it's built-in and enabled by default for all the Recovery Services vaults.
+No, it's a built-in feature and enabled by default for all the Recovery Services vaults.
### Can I configure the number of days for which my data will be retained in soft-deleted state after the delete operation is complete?
Yes.
Undelete followed by a resume operation will protect the resource again. The resume operation associates a backup policy to trigger the scheduled backups with the selected retention period. Also, the garbage collector runs as soon as the resume operation completes. If you wish to perform a restore from a recovery point that's past its expiration date, you're advised to do it before triggering the resume operation.
-### Can I delete my vault if there are soft deleted items in the vault?
+### Can I delete my vault if there are soft-deleted items in the vault?
The Recovery Services vault can't be deleted if there are backup items in soft-deleted state in the vault. The soft-deleted items are permanently deleted 14 days after the delete operation. If you can't wait for 14 days, then [disable soft delete](#enabling-and-disabling-soft-delete), undelete the soft-deleted items, and delete them again so that they're permanently deleted. After ensuring there are no protected items and no soft-deleted items, the vault can be deleted. ### Can I delete the data earlier than the 14 days soft-delete period after deletion?
-No. You can't force delete the soft-deleted items. They're automatically deleted after 14 days. This security feature is enabled to safeguard the backed-up data from accidental or malicious deletes. You should wait for 14 days before performing any other action on the item. Soft-deleted items won't be charged. If you need to reprotect the items marked for soft-delete within 14 days in a new vault, then contact Microsoft support.
+No. You can't force-delete the soft-deleted items. They're automatically deleted after 14 days. This security feature is enabled to safeguard the backed-up data from accidental or malicious deletes. You should wait for 14 days before performing any other action on the item. Soft-deleted items won't be charged. If you need to reprotect the items marked for soft-delete within 14 days in a new vault, then contact Microsoft support.
### Can soft delete operations be performed in PowerShell or CLI?
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
Title: Troubleshoot backup errors with Azure VMs
description: In this article, learn how to troubleshoot errors encountered with backup and restore of Azure virtual machines. Previously updated : 07/04/2022 Last updated : 09/07/2022
If you see permissions in the **MachineKeys** directory that are different than
* Under **Personal** > **Certificates**, delete all certificates where **Issued To** is the classic deployment model or **Windows Azure CRP Certificate Generator**. 3. Trigger a VM backup job.
-### ExtensionStuckInDeletionState - Extension state is not supportive to backup operation
+### ExtensionStuckInDeletionState - Extension state is not supportive to the backup operation
Error code: ExtensionStuckInDeletionState <br/> Error message: Extension state is not supportive to the backup operation
To resolve this issue:
>- With a different name than the original one, **or** >- In a different resource group with the same name.
+#### UserErrorCrossSubscriptionRestoreNotSuppportedForOLR
+
+**Error code**: UserErrorCrossSubscriptionRestoreNotSuppportedForOLR
+
+**Error message**: Operation failed as Cross Subscription Restore is not supported for Original Location Recovery.
+
+**Resolution**: Ensure that you [select Create New or Restore Disk](backup-azure-arm-restore-vms.md#restore-disks) for the restore operation.
+
+#### UserErrorCrossSubscriptionRestoreNotSuppportedForUnManagedAzureVM
+
+**Error code**: UserErrorCrossSubscriptionRestoreNotSuppportedForUnManagedAzureVM
+
+**Error message**: Operation failed as Cross Subscription Restore is not supported for Azure VMs with Unmanaged Disks.
+
+**Resolution**: Perform standard restores within the same subscription instead.
+
+#### UserErrorCrossSubscriptionRestoreNotSuppportedForCRR
+
+**Error code**: UserErrorCrossSubscriptionRestoreNotSuppportedForCRR
+
+**Error message**: Operation failed as Cross Subscription Restore is not supported along-with Cross Region Restore.
+
+**Resolution**: Use either Cross Subscription Restore or Cross Region Restore.
+
+#### UserErrorCrossSubscriptionRestoreNotSuppportedFromSnapshot
+
+**Error code**: UserErrorCrossSubscriptionRestoreNotSuppportedFromSnapshot
+
+**Error message**: Operation failed as Cross Subscription Restore is not supported when restoring from a Snapshot recovery point.
+
+**Resolution**: Select a different recovery point where Tier 2 (Vault-Tier) is available.
+
+#### UserErrorCrossSubscriptionRestoreInvalidTenant
+
+**Error code**: UserErrorCrossSubscriptionRestoreInvalidTenant
+
+**Error message**: Operation failed as the tenant IDs for source and target subscriptions don't match.
+
+**Resolution**: Ensure that the source and target subscriptions belong to the same tenant.
+
+#### UserErrorCrossSubscriptionRestoreInvalidTargetSubscription
+
+**Error code**: UserErrorCrossSubscriptionRestoreInvalidTargetSubscription
+
+**Error message**: Operation failed as the target subscription specified for restore is not registered to the Azure Recovery Services Resource Provider.
+
+**Resolution**: Ensure the target subscription is registered to the Recovery Services Resource Provider before you attempt a cross subscription restore.
+
+#### UserErrorCrossSubscriptionRestoreNotSuppportedForEncryptedAzureVM
+
+**Error code**: UserErrorCrossSubscriptionRestoreNotSuppportedForEncryptedAzureVM
+
+**Error message**: Operation failed as Cross Subscription Restore is not supported for Encrypted Azure VMs.
+
+**Resolution**: Use the same subscription to restore encrypted Azure VMs.
+
+#### UserErrorCrossSubscriptionRestoreNotSuppportedForTrustedLaunchAzureVM
+
+**Error code**: UserErrorCrossSubscriptionRestoreNotSuppportedForTrustedLaunchAzureVM
+
+**Error message**: Operation failed as Cross Subscription Restore is not supported for Trusted Launch Azure VMs (TVMs).
+
+**Resolution**: Use the same subscription to restore Trusted Launch Azure VMs.
+ ## Backup or restore takes time If your backup takes more than 12 hours, or restore takes more than 6 hours, review [best practices](backup-azure-vms-introduction.md#best-practices), and
backup Backup Center Monitor Operate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-monitor-operate.md
Azure Backup provides a set of built-in metrics via Azure Monitor that allows yo
Azure Backup offers the following key capabilities:
-* Ability to view out-of-the-box metrics related to backup and restore health of your backup items along with associated trends.
+* Ability to view out-of-the-box metrics related to back up and restore health of your backup items along with associated trends.
* Ability to write custom alert rules on these metrics to efficiently monitor the health of your backup items.
* Ability to route fired metric alerts to different notification channels supported by Azure Monitor, such as email, ITSM, webhook, logic apps, and so on.
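As one hedged illustration of such a rule (the metric name `BackupHealthEvent`, the condition grammar, and all resource IDs below are assumptions to verify against the metrics documentation):

```bash
# Hedged sketch: alert when backup health events are recorded on a vault.
# Add a HealthStatus dimension filter to narrow this to unhealthy events only.
# Vault ID, action group ID, metric name, and thresholds are assumed placeholders.
az monitor metrics alert create \
  --resource-group myRG \
  --name UnhealthyBackupEvents \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.RecoveryServices/vaults/myVault" \
  --condition "count BackupHealthEvent > 0" \
  --window-size 15m \
  --evaluation-frequency 5m \
  --action "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/microsoft.insights/actionGroups/BackupAlertsAG"
```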
You can also see a summary of open alerts in the last 24 hours in the **Overview
Currently, the following types of alerts are displayed in Backup center:
-* **Default Azure Monitor alerts for Azure Backup (preview)**: This includes the built-in security alerts and configured alerts that Azure Backup provides via Azure Monitor. [Learn more about the alert scenarios supported by this solution](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup-preview).
+* **Default Azure Monitor alerts for Azure Backup (preview)**: This includes the built-in security alerts and configured alerts that Azure Backup provides via Azure Monitor. [Learn more about the alert scenarios supported by this solution](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup).
* **Metric alerts for Azure Backup (preview)**: This includes alerts fired based on the metric alert rules you created. [Learn more about Azure Backup metric alerts](metrics-overview.md) >[!NOTE] >- Currently, Backup center displays only alerts for Azure-based workloads. To view alerts for on-premises resources, go to the Recovery Services vault and click **Alerts** from the menu. >- Backup center displays only Azure Monitor alerts. Alerts raised by the older alerting solution (accessed under the [Backup Alerts](backup-azure-monitoring-built-in-monitor.md#backup-alerts-in-recovery-services-vault) tab in Recovery Services vault) aren't displayed in Backup center.
-For more details about Azure Monitor alerts, see [Overview of alerts in Azure](../azure-monitor/alerts/alerts-overview.md).
+For more information about Azure Monitor alerts, see [Overview of alerts in Azure](../azure-monitor/alerts/alerts-overview.md).
### Datasource and Global Alerts
backup Backup Center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-support-matrix.md
Backup center helps enterprises to [govern, monitor, operate, and analyze backup
| Monitoring | View all backup instances | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Same as previous | | Monitoring | View all backup policies | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Same as previous | | Monitoring | View all vaults | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Same as previous |
-| Monitoring | View Azure Monitor alerts at scale | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Refer [Alerts](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup-preview) documentation |
+| Monitoring | View Azure Monitor alerts at scale | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Refer [Alerts](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup) documentation |
| Monitoring | View Azure Backup metrics and write metric alert rules | Azure VM <br><br>SQL in Azure VM <br><br> SAP HANA in Azure VM<br><br>Azure Files <br><br>Azure Blobs | You can view metrics for all Recovery Services vaults for a region and subscription simultaneously. Viewing metrics for a larger scope in the Azure portal isn't currently supported. The same limits also apply when you configure metric alert rules. For more information, see [View metrics in the Azure portal](metrics-overview.md#view-metrics-in-the-azure-portal).| | Actions | Configure backup | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md) | | Actions | Restore Backup Instance | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md) |
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 07/19/2022 Last updated : 09/07/2022
Here's what's supported if you want to back up Linux machines.
Back up Linux Azure VMs with the Linux Azure VM agent | File consistent backup.<br/><br/> App-consistent backup using [custom scripts](backup-azure-linux-app-consistent.md).<br/><br/> During restore, you can create a new VM, restore a disk and use it to create a VM, or restore a disk, and use it to replace a disk on an existing VM. You can also restore individual files and folders. Back up Linux Azure VMs with MARS agent | Not supported.<br/><br/> The MARS agent can only be installed on Windows machines. Back up Linux Azure VMs with DPM/MABS | Not supported.
-Backup Linux Azure VMs with docker mount points | Currently, Azure Backup doesn't support exclusion of docker mount points as these are mounted at different paths every time.
+Back up Linux Azure VMs with docker mount points | Currently, Azure Backup doesn't support exclusion of docker mount points as these are mounted at different paths every time.
## Operating system support (Linux)
Azure Backup provides support for customers to author their own pre-post scripts
| Maximum recovery points per protected instance (machine/workload) | 9999. Maximum expiry time for a recovery point | No limit (99 years).
-Maximum backup frequency to vault (Azure VM extension) | Once a day.
-Maximum backup frequency to vault (MARS agent) | Three backups per day.
-Maximum backup frequency to DPM/MABS | Every 15 minutes for SQL Server.<br/><br/> Once an hour for other workloads.
+Maximum backup-frequency to vault (Azure VM extension) | Once a day.
+Maximum backup-frequency to vault (MARS agent) | Three backups per day.
+Maximum backup-frequency to DPM/MABS | Every 15 minutes for SQL Server.<br/><br/> Once an hour for other workloads.
Recovery point retention | Daily, weekly, monthly, and yearly. Maximum retention period | Depends on backup frequency. Recovery points on DPM/MABS disk | 64 for file servers, and 448 for app servers.<br/><br/> Tape recovery points are unlimited for on-premises DPM.
The following table summarizes support for backup during VM management tasks, su
**Restore** | **Supported** |
-Restore across subscription/region/zone. | Not supported.
+<a name="backup-azure-cross-subscription-restore">Restore across subscription</a> | [Cross Subscription Restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
+[Restore across region](backup-azure-arm-restore-vms.md#cross-region-restore) | Supported.
+Restore across zone | Unsupported.
Restore to an existing VM | Use replace disk option. Restore disk with storage account enabled for Azure Storage Service Encryption (SSE) | Not supported.<br/><br/> Restore to an account that doesn't have SSE enabled. Restore to mixed storage accounts |Not supported.<br/><br/> Based on the storage account type, all restored disks will be either premium or standard, and not mixed.
Azure Backup supports encryption for in-transit and at-rest data:
Network traffic to Azure: -- Backup traffic from servers to the Recovery Services vault is encrypted by using Advanced Encryption Standard 256.
+- Backup-traffic from servers to the Recovery Services vault is encrypted by using Advanced Encryption Standard 256.
- Backup data is sent over a secure HTTPS link. - The backup data is stored in the Recovery Services vault in encrypted form. - Only you have the encryption key to unlock this data. Microsoft can't decrypt the backup data at any point.
backup Manage Monitor Sql Database Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-monitor-sql-database-backup.md
Title: Manage and monitor SQL Server DBs on an Azure VM description: This article describes how to manage and monitor SQL Server databases that are running on an Azure VM. Previously updated : 08/11/2022 Last updated : 09/14/2022
For details on Monitoring scenarios, go to [Monitoring in the Azure portal](back
## View backup alerts
-Because log backups occur every 15 minutes, monitoring backup jobs can be tedious. Azure Backup eases monitoring by sending email alerts. Email alerts are:
+Azure Backup raises built-in alerts via Azure Monitor for the following SQL database backup scenarios:
-- Triggered for all backup failures.-- Consolidated at the database level by error code.-- Sent only for a database's first backup failure.
+- Backup failures
+- Restore failures
+- Unsupported backup type is configured
+- Workload extension unhealthy
+- Deletion of backup data
-To monitor database backup alerts:
+For more information on the supported alert scenarios, see [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md?tabs=recovery-services-vaults#azure-monitor-alerts-for-azure-backup).
-1. Sign in to the [Azure portal](https://portal.azure.com).
+To monitor database backup alerts, follow these steps:
-2. On the vault dashboard, select **Backup Alerts**.
+1. In the Azure portal, go to **Backup center** and filter for **SQL in Azure VM** data source type.
- ![Select Backup Alerts](./media/backup-azure-sql-database/sql-backup-alerts-list.png)
+ :::image type="content" source="./media/backup-azure-sql-database/sql-alerts-inline.png" alt-text="Screenshot showing the Backup alerts menu item." lightbox="./media/backup-azure-sql-database/sql-alerts-expanded.png":::
+
+1. Select the **Alerts** menu item to view the list of all alerts that were fired for SQL database backups in the selected time period.
+
+ :::image type="content" source="./media/backup-azure-sql-database/sql-alerts-list-inline.png" alt-text="Screenshot showing the Backup alerts list." lightbox="./media/backup-azure-sql-database/sql-alerts-list-expanded.png":::
+
+1. To configure notifications for these alerts, you must create an alert processing rule.
+
+ Learn about [Configure notifications for alerts](backup-azure-monitoring-built-in-monitor.md?tabs=recovery-services-vaults#configuring-notifications-for-alerts).
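Alongside the portal view, job outcomes for these databases can also be checked from the command line. A hedged sketch with assumed vault and resource group names:

```bash
# Hedged sketch: list recent failed jobs for SQL/SAP HANA (AzureWorkload) backups in a vault.
# Vault and resource group names are placeholders.
az backup job list \
  --resource-group myRG \
  --vault-name myVault \
  --backup-management-type AzureWorkload \
  --status Failed \
  --output table
```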
## Stop protection for a SQL Server database
To stop protection for a database:
> >
-## Resume protection for a SQL database
+## Resume protection for an SQL database
When you stop protection for the SQL database, if you select the **Retain Backup Data** option, you can later resume protection. If you don't retain the backup data, you can't resume protection.
-To resume protection for a SQL database:
+To resume protection for an SQL database, follow these steps:
1. Open the backup item and select **Resume backup**.
The backed-up SQL VM is deleted or moved using Resource move. The experience dep
New VM subscription | New VM Name | New VM Resource group | New VM Region | Experience
--- | --- | --- | --- | ---
-Same | Same | Same | Same | **What will happen to backups of _old_ VM?** <br><br> YouΓÇÖll receive an alert that backups will be stopped on the _old_ VM. The backup data will be retained as per the last active policy. You can choose to stop protection and delete data and unregister the old VM once all backup data is cleaned up as per policy. <br><br> **How to get backup data from _old_ VM to _new_ VM?** <br><br> No SQL backups will be triggered automatically on the _new_ virtual machine. You must re-register the VM to the same vault. Then itΓÇÖll appear as a valid target, and SQL data can be restored to the latest available point-in-time via the alternate location recovery capability. After restoring SQL data, SQL backups will continue on this machine. VM backup will continue as-is, if previously configured.
-Same | Same | Different | Same | **What will happen to backups of _old_ VM?** <br><br> YouΓÇÖll receive an alert that backups will be stopped on the _old_ VM. The backup data will be retained as per the last active policy. You can choose to stop protection and delete data and unregister the old VM once all backup data is cleaned up as per policy. <br><br>**How to get backup data from _old_ VM to _new_ VM?** <br><br> As the new virtual machine is in a different resource group, itΓÇÖll be treated as a new machine and you have to explicitly configure SQL backups (and VM backup too, if previously configured) to the same vault. Then proceed to restore the SQL backup item of the old VM to latest available point-in-time via the _alternate location recovery_ to the new VM. The SQL backups will now continue.
-Same | Same | Same or different | Different | **What will happen to backups of _old_ VM?** <br><br> YouΓÇÖll receive an alert that backups will be stopped on the _old_ VM. The backup data will be retained as per the last active policy. You can choose to stop protection and delete data and unregister the old VM once all backup data is cleaned up as per policy. <br><br> **How to get backup data from _old_ VM to _new_ VM? <br><br> As the new virtual machine is in a different region, youΓÇÖve to configure SQL backups to a vault in the new region. <br><br> If the new region is a paired region, you can choose to restore SQL data to latest available point-in-time via the ΓÇÿcross region restoreΓÇÖ capability from the SQL backup item of the _old_ VM. <br><br> If the new region is a non-paired region, direct restore from the previous SQL backup item is not supported. However, you can choose restore as files option, from the SQL backup item of the ΓÇÿoldΓÇÖ VM, to get the data to a mounted share in a VM of the old region, and then mount it to the new VM.
-Different | Same or different | Same or different | Same or different | **What will happen to backups of _old_ VM?** <br><br> YouΓÇÖll receive an alert that backups will be stopped on the _old_ VM. The backup data will be retained as per the last active policy. You can choose to stop protection + delete data and unregister the old VM once all backup data is cleaned up as per policy. <br><br> **How to get backup data from _old_ VM to _new_ VM?** <br><br> As the new virtual machine is in a different subscription, youΓÇÖve to configure SQL backups to a vault in the new subscription. If it is a new vault in different subscription, direct restore from the previous SQL backup item is not supported. However, you can choose restore as files option, from the SQL backup item of the _old_ VM, to get the data to a mounted share in a VM of the old subscription, and then mount it to the new VM.
+Same | Same | Same | Same | **What will happen to backups of _old_ VM?** <br><br> You'll receive an alert that backups will be stopped on the _old_ VM. The backup data will be retained as per the last active policy. You can choose to stop protection and delete data and unregister the old VM once all backup data is cleaned up as per policy. <br><br> **How to get backup data from _old_ VM to _new_ VM?** <br><br> No SQL backups will be triggered automatically on the _new_ virtual machine. You must re-register the VM to the same vault. Then it will appear as a valid target, and SQL data can be restored to the latest available point-in-time via the alternate location recovery capability. After you restore SQL data, SQL backups will continue on this machine. VM backup will continue as-is, if previously configured.
+Same | Same | Different | Same | **What will happen to backups of _old_ VM?** <br><br> You'll receive an alert that backups will be stopped on the _old_ VM. The backup data will be retained as per the last active policy. You can choose to stop protection and delete data and unregister the old VM once all backup data is cleaned up as per policy. <br><br>**How to get backup data from _old_ VM to _new_ VM?** <br><br> As the new virtual machine is in a different resource group, it will be treated as a new machine and you have to explicitly configure SQL backups (and VM backup too, if previously configured) to the same vault. Then proceed to restore the SQL backup item of the old VM to the latest available point-in-time via the _alternate location recovery_ to the new VM. The SQL backups will now continue.
+Same | Same | Same or different | Different | **What will happen to backups of _old_ VM?** <br><br> You'll receive an alert that backups will be stopped on the _old_ VM. The backup data will be retained as per the last active policy. You can choose to stop protection and delete data and unregister the old VM once all backup data is cleaned up as per policy. <br><br> **How to get backup data from _old_ VM to _new_ VM?** <br><br> As the new virtual machine is in a different region, you have to configure SQL backups to a vault in the new region. <br><br> If the new region is a paired region, you can choose to restore SQL data to the latest available point-in-time via the 'cross region restore' capability from the SQL backup item of the _old_ VM. <br><br> If the new region is a non-paired region, direct restore from the previous SQL backup item isn't supported. However, you can choose the *restore as files* option, from the SQL backup item of the 'old' VM, to get the data to a mounted share in a VM of the old region, and then mount it to the new VM.
+Different | Same or different | Same or different | Same or different | **What will happen to backups of _old_ VM?** <br><br> You'll receive an alert that backups will be stopped on the _old_ VM. The backup data will be retained as per the last active policy. You can choose to stop protection + delete data and unregister the old VM once all backup data is cleaned up as per policy. <br><br> **How to get backup data from _old_ VM to _new_ VM?** <br><br> As the new virtual machine is in a different subscription, you have to configure SQL backups to a vault in the new subscription. If it's a new vault in a different subscription, direct restore from the previous SQL backup item isn't supported. However, you can choose the *restore as files* option, from the SQL backup item of the _old_ VM, to get the data to a mounted share in a VM of the old subscription, and then mount it to the new VM.
## Next steps
backup Monitoring And Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/monitoring-and-alerts-overview.md
Title: Monitoring and reporting solutions for Azure Backup description: Learn about different monitoring and reporting solutions provided by Azure Backup. Previously updated : 10/23/2021 Last updated : 09/14/2022+++ # Monitoring and reporting solutions for Azure Backup
The following table provides a summary of the different monitoring and reporting
| Scenario | Solutions available | | | |
-| Monitor backup jobs and backup instances | <ul><li>**Built-in monitoring**: You can monitor backup jobs and backup instances in real time via the [Backup center](./backup-center-overview.md) dashboard.</li><li>**Customized monitoring dashboards**: Azure Backup allows you to use non-portal clients, such as [PowerShell](./backup-azure-vms-automation.md), [CLI](./create-manage-azure-services-using-azure-command-line-interface.md), and [REST API](./backup-azure-arm-userestapi-managejobs.md), to query backup monitoring data for use in your custom dashboards. <br><br> In addition, you can query your backups at scale (across vaults, subscriptions, regions, and Lighthouse tenants) using [Azure Resource Graph (ARG)](./query-backups-using-azure-resource-graph.md). <br><br> [Backup Explorer](./monitor-azure-backup-with-backup-explorer.md) is one sample monitoring workbook, which uses data in ARG that you can use as a reference to create your own dashboards. </li></ul> |
-| Monitor overall backup health | <ul><li>**Resource Health**: You can monitor the health of your Recovery Services vault and troubleshoot events causing the resource health issues. [Learn more](../service-health/resource-health-overview.md). <br><br> You can view the health history and identify events affecting the health of your resource. You can also trigger alerts related to the resource health events. </li><li>**Azure Monitor Metrics**: Azure Backup also offers the above health metrics via Azure Monitor, which provides you more granular details about the health of your backups. This also allows you to configure alerts and notifications on these metrics. [Learn more](./metrics-overview.md)</li></ul> |
-| Get alerted to critical backup incidents | <ul><li>**Built-in alerts using Azure Monitor (preview)**: Azure Backup provides an [alerting solution based on Azure Monitor](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup-preview) for scenarios such as deletion of backup data, disabling of soft-delete, backup failures, and restore failures. <br><br> You can view these alerts and manage via Backup center. To [configure notifications](./backup-azure-monitoring-built-in-monitor.md#configuring-notifications-for-alerts) for these alerts (for example, emails), you can use Azure Monitor's [Action rules](../azure-monitor/alerts/alerts-action-rules.md?tabs=portal) and [Action groups](../azure-monitor/alerts/action-groups.md) to route alerts to a wide range of notification channels. </li> <li> **Azure Backup Metric Alerts using Azure Monitor (preview)**: You can write custom alert rules using Azure Monitor metrics to monitor the health of your backup items across different KPIs. [Learn more](./metrics-overview.md) </li> <li>**Classic Alerts**: This is the older alerting solution [accessed using the Backup Alerts tab](./backup-azure-monitoring-built-in-monitor.md#backup-alerts-in-recovery-services-vault) in the Recovery Services vault blade. These alerts canΓÇÖt be viewed in Backup center. If youΓÇÖre using classic alerts, we recommend to start using one or more of the Azure Monitor based alert solutions (described above) as itΓÇÖs the forward-looking solution for alerting. </li><li>**Custom log alerts**: If you've scenarios where an alert needs to be generated based on custom logic, you can make use of [Log Analytics based alerts](./backup-azure-monitoring-use-azuremonitor.md#create-alerts-by-using-log-analytics) for such scenarios, provided youΓÇÖve configured your vaults to send diagnostics data to a Log Analytics (LA) workspace. Due to the current [frequency at which data in an LA workspace is updated](./backup-azure-monitoring-use-azuremonitor.md#diagnostic-data-update-frequency), this solution is typically used for scenarios where itΓÇÖs acceptable to have a small time lag between the occurrence of the actual incident and the generation of the alert. </li></ul> |
-| Analyze historical trends | <ul><li>**Built-in reports**: You can use [Backup Reports](./configure-reports.md) (based on Azure Monitor Logs) to analyze historical trends related to job success and backup usage, and discover optimization opportunities for your backups. You can also [configure periodic emails](./backup-reports-email.md) of these reports. </li><li>**Customized reporting dashboards**: You can also query the data in Azure Monitor Logs (LA) using the documented [system functions](./backup-reports-system-functions.md) to create your own dashboards to analyze historical information related to your backups.</li></ul> |
+| Monitor backup jobs and backup instances | - **Built-in monitoring**: You can monitor backup jobs and backup instances in real time via the [Backup center](./backup-center-overview.md) dashboard. <br><br> - **Customized monitoring dashboards**: Azure Backup allows you to use non-portal clients, such as [PowerShell](./backup-azure-vms-automation.md), [CLI](./create-manage-azure-services-using-azure-command-line-interface.md), and [REST API](./backup-azure-arm-userestapi-managejobs.md), to query backup monitoring data for use in your custom dashboards. In addition, you can query your backups at scale (across vaults, subscriptions, regions, and Lighthouse tenants) using [Azure Resource Graph (ARG)](./query-backups-using-azure-resource-graph.md). [Backup Explorer](./monitor-azure-backup-with-backup-explorer.md) is one sample monitoring workbook, which uses data in ARG that you can use as a reference to create your own dashboards. |
+| Monitor overall backup health | - **Resource Health**: You can monitor the health of your Recovery Services vault and troubleshoot events causing the resource health issues. [Learn more](../service-health/resource-health-overview.md). You can view the health history and identify events affecting the health of your resource. You can also trigger alerts related to the resource health events. <br><br> - **Azure Monitor Metrics**: Azure Backup also offers the above health metrics via Azure Monitor, which provides you more granular details about the health of your backups. This also allows you to configure alerts and notifications on these metrics. [Learn more](./metrics-overview.md). |
+| Get alerted to critical backup incidents | - **Built-in alerts using Azure Monitor**: Azure Backup provides an [alerting solution based on Azure Monitor](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup) for scenarios such as deletion of backup data, disabling of soft-delete, backup failures, and restore failures. You can view and manage these alerts via Backup center. To [configure notifications](./backup-azure-monitoring-built-in-monitor.md#configuring-notifications-for-alerts) for these alerts (for example, emails), you can use Azure Monitor's [Action rules](../azure-monitor/alerts/alerts-action-rules.md?tabs=portal) and [Action groups](../azure-monitor/alerts/action-groups.md) to route alerts to a wide range of notification channels. <br><br> - **Azure Backup Metric Alerts using Azure Monitor (preview)**: You can write custom alert rules using Azure Monitor metrics to monitor the health of your backup items across different KPIs. [Learn more](./metrics-overview.md). <br><br> - **Classic Alerts**: This is the older alerting solution, which you can [access using the Backup Alerts tab](./backup-azure-monitoring-built-in-monitor.md#backup-alerts-in-recovery-services-vault) in the Recovery Services vault blade. These alerts don't appear in Backup center. If you're using classic alerts, we recommend that you start using one or more of the Azure Monitor based alert solutions (described above), as they're the forward-looking solutions for alerting. <br><br> - **Custom log alerts**: If you have scenarios where an alert needs to be generated based on custom logic, you can use [Log Analytics based alerts](./backup-azure-monitoring-use-azuremonitor.md#create-alerts-by-using-log-analytics) for such scenarios, provided you've configured your vaults to send diagnostics data to a Log Analytics (LA) workspace. Due to the current [frequency at which data in an LA workspace is updated](./backup-azure-monitoring-use-azuremonitor.md#diagnostic-data-update-frequency), this solution is typically used for scenarios where it's acceptable to have a short time difference between the occurrence of the actual incident and the generation of the alert. |
+| Analyze historical trends | - **Built-in reports**: You can use [Backup Reports](./configure-reports.md) (based on Azure Monitor Logs) to analyze historical trends related to job success and backup usage, and discover optimization opportunities for your backups. You can also [configure periodic emails](./backup-reports-email.md) of these reports. <br><br> - **Customized reporting dashboards**: You can also query the data in Azure Monitor Logs (LA) using the documented [system functions](./backup-reports-system-functions.md) to create your own dashboards to analyze historical information related to your backups. |
| Audit user triggered actions on vaults | **Activity Logs**: You can use standard [Activity Logs](../azure-monitor/essentials/activity-log.md) for your vaults to view information on various user-triggered actions, such as modification of backup policies, restoration of a backup item, and so on. You can also configure alerts on Activity Logs, or export these logs to a Log Analytics workspace for long-term retention. | ## Next steps - [Learn more](./backup-center-overview.md) about Backup center.-- [Learn more](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup-preview) about Azure Monitor Alerts.
+- [Learn more](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup) about Azure Monitor Alerts.
- [Learn more](../service-health/resource-health-overview.md) about Azure Resource Health.
backup Move To Azure Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/move-to-azure-monitor-alerts.md
Title: Switch to Azure Monitor based alerts for Azure Backup description: This article describes the new and improved alerting capabilities via Azure Monitor and the process to configure Azure Monitor. Previously updated : 07/19/2022 Last updated : 09/14/2022
The following table lists the differences between classic backup alerts and buil
| | | | | **Setting up notifications** | - You must enable the configure notifications feature for each Recovery Services vault, along with the email IDs to which the notifications should be sent. <br><br> - For certain destructive operations, email notifications are sent to the subscription owner, admin, and co-admin irrespective of the notification settings of the vault.| - Notifications are configured by creating an alert processing rule. <br><br> - While *alerts* are generated by default and can't be turned off for destructive operations, the notifications are in the control of the user, allowing you to clearly specify which set of email addresses (or other notification endpoints) you wish to route alerts to. | | **Notification suppression for database backup scenarios** | When there are multiple failures for the same database due to the same error code, a single alert is generated (with the occurrence count updated for each failure type) and a new alert is only generated when the original alert is inactivated. | The behavior is currently different. Here, a separate alert is generated for every backup failure. If there's a window of time when backups will fail for a certain known item (for example, during a maintenance window), you can create a suppression rule to suppress email noise for that backup item during the given period. |
-| **Pricing** | There're no additional charges for this solution. | Alerts for critical operations/failures generate by default (that you can view in the Azure portal or via non-portal interfaces) at no additional charge. However, to route these alerts to a notification channel (such as email), it incurs a minor charge for notifications beyond the *free tier* (of 1000 emails per month). Learn more about [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). |
+| **Pricing** | There are no additional charges for this solution. | Alerts for critical operations/failures are generated by default (you can view them in the Azure portal or via non-portal interfaces) at no additional charge. However, routing these alerts to a notification channel (such as email) incurs a minor charge for notifications beyond the *free tier* (of 1,000 emails per month). Learn more about [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). |
-Azure Backup now provides a guided experience via Backup center that allows you to switch to built-in Azure Monitor alerts and notifications with just a few selects.
+Azure Backup now provides a guided experience via Backup center that allows you to switch to built-in Azure Monitor alerts and notifications with just a few selections. To perform this action, you need the *Backup Contributor* and *Monitoring Contributor* Azure role-based access control (Azure RBAC) roles on the subscription.
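Before you start, you can confirm that your account holds these roles on the target subscription. The following is a minimal sketch, not part of the guided experience itself; the sign-in name and subscription ID are placeholders:

```azurecli-interactive
# List the role names assigned to your account at subscription scope (placeholder values).
az role assignment list \
    --assignee "user1@contoso.com" \
    --scope "/subscriptions/xxxx-xxx-xxxx" \
    --include-inherited \
    --query "[].roleDefinitionName" \
    --output tsv
```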
Follow these steps:
:::image type="content" source="./media/move-to-azure-monitor-alerts/backup-center-alerts-link-inline.png" alt-text="Screenshot showing number of vaults which have classic alerts enabled." lightbox="./media/move-to-azure-monitor-alerts/backup-center-alerts-link-expanded.png":::
- On the next screen, there're two recommended actions:
+ On the next screen, there are two recommended actions:
- - **Create rule**: This action creates an alert processing rule attached to an action group to enable you to receive notifications for Azure Monitor alerts. After selecting, it leads you to a template deployment experience.
+   - **Create rule**: This action creates an alert processing rule attached to an action group to enable you to receive notifications for Azure Monitor alerts. After you select it, you're taken to a template deployment experience.
:::image type="content" source="./media/move-to-azure-monitor-alerts/recommended-action-one.png" alt-text="Screenshot showing recommended alert migration action Create rule for Recovery Services vaults.":::
Follow these steps:
1. Enter the subscription, resource group, and region in which the alert processing rule and action group should be created. Also specify the email ID(s) to which notifications should be sent. Other parameters are populated with default values and only need to be edited if you want to customize the names and descriptions of the resources that are created.
- :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-parameters.png" alt-text="Screenshot showing template parameters to setup notification rules for Azure Monitor alerts.":::
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-parameters.png" alt-text="Screenshot showing template parameters to set up notification rules for Azure Monitor alerts.":::
1. Select **Review+Create** to initiate deployment.
By default, the suppression alert processing rule takes priority over the other
To create a suppression alert processing rule, follow these steps:
-1. Go to **Backup center** -> **Alerts**, and select **Alert processing rules**.
+1. Go to **Backup center** > **Alerts**, and select **Alert processing rules**.
- :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-blade.png" alt-text="Screenshot showing alert processing rules blade in portal.":::
+ :::image type="content" source="./media/move-to-azure-monitor-alerts/alert-processing-rule-blade-inline.png" alt-text="Screenshot showing alert processing rules blade in portal." lightbox="./media/move-to-azure-monitor-alerts/alert-processing-rule-blade-expanded.png":::
1. Select **Create**.
To create a suppression alert processing rule, follow these steps:
You can also use programmatic methods to opt-out of classic alerts and manage Azure Monitor notifications. -- **Opting out of classic backup alerts**: The **monitoringSettings** vault property helps you specify whether you want to disable classic alerts. You can create a custom ARM/Bicep template or Azure Policy to modify this setting for your vaults. Below is an example of this property for a vault where classic alerts are disabled and built-in Azure Monitor alerts are enabled for all job failures.
+### Opt out of classic backup alerts
+
+In the following sections, you'll learn how to opt out of the classic backup alerts solution by using the supported clients.
+
+#### Using Azure Resource Manager (ARM)/ Bicep/ REST API/ Azure Policy
+
+The **monitoringSettings** vault property helps you specify if you want to disable classic alerts. You can create a custom ARM/Bicep template or Azure Policy to modify this setting for your vaults.
+
+The following example of the vault settings property shows that the classic alerts are disabled and built-in Azure Monitor alerts are enabled for all job failures.
```json {
You can also use programmatic methods to opt-out of classic alerts and manage Az
} ``` -- **Setting up notifications for Azure Monitor alerts**:
+#### Using Azure PowerShell
+
+To modify the alert settings of the vault, use the [Update-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/update-azrecoveryservicesvault?view=azps-8.2.0&preserve-view=true) command.
+
+The following example enables built-in Azure Monitor alerts for job failures and disables classic alerts:
+
+```azurepowershell
+Update-AzRecoveryServicesVault -ResourceGroupName testRG -Name testVault -DisableClassicAlerts $true -DisableAzureMonitorAlertsForJobFailure $false
+```
+
+#### Using Azure CLI
+
+To modify the alert settings of the vault, use the [az backup vault backup-properties set](/cli/azure/backup/vault/backup-properties?view=azure-cli-latest&preserve-view=true) command.
+
+The following example enables built-in Azure Monitor alerts for job failures and disables classic alerts.
+
+```azurecli-interactive
+az backup vault backup-properties set \
+ --name testVault \
+ --resource-group testRG \
+    --classic-alerts Disable \
+ --alerts-for-job-failures Enable
+```
+
+### Set up notifications for Azure Monitor alerts
You can use the following standard programmatic interfaces supported by Azure Monitor to manage action groups and alert processing rules. - [Azure Monitor REST API reference](/rest/api/monitor/) - [Azure Monitor PowerShell reference](/powershell/module/az.monitor/?view=azps-8.0.0&preserve-view=true)-- [Azure Monitor CLI reference](/cli/azure/monitor?view=azure-cli-latest)
+- [Azure Monitor CLI reference](/cli/azure/monitor?view=azure-cli-latest&preserve-view=true)
+
+#### Using Azure Resource Manager (ARM)/ Bicep/ REST API
+
+You can use [these sample ARM and Bicep templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.recoveryservices/recovery-services-create-alert-processing-rule) that create an alert processing rule and action group associated to all Recovery Services vaults in the selected subscription.
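As an illustration only (not taken from the linked sample), a resource-group-scoped deployment of such a template might look like the following sketch; the file name and resource group are placeholders and depend on how the downloaded sample is structured:

```azurecli-interactive
# Deploy a downloaded copy of the sample template into an existing resource group (placeholder names).
az deployment group create \
    --resource-group testRG \
    --template-file azuredeploy.json
```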
+
+#### Using Azure PowerShell
+
+As described in earlier sections, you need an action group (notification channel) and alert processing rule (notification rule) to configure notifications for your vaults.
+
+To configure notifications, run the following cmdlets:
+
+1. Create an action group associated with an email ID using the [New-AzActionGroupReceiver](/powershell/module/az.monitor/new-azactiongroupreceiver?view=azps-8.2.0&preserve-view=true) cmdlet and the [Set-AzActionGroup](/powershell/module/az.monitor/set-azactiongroup?view=azps-8.2.0&preserve-view=true) cmdlet.
+
+ ```powershell
+ $email1 = New-AzActionGroupReceiver -Name 'user1' -EmailReceiver -EmailAddress 'user1@contoso.com'
+ Set-AzActionGroup -Name "testActionGroup" -ResourceGroupName "testRG" -ShortName "testAG" -Receiver $email1
+ ```
+
+1. Create an alert processing rule that's linked to the above action group using the [Set-AzAlertProcessingRule](/powershell/module/az.alertsmanagement/set-azalertprocessingrule?view=azps-8.2.0&preserve-view=true) cmdlet.
+
+ ```powershell
+ Set-AzAlertProcessingRule -ResourceGroupName "testRG" -Name "AddActionGroupToSubscription" -Scope "/subscriptions/xxxx-xxx-xxxx" -FilterTargetResourceType "Equals:Microsoft.RecoveryServices/vaults" -Description "Add ActionGroup1 to alerts on all RS vaults in subscription" -Enabled "True" -AlertProcessingRuleType "AddActionGroups" -ActionGroupId "/subscriptions/xxxx-xxx-xxxx/resourcegroups/testRG/providers/microsoft.insights/actiongroups/testActionGroup"
+ ```
+
+#### Using Azure CLI
+
+As described in earlier sections, you need an action group (notification channel) and alert processing rule (notification rule) to configure notifications for your vaults.
+
+To configure notifications, run the following commands:
+
+1. Create an action group associated with an email ID using the [az monitor action-group create](/cli/azure/monitor/action-group?view=azure-cli-latest&preserve-view=true#az-monitor-action-group-create) command.
+
+ ```azurecli-interactive
+ az monitor action-group create --name testag1 --resource-group testRG --short-name testag1 --action email user1 user1@contoso.com --subscription "Backup PM Subscription"
+ ```
+
+1. Create an alert processing rule that is linked to the above action group using the [az monitor alert-processing-rule create](/cli/azure/monitor/alert-processing-rule?view=azure-cli-latest&preserve-view=true#az-monitor-alert-processing-rule-create) command.
+
+ ```azurecli-interactive
+ az monitor alert-processing-rule create \
+ --name 'AddActionGroupToSubscription' \
+ --rule-type AddActionGroups \
+ --scopes "/subscriptions/xxxx-xxx-xxxx" \
+    --filter-resource-type Equals "Microsoft.RecoveryServices/vaults" \
+ --action-groups "/subscriptions/xxxx-xxx-xxxx/resourcegroups/testRG/providers/microsoft.insights/actiongroups/testag1" \
+ --enabled true \
+ --resource-group testRG \
+ --description "Add ActionGroup1 to all RS vault alerts in subscription"
+ ```
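To check that the rule and its action group link were created as expected, you can read the rule back. This is a sketch that reuses the placeholder names from the preceding example:

```azurecli-interactive
# Show the alert processing rule created above (placeholder names).
az monitor alert-processing-rule show \
    --name 'AddActionGroupToSubscription' \
    --resource-group testRG
```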
## Next steps Learn more about [Azure Backup monitoring and reporting](monitoring-and-alerts-overview.md).
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 06/30/2022 Last updated : 09/14/2022
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- September 2022
+ - [Built-in Azure Monitor alerting for Azure Backup is now generally available](#built-in-azure-monitor-alerting-for-azure-backup-is-now-generally-available)
- June 2022 - [Multi-user authorization using Resource Guard is now generally available](#multi-user-authorization-using-resource-guard-is-now-generally-available) - May 2022
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Built-in Azure Monitor alerting for Azure Backup is now generally available
+
+Azure Backup now offers a new and improved alerting solution via Azure Monitor. This solution provides multiple benefits, such as:
+
+- Ability to configure notifications to a wide range of notification channels.
+- Ability to select the specific scenarios that you want to be notified about.
+- Ability to manage alerts and notifications programmatically.
+- Ability to have a consistent alert management experience for multiple Azure services, including Azure Backup.
+
+If you're currently using the [classic alerts solution](backup-azure-monitoring-built-in-monitor.md?tabs=recovery-services-vaults#backup-alerts-in-recovery-services-vault), we recommend that you switch to Azure Monitor alerts. Now, Azure Backup provides a guided experience via Backup center that allows you to switch to built-in Azure Monitor alerts and notifications with a few clicks.
+
+For more information, see [Switch to Azure Monitor based alerts for Azure Backup](move-to-azure-monitor-alerts.md).
++ ## Multi-user authorization using Resource Guard is now generally available Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
For more information, see [Archive tier support in Azure Backup](archive-tier-su
## Multiple backups per day for Azure Files is now generally available
-Low RPO (Recovery Point Objective) is a key requirement for Azure Files that contains the frequently updated, business-critical data. To ensure minimal data loss in the event of a disaster or unwanted changes to file share content, you may prefer to take backups more frequently than once a day.
+Low RPO (Recovery Point Objective) is a key requirement for Azure Files that contains frequently updated, business-critical data. To ensure minimal data loss in case of a disaster or unwanted changes to file share content, you may prefer to take backups more frequently than once a day.
Using Azure Backup, you can create a backup policy or modify an existing backup policy to take multiple snapshots in a day. This capability allows you to define the duration in which your backup jobs will run. Therefore, you can align your backup schedule with the working hours when there are frequent updates to Azure Files content. With this release, you can also configure policy for multiple backups per day using Azure PowerShell and Azure CLI.
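As a rough sketch of the CLI route (the resource group, vault, and policy names are placeholders, and the exact schedule properties depend on the policy schema), you might export an existing Azure Files policy, edit its schedule in the JSON, and then apply it:

```azurecli-interactive
# Export the current policy for editing (placeholder names).
az backup policy show \
    --resource-group testRG \
    --vault-name testVault \
    --name DailyPolicy > policy.json

# After editing the schedule in policy.json to take multiple backups per day, apply the updated policy.
az backup policy set \
    --resource-group testRG \
    --vault-name testVault \
    --policy "$(cat policy.json)"
```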
For more information, see [how to protect Recovery Services vault and manage cri
## Multiple backups per day for Azure Files (in preview)
-Low RPO (Recovery Point Objective) is a key requirement for Azure Files that contains the frequently updated, business-critical data. To ensure minimal data loss in the event of a disaster or unwanted changes to file share content, you may prefer to take backups more frequently than once a day.
+Low RPO (Recovery Point Objective) is a key requirement for Azure Files that contains frequently updated, business-critical data. To ensure minimal data loss in case of a disaster or unwanted changes to file share content, you may prefer to take backups more frequently than once a day.
Using Azure Backup, you can now create a backup policy or modify an existing backup policy to take multiple snapshots in a day. With this capability, you can also define the duration in which your backup jobs would trigger. This capability empowers you to align your backup schedule with the working hours when there are frequent updates to Azure Files content.
Azure Backup allows you to move your long-term retention points for Azure Virtua
In addition to the capability to move the recovery points: -- Azure Backup provides recommendations to move a specific set of recovery points for Azure Virtual Machine backups that'll ensure cost savings.
+- Azure Backup provides recommendations to move a specific set of recovery points for Azure Virtual Machine backups to help ensure cost savings.
- You have the capability to move all the recovery points for a particular backup item in one go by using sample scripts. - You can view Archive storage usage on the Vault dashboard.
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
In order to **export** the user settings Cloud Shell saves for you such as prefe
Bash: ```
- token="Bearer $(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s | jq -r ".access_token")"
- curl https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"$token" -s | jq
+ token=$(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s | jq -r ".access_token")
+ curl https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $token" -s | jq
``` PowerShell:
In order to **delete** your user settings Cloud Shell saves for you such as pref
Bash: ```
- token=(az account get-access-token --resource "https://management.azure.com/" | jq -r ".access_token")
- curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"$token"
+ token=$(az account get-access-token --resource "https://management.azure.com/" | jq -r ".accessToken")
+ curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"Bearer $token"
``` PowerShell:
cognitive-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-create-project.md
To create a Custom Speech project, follow these steps:
Select the new project by name or select **Go to project**. You will see these menu items in the left panel: **Speech datasets**, **Train custom models**, **Test models**, and **Deploy models**.
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Create-a-project&Section=Create-a-project" target="_target">I ran into an issue</a>
- ::: zone-end ::: zone pivot="speech-cli"
Here's an example Speech CLI command that creates a project:
spx csr project create --name "My Project" --description "My Project Description" --language "en-US" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Create-a-project&Section=Create-a-project" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/projects" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Create-a-project&Section=Create-a-project" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
Here's an example Speech CLI command to create an endpoint and deploy a model:
spx csr endpoint create --project YourProjectId --model YourModelId --name "My Endpoint" --description "My Endpoint Description" --language "en-US" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Add-a-deployment-endpoint" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Add-a-deployment-endpoint" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
Here's an example Speech CLI command that redeploys the custom endpoint with a n
spx csr endpoint update --endpoint YourEndpointId --model YourModelId ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Add-a-deployment-endpoint" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/YourEndpointId" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Change-model-and-redeploy-endpoint" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
Here's an example Speech CLI command that gets logs for an endpoint:
spx csr endpoint list --endpoint YourEndpointId ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Change-model-and-redeploy-endpoint" target="_target">I ran into an issue</a>
- The location of each log file with more details are returned in the response body. ::: zone-end
Make an HTTP GET request using the URI as shown in the following example. Replac
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/YourEndpointId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Change-model-and-redeploy-endpoint" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
Follow these steps to create a test:
1. Enter the test name and description, and then select **Next**. 1. Review the test details, and then select **Save and close**.
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Create-a-test" target="_target">I ran into an issue</a>
- ::: zone-end ::: zone pivot="speech-cli"
Here's an example Speech CLI command that creates a test:
spx csr evaluation create --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Evaluation" --description "My Evaluation Description" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Create-a-test" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Create-a-test" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
Follow these steps to get test results:
This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Get-test-results" target="_target">I ran into an issue</a>
- ::: zone-end ::: zone pivot="speech-cli"
Here's an example Speech CLI command that gets test results:
spx csr evaluation status --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Get-test-results" target="_target">I ran into an issue</a>
- The word error rates and more details are returned in the response body. You should receive a response body in the following format:
Make an HTTP GET request using the URI as shown in the following example. Replac
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Get-test-results" target="_target">I ran into an issue</a>
- The word error rates and more details are returned in the response body. You should receive a response body in the following format:
cognitive-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-inspect-data.md
Follow these instructions to create a test:
1. Enter the test name and description, and then select **Next**. 1. Review your settings, and then select **Save and close**.
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Create-a-test" target="_target">I ran into an issue</a>
- ::: zone-end ::: zone pivot="speech-cli"
Here's an example Speech CLI command that creates a test:
spx csr evaluation create --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Inspection" --description "My Inspection Description" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Create-a-test" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Create-a-test" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
Follow these steps to get test results:
This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Get-test-results" target="_target">I ran into an issue</a>
- ::: zone-end ::: zone pivot="speech-cli"
Here's an example Speech CLI command that gets test results:
spx csr evaluation status --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Get-test-results" target="_target">I ran into an issue</a>
- The models, audio dataset, transcriptions, and more details are returned in the response body. You should receive a response body in the following format:
Make an HTTP GET request using the URI as shown in the following example. Replac
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Get-test-results" target="_target">I ran into an issue</a>
- The models, audio dataset, transcriptions, and more details are returned in the response body. You should receive a response body in the following format:
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.
> [!IMPORTANT] > Take note of the **Expiration** date. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Train-a-model&Section=Create-a-model" target="_target">I ran into an issue</a>
- ::: zone-end ::: zone pivot="speech-cli"
To create a model with datasets for training, use the `spx csr model create` com
- Set the required `dataset` parameter to the ID of a dataset that you want used for training. To specify multiple datasets, set the `datasets` (plural) parameter and separate the IDs with a semicolon. - Set the required `language` parameter. The dataset locale must match the locale of the project. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response. - Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.-- Optionally, you can set the `baseModel` parameter. If you don't specify the `baseModel`, the default base model for the locale is used.
+- Optionally, you can set the `base` parameter. For example: `--base 1aae1070-7972-47e9-a977-87e3b05c457d`. If you don't specify the `base`, the default base model for the locale is used. The Speech CLI `base` parameter corresponds to the `baseModel` property in the JSON request and response.
Here's an example Speech CLI command that creates a model with datasets for training:
spx csr model create --project YourProjectId --name "My Model" --description "My
``` > [!NOTE]
-> In this example, the `baseModel` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
-
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Train-a-model&Section=Create-a-model" target="_target">I ran into an issue</a>
+> In this example, the `base` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
You should receive a response body in the following format:
To create a model with datasets for training, use the [CreateModel](https://east
- Set the required `datasets` property to the URI of the datasets that you want used for training. - Set the required `locale` property. The model locale must match the locale of the project and base model. The locale can't be changed later. - Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.-- Optionally, you can set the `baseModel` property. If you don't specify the `baseModel`, the default base model for the locale is used.
+- Optionally, you can set the `baseModel` property. For example: `"baseModel": {"self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"}`. If you don't specify the `baseModel`, the default base model for the locale is used.
Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
> [!NOTE] > In this example, the `baseModel` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Train-a-model&Section=Create-a-model" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
Follow these instructions to copy a model to a project in another region:
After the model is successfully copied, you'll be notified and can view it in the target project.
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Train-a-model&Section=Copy-a-model" target="_target">I ran into an issue</a>
- ::: zone-end ::: zone pivot="speech-cli"
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
> [!NOTE] > Only the `targetSubscriptionKey` property in the request body has information about the destination Speech resource.
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Train-a-model&Section=Copy-a-model" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
To upload your own datasets in Speech Studio, follow these steps:
After your dataset is uploaded, go to the **Train custom models** page to [train a custom model](how-to-custom-speech-train-model.md)
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Upload-training-and-testing-datasets&Section=Upload-datasets" target="_target">I ran into an issue</a>
- ::: zone-end ::: zone pivot="speech-cli"
Here's an example Speech CLI command that creates a dataset and connects it to a
spx csr dataset create --kind "Acoustic" --name "My Acoustic Dataset" --description "My Acoustic Dataset Description" --project YourProjectId --content YourContentUrl --language "en-US" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Upload-training-and-testing-datasets&Section=Upload-datasets" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/datasets" ```
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Upload-training-and-testing-datasets&Section=Upload-datasets" target="_target">I ran into an issue</a>
- You should receive a response body in the following format: ```json
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
If the HTTP status is `200 OK`, the body of the response contains an audio file
## Audio outputs
-This is a list of supported audio formats that are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Prebuilt neural voices are created from samples that use a 24-khz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
-
-| Streaming | Non-Streaming |
-| - | |
-| audio-16khz-16bit-32kbps-mono-opus | riff-8khz-8bit-mono-alaw |
-| audio-16khz-32kbitrate-mono-mp3 | riff-8khz-8bit-mono-mulaw |
-| audio-16khz-64kbitrate-mono-mp3 | riff-8khz-16bit-mono-pcm |
-| audio-16khz-128kbitrate-mono-mp3 | riff-22050hz-16bit-mono-pcm |
-| audio-24khz-16bit-24kbps-mono-opus | riff-24khz-16bit-mono-pcm |
-| audio-24khz-16bit-48kbps-mono-opus | riff-44100hz-16bit-mono-pcm |
-| audio-24khz-48kbitrate-mono-mp3 | riff-48khz-16bit-mono-pcm |
-| audio-24khz-96kbitrate-mono-mp3 | |
-| audio-24khz-160kbitrate-mono-mp3 | |
-| audio-48khz-96kbitrate-mono-mp3 | |
-| audio-48khz-192kbitrate-mono-mp3 | |
-| ogg-16khz-16bit-mono-opus | |
-| ogg-24khz-16bit-mono-opus | |
-| ogg-48khz-16bit-mono-opus | |
-| raw-8khz-8bit-mono-alaw | |
-| raw-8khz-8bit-mono-mulaw | |
-| raw-8khz-16bit-mono-pcm | |
-| raw-16khz-16bit-mono-pcm | |
-| raw-16khz-16bit-mono-truesilk | |
-| raw-22050hz-16bit-mono-pcm | |
-| raw-24khz-16bit-mono-pcm | |
-| raw-24khz-16bit-mono-truesilk | |
-| raw-44100hz-16bit-mono-pcm | |
-| raw-48khz-16bit-mono-pcm | |
-| webm-16khz-16bit-mono-opus | |
-| webm-24khz-16bit-24kbps-mono-opus | |
-| webm-24khz-16bit-mono-opus | |
+You specify the audio output format by sending the `X-Microsoft-OutputFormat` header in each request; choose a value from the streaming and non-streaming formats listed here (see the example request after the format lists). Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Prebuilt neural voices are created from samples that use a 24-kHz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
+
+#### [Streaming](#tab/streaming)
+
+```
+amr-wb-16000hz
+audio-16khz-16bit-32kbps-mono-opus
+audio-16khz-32kbitrate-mono-mp3
+audio-16khz-64kbitrate-mono-mp3
+audio-16khz-128kbitrate-mono-mp3
+audio-24khz-16bit-24kbps-mono-opus
+audio-24khz-16bit-48kbps-mono-opus
+audio-24khz-48kbitrate-mono-mp3
+audio-24khz-96kbitrate-mono-mp3
+audio-24khz-160kbitrate-mono-mp3
+audio-48khz-96kbitrate-mono-mp3
+audio-48khz-192kbitrate-mono-mp3
+ogg-16khz-16bit-mono-opus
+ogg-24khz-16bit-mono-opus
+ogg-48khz-16bit-mono-opus
+raw-8khz-8bit-mono-alaw
+raw-8khz-8bit-mono-mulaw
+raw-8khz-16bit-mono-pcm
+raw-16khz-16bit-mono-pcm
+raw-16khz-16bit-mono-truesilk
+raw-22050hz-16bit-mono-pcm
+raw-24khz-16bit-mono-pcm
+raw-24khz-16bit-mono-truesilk
+raw-44100hz-16bit-mono-pcm
+raw-48khz-16bit-mono-pcm
+webm-16khz-16bit-mono-opus
+webm-24khz-16bit-24kbps-mono-opus
+webm-24khz-16bit-mono-opus
+```
+
+#### [NonStreaming](#tab/nonstreaming)
+
+```
+riff-8khz-8bit-mono-alaw
+riff-8khz-8bit-mono-mulaw
+riff-8khz-16bit-mono-pcm
+riff-22050hz-16bit-mono-pcm
+riff-24khz-16bit-mono-pcm
+riff-44100hz-16bit-mono-pcm
+riff-48khz-16bit-mono-pcm
+```
+
+***
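As an illustration of how one of these values is used, a request that asks for MP3 output might look like the following sketch; the region, key, voice name, and output file are placeholders:

```
curl -X POST "https://YourServiceRegion.tts.speech.microsoft.com/cognitiveservices/v1" \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  -H "Content-Type: application/ssml+xml" \
  -H "X-Microsoft-OutputFormat: audio-24khz-48kbitrate-mono-mp3" \
  -H "User-Agent: curl" \
  --data "<speak version='1.0' xml:lang='en-US'><voice name='en-US-JennyNeural'>Hello world</voice></speak>" \
  --output output.mp3
```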
> [!NOTE] > en-US-AriaNeural, en-US-JennyNeural, and zh-CN-XiaoxiaoNeural are available in public preview with 48-kHz output. Other voices support 24-kHz output upsampled to 48 kHz.
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
To send push notifications for messages missed by your users while they were awa
For more details, see [Push Notifications](../notifications.md). > [!NOTE]
-> Currently sending chat push notifications with Notification Hub is generally available in Android version 1.1.0 and in public preview for iOS version 1.3.0-beta.1.
+> Currently, sending chat push notifications with Notification Hub is generally available in Android version 1.1.0 and in iOS version 1.3.0.
## Build intelligent, AI powered chat experiences
communication-services Identifiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/identifiers.md
+
+ Title: Communication Identifier types
+
+description: Understand identifier types and their usage
+++++ Last updated : 08/30/2022+++
+zone_pivot_groups: acs-js-csharp-java-python-ios-android-rest
++
+# Understand Identifier types
+
+Communication Services SDKs and REST APIs use the *identifier* type to identify who is communicating with whom. For example, identifiers specify who to call, or who has sent a chat message.
+
+Depending on context, identifiers get wrapped with extra properties, like inside the `ChatParticipant` in the Chat SDK or inside the `RemoteParticipant` in the Calling SDK.
+
+In this article, you'll learn about different types of identifiers and how they look across programming languages. You'll also get tips on how to use them.
++
+## The CommunicationIdentifier type
+
+There are user identities that you create yourself and there are external identities. Microsoft Teams users and phone numbers are external identities that come into play in interop scenarios. Each of these different identity types has a corresponding identifier that represents it. An identifier is a structured type that offers type-safety and works well with your editor's code completion.
+++++++++
+## Next steps
+
+* For an introduction to communication identities, see [Identity model](./identity-model.md).
+* To learn how to quickly create identities for testing, see the [quick-create identity quickstart](../quickstarts/identity/quick-create-identity.md).
+* To learn how to use Communication Services together with Microsoft Teams, see [Teams interoperability](./teams-interop.md).
communication-services Add Chat Push Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-chat-push-notifications.md
Previously updated : 08/09/2022 Last updated : 09/14/2022 # Enable Push Notifications in your chat app
->[!IMPORTANT]
->This Push Notification feature is currently in public preview. Preview APIs and SDKs are provided without a service-level agreement, and aren't recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
This tutorial guides you through enabling push notifications in your iOS app by using the Azure Communication Chat SDK. Push notifications alert clients of incoming messages in a chat thread in situations where the mobile app isn't running in the foreground. Azure Communication Services supports two versions of push notifications.
Go to this [Apple official doc](https://developer.apple.com/documentation/userno
Notice that in the step ΓÇ£Implement Your ExtensionΓÇÖs Handler Methods,ΓÇ¥ Apple provides the sample code to decrypt data and we'll follow the overall structure. However, since we use chat SDK for decryption, we need to replace the part starting from `ΓÇ£// Try to decode the encrypted message data.ΓÇ¥` with our customized logic. Refer to the [sample code](https://github.com/Azure-Samples/communication-services-ios-quickstarts/blob/main/add-chat-push-notifications/SwiftPushTestNotificationExtension/NotificationService.swift) to see the related implementation in `NotificationService.swift`.
-* Item 3: Implementation of PushNotificationKeyHandler Protocol
+* Item 3: Implementation of PushNotificationKeyStorage Protocol
-Third, `PushNotificationKeyHandler` is required for advanced version. As the SDK user, you could use the default `AppGroupPushNotificationKeyHandler` class provided by chat SDK to generate a key handler. If you donΓÇÖt use `App Group` as the key storage or would like to customize key handling methods, create your own class which conforms to PushNotificationKeyHandler protocol.
-Third, `PushNotificationKeyStorage` is required for the advanced version. As the SDK user, you could use the default `AppGroupPushNotificationKeyStorage` class provided by the chat SDK. If you don't use `App Group` as the key storage or would like to customize key storage methods, create your own class that conforms to the `PushNotificationKeyStorage` protocol.
-For PushNotificationKeyHandler, it defines two methods: `onPersistKey(encryptionKey:expiryTime)` and `onRetrieveKeys() -> [String]`.
-The `PushNotificationKeyStorage` protocol defines two methods: `onPersistKey(encryptionKey:expiryTime)` and `onRetrieveKeys() -> [String]`.
The first method is used to persist the encryption key in the storage of userΓÇÖs IOS device. Chat SDK will set 45 minutes as the expiry time of the encryption key. If you want Push Notification to be effect for more than 45 minutes, you need to schedule to call `chatClient.startPushNotifications(deviceToken:)` on a comparatively frequent basis (eg. every 15 minutes) so a new encryption key could be registered before the old key expires.
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
ms.suite: integration Previously updated : 09/07/2022 Last updated : 09/14/2022 # Built-in connectors in Azure Logic Apps
You can use the following built-in connectors to perform general tasks, for exam
**FTP**<br>(*Standard workflow only*) \ \
- Connect to FTP or FTPS servers you can access from the internet so that you can work with your files and folders.
+ Connect to FTP or FTPS servers that you can access from the internet so that you can work with your files and folders.
:::column-end::: :::column::: ![SFTP-SSH icon][sftp-ssh-icon] \ \
- **SFTP-SSH**<br>(*Standard workflow only*)
+ **SFTP**<br>(*Standard workflow only*)
\ \ Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders. :::column-end::: :::column:::
+ ![SMTP icon][smtp-icon]
+ \
+ \
+ **SMTP**<br>(*Standard workflow only*)
+ \
+ \
+   Connect to SMTP servers so that you can send email.
:::column-end::: :::column::: :::column-end:::
You can use the following built-in connectors to access specific services and sy
\ Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents. :::column-end::: :::column::: ![Azure Event Hubs icon][azure-event-hubs-icon] \
You can use the following built-in connectors to access specific services and sy
\ Consume and publish events through an event hub. For example, get output from your workflow with Event Hubs, and then send that output to a real-time analytics provider. :::column-end:::
+ :::column:::
+ ![Azure File Storage icon][azure-file-storage-icon]
+ \
+ \
+ **Azure File Storage**<br>(*Standard workflow only*)
+ \
+ \
+ Connect to your Azure Storage account so that you can create, update, and manage files.
+ :::column-end:::
:::column::: [![Azure Functions icon][azure-functions-icon]][azure-functions-doc] \
You can use the following built-in connectors to access specific services and sy
\ Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow. :::column-end:::
+ :::column:::
+ ![Azure Key Vault icon][azure-key-vault-icon]
+ \
+ \
+ **Azure Key Vault**<br>(*Standard workflow only*)
+ \
+ \
+ Connect to Azure Key Vault to store, access, and manage secrets.
+ :::column-end:::
:::column::: [![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc] \
You can use the following built-in connectors to access specific services and sy
Call other workflows that start with the Request trigger named **When a HTTP request is received**. :::column-end::: :::column:::
- ![Azure Service Bus icon][azure-service-bus-icon]
+ [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc]
\ \
- **Azure Service Bus**<br>(*Standard workflow only*)
+ [**Azure Service Bus**][azure-service-bus-doc]<br>(*Standard workflow only*)
\ \ Manage asynchronous messages, queues, sessions, topics, and topic subscriptions.
You can use the following built-in connectors to access specific services and sy
Connect to your Azure Storage account so that you can create, update, query, and manage tables. :::column-end::: :::column:::
- ![IBM DB2 icon][ibm-db2-icon]
+ ![Azure Queue Storage icon][azure-queue-storage-icon]
\ \
- **DB2**<br>(*Standard workflow only*)
+ **Azure Queue Storage**<br>(*Standard workflow only*)
\ \
- Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more.
+ Connect to your Azure Storage account so that you can create, update, and manage queues.
:::column-end::: :::row-end::: :::row:::
+ :::column:::
+ ![IBM DB2 icon][ibm-db2-icon]
+ \
+ \
+ **IBM DB2**<br>(*Standard workflow only*)
+ \
+ \
+ Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more.
+ :::column-end:::
:::column::: ![IBM Host File icon][ibm-host-file-icon] \
You can use the following built-in connectors to access specific services and sy
\ Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. :::column-end:::
- :::column:::
- :::column-end:::
- :::column:::
- :::column-end:::
:::row-end::: ## Run code from workflows
For more information, review the following documentation:
[azure-blob-storage-icon]: ./media/apis-list/azure-blob-storage.png [azure-cosmos-db-icon]: ./media/apis-list/azure-cosmos-db.png [azure-event-hubs-icon]: ./media/apis-list/azure-event-hubs.png
+[azure-file-storage-icon]: ./media/apis-list/azure-file-storage.png
[azure-functions-icon]: ./media/apis-list/azure-functions.png
+[azure-key-vault-icon]: ./media/apis-list/azure-key-vault.png
[azure-logic-apps-icon]: ./media/apis-list/azure-logic-apps.png
+[azure-queue-storage-icon]: ./media/apis-list/azure-queues.png
[azure-service-bus-icon]: ./media/apis-list/azure-service-bus.png [azure-table-storage-icon]: ./media/apis-list/azure-table-storage.png [batch-icon]: ./media/apis-list/batch.png
For more information, review the following documentation:
[schedule-icon]: ./media/apis-list/recurrence.png [scope-icon]: ./media/apis-list/scope.png [sftp-ssh-icon]: ./media/apis-list/sftp.png
+[smtp-icon]: ./media/apis-list/smtp.png
[sql-server-icon]: ./media/apis-list/sql.png [switch-icon]: ./media/apis-list/switch.png [terminate-icon]: ./media/apis-list/terminate.png
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
The Service Bus connector has different versions, based on [logic app workflow t
[!INCLUDE [Warning about creating infinite loops](../../includes/connectors-infinite-loops.md)]
+### Peek-lock
+
+Peek-lock operations are available only with the Azure Service Bus managed connector, not the built-in connector.
+ ### Limit on saved sessions in connector cache Per [Service Bus messaging entity, such as a subscription or topic](../service-bus-messaging/service-bus-queues-topics-subscriptions.md), the Service Bus connector can save up to 1,500 unique sessions at a time to the connector cache. If the session count exceeds this limit, old sessions are removed from the cache. For more information, see [Message sessions](../service-bus-messaging/message-sessions.md).
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
The Azure Cosmos DB team will review your request and contact you via email to c
To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Partition Merge**. Run the **Check eligibility for partition merge preview** diagnostic. +
+### How to identify containers to merge
+
+Containers that meet both of these conditions are likely to benefit from merging partitions:
+- Condition 1: The current RU/s per physical partition is <3000 RU/s
+- Condition 2: The current average storage in GB per physical partition is <20 GB
+
+Condition 1 often occurs when you have previously scaled up the RU/s (often for a data ingestion) and now want to scale down in steady state.
+Condition 2 often occurs when you delete/TTL a large volume of data, leaving unused partitions.
+
+#### Condition 1
+
+To determine the current RU/s per physical partition, from your Cosmos account, navigate to **Metrics**. Select the metric **Physical Partition Throughput** and filter to your database and container. Apply splitting by **PhysicalPartitionId**.
+
+For containers using autoscale, this will show the max RU/s currently provisioned on each physical partition. For containers using manual throughput, this will show the manual RU/s on each physical partition.
+
+In the example below, we have an autoscale container provisioned with 5000 RU/s (it scales between 500 and 5000 RU/s). It has 5 physical partitions, and each physical partition has 1000 RU/s.
++
+#### Condition 2
+
+To determine the current average storage per physical partition, first find the overall storage (data + index) of the container.
+
+Navigate to **Insights** > **Storage** > **Data & Index Usage**. The total storage is the sum of the data and index usage. In the example below, the container has a total of 74 GB of storage.
++
+Next, find the total number of physical partitions. This is the number of distinct **PhysicalPartitionId** values in the **Physical Partition Throughput** chart from Condition 1. In our example, we have 5 physical partitions.
+
+Finally, calculate: Total storage in GB / number of physical partitions. In our example, we have an average of (74 GB / 5 physical partitions) = 14.8 GB per physical partition.
+
+Based on conditions 1 and 2, our container can potentially benefit from merging partitions.
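As a quick illustration of the arithmetic above (not part of the upstream article), the following Python sketch applies both checks to the example figures. The input values are placeholders that you would read from the portal metrics shown earlier.

```python
# Minimal sketch of the merge-eligibility arithmetic described above.
# The inputs come from the portal: the Physical Partition Throughput metric
# (RU/s per partition) and the Data & Index Usage insight (total storage).

def is_merge_candidate(rus_per_partition: float, total_storage_gb: float, partition_count: int) -> bool:
    """Return True when both conditions for partition merge are met."""
    avg_storage_gb = total_storage_gb / partition_count  # Condition 2 input
    return rus_per_partition < 3000 and avg_storage_gb < 20

# Example values from this article: 1000 RU/s per partition, 74 GB across 5 partitions.
print(is_merge_candidate(rus_per_partition=1000, total_storage_gb=74, partition_count=5))  # True
```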
### Merging physical partitions
data-factory Concepts Parameters Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-parameters-variables.md
+
+ Title: Pipeline parameters and variables
+
+description: Learn about pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics.
++++++ Last updated : 09/13/2022++
+# Pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics
+
+This article helps you understand the difference between pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics and how to use them to control your pipeline.
+
+## Pipeline parameters
+
+Parameters are defined for the whole pipeline and are constant during a pipeline run. You can read them during a pipeline run, but you can't modify them.
+
+### Define a parameter
+
+To define a pipeline parameter, select your pipeline to view the pipeline configuration tabs. Select the "Parameters" tab, and then select "+ New" to define a new parameter.
+Parameters can be of type String, Int, Float, Bool, Array, Object, or SecureString. In this tab, you can also assign a default value to your parameter.
+
+![Screenshot of parameter definition.](./media/pipeline-parameter-variable-definition/parameter-definition.png)
+
+Before each pipeline run, a panel on the right lets you assign a new value to your parameter; otherwise, the pipeline uses the default value that was defined previously.
+
+### Access a parameter value
+
+To access a parameter value, use the ```@pipeline().parameters.<parameter name>``` expression.
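The run panel described above is one way to supply parameter values. As an illustrative aside (not part of the upstream article), you can also pass parameter values when you trigger a run programmatically. The sketch below assumes the `azure-identity` and `azure-mgmt-datafactory` packages; the resource names and the `inputFolder` parameter are hypothetical placeholders.

```python
# Sketch: trigger a pipeline run and override a parameter's default value.
# All resource names and the "inputFolder" parameter are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf_client = DataFactoryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

run = adf_client.pipelines.create_run(
    resource_group_name="<resource-group>",
    factory_name="<data-factory-name>",
    pipeline_name="<pipeline-name>",
    # Values passed here take precedence over the defaults set on the Parameters tab.
    parameters={"inputFolder": "raw/2022-09-13"},
)
print(run.run_id)
```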
+
+## Pipeline variables
+
+Pipeline variables can be set at the start of a pipeline and then read and modified during a pipeline run through a [Set Variable](control-flow-set-variable-activity.md) activity.
+
+> [!NOTE]
+> Variables are currently scoped at the pipeline level. This means that they are not thread safe and can cause unexpected and undesired behavior if they are accessed from within a parallel iteration activity such as a foreach loop, especially when the value is also being modified within that foreach activity.
+### Define a variable
+
+To define a pipeline variable, select your pipeline to view the pipeline configuration tabs. Select the "Variables" tab, and then select "+ New" to define a new variable.
+Variables can be of type String, Bool, or Array. In this tab, you can also assign a default value to your variable, which is used as the initial value at the start of a pipeline run.
+
+![Screenshot of variable definition.](./media/pipeline-parameter-variable-definition/variable-definition.png)
+
+### Access a variable value
+
+To access a variable value, use the ```@variables('<variable name>')``` expression.
+
+## Next steps
+See the following tutorials for step-by-step instructions for creating pipelines with activities:
+
+- [Build a pipeline with a copy activity](quickstart-create-data-factory-powershell.md)
+- [Build a pipeline with a data transformation activity](tutorial-transform-data-spark-powershell.md)
+
+To achieve CI/CD (continuous integration and delivery) with Azure Data Factory, see:
+- [Continuous integration and delivery in Azure Data Factory](continuous-integration-delivery.md)
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
For Dynamics 365 specifically, the following application types are supported:
- Dynamics 365 for Field Service - Dynamics 365 for Project Service Automation - Dynamics 365 for Marketing+ This connector doesn't support other application types like Finance, Operations, and Talent. >[!TIP]
databox Data Box Hardware Additional Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-hardware-additional-terms.md
Previously updated : 08/09/2022 Last updated : 09/13/2022
-# Azure Data Box hardware additional terms
+# Azure Data Box Hardware Additional Terms
This article documents additional terms for Azure Data Box hardware.
-## Availability of Data Box devices
+## Availability of Data Box Device
The Data Box Device may not be offered in all regions or jurisdictions, and even where it is offered, it may be subject to availability. Microsoft is not responsible for delays related to the Service outside of its direct control. Microsoft reserves the right to refuse to offer the Service and corresponding Data Box Device to anyone in its sole discretion and judgment.
-## Possession and return of the Data Box device
+## Possession and Return of the Data Box Device
As part of the Service, Microsoft allows Customer to retain the Data Box Device for limited periods of time which may vary based on the Data Box Device type. If Customer retains the Data Box Device beyond the specified time period, Microsoft may charge Customer additional daily fees as described at https://go.microsoft.com/fwlink/?linkid=2052173.
-## Shipment and title; fees
+## Shipment and Title; Fees
-### Title and risk of loss
+### Title and Risk of Loss
All right, title and interest in each Data Box Device is and shall remain the property of Microsoft, and except as described in the Additional Terms, no rights are granted to any Data Box Device (including under any patent, copyright, trade secret, trademark or other proprietary rights). Customer will compensate Microsoft for any loss, material damage or destruction to or of any Data Box Device while it is at any of Customer's locations as described in Shipment and Title; Fees, Table 1. Customer is responsible for inspecting the Data Box Device upon receipt from the carrier and for promptly reporting any damage to Microsoft Support at databoxsupport@microsoft.com. Customer is responsible for the entire risk of loss of, or any damage to, the Data Box Device once it has been delivered by the carrier to Customer's designated address until the Microsoft-designated carrier accepts the Data Box Device for delivery back to the Designated Azure Data Center.
Microsoft may charge Customer specified fees in connection with its use of the D
Table 1:
-|Data Box device type | Lost or materially damaged time period and amounts|
+|Data Box Device type | Lost or Materially Damaged Time Period and Amounts|
||| |Data Box | Period: After 90 days<br> Amount: $40,000.00 USD | |Data Box Disk | Period: After 90 days<br> Amount: $2,500.00 USD | |Data Box Heavy | Period: After 90 days<br> Amount: $250,000.00 USD | |Data Box Gateway | N/A |
-### Shipment and return of Data Box device
+### Shipment and Return of Data Box Device
Microsoft will designate a carrier for shipping and delivery of Data Box Devices that are transported or delivered between Customer and a Designated Azure Data Center or a Microsoft entity. Customer will be responsible for costs of shipping a Data Box Device from Microsoft or a Designated Azure Data Center to Customer and return shipping of the Data Box Device, including any metered amounts for carrier charges, any taxes, or applicable customs fees. When returning a Data Box Device to Microsoft, Customer will package and ship the Data Box Device in accordance with Microsoft's instructions, including using a carrier designated by Microsoft and the packaging materials provided by Microsoft.
-### Transit risks
+### Transit Risks
Although data on a Data Box Device is encrypted, Customer acknowledges that there are inherent risks in shipping data on and in connection with the Data Box Device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to a Data Box Device or any data stored on one, including during transit.
-### Self-managed shipment
+### Self-Managed Shipment
Alternatively, Customer may elect to use Customer's designated carrier or Customer itself to ship and return the Data Box Device by selecting this option in the Service portal. Once selected, (i) Microsoft will inform Customer about Data Box Device availability; (ii) Microsoft will prepare the Data Box Device for pick-up by Customer's designated carrier or Customer itself; and (iii) Customer will coordinate with Microsoft and Designated Azure Data Center personnel for pick-up and return of the Data Box Device by Customer's designated carrier or Customer directly. Customer's election for self-managed shipment is subject to the following: (i) Customer abides by all other applicable terms and conditions related to the Service and Data Box Device, including the Product Terms and the Azure Data Box Hardware Terms; (ii) Customer is responsible for the entire risk of loss of, or any damage to, the Data Box Device (as described in the "Shipment and Title; Fees" section, under subsection (a) "Title and Risk of Loss") from the time that Microsoft makes the Data Box Device available for pick-up by Customer's designated carrier or Customer, to the time Microsoft has accepted the Data Box Device from Customer's designated carrier or Customer at the Designated Azure Data Center; (iii) Customer is fully responsible for the costs of shipping a Data Box Device from Microsoft or a Designated Azure Data Center to Customer and return shipping of the same, including carrier charges, any taxes, or applicable customs fees; (iv) When returning a Data Box Device to Microsoft or a Designated Azure Data Center, Customer will package and ship the Data Box Device in accordance with Microsoft's instructions and any packaging materials provided by Microsoft; (v) Customer will be charged applicable fees (as described in the "Shipment and Title; Fees" section, under subsection (b) "Fees") which commence from the time the Data Box Device is ready for pick-up at the agreed upon time and location, and will cease once the Data Box Device has been delivered to Microsoft or the Designated Azure Data Center; and (vi) Customer acknowledges that there are inherent risks in shipping data on and in connection with the Data Box Device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to a Data Box Device or any data stored on one, including during transit when shipped by Customer's designated carrier.
-## Responsibilities if Customer moves a Data Box device between locations
+## Responsibilities if Customer Moves a Data Box Device between Locations
While Customer is in possession of a Data Box Device, Customer may, at its sole risk and expense, transport the Data Box Device to its domestic locations, and international locations as permitted by Microsoft in writing, for use to upload its data in accordance with this section and the requirements of the Additional Terms. If Customer wishes to move a Data Box Device to another country, then Customer must be the exporter of record from the country of export and importer of record into the country where the Data Box Device is being imported. Customer is responsible for obtaining, at its own risk and expense, any export license, import license and other official authorization for the exportation and importation of the Data Box Device and Customer's data to any such different Customer location. Customer shall also be responsible for customs clearance at any such different Customer location, and will bear all duties, taxes, fines, penalties (if applicable) and all charges payable for exporting and importing the Data Box Device, as well as any and all costs and risks of carrying out customs formalities in a timely manner. Customer agrees to comply with and be responsible for all applicable import, export and general trade laws and regulations should Customer decide to transport the Data Box Device beyond the country border in which Customer receives the Data Box Device. Additionally, if Customer transports the Data Box Device to a different country, prior to shipping the Data Box Device back to the original point of origin, whether a specified Microsoft entity or a Designated Azure Data Center, Customer agrees to return the Data Box Device to the country location where Customer initially received the Data Box Device. If requested, Microsoft may provide Microsoft's estimated value of the Data Box Device as supplied by Microsoft to Customer and share available product certifications for the Data Box Device.
defender-for-cloud Auto Deploy Azure Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md
Now that you enabled the Azure Monitor Agent, check out the features that are su
- [Endpoint protection assessment](endpoint-protection-recommendations-technical.md) - [Adaptive application controls](adaptive-application-controls.md) - [Fileless attack detection](defender-for-servers-introduction.md#plan-features)
+- [File Integrity Monitoring](file-integrity-monitoring-enable-ama.md)
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
For example, if you've [connected an Amazon Web Services (AWS) account](quicksta
- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature helping you manage your AWS resources alongside your Azure resources. - **Microsoft Defender for Kubernetes** extends its container threat detection and advanced defenses to your **Amazon EKS Linux clusters**.-- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
+- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), File Integrity Monitoring (FIM), and more.
Learn more about connecting your [AWS](quickstart-onboard-aws.md) and [GCP](quickstart-onboard-gcp.md) accounts to Microsoft Defender for Cloud.
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
The following table summarizes what's included in each plan.
| **Log Analytics 500 MB free data ingestion** | Defender for Cloud leverages Azure Monitor to collect data from Azure VMs and servers, using the Log Analytics agent. | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Threat detection** | Defender for Cloud detects threats at the OS level, network layer, and control plane. | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Adaptive application controls (AAC)** | [AACs](adaptive-application-controls.md) in Defender for Cloud define allowlists of known safe applications for machines. | |:::image type="icon" source="./media/icons/yes-icon.png"::: |
-| **File integrity monitoring (FIM)** | [FIM](file-integrity-monitoring-overview.md) (change monitoring) examines files and registries for changes that might indicate an attack. A comparison method is used to determine whether suspicious modifications have been made to files. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| **File Integrity Monitoring (FIM)** | [FIM](file-integrity-monitoring-overview.md) (change monitoring) examines files and registries for changes that might indicate an attack. A comparison method is used to determine whether suspicious modifications have been made to files. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Just-in-time VM access for management ports** | Defender for Cloud provides [JIT access](just-in-time-access-overview.md), locking down machine ports to reduce the machine's attack surface.| | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Adaptive network hardening** | Filtering traffic to and from resources with network security groups (NSG) improves your network security posture. You can further improve security by [hardening the NSG rules](adaptive-network-hardening.md) based on actual traffic patterns. | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Docker host hardening** | Defender for Cloud assesses containers hosted on Linux machines running Docker containers, and compares them with the Center for Internet Security (CIS) Docker Benchmark. [Learn more](harden-docker-hosts.md). | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
defender-for-cloud File Integrity Monitoring Enable Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-ama.md
+
+ Title: Enable File Integrity Monitoring (Azure Monitor Agent)
+description: Learn how to enable File Integrity Monitoring when you collect data with the Azure Monitor Agent (AMA)
+++ Last updated : 09/04/2022+
+# Enable File Integrity Monitoring when using the Azure Monitor Agent
+
+To provide [File Integrity Monitoring (FIM)](file-integrity-monitoring-overview.md), the Azure Monitor Agent (AMA) collects data from machines according to [Data Collection Rules](../azure-monitor/essentials/data-collection-rule-overview.md). When the current state of your system files is compared with the state during the previous scan, FIM notifies you about suspicious modifications.
+
+FIM uses the Azure Change Tracking solution to track and identify changes in your environment. When File Integrity Monitoring is enabled, you have a **Change Tracking** resource of type **Solution**. Learn about [data collection for Change Tracking](../automation/change-tracking/overview.md#change-tracking-and-inventory-data-collection).
+
+File Integrity Monitoring with the Azure Monitor Agent offers:
+
+- **Compatibility with the unified monitoring agent** - Compatible with the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md), which enhances security and reliability and facilitates a multi-homing experience for storing data.
+- **Compatibility with the tracking tool** - Compatible with the Change Tracking (CT) extension deployed through Azure Policy on the client's virtual machine. You can switch to the Azure Monitor Agent (AMA), and then the CT extension pushes software, file, and registry data to AMA.
+- **Simplified onboarding** - You can [onboard FIM](#enable-file-integrity-monitoring-with-ama) from Microsoft Defender for Cloud.
+- **Multi-homing experience** - Provides standardization of management from one central workspace. You can [transition from Log Analytics (LA) to AMA](../azure-monitor/agents/azure-monitor-agent-migration.md) so that all VMs point to a single workspace for data collection and maintenance.
+- **Rules management** - Uses [Data Collection Rules](https://azure.microsoft.com/updates/azure-monitor-agent-and-data-collection-rules-public-preview/) to configure or customize various aspects of data collection. For example, you can change the frequency of file collection.
+
+> [!NOTE]
+> If you [remove the **Change Tracking** resource](../automation/change-tracking/remove-feature.md#remove-changetracking-solution), you will also disable the File Integrity Monitoring in Defender for Cloud.
+
+## Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:|Preview|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#defender-for-servers-plans)|
+|Required roles and permissions:|**Owner**<br>**Contributor**|
+|Clouds:|:::image type="icon" source="./medi) enabled devices.<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP accounts|
+
+## Prerequisites
+
+To track changes to your files on machines with AMA:
+
+- Enable [Defender for Servers Plan 2](defender-for-servers-introduction.md)
+
+- [Install AMA](auto-deploy-azure-monitoring-agent.md) on machines that you want to monitor
+
+## Enable File Integrity Monitoring with AMA
+
+To enable File Integrity Monitoring (FIM):
+
+1. Use the FIM recommendation to select machines for file integrity monitoring:
+ 1. From Defender for Cloud's sidebar, open the **Recommendations** page.
+ 1. Select the recommendation [File integrity monitoring should be enabled on machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b7d740f-c271-4bfd-88fb-515680c33440). Learn more about [Defender for Cloud recommendations](review-security-recommendations.md).
+ 1. Select the machines that you want to use File Integrity Monitoring on, select **Fix**, and select **Fix X resources**.
+
+ The recommendation fix:
+
+ - Installs the `ChangeTracking-Windows` or `ChangeTracking-Linux` extension on the machines.
+ - Generates a data collection rule (DCR) for the subscription, named `Microsoft-ChangeTracking-[subscriptionId]-default-dcr`, that defines what files and registries should be monitored based on default settings. The fix attaches the DCR to all machines in the subscription that have AMA installed and FIM enabled.
+ - Creates a new Log Analytics workspace with the naming convention `defaultWorkspace-[subscriptionId]-fim` and with the default workspace settings.
+
+ You can update the DCR and Log Analytics workspace settings later.
+
+1. From Defender for Cloud's sidebar, go to **Workload protections** > **File integrity monitoring**, and select the banner to show the results for machines with Azure Monitor Agent.
+
+ :::image type="content" source="media/file-integrity-monitoring-enable-ama/file-integrity-monitoring-azure-monitoring-agent-banner.png" alt-text="Screenshot of banner in File integrity monitoring to show the results for machines with Azure Monitor Agent.":::
+
+1. The machines with File Integrity Monitoring enabled are shown.
+
+ :::image type="content" source="media/file-integrity-monitoring-enable-ama/file-integrity-monitoring-azure-monitoring-agent-results.png" alt-text="Screenshot of File integrity monitoring results for machines with Azure Monitor Agent." lightbox="media/file-integrity-monitoring-enable-ama/file-integrity-monitoring-azure-monitoring-agent-results.png":::
+
+ You can see the number of changes that were made to the tracked files, and you can select **View changes** to see the changes made to the tracked files on that machine.
+
+## Edit the list of tracked files and registry keys
+
+File Integrity Monitoring (FIM) for machines with Azure Monitor Agent uses [Data Collection Rules (DCRs)](../azure-monitor/essentials/data-collection-rule-overview.md) to define the list of files and registry keys to track. Each subscription has a DCR for the machines in that subscription.
+
+FIM creates DCRs with a default configuration of tracked files and registry keys. You can edit the DCRs to add, remove, or update the list of files and registries that are tracked by FIM.
+
+To edit the list of tracked files and registries:
+
+1. In File integrity monitoring, select **Data collection rules**.
+
+ You can see each of the rules that were created for the subscriptions that you have access to.
+
+1. Select the DCR that you want to update for a subscription.
+
+ Each file in the list of Windows registry keys, Windows files, and Linux files contains a definition for a file or registry key, including name, path, and other options. You can also set **Enabled** to **False** to untrack the file or registry key without removing the definition.
+
+ Learn more about [system file and registry key definitions](../automation/change-tracking/manage-change-tracking.md#track-files).
+
+1. Select a file, and then add or edit the file or registry key definition.
+
+1. Select **Add** to save the changes.
+
+## Exclude machines from File Integrity Monitoring
+
+Every machine in the subscription that is attached to the DCR is monitored. You can detach a machine from the DCR so that the files and registry keys aren't tracked.
+
+To exclude a machine from File Integrity Monitoring:
+
+- In the list of monitored machines in the FIM results, select the menu (**...**) for the machine and select **Detach data collection rule**.
++
+The machine moves to the list of unmonitored machines, and file changes aren't tracked for that machine anymore.
+
+## Next steps
+
+Learn more about Defender for Cloud in:
+
+- [Setting security policies](tutorial-security-policy.md) - Learn how to configure security policies for your Azure subscriptions and resource groups.
+- [Managing security recommendations](review-security-recommendations.md) - Learn how recommendations help you protect your Azure resources.
+- [Azure Security blog](https://azure.microsoft.com/blog/topics/security/) - Get the latest Azure security news and information.
defender-for-cloud File Integrity Monitoring Enable Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-log-analytics.md
+
+ Title: Enable File Integrity Monitoring (Log Analytics agent)
+description: Learn how to enable File Integrity Monitoring when you collect data with the Log Analytics agent
+++ Last updated : 09/04/2022+
+# Enable File Integrity Monitoring when using the Log Analytics agent
+
+To provide [File Integrity Monitoring (FIM)](file-integrity-monitoring-overview.md), the Log Analytics agent uploads data to the Log Analytics workspace. By comparing the current state of these items with the state during the previous scan, FIM notifies you if suspicious modifications have been made.
+
+FIM uses the Azure Change Tracking solution to track and identify changes in your environment. When File Integrity Monitoring is enabled, you have a **Change Tracking** resource of type **Solution**. For data collection frequency details, see [Change Tracking data collection details](../automation/change-tracking/overview.md#change-tracking-and-inventory-data-collection).
+
+> [!NOTE]
+> If you remove the **Change Tracking** resource, you will also disable the File Integrity Monitoring feature in Defender for Cloud.
+
+## Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:|General availability (GA)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#defender-for-servers-plans).<br>Using the Log Analytics agent, FIM uploads data to the Log Analytics workspace. Data charges apply, based on the amount of data you upload. See [Log Analytics pricing](https://azure.microsoft.com/pricing/details/log-analytics/) to learn more.|
+|Required roles and permissions:|**Workspace owner** can enable/disable FIM (for more information, see [Azure Roles for Log Analytics](/services-hub/health/azure-roles#azure-roles)).<br>**Reader** can view results.|
+|Clouds:|:::image type="icon" source="./medi).<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
+
+## Enable File Integrity Monitoring with the Log Analytics agent
+
+FIM is only available from Defender for Cloud's pages in the Azure portal. There is currently no REST API for working with FIM.
+
+1. From the **Workload protections** dashboard's **Advanced protection** area, select **File integrity monitoring**.
+
+   :::image type="content" source="./media/file-integrity-monitoring-overview/open-file-integrity-monitoring.png" alt-text="Screenshot of opening the File Integrity Monitoring dashboard." lightbox="./media/file-integrity-monitoring-overview/open-file-integrity-monitoring.png":::
+
+ The following information is provided for each workspace:
+
+   - Total number of changes that occurred in the last week (you may see a dash "-" if FIM is not enabled on the workspace)
+ - Total number of computers and VMs reporting to the workspace
+ - Geographic location of the workspace
+ - Azure subscription that the workspace is under
+
+1. Use this page to:
+
+ - Access and view the status and settings of each workspace
+
+ - ![Upgrade plan icon.][4] Upgrade the workspace to use enhanced security features. This icon indicates that the workspace or subscription isn't protected with Microsoft Defender for Servers. To use the FIM features, your subscription must be protected with this plan. For more information, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md).
+
+ - ![Enable icon][3] Enable FIM on all machines under the workspace and configure the FIM options. This icon indicates that FIM is not enabled for the workspace.
+
+ :::image type="content" source="./media/file-integrity-monitoring-overview/workspace-list-fim.png" alt-text="Screenshot of enabling FIM for a specific workspace.":::
+
+ > [!TIP]
+ > If there's no enable or upgrade button, and the space is blank, it means that FIM is already enabled on the workspace.
+
+1. Select **ENABLE**. The details of the workspace, including the number of Windows and Linux machines under the workspace, are shown.
+
+ :::image type="content" source="./media/file-integrity-monitoring-overview/workspace-fim-status.png" alt-text="Screenshot of FIM workspace details page.":::
+
+ The recommended settings for Windows and Linux are also listed. Expand **Windows files**, **Registry**, and **Linux files** to see the full list of recommended items.
+
+1. Clear the checkboxes for any recommended entities you do not want to be monitored by FIM.
+
+1. Select **Apply file integrity monitoring** to enable FIM.
+
+> [!NOTE]
+> You can change the settings at any time. Learn more about [editing monitored entities](#edit-monitored-entities).
+
+### Disable File Integrity Monitoring
+
+FIM uses the Azure Change Tracking solution to track and identify changes in your environment. By disabling FIM, you remove the Change Tracking solution from the selected workspace.
+
+To disable FIM:
+
+1. From the **File Integrity Monitoring dashboard** for a workspace, select **Disable**.
+
+ :::image type="content" source="./media/file-integrity-monitoring-overview/disable-file-integrity-monitoring.png" alt-text="Screenshot of disabling file integrity monitoring from the settings page.":::
+
+1. Select **Remove**.
+
+## Monitor workspaces, entities, and files
+
+### Audit monitored workspaces
+
+The **File integrity monitoring** dashboard displays for workspaces where FIM is enabled. The FIM dashboard opens after you enable FIM on a workspace or when you select a workspace in the **file integrity monitoring** window that already has FIM enabled.
++
+The FIM dashboard for a workspace displays the following details:
+
+- Total number of machines connected to the workspace
+- Total number of changes that occurred during the selected time period
+- A breakdown of change type (files, registry)
+- A breakdown of change category (modified, added, removed)
+
+Select **Filter** at the top of the dashboard to change the time period for which changes are shown.
++
+The **Servers** tab lists the machines reporting to this workspace. For each machine, the dashboard lists:
+
+- Total changes that occurred during the selected period of time
+- A breakdown of total changes as file changes or registry changes
+
+When you select a machine, the query appears along with the results that identify the changes made during the selected time period for the machine. You can expand a change for more information.
++
+The **Changes** tab (shown below) lists all changes for the workspace during the selected time period. For each entity that was changed, the dashboard lists the:
+
+- Machine that the change occurred on
+- Type of change (registry or file)
+- Category of change (modified, added, removed)
+- Date and time of change
++
+**Change details** opens when you enter a change in the search field or select an entity listed under the **Changes** tab.
++
+### Edit monitored entities
+
+1. From the **File Integrity Monitoring dashboard** for a workspace, select **Settings** from the toolbar.
+
+ :::image type="content" source="./media/file-integrity-monitoring-overview/file-integrity-monitoring-dashboard-settings.png" alt-text="Screenshot of accessing the file integrity monitoring settings for a workspace." lightbox="./media/file-integrity-monitoring-overview/file-integrity-monitoring-dashboard-settings.png":::
+
+ **Workspace Configuration** opens with tabs for each type of element that can be monitored:
+
+ - Windows registry
+ - Windows files
+ - Linux Files
+ - File content
+ - Windows services
+
+ Each tab lists the entities that you can edit in that category. For each entity listed, Defender for Cloud identifies whether FIM is enabled (true) or not enabled (false). Edit the entity to enable or disable FIM.
+
+ :::image type="content" source="./media/file-integrity-monitoring-overview/file-integrity-monitoring-workspace-configuration.png" alt-text="Screenshot of workspace configuration for file integrity monitoring in Microsoft Defender for Cloud.":::
+
+1. Select an entry from one of the tabs and edit any of the available fields in the **Edit for Change Tracking** pane. Options include:
+
+ - Enable (True) or disable (False) file integrity monitoring
+ - Provide or change the entity name
+ - Provide or change the value or path
+ - Delete the entity
+
+1. Discard or save your changes.
+
+### Add a new entity to monitor
+
+1. From the **File Integrity Monitoring dashboard** for a workspace, select **Settings** from the toolbar.
+
+ The **Workspace Configuration** opens.
+
+1. In the **Workspace Configuration**:
+
+ 1. Select the tab for the type of entity that you want to add: Windows registry, Windows files, Linux Files, file content, or Windows services.
+ 1. Select **Add**.
+
+ In this example, we selected **Linux Files**.
+
+ :::image type="content" source="./media/file-integrity-monitoring-overview/file-integrity-monitoring-add-element.png" alt-text="Screenshot of adding an element to monitor in Microsoft Defender for Cloud's file integrity monitoring." lightbox="./media/file-integrity-monitoring-overview/file-integrity-monitoring-add-element.png":::
+
+1. Select **Add**. **Add for Change Tracking** opens.
+
+1. Enter the necessary information and select **Save**.
+
+### Folder and path monitoring using wildcards
+
+Use wildcards to simplify tracking across directories. The following rules apply when you configure folder monitoring using wildcards:
+- Wildcards are required for tracking multiple files.
+- Wildcards can only be used in the last segment of a path, such as C:\folder\file or /etc/*.conf
+- If an environment variable includes a path that is not valid, validation will succeed but the path will fail when inventory runs.
+- When setting the path, avoid general paths such as c:\*.* which will result in too many folders being traversed.
+
+## Compare baselines using File Integrity Monitoring
+
+[File Integrity Monitoring (FIM)](file-integrity-monitoring-overview.md) informs you when changes occur to sensitive areas in your resources, so you can investigate and address unauthorized activity. FIM monitors Windows files, Windows registries, and Linux files.
+
+### Enable built-in recursive registry checks
+
+The FIM registry hive defaults provide a convenient way to monitor recursive changes within common security areas. For example, an adversary may configure a script to execute in LOCAL_SYSTEM context by configuring an execution at startup or shutdown. To monitor changes of this type, enable the built-in check.
+
+![Registry.](./media/file-integrity-monitoring-enable-log-analytics/baselines-registry.png)
+
+>[!NOTE]
+> Recursive checks apply only to recommended security hives and not to custom registry paths.
+
+### Add a custom registry check
+
+FIM baselines start by identifying characteristics of a known-good state for the operating system and supporting application. For this example, we will focus on the password policy configurations for Windows Server 2008 and higher.
+
+|Policy Name | Registry Setting|
+|-|--|
+|Domain controller: Refuse machine account password changes| MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\RefusePasswordChange|
+|Domain member: Digitally encrypt or sign secure channel data (always)|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\RequireSignOrSeal|
+|Domain member: Digitally encrypt secure channel data (when possible)|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\SealSecureChannel|
+|Domain member: Digitally sign secure channel data (when possible)|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\SignSecureChannel|
+|Domain member: Disable machine account password changes|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\DisablePasswordChange|
+|Domain member: Maximum machine account password age|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\MaximumPasswordAge|
+|Domain member: Require strong (Windows 2000 or later) session key|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\RequireStrongKey|
+|Network security: Restrict NTLM: NTLM authentication in this domain|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\RestrictNTLMInDomain|
+|Network security: Restrict NTLM: Add server exceptions in this domain|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\DCAllowedNTLMServers|
+|Network security: Restrict NTLM: Audit NTLM authentication in this domain|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\AuditNTLMInDomain|
+
+> [!NOTE]
+> To learn more about registry settings supported by various operating system versions, refer to the [Group Policy Settings reference spreadsheet](https://www.microsoft.com/download/confirmation.aspx?id=25250).
+
+To configure FIM to monitor registry baselines:
+
+- In the **Add Windows Registry for Change Tracking** window, in the **Windows Registry Key** text box, enter the following registry key:
+
+ ```
+ HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
+ ```
+
+ :::image type="content" source="./media/file-integrity-monitoring-enable-log-analytics/baselines-add-registry.png" alt-text="Screenshot of enable FIM on a registry.":::
+
+### Track changes to Windows files
+
+1. In the **Add Windows File for Change Tracking** window, in the **Enter path** text box, enter the folder that contains the files that you want to track.
+In the example in the following figure,
+**Contoso Web App** resides in the D:\ drive within the **ContosWebApp** folder structure.
+1. Create a custom Windows file entry by providing a name of the setting class, enabling recursion, and specifying the top folder with a wildcard (*) suffix.
+
+ :::image type="content" source="./media/file-integrity-monitoring-enable-log-analytics/baselines-add-file.png" alt-text="Screenshot of enable FIM on a file.":::
+
+### Retrieve change data
+
+File Integrity Monitoring data resides within the Azure Log Analytics/ConfigurationChange table set.
+
+ 1. Set a time range to retrieve a summary of changes by resource.
+
+ In the following example, we are retrieving all changes in the last fourteen days in the categories of registry and files:
+
+ ```
+ ConfigurationChange
+ | where TimeGenerated > ago(14d)
+ | where ConfigChangeType in ('Registry', 'Files')
+ | summarize count() by Computer, ConfigChangeType
+ ```
+
+1. To view details of the registry changes:
+
+ 1. Remove **Files** from the **where** clause,
+ 1. Remove the summarization line and replace it with an ordering clause:
+
+ ```
+ ConfigurationChange
+ | where TimeGenerated > ago(14d)
+ | where ConfigChangeType in ('Registry')
+ | order by Computer, RegistryKey
+ ```
+
+Reports can be exported to CSV for archival and/or channeled to a Power BI report.
+
+![FIM data.](./media/file-integrity-monitoring-enable-log-analytics/baselines-data.png)
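As a hedged illustration of the export step (not part of the upstream article), the following sketch runs a similar `ConfigurationChange` summary query with the `azure-monitor-query` package and writes the rows to a CSV file. The workspace ID is a placeholder, and the sketch assumes the query returns a complete (non-partial) result.

```python
# Sketch: run a ConfigurationChange summary query against a Log Analytics
# workspace and export the results to CSV. The workspace ID is a placeholder.
import csv
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

QUERY = """
ConfigurationChange
| where ConfigChangeType in ('Registry', 'Files')
| summarize count() by Computer, ConfigChangeType
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=QUERY,
    timespan=timedelta(days=14),  # same 14-day window as the query above
)

table = response.tables[0]  # assumes a successful, non-partial result
with open("fim-changes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(table.columns)                     # column names
    writer.writerows(list(row) for row in table.rows)  # row values
```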
+
+<!--Image references-->
+[3]: ./media/file-integrity-monitoring-overview/enable.png
+[4]: ./media/file-integrity-monitoring-overview/upgrade-plan.png
+
+## Next steps
+
+Learn more about Defender for Cloud in:
+
+- [Setting security policies](tutorial-security-policy.md) - Learn how to configure security policies for your Azure subscriptions and resource groups.
+- [Managing security recommendations](review-security-recommendations.md) - Learn how recommendations help you protect your Azure resources.
+- [Azure Security blog](https://azure.microsoft.com/blog/topics/security/) - Get the latest Azure security news and information.
defender-for-cloud File Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-overview.md
Title: File integrity monitoring in Microsoft Defender for Cloud
-description: Learn how to configure file integrity monitoring (FIM) in Microsoft Defender for Cloud using this walkthrough.
+ Title: Track changes to system files and registry keys
+description: Learn about tracking changes to system files and registry keys with file integrity monitoring in Microsoft Defender for Cloud.
Previously updated : 11/09/2021 Last updated : 09/04/2022
-# File integrity monitoring in Microsoft Defender for Cloud
+# File Integrity Monitoring in Microsoft Defender for Cloud
-Learn how to configure file integrity monitoring (FIM) in Microsoft Defender for Cloud using this walkthrough.
--
-## Availability
-
-|Aspect|Details|
-|-|:-|
-|Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#defender-for-servers-plans).<br>Using the Log Analytics agent, FIM uploads data to the Log Analytics workspace. Data charges apply, based on the amount of data you upload. See [Log Analytics pricing](https://azure.microsoft.com/pricing/details/log-analytics/) to learn more.|
-|Required roles and permissions:|**Workspace owner** can enable/disable FIM (for more information, see [Azure Roles for Log Analytics](/services-hub/health/azure-roles#azure-roles)).<br>**Reader** can view results.|
-|Clouds:|:::image type="icon" source="./medi).<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
--
-## What is FIM in Defender for Cloud?
-File integrity monitoring (FIM), also known as change monitoring, examines operating system files, Windows registries, application software, Linux system files, and more, for changes that might indicate an attack.
+File Integrity Monitoring (FIM) examines operating system files, Windows registries, application software, and Linux system files for changes that might indicate an attack. FIM lets you take advantage of [Change Tracking](../automation/change-tracking/overview.md) directly in Defender for Cloud.
Defender for Cloud recommends entities to monitor with FIM, and you can also define your own FIM policies or entities to monitor. FIM informs you about suspicious activity such as: - File and registry key creation or removal - File modifications (changes in file size, access control lists, and hash of the content)-- Registry modifications (changes in size, access control lists, type, and the content)-
-In this tutorial you'll learn how to:
-
-> [!div class="checklist"]
-> * Review the list of suggested entities to monitor with FIM
-> * Define your own, custom FIM rules
-> * Audit changes to your monitored entities
-> * Use wildcards to simplify tracking across directories
--
-## How does FIM work?
-
-The Log Analytics agent uploads data to the Log Analytics workspace. By comparing the current state of these items with the state during the previous scan, FIM notifies you if suspicious modifications have been made.
+- Registry modifications (changes in size, access control lists, type, and content)
-FIM uses the Azure Change Tracking solution to track and identify changes in your environment. When file integrity monitoring is enabled, you have a **Change Tracking** resource of type **Solution**. For data collection frequency details, see [Change Tracking data collection details](../automation/change-tracking/overview.md#change-tracking-and-inventory-data-collection).
-
-> [!NOTE]
-> If you remove the **Change Tracking** resource, you will also disable the file integrity monitoring feature in Defender for Cloud.
+Many regulatory compliance standards require implementing FIM controls, such as PCI-DSS and ISO 17799.
## Which files should I monitor?
-When choosing which files to monitor, consider the files that are critical for your system and applications. Monitor files that you don't expect to change without planning. If you choose files that are frequently changed by applications or operating system (such as log files and text files) it'll create a lot of noise, making it difficult to identify an attack.
+When choosing which files to monitor, consider the files that are critical for your system and applications. Monitor files that you don't expect to change without planning. If you choose files that are frequently changed by applications or the operating system (such as log files and text files), it will create noise, making it difficult to identify an attack.
Defender for Cloud provides the following list of recommended items to monitor based on known attack patterns.
Defender for Cloud provides the following list of recommended items to monitor b
|||HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\PublicProfile| |||HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile| -
-## Enable file integrity monitoring
-
-FIM is only available from Defender for Cloud's pages in the Azure portal. There is currently no REST API for working with FIM.
-
-1. From the **Workload protections** dashboard's **Advanced protection** area, select **File integrity monitoring**.
-
- :::image type="content" source="./media/file-integrity-monitoring-overview/open-file-integrity-monitoring.png" alt-text="Launching FIM." lightbox="./media/file-integrity-monitoring-overview/open-file-integrity-monitoring.png":::
-
- The **File integrity monitoring** configuration page opens.
-
- The following information is provided for each workspace:
-
 - Total number of changes that occurred in the last week (you may see a dash "-" if FIM is not enabled on the workspace)
- - Total number of computers and VMs reporting to the workspace
- - Geographic location of the workspace
- - Azure subscription that the workspace is under
-
-1. Use this page to:
-
- - Access and view the status and settings of each workspace
-
- - ![Upgrade plan icon.][4] Upgrade the workspace to use enhanced security features. This icon Indicates that the workspace or subscription isn't protected with Microsoft Defender for Servers. To use the FIM features, your subscription must be protected with this plan. For more information, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md).
-
- - ![Enable icon][3] Enable FIM on all machines under the workspace and configure the FIM options. This icon indicates that FIM is not enabled for the workspace.
-
- :::image type="content" source="./media/file-integrity-monitoring-overview/workspace-list-fim.png" alt-text="Enabling FIM for a specific workspace.":::
--
- > [!TIP]
- > If there's no enable or upgrade button, and the space is blank, it means that FIM is already enabled on the workspace.
--
-1. Select **ENABLE**. The details of the workspace including the number of Windows and Linux machines under the workspace is shown.
-
- :::image type="content" source="./media/file-integrity-monitoring-overview/workspace-fim-status.png" alt-text="FIM workspace details page.":::
-
- The recommended settings for Windows and Linux are also listed. Expand **Windows files**, **Registry**, and **Linux files** to see the full list of recommended items.
-
-1. Clear the checkboxes for any recommended entities you do not want to be monitored by FIM.
-
-1. Select **Apply file integrity monitoring** to enable FIM.
-
-> [!NOTE]
-> You can change the settings at any time. See [Edit monitored entities](#edit-monitored-entities) below to learn more.
---
-## Audit monitored workspaces
-
-The **File integrity monitoring** dashboard displays for workspaces where FIM is enabled. The FIM dashboard opens after you enable FIM on a workspace or when you select a workspace in the **file integrity monitoring** window that already has FIM enabled.
--
-The FIM dashboard for a workspace displays the following details:
--- Total number of machines connected to the workspace-- Total number of changes that occurred during the selected time period-- A breakdown of change type (files, registry)-- A breakdown of change category (modified, added, removed)-
-Select **Filter** at the top of the dashboard to change the time period for which changes are shown.
--
-The **Servers** tab lists the machines reporting to this workspace. For each machine, the dashboard lists:
--- Total changes that occurred during the selected period of time-- A breakdown of total changes as file changes or registry changes-
-When you select a machine, the query appears along with the results that identify the changes made during the selected time period for the machine. You can expand a change for more information.
--
-The **Changes** tab (shown below) lists all changes for the workspace during the selected time period. For each entity that was changed, the dashboard lists the:
--- Machine that the change occurred on-- Type of change (registry or file)-- Category of change (modified, added, removed)-- Date and time of change--
-**Change details** opens when you enter a change in the search field or select an entity listed under the **Changes** tab.
--
-## Edit monitored entities
-
-1. From the **File integrity monitoring dashboard** for a workspace, select **Settings** from the toolbar.
-
- :::image type="content" source="./media/file-integrity-monitoring-overview/file-integrity-monitoring-dashboard-settings.png" alt-text="Accessing the file integrity monitoring settings for a workspace." lightbox="./media/file-integrity-monitoring-overview/file-integrity-monitoring-dashboard-settings.png":::
-
- **Workspace Configuration** opens with tabs for each type of element that can be monitored:
-
- - Windows registry
- - Windows files
- - Linux Files
- - File content
- - Windows services
-
- Each tab lists the entities that you can edit in that category. For each entity listed, Defender for Cloud identifies whether FIM is enabled (true) or not enabled (false). Edit the entity to enable or disable FIM.
-
- :::image type="content" source="./media/file-integrity-monitoring-overview/file-integrity-monitoring-workspace-configuration.png" alt-text="Workspace configuration for file integrity monitoring in Microsoft Defender for Cloud.":::
-
-1. Select an entry from one of the tabs and edit any of the available fields in the **Edit for Change Tracking** pane. Options include:
-
- - Enable (True) or disable (False) file integrity monitoring
- - Provide or change the entity name
- - Provide or change the value or path
- - Delete the entity
-
-1. Discard or save your changes.
--
-## Add a new entity to monitor
-
-1. From the **File integrity monitoring dashboard** for a workspace, select **Settings** from the toolbar.
-
- The **Workspace Configuration** opens.
-
-1. On the **Workspace Configuration** page:
-
- 1. Select the tab for the type of entity that you want to add: Windows registry, Windows files, Linux Files, file content, or Windows services.
- 1. Select **Add**.
-
- In this example, we selected **Linux Files**.
-
- :::image type="content" source="./media/file-integrity-monitoring-overview/file-integrity-monitoring-add-element.png" alt-text="Adding an element to monitor in Microsoft Defender for Cloud's file integrity monitoring" lightbox="./media/file-integrity-monitoring-overview/file-integrity-monitoring-add-element.png":::
-
-1. Select **Add**. **Add for Change Tracking** opens.
-
-1. Enter the necessary information and select **Save**.
-
-## Folder and path monitoring using wildcards
-
-Use wildcards to simplify tracking across directories. The following rules apply when you configure folder monitoring using wildcards:
-- Wildcards are required for tracking multiple files.
-- Wildcards can only be used in the last segment of a path, such as C:\folder\file or /etc/*.conf
-- If an environment variable includes a path that is not valid, validation will succeed but the path will fail when inventory runs.
-- When setting the path, avoid general paths such as c:\*.*, which will result in too many folders being traversed.
-
-## Disable FIM
-You can disable FIM. FIM uses the Azure Change Tracking solution to track and identify changes in your environment. By disabling FIM, you remove the Change Tracking solution from the selected workspace.
-
-To disable FIM:
-
-1. From the **File integrity monitoring dashboard** for a workspace, select **Disable**.
-
- :::image type="content" source="./media/file-integrity-monitoring-overview/disable-file-integrity-monitoring.png" alt-text="Disable file integrity monitoring from the settings page.":::
-
-1. Select **Remove**.
- ## Next steps
-In this article, you learned to use file integrity monitoring (FIM) in Defender for Cloud. To learn more about Defender for Cloud, see the following pages:
-* [Setting security policies](tutorial-security-policy.md) -- Learn how to configure security policies for your Azure subscriptions and resource groups.
-* [Managing security recommendations](review-security-recommendations.md) -- Learn how recommendations help you protect your Azure resources.
-* [Azure Security blog](/archive/blogs/azuresecurity/)--Get the latest Azure security news and information.
+In this article, you learned about File Integrity Monitoring (FIM) in Defender for Cloud.
+
+Next, you can:
-<!--Image references-->
-[3]: ./media/file-integrity-monitoring-overview/enable.png
-[4]: ./media/file-integrity-monitoring-overview/upgrade-plan.png
+- [Enable File Integrity Monitoring when using the Azure Monitor Agent](file-integrity-monitoring-enable-ama.md)
+- [Enable File Integrity Monitoring when using the Log Analytics agent](file-integrity-monitoring-enable-log-analytics.md)
defender-for-cloud File Integrity Monitoring Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-usage.md
- Title: File Integrity Monitoring in Microsoft Defender for Cloud
-description: Learn how to compare baselines with File Integrity Monitoring in Microsoft Defender for Cloud.
--- Previously updated : 11/09/2021--
-# Compare baselines using File Integrity Monitoring (FIM)
-
-File Integrity Monitoring (FIM) informs you when changes occur to sensitive areas in your resources, so you can investigate and address unauthorized activity. FIM monitors Windows files, Windows registries, and Linux files.
-
-This article explains how to enable FIM for files and registries. For more information about FIM, see [File Integrity Monitoring in Microsoft Defender for Cloud](file-integrity-monitoring-overview.md).
-
-## Why use FIM?
-
-The operating system, applications, and associated configurations control the behavior and security state of your resources. Attackers therefore target the files that control your resources in order to take over a resource's operating system or execute activities without being detected.
-
-In fact, many regulatory compliance standards such as PCI-DSS & ISO 17799 require implementing FIM controls.
-
-## Enable built-in recursive registry checks
-
-The FIM registry hive defaults provide a convenient way to monitor recursive changes within common security areas. For example, an adversary might configure a script to execute in LOCAL_SYSTEM context by registering it to run at startup or shutdown. To monitor changes of this type, enable the built-in check.
-
-![Registry.](./media/file-integrity-monitoring-usage/baselines-registry.png)
-
->[!NOTE]
-> Recursive checks apply only to recommended security hives and not to custom registry paths.
-
-## Add a custom registry check
-
-FIM baselines start by identifying characteristics of a known-good state for the operating system and supporting application. For this example, we will focus on the password policy configurations for Windows Server 2008 and higher.
--
-|Policy Name | Registry Setting|
-|-|--|
-|Domain controller: Refuse machine account password changes| MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters\RefusePasswordChange|
-|Domain member: Digitally encrypt or sign secure channel data (always)|MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters\RequireSignOrSeal|
-|Domain member: Digitally encrypt secure channel data (when possible)|MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters\SealSecureChannel|
-|Domain member: Digitally sign secure channel data (when possible)|MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters\SignSecureChannel|
-|Domain member: Disable machine account password changes|MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters\DisablePasswordChange|
-|Domain member: Maximum machine account password age|MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters\MaximumPasswordAge|
-|Domain member: Require strong (Windows 2000 or later) session key|MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters\RequireStrongKey|
-|Network security: Restrict NTLM: NTLM authentication in this domain|MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters\RestrictNTLMInDomain|
-|Network security: Restrict NTLM: Add server exceptions in this domain|MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters\DCAllowedNTLMServers|
-|Network security: Restrict NTLM: Audit NTLM authentication in this domain|MACHINE\System\CurrentControlSet\Services\Netlogon\Parameters\AuditNTLMInDomain|
-
-> [!NOTE]
-> To learn more about registry settings supported by various operating system versions, refer to the [Group Policy Settings reference spreadsheet](https://www.microsoft.com/download/confirmation.aspx?id=25250).
-
-To configure FIM to monitor registry baselines:
-
-- In the **Add Windows Registry for Change Tracking** window, in the **Windows Registry Key** text box, enter the following registry key:
-
- ```
- HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
- ```
-
- :::image type="content" source="./media/file-integrity-monitoring-usage/baselines-add-registry.png" alt-text="Enable FIM on a registry.":::
-
-## Track changes to Windows files
-
-1. In the **Add Windows File for Change Tracking** window, in the **Enter path** text box, enter the folder that contains the files that you want to track. In the example in the following figure, **Contoso Web App** resides in the D:\ drive within the **ContosWebApp** folder structure.
-1. Create a custom Windows file entry by providing a name for the setting class, enabling recursion, and specifying the top folder with a wildcard (*) suffix.
-
- :::image type="content" source="./media/file-integrity-monitoring-usage/baselines-add-file.png" alt-text="Enable FIM on a file.":::
-
-## Retrieve change data
-
-File Integrity Monitoring data resides within the Azure Log Analytics / ConfigurationChange table set.
-
- 1. Set a time range to retrieve a summary of changes by resource.
-
- In the following example, we are retrieving all changes in the last fourteen days in the categories of registry and files:
-
- ```
- ConfigurationChange
- | where TimeGenerated > ago(14d)
- | where ConfigChangeType in ('Registry', 'Files')
- | summarize count() by Computer, ConfigChangeType
- ```
-
-1. To view details of the registry changes:
-
- 1. Remove **Files** from the **where** clause,
- 1. Remove the summarization line and replace it with an ordering clause:
-
- ```
- ConfigurationChange
- | where TimeGenerated > ago(14d)
- | where ConfigChangeType in ('Registry')
- | order by Computer, RegistryKey
- ```
-
-You can export reports to CSV for archival, or channel them to a Power BI report.
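If you want to script that export, the following sketch (not part of the original article) runs the same summary query against Log Analytics and writes the results to CSV. It assumes the `azure-monitor-query` and `azure-identity` packages, a credential with access to the workspace, and a placeholder `<workspace-id>` that you replace with your Log Analytics workspace ID.

```python
# Sketch only: run the summary query shown above and save the results to CSV.
# <workspace-id> is a placeholder, not a value from the article.
import csv

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<workspace-id>"

QUERY = """
ConfigurationChange
| where TimeGenerated > ago(14d)
| where ConfigChangeType in ('Registry', 'Files')
| summarize count() by Computer, ConfigChangeType
"""

client = LogsQueryClient(DefaultAzureCredential())

# The time filter is already part of the query, so no extra timespan is applied.
response = client.query_workspace(workspace_id=WORKSPACE_ID, query=QUERY, timespan=None)

# Write each returned table to its own CSV file for archival or Power BI import.
for table in response.tables:
    with open(f"fim-changes-{table.name}.csv", "w", newline="") as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(table.columns)   # column names
        writer.writerows(table.rows)     # one row per result
```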
-
-![FIM data.](./media/file-integrity-monitoring-usage/baselines-data.png)
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
The following IAM permissions are needed to discover AWS resources:
| GuardDuty | `guardduty:DescribeOrganizationConfiguration` <br> `guardduty:DescribePublishingDestination` <br> `guardduty:List*` | | IAM | `iam:Generate*` <br> `iam:Get*` <br> `iam:List*` <br> `iam:Simulate*` | | KMS | `kms:Describe*` <br> `kms:List*` |
-| LAMDBA | `lambda:GetPolicy` <br> `lambda:List*` |
+| LAMBDA | `lambda:GetPolicy` <br> `lambda:List*` |
| Network firewall | `network-firewall:DescribeFirewall` <br> `network-firewall:DescribeFirewallPolicy` <br> `network-firewall:DescribeLoggingConfiguration` <br> `network-firewall:DescribeResourcePolicy` <br> `network-firewall:DescribeRuleGroup` <br> `network-firewall:DescribeRuleGroupMetadata` <br> `network-firewall:ListFirewallPolicies` <br> `network-firewall:ListFirewalls` <br> `network-firewall:ListRuleGroups` <br> `network-firewall:ListTagsForResource` | | RDS | `rds:Describe*` <br> `rds:List*` | | RedShift | `redshift:Describe*` |
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
## September 2022 - [Suppress alerts based on Container and Kubernetes entities](#suppress-alerts-based-on-container-and-kubernetes-entities)
+- [Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent](#defender-for-servers-supports-file-integrity-monitoring-with-azure-monitor-agent)
### Suppress alerts based on Container and Kubernetes entities
You can now suppress alerts based on these Kubernetes entities so you can use th
Learn more about [alert suppression rules](alerts-suppression-rules.md).
+### Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent
+
+File integrity monitoring (FIM) examines operating system files and registries for changes that might indicate an attack.
+
+FIM is now available in a new version based on Azure Monitor Agent (AMA), which you can deploy through Defender for Cloud.
+
+Learn more about [File Integrity Monitoring with the Azure Monitor Agent](file-integrity-monitoring-enable-ama.md).
+ ## August 2022 Updates in August include:
When vulnerabilities are detected, Defender for Cloud generates the following se
Learn more about [viewing vulnerabilities for running images](defender-for-containers-introduction.md#view-vulnerabilities-for-running-images).
-## Azure Monitor Agent integration now in preview
+### Azure Monitor Agent integration now in preview
Defender for Cloud now includes preview support for the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA). AMA is intended to replace the legacy Log Analytics agent (also referred to as the Microsoft Monitoring Agent (MMA)), which is on a path to deprecation. AMA [provides a number of benefits](../azure-monitor/agents/azure-monitor-agent-migration.md#benefits) over legacy agents.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | ✔ | - | Yes | | [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | - | Yes | | [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | - | ✔ | Yes |
-| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
+| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
| [Adaptive application controls](adaptive-application-controls.md) | ✔ | - | ✔ | Yes | | [Network map](protect-network-resources.md#network-map) | ✔ | ✔ | - | Yes | | [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | - | Yes |
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | ✔ | - | Yes | | [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | - | Yes | | [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | - | ✔ | Yes |
-| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
+| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
| [Adaptive application controls](adaptive-application-controls.md) | ✔ | - | ✔ | Yes | | [Network map](protect-network-resources.md#network-map) | ✔ | ✔ | - | Yes | | [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | - | Yes |
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| [Network-based security alerts](other-threat-protections.md#network-layer) | - | - | | [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | | [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | ✔ |
-| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ |
+| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ |
| [Adaptive application controls](adaptive-application-controls.md) | ✔ | ✔ | | [Network map](protect-network-resources.md#network-map) | - | - | | [Adaptive network hardening](adaptive-network-hardening.md) | - | - |
For information about when recommendations are generated for each of these solut
| - [Bi-directional alert synchronization with Sentinel](../sentinel/connect-azure-security-center.md) | Public Preview | Not Available | Not Available | | **Microsoft Defender for Servers features** <sup>[7](#footnote7)</sup> | | | | | - [Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA |
-| - [File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA |
+| - [File Integrity Monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA |
| - [Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA | | - [Adaptive network hardening](./adaptive-network-hardening.md) | GA | Not Available | Not Available | | - [Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA |
firewall-manager Migrate To Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/migrate-to-policy.md
$FirewallName = "azfw"
$FirewallPolicyResourceGroup = "AzFWPolicyRG" $FirewallPolicyName = "fwpolicy" $FirewallPolicyLocation = "WestEurope"-
-$DefaultAppRuleCollectionGroupName = "ApplicationRuleCollectionGroup"
-$DefaultNetRuleCollectionGroupName = "NetworkRuleCollectionGroup"
-$DefaultNatRuleCollectionGroupName = "NatRuleCollectionGroup"
-$ApplicationRuleGroupPriority = 300
-$NetworkRuleGroupPriority = 200
-$NatRuleGroupPriority = 100
-
+ @@ -43,141 +44,186 @@ $InvalidCharsPattern = "[']"
#Helper functions for translating ApplicationProtocol and ApplicationRule Function GetApplicationProtocolsString {
Function GetApplicationProtocolsString
} return $output.Substring(0, $output.Length - 1) }- Function GetApplicationRuleCmd { Param([Object] $ApplicationRule)- $cmd = "New-AzFirewallPolicyApplicationRule"
- $cmd = $cmd + " -Name " + "'" + $($ApplicationRule.Name) + "'"
-
+ $parsedName = ParseRuleName($ApplicationRule.Name)
+ $cmd = $cmd + " -Name " + "'" + $parsedName + "'"
if ($ApplicationRule.SourceAddresses) { $ApplicationRule.SourceAddresses = $ApplicationRule.SourceAddresses -join ","
Function GetApplicationRuleCmd
$ApplicationRule.SourceIpGroups = $ApplicationRule.SourceIpGroups -join "," $cmd = $cmd + " -SourceIpGroup " + $ApplicationRule.SourceIpGroups }- if ($ApplicationRule.Description) { $cmd = $cmd + " -Description " + "'" + $ApplicationRule.Description + "'"
Function GetApplicationRuleCmd
{ $protocols = GetApplicationProtocolsString($ApplicationRule.Protocols) $cmd = $cmd + " -Protocol " + $protocols- $AppRule = $($ApplicationRule.TargetFqdns) -join "," $cmd = $cmd + " -TargetFqdn " + $AppRule- } if ($ApplicationRule.FqdnTags) { $cmd = $cmd + " -FqdnTag " + "'" + $ApplicationRule.FqdnTags + "'" }- return $cmd }-
+Function ParseRuleName
+{
+ Param([Object] $RuleName)
+ if ($RuleName -match $InvalidCharsPattern) {
+ $newRuleName = $RuleName -split $InvalidCharsPattern -join ""
+ Write-Host "Rule $RuleName contains an invalid character. Invalid characters have been removed, rule new name is $newRuleName. " -ForegroundColor Cyan
+ return $newRuleName
+ }
+ return $RuleName
+}
If (!(Get-AzResourceGroup -Name $FirewallPolicyResourceGroup)) { New-AzResourceGroup -Name $FirewallPolicyResourceGroup -Location $FirewallPolicyLocation }- $azfw = Get-AzFirewall -Name $FirewallName -ResourceGroupName $FirewallResourceGroup- Write-Host "creating empty firewall policy"
-$fwDnsSetting = New-AzFirewallPolicyDnsSetting -EnableProxy
-$fwp = New-AzFirewallPolicy -Name $FirewallPolicyName -ResourceGroupName $FirewallPolicyResourceGroup -Location $FirewallPolicyLocation -ThreatIntelMode $azfw.ThreatIntelMode -DnsSetting $fwDnsSetting -Force
+if ($azfw.DNSEnableProxy) {
+ $fwDnsSetting = New-AzFirewallPolicyDnsSetting -EnableProxy
+ $fwp = New-AzFirewallPolicy -Name $FirewallPolicyName -ResourceGroupName $FirewallPolicyResourceGroup -Location $FirewallPolicyLocation -ThreatIntelMode $azfw.ThreatIntelMode -DnsSetting $fwDnsSetting -Force
+}
+else {
+ $fwp = New-AzFirewallPolicy -Name $FirewallPolicyName -ResourceGroupName $FirewallPolicyResourceGroup -Location $FirewallPolicyLocation -ThreatIntelMode $azfw.ThreatIntelMode
+}
Write-Host $fwp.Name "created" Write-Host "creating " $azfw.ApplicationRuleCollections.Count " application rule collections"- #Translate ApplicationRuleCollection If ($azfw.ApplicationRuleCollections.Count -gt 0) {
If ($azfw.ApplicationRuleCollections.Count -gt 0)
$appRuleGroup = New-AzFirewallPolicyRuleCollectionGroup -Name $DefaultAppRuleCollectionGroupName -Priority $ApplicationRuleGroupPriority -RuleCollection $firewallPolicyAppRuleCollections -FirewallPolicyObject $fwp Write-Host "Created ApplicationRuleCollectionGroup " $appRuleGroup.Name }- #Translate NetworkRuleCollection Write-Host "creating " $azfw.NetworkRuleCollections.Count " network rule collections" If ($azfw.NetworkRuleCollections.Count -gt 0)
If ($azfw.NetworkRuleCollections.Count -gt 0)
$firewallPolicyNetRules = @() ForEach ($rule in $rc.Rules) {
+ $parsedName = ParseRuleName($rule.Name)
If ($rule.SourceAddresses) { If ($rule.DestinationAddresses) {
- $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $parsedName -SourceAddress $rule.SourceAddresses -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
} elseif ($rule.DestinationIpGroups) {
- $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -DestinationIpGroup $rule.DestinationIpGroups -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $parsedName -SourceAddress $rule.SourceAddresses -DestinationIpGroup $rule.DestinationIpGroups -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
} elseif ($rule.DestinationFqdns) {
- $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -DestinationFqdn $rule.DestinationFqdns -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $parsedName -SourceAddress $rule.SourceAddresses -DestinationFqdn $rule.DestinationFqdns -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
} } elseif ($rule.SourceIpGroups) { If ($rule.DestinationAddresses) {
- $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceIpGroup $rule.SourceIpGroups -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $parsedName -SourceIpGroup $rule.SourceIpGroups -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
} elseif ($rule.DestinationIpGroups) {
- $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceIpGroup $rule.SourceIpGroups -DestinationIpGroup $rule.DestinationIpGroups -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $parsedName -SourceIpGroup $rule.SourceIpGroups -DestinationIpGroup $rule.DestinationIpGroups -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
} elseif ($rule.DestinationFqdns) {
- $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceIpGroup $rule.SourceIpGroups -DestinationFqdn $rule.DestinationFqdns -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $parsedName -SourceIpGroup $rule.SourceIpGroups -DestinationFqdn $rule.DestinationFqdns -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
} } Write-Host "Created network rule " $firewallPolicyNetRule.Name
If ($azfw.NetworkRuleCollections.Count -gt 0)
$netRuleGroup = New-AzFirewallPolicyRuleCollectionGroup -Name $DefaultNetRuleCollectionGroupName -Priority $NetworkRuleGroupPriority -RuleCollection $firewallPolicyNetRuleCollections -FirewallPolicyObject $fwp Write-Host "Created NetworkRuleCollectionGroup " $netRuleGroup.Name }- #Translate NatRuleCollection # Hierarchy for NAT rule collection is different for AZFW and FirewallPolicy. In AZFW you can have a NatRuleCollection with multiple NatRules # where each NatRule will have its own set of source , dest, translated IPs and ports. # In FirewallPolicy a NatRuleCollection has a a set of rules which has one condition (source and dest IPs and Ports) and the translated IP and ports # as part of NatRuleCollection. # So when translating NAT rules we will have to create separate ruleCollection for each rule in AZFW and every ruleCollection will have only 1 rule.-
-Write-Host "creating " $azfw.NatRuleCollections.Count " network rule collections"
+Write-Host "creating " $azfw.NatRuleCollections.Count " NAT rule collections"
If ($azfw.NatRuleCollections.Count -gt 0) { $firewallPolicyNatRuleCollections = @()
If ($azfw.NatRuleCollections.Count -gt 0)
If ($rc.Rules.Count -gt 0) { Write-Host "creating " $rc.Rules.Count " nat rules for collection " $rc.Name
- ForEach ($rule in $rc.Rules)
- {
- $firewallPolicyNatRule = New-AzFirewallPolicyNatRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -TranslatedAddress $rule.TranslatedAddress -TranslatedPort $rule.TranslatedPort -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
- Write-Host "Created nat rule " $firewallPolicyNatRule.Name
+
+ ForEach ($rule in $rc.Rules)
+ {
+ $parsedName = ParseRuleName($rule.Name)
+ If ($rule.SourceAddresses)
+ @@ -188,18 +234,19 @@ If ($azfw.NatRuleCollections.Count -gt 0) {
+ {
+ $firewallPolicyNatRule = New-AzFirewallPolicyNatRule -Name $parsedName -SourceIpGroup $rule.SourceIpGroups -TranslatedAddress $rule.TranslatedAddress -TranslatedPort $rule.TranslatedPort -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ }
+ Write-Host "Created NAT rule: " $firewallPolicyNatRule.Name
$firewallPolicyNatRules += $firewallPolicyNatRule }
- $natRuleCollectionName = $rc.Name + $rule.Name
+
+ $natRuleCollectionName = $rc.Name
$fwpNatRuleCollection = New-AzFirewallPolicyNatRuleCollection -Name $natRuleCollectionName -Priority $priority -ActionType $rc.Action.Type -Rule $firewallPolicyNatRules $priority += 1
- Write-Host "Created NatRuleCollection " $fwpNatRuleCollection.Name
+ Write-Host "Created NAT RuleCollection " $fwpNatRuleCollection.Name
$firewallPolicyNatRuleCollections += $fwpNatRuleCollection } } $natRuleGroup = New-AzFirewallPolicyRuleCollectionGroup -Name $DefaultNatRuleCollectionGroupName -Priority $NatRuleGroupPriority -RuleCollection $firewallPolicyNatRuleCollections -FirewallPolicyObject $fwp
- Write-Host "Created NatRuleCollectionGroup " $natRuleGroup.Name
+ Write-Host "Created NAT RuleCollectionGroup " $natRuleGroup.Name
} ``` ## Next steps
-Learn more about Azure Firewall Manager deployment: [Azure Firewall Manager deployment overview](deployment-overview.md).
+Learn more about Azure Firewall Manager deployment: [Azure Firewall Manager deployment overview](deployment-overview.md).
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
The response to this request looks like the following example:
} ```
+## Enrollment groups
+
+Enrollment groups are used to manage the device authentication options in your IoT Central application. To learn more, see [Device authentication concepts in IoT Central](concepts-device-authentication.md).
+
+To learn how to create and manage enrollment groups in the UI, see [How to connect devices with X.509 certificates to IoT Central Application](how-to-connect-devices-x509.md).
+
+## Create an enrollment group
+
+### [X509](#tab/X509)
+
+When you create an enrollment group for devices that use X.509 certificates, you first need to upload the root or intermediate certificate to your IoT Central application.
+
+### Generate root and device certificates
+
+In this section, you generate the X.509 certificates you need to connect a device to IoT Central.
+
+> [!WARNING]
+> This way of generating X.509 certs is for testing only. For a production environment you should use your official, secure mechanism for certificate generation.
+
+1. Navigate to the certificate generator script in the Microsoft Azure IoT SDK for Node.js you downloaded. Install the required packages:
+
+ ```cmd/sh
+ cd azure-iot-sdk-node/provisioning/tools
+ npm install
+ ```
+
+1. Create a root certificate and then derive a device certificate by running the script:
+
+ ```cmd/sh
+ node create_test_cert.js root mytestrootcert
+ node create_test_cert.js device sample-device-01 mytestrootcert
+ ```
+
+ > [!TIP]
+ > A device ID can contain letters, numbers, and the `-` character.
+
+These commands produce the following root and device certificates:
+
+| filename | contents |
+| -- | -- |
+| mytestrootcert_cert.pem | The public portion of the root X.509 certificate |
+| mytestrootcert_key.pem | The private key for the root X.509 certificate |
+| mytestrootcert_fullchain.pem | The entire keychain for the root X.509 certificate. |
+| mytestrootcert.pfx | The PFX file for the root X.509 certificate. |
+| sampleDevice01_cert.pem | The public portion of the device X.509 certificate |
+| sampleDevice01_key.pem | The private key for the device X.509 certificate |
+| sampleDevice01_fullchain.pem | The entire keychain for the device X.509 certificate. |
+| sampleDevice01.pfx | The PFX file for the device X.509 certificate. |
+
+Make a note of the location of these files. You need it later.
+
+### Generate the base-64 encoded version of the root certificate
+
+In the folder on your local machine that contains the certificates you generated, create a file called convert.js and add the following JavaScript content:
+
+```javascript
+const fs = require('fs')
+const fileContents = fs.readFileSync(process.argv[2]).toString('base64');
+console.log(fileContents);
+```
+
+Run the following command to generate a base-64 encoded version of the certificate:
+
+```cmd/sh
+node convert.js mytestrootcert_cert.pem
+```
+
+Make a note of the base-64 encoded version of the certificate. You need it later.
+
+### Add an X.509 enrollment group
+
+Use the following request to create a new enrollment group with `myx509eg` as the ID:
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/myx509eg?api-version=2022-07-31
+```
+
+The following example shows a request body that adds a new X.509 enrollment group:
+
+```json
+{
+ "displayName": "My group",
+ "enabled": true,
+ "type": "iot",
+ "attestation": {
+ "type": "x509"
+ }
+}
+
+```
+
+The request body has some required fields:
+
+* `displayName`: Display name of the enrollment group.
+* `enabled`: Whether the devices using the group are allowed to connect to IoT Central.
+* `type`: Type of devices that connect through the group, either `iot` or `iotEdge`.
+* `attestation`: The attestation mechanism for the enrollment group, either `symmetricKey` or `x509`.
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "myEnrollmentGroupId",
+ "displayName": "My group",
+ "enabled": true,
+ "type": "iot",
+ "attestation": {
+ "type": "x509",
+ "x509": {
+ "signingCertificates": {}
+ }
+ },
+ "etag": "IjdiMDcxZWQ5LTAwMDAtMDcwMC0wMDAwLTYzMWI3MWQ4MDAwMCI="
+}
+```
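If you prefer to script these calls rather than craft raw HTTP requests, the following sketch shows the same request with Python and the `requests` package. It isn't part of the original article: `<your app subdomain>` and `<api-token>` are placeholders, and it assumes you authenticate with an IoT Central API token passed as the `Authorization` header value (an Azure AD bearer token with a `Bearer ` prefix also works).

```python
# Sketch only: create the X.509 enrollment group described above.
# <your app subdomain> and <api-token> are placeholders.
import requests

APP_SUBDOMAIN = "<your app subdomain>"
API_TOKEN = "<api-token>"  # IoT Central API token, used as the Authorization header value

url = (
    f"https://{APP_SUBDOMAIN}.azureiotcentral.com"
    "/api/enrollmentGroups/myx509eg?api-version=2022-07-31"
)

body = {
    "displayName": "My group",
    "enabled": True,
    "type": "iot",
    "attestation": {"type": "x509"},
}

response = requests.put(url, json=body, headers={"Authorization": API_TOKEN})
response.raise_for_status()
print(response.json())  # includes the generated id, attestation details, and etag
```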
+
+### Add an X.509 certificate to an enrollment group
+
+Use the following request to set the primary X.509 certificate of the `myx509eg` enrollment group:
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/myx509eg/certificates/primary?api-version=2022-07-31
+```
+
+In the URL, the `entry` segment specifies which certificate to set: `primary` or `secondary`.
+
+Use this request to add either a primary or secondary X.509 certificate to the enrollment group.
+
+The following example shows a request body that adds an X.509 certificate to an enrollment group:
+
+```json
+{
+ "verified": false,
+ "certificate": "<base64-certificate>"
+}
+```
+
+* `certificate`: The base-64 version of the certificate you made a note of previously.
+* `verified`: `true` if you attest that the certificate is valid, `false` if you need to prove the validity of the certificate.
+
+The response to this request looks like the following example:
+
+```json
+{
+ "verified": false,
+ "info": {
+ "sha1Thumbprint": "644543467786B60C14DFE6B7C968A1990CF63EAC"
+ },
+ "etag": "IjE3MDAwODNhLTAwMDAtMDcwMC0wMDAwLTYyNjFmNzk0MDAwMCI="
+}
+```
+
+### Generate verification code for an X.509 certificate
+
+If you set `verified` to `false` in the previous request, use the following request to generate a verification code for the primary or secondary X.509 certificate of an enrollment group. This example generates a verification code for the primary certificate in the `myx509eg` enrollment group:
+
+```http
+POST https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/myx509eg/certificates/primary/generateVerificationCode?api-version=2022-07-31
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "verificationCode": "<certificate-verification-code>"
+}
+```
+
+Make a note of the verification code, you need it in the next step.
+
+### Generate the verification certificate
+
+Use the following command to generate a verification certificate from the verification code in the previous step:
+
+```cmd/sh
+node create_test_cert.js verification --ca mytestrootcert_cert.pem --key mytestrootcert_key.pem --nonce {verification-code}
+```
+
+Run the following command to generate a base-64 encoded version of the certificate:
+
+```cmd/sh
+node convert.js verification_cert.pem
+```
+
+Make a note of the base-64 encoded version of the certificate. You need it later.
+
+### Verify X.509 certificate of an enrollment group
+
+Use the following request to verify the primary X.509 certificate of the `myx509eg` enrollment group by providing the certificate with the signed verification code:
+
+```http
+POST https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/myx509eg/certificates/primary/verify?api-version=2022-07-31
+```
+
+The following example shows a request body that verifies an X.509 certificate:
+
+```json
+{
+ "certificate": "base64-verification-certificate"
+}
+```
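Tying these verification steps together, the following sketch (not part of the original article) requests a verification code, waits for you to produce `verification_cert.pem` with the Node.js script shown earlier, and then submits the base-64 encoded certificate to the verify endpoint. `<your app subdomain>` and `<api-token>` are placeholders.

```python
# Sketch only: generate a verification code, then submit the signed verification
# certificate. Assumes verification_cert.pem was produced with create_test_cert.js.
import base64

import requests

APP_SUBDOMAIN = "<your app subdomain>"
API_TOKEN = "<api-token>"
BASE_URL = (
    f"https://{APP_SUBDOMAIN}.azureiotcentral.com"
    "/api/enrollmentGroups/myx509eg/certificates/primary"
)
HEADERS = {"Authorization": API_TOKEN}

# Step 1: get a verification code for the primary certificate.
code_response = requests.post(
    f"{BASE_URL}/generateVerificationCode?api-version=2022-07-31", headers=HEADERS
)
code_response.raise_for_status()
print("Verification code:", code_response.json()["verificationCode"])

# Step 2: after running create_test_cert.js with that code, base-64 encode the
# verification certificate and submit it to prove possession of the private key.
with open("verification_cert.pem", "rb") as pem_file:
    verification_cert = base64.b64encode(pem_file.read()).decode()

verify_response = requests.post(
    f"{BASE_URL}/verify?api-version=2022-07-31",
    headers=HEADERS,
    json={"certificate": verification_cert},
)
verify_response.raise_for_status()
```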
+
+### Get X.509 certificate of an enrollment group
+
+Use the following request to retrieve details of the X.509 certificate of an enrollment group from your application:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/myx509eg/certificates/primary?api-version=2022-07-31
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "verified": true,
+ "info": {
+ "sha1Thumbprint": "644543467786B60C14DFE6B7C968A1990CF63EAC"
+ },
+ "etag": "IjE3MDAwODNhLTAwMDAtMDcwMC0wMDAwLTYyNjFmNzk0MDAwMCI="
+}
+```
+
+### Delete an X.509 certificate from an enrollment group
+
+Use the following request to delete the primary X.509 certificate from an enrollment group with ID `myx509eg`:
+
+```http
+DELETE https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/myx509eg/certificates/primary?api-version=2022-07-31
+```
+
+### [Symmetric key](#tab/symmetric-key)
+
+### Add a symmetric key enrollment group
+
+Use the following request to create a new enrollment group with `mysymmetric` as the ID:
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/mysymmetric?api-version=2022-07-31
+```
+
+The following example shows a request body that adds a new enrollment group:
+
+```json
+{
+ "displayName": "My group",
+ "enabled": true,
+ "type": "iot",
+ "attestation": {
+ "type": "symmetricKey"
+ }
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "mysymmetric",
+ "displayName": "My group",
+ "enabled": true,
+ "type": "iot",
+ "attestation": {
+ "type": "symmetricKey",
+ "symmetricKey": {
+ "primaryKey": "<primary-symmetric-key>",
+ "secondaryKey": "<secondary-symmetric-key>"
+ }
+ },
+ "etag": "IjA4MDUwMTJiLTAwMDAtMDcwMC0wMDAwLTYyODJhOWVjMDAwMCI="
+}
+```
+
+IoT Central generates the primary and secondary symmetric keys when you make this API call.
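Devices that connect through a symmetric key enrollment group typically present a per-device key derived from the group key. The following sketch isn't part of the original article; it shows the standard HMAC-SHA256 derivation used by the Device Provisioning Service, with `<group-primary-key>` and `sample-device-01` as placeholder values.

```python
# Sketch only: derive a per-device key from an enrollment group's primary key by
# signing the device ID with HMAC-SHA256 (the derivation DPS uses for group
# enrollments). <group-primary-key> is a placeholder.
import base64
import hashlib
import hmac


def derive_device_key(group_key_b64: str, device_id: str) -> str:
    """Return the base-64 device key derived from the base-64 group key."""
    group_key = base64.b64decode(group_key_b64)
    signature = hmac.new(group_key, device_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(signature).decode("utf-8")


print(derive_device_key("<group-primary-key>", "sample-device-01"))
```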
+++
+### Get an enrollment group
+
+Use the following request to retrieve details of an enrollment group with `mysymmetric` as the ID:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/mysymmetric?api-version=2022-07-31
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "mysymmetric",
+ "displayName": "My group",
+ "enabled": true,
+ "type": "iot",
+ "attestation": {
+ "type": "symmetricKey",
+ "symmetricKey": {
+ "primaryKey": "<primary-symmetric-key>",
+ "secondaryKey": "<secondary-symmetric-key>"
+ }
+ },
+ "etag": "IjA4MDUwMTJiLTAwMDAtMDcwMC0wMDAwLTYyODJhOWVjMDAwMCI="
+}
+```
+
+### Update an enrollment group
+
+Use the following request to update an enrollment group.
+
+```http
+PATCH https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/myx509eg?api-version=2022-07-31
+```
+
+The following example shows a request body that updates the display name of an enrollment group:
+
+```json
+{
+ "displayName": "My new group name",
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "myEnrollmentGroupId",
+ "displayName": "My new group name",
+ "enabled": true,
+ "type": "iot",
+ "attestation": {
+ "type": "symmetricKey",
+ "symmetricKey": {
+ "primaryKey": "<primary-symmetric-key>",
+ "secondaryKey": "<secondary-symmetric-key>"
+ }
+ },
+ "etag": "IjA4MDUwMTJiLTAwMDAtMDcwMC0wMDAwLTYyODJhOWVjMDAwMCI="
+}
+```
+
+### Delete an enrollment group
+
+Use the following request to delete an enrollment group with ID `myx509eg`:
+
+```http
+DELETE https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/myx509eg?api-version=2022-07-31
+```
+
+### List enrollment groups
+
+Use the following request to retrieve a list of enrollment groups from your application:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups?api-version=2022-07-31
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "value": [
+ {
+ "id": "myEnrollmentGroupId",
+ "displayName": "My group",
+ "enabled": true,
+ "type": "iot",
+ "attestation": {
+ "type": "symmetricKey",
+ "symmetricKey": {
+ "primaryKey": "primaryKey",
+ "secondaryKey": "secondarykey"
+ }
+ },
+ "etag": "IjZkMDc1YTgzLTAwMDAtMDcwMC0wMDAwLTYzMTc5ZjA4MDAwMCI="
+ },
+ {
+ "id": "enrollmentGroupId2",
+ "displayName": "My group",
+ "enabled": true,
+ "type": "iot",
+ "attestation": {
+ "type": "x509",
+ "x509": {
+ "signingCertificates": {}
+ }
+ },
+ "etag": "IjZkMDdjNjkyLTAwMDAtMDcwMC0wMDAwLTYzMTdhMDY1MDAwMCI="
+ }
+ ]
+}
+```
+
## Next steps

Now that you've learned how to manage devices with the REST API, a suggested next step is [How to control devices with the REST API](howto-control-devices-with-rest-api.md).
lab-services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/capacity-limits.md
These actions may be disabled if there no more cores that can be enabled for you
If you reach the cores limit, you can request a limit increase to continue using Azure Lab Services. The request process is a checkpoint to ensure your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
-To create a support request, you must be an [Owner](../role-based-access-control/built-in-roles.md), [Contributor](../role-based-access-control/built-in-roles.md), or be assigned to the [Support Request Contributor](../role-based-access-control/built-in-roles.md) role at the subscription level. For information about creating support requests in general, see [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
-
-The admin can follow these steps to request a limit increase:
-
-1. Open your [lab plan](how-to-manage-lab-plans.md) or [lab account](how-to-manage-lab-accounts.md).
-1. On the **Overview** page of the lab plan, select the **Request core limit increase** button from the menu bar at the top.
-1. On the **Basics** page of **New support request** wizard, enter a short summary that will help you remember the support request in the **Summary** textbox. The issue type, subscription, and quota type information are automatically filled out for you. Select **Next: Solutions**.
-
- :::image type="content" source="./media/capacity-limits/new-support-request.png" alt-text="Screenshot of new support request to request more core capacity.":::
-
-1. The **New support request** wizard will automatically advance from the **Solutions** page to the **Details** page.
-1. On the **Details** page, enter the following information in the **Description** field.
- - VM size. For size details, see [VM sizing](administrator-guide.md#vm-sizing).
- - Number of VMs.
- - Location. Location will be a [geography](https://azure.microsoft.com/global-infrastructure/geographies/#geographies) or region, if using the [August 2022 Update](lab-services-whats-new.md).
-1. Under **Advanced diagnostic information**, select **No**.
-1. Under **Support method** section, select your preferred contact method. Verify contact information is correct.
-1. Select **Next: Review + create**
-1. On the **Review + create** page, select **Create** to submit the support request.
-
-Once you submit the support request, we'll review the request. If necessary, we'll contact you to get more details.
+To create a support request, see [Request a core limit increase](./how-to-request-capacity-increase.md).
## Subscriptions with default limit of zero cores
Before you set up a large number of VMs across your labs, we recommend that you
See the following articles: -- [As an admin, see VM sizing](administrator-guide.md#vm-sizing).
+- As an admin, see [VM sizing](administrator-guide.md#vm-sizing).
+- As an admin, see [Request a capacity increase](./how-to-request-capacity-increase.md)
- [Frequently asked questions](classroom-labs-faq.yml).
lab-services How To Request Capacity Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-request-capacity-increase.md
+
+ Title: Request a core limit increase
+description: Learn how to request a core limit (quota) increase to expand capacity for your labs.
+++ Last updated : 08/26/2022++
+<!-- As a lab administrator, I want more cores available for my subscription so that I can support more students. -->
+
+# Request a core limit increase
+If you reach the cores limit for your subscription, you can request a limit increase to continue using Azure Lab Services. The request process allows the Azure Lab Services team to ensure your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
+
+For information about creating support requests in general, see [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+
+## Prepare to submit a request
+Before you begin your request for a capacity increase, you should make sure that you have all the information you need available and verify that you have the appropriate permissions. Review this article, and gather information like the number and size of cores you want to add, the regions you can use, and the location of resources like your existing labs and virtual networks.
+
+### Permissions
+To create a support request, you must be assigned to one of the following roles at the subscription level:
+ - [Owner](../role-based-access-control/built-in-roles.md)
+ - [Contributor](../role-based-access-control/built-in-roles.md)
+ - [Support Request Contributor](../role-based-access-control/built-in-roles.md)
+
+### Determine the regions for your labs
+Azure Lab Services resources can exist in many regions. You can choose to deploy resources in multiple regions close to your students. For more information about Azure regions, how they relate to global geographies, and which services are available in each region, see [Azure global infrastructure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
+
+### Locate and copy lab plan or lab account resource ID
+To add extra capacity to an existing lab, you must specify the lab's resource ID when you make the request.
+
+Use the following steps to locate and copy the resource ID so that you can paste it into your support request.
+1. In the [Azure portal](https://portal.azure.com), navigate to the lab plan or lab account you want to add cores to.
+
+1. Under **Settings**, select **Properties**, and then copy the **Resource ID**.
+ :::image type="content" source="./media/how-to-request-capacity-increase/resource-id.png" alt-text="Screenshot showing the lab plan properties with resource ID highlighted.":::
+
+1. Paste the Resource ID into a document for safekeeping; you'll need it to complete the support request.
+
+## Start a new support request
+You can follow these steps to request a limit increase:
+
+1. In the Azure portal, in Support & Troubleshooting, select **Help + support**
+ :::image type="content" source="./media/how-to-request-capacity-increase/support-troubleshooting.png" alt-text="Screenshot of the Azure portal showing Support & troubleshooting with Help + support highlighted.":::
+1. On the Help + support page, select **Create support request**.
+ :::image type="content" source="./media/how-to-request-capacity-increase/create-support-request.png" alt-text="Screenshot of the Help + support page with Create support request highlighted.":::
+
+1. On the New support request page, use the following information to complete the **Problem description**, and then select **Next**.
+
+ |Name |Value |
+ |||
+ |**Issue type**|Service and subscription limits (quotas)|
+ |**Subscription**|The subscription you want to extend.|
+ |**Quota type**|Azure Lab Services|
+
+1. The **Recommended solution** tab isn't required for service and subscription limits (quotas) issues, so it is skipped.
+
+1. On the **Additional details** tab, in the **Problem details** section, select **Enter details**.
+ :::image type="content" source="./media/how-to-request-capacity-increase/enter-details-link.png" alt-text="Screenshot of the Additional details page with Enter details highlighted.":::
+
+## Make core limit increase request
+When you request a core limit increase (sometimes called an increase in capacity), you must supply some information to help the Azure Lab Services team evaluate and act on your request as quickly as possible. The more information you can supply, and the earlier you supply it, the more quickly the Azure Lab Services team will be able to process your request.
+
+The information required for the lab accounts used in the original version of Lab Services (May 2019) and the lab plans used in the updated version of Lab Services (August 2022) is different. Use the appropriate tab below to guide you as you complete the **Quota details**.
+
+#### [Lab Accounts](#tab/LabAccounts/)
++
+ |Name |Value |
+ |||
+ |**Deployment Model**|Select **Lab Account (Classic)**|
+ |**Requested total core limit**|Enter the total number of cores for your subscription. Add the number of existing cores to the number of cores you're requesting.|
+ |**Region**|Select the regions that you would like to use. |
+ |**Is this for an existing lab or to create a new lab?**|Select **Existing lab** or **New lab**. </br> If you're adding cores to an existing lab, enter the lab's resource ID.|
+ |**What's the month-by-month usage plan for the requested cores?**|Enter the rate at which you want to add the extra cores.|
+ |**Additional details**|Answer the questions in the additional details box. The more information you can provide here, the easier it will be for the Azure Lab Services team to process your request. For example, you could include your preferred date for the new cores to be available. |
+
+#### [Lab Plans](#tab/Labplans/)
+++
+ |Name |Value |
+ |||
+ |**Deployment Model**|Select **Lab Plan**|
+ |**Region**|Enter the preferred location or region where you want the extra cores.|
+ |**Alternate region**|If you're flexible with the location of your cores, you can select alternate regions.|
+ |**If you plan to use the new capacity with advanced networking, what region does your virtual network reside in?**|If your lab plan uses advanced networking, you must specify the region your virtual network resides in.|
+ |**Virtual Machine Size**|Select the virtual machine size that you require for the new cores.|
+ |**Requested total core limit**|Enter the total number of cores you require; your existing cores + the number you're requesting.|
+ |**What is the minimum number of cores you can start with?**|Your new cores may be made available gradually. Enter the minimum number of cores you require.|
+ |**What's the ideal date to have this by? (MM/DD/YYYY)**|Enter the date on which you want the extra cores to be available.|
+ |**Is this for an existing lab or to create a new lab?**|Select **Existing lab** or **New lab**. </br> If you're adding cores to an existing lab, enter the lab's resource ID.|
+ |**What is the month-by-month usage plan for the requested cores?**|Enter the rate at which you want to add the extra cores.|
+ |**Additional details**|Answer the questions in the additional details box. The more information you can provide here, the easier it will be for the Azure Lab Services team to process your request. |
+++
+When you've entered the required information and any additional details, select **Save and continue**.
+
+## Complete the support request
+1. Complete the remainder of the support request **Additional details** tab using the following information:
+
+ ### Advanced diagnostic information
+
+ |Name |Value |
+ |||
+ |**Allow collection of advanced diagnostic information**|Select yes or no.|
+
+ ### Support method
+
+ |Name |Value |
+ |||
+ |**Support plan**|Select your support plan.|
+ |**Severity**|Select the severity of the issue.|
+ |**Preferred contact method**|Select email or phone.|
+ |**Your availability**|Enter your availability.|
+ |**Support language**|Select your language preference.|
+
+ ### Contact information
+
+ |Name |Value |
+ |||
+ |**First name**|Enter your first name.|
+ |**Last name**|Enter your last name.|
+ |**Email**|Enter your contact email.|
+ |**Additional email for notification**|Enter an email for notifications.|
+ |**Phone**|Enter your contact phone number.|
+ |**Country/region**|Enter your location.|
+ |**Save contact changes for future support requests.**|Select the check box to save changes.|
+
+1. Select **Next**.
+
+1. On the **Review + create** tab, review the information, and then select **Create**.
+
+## Next steps
+For more information about capacity limits, see [Capacity limits in Azure Lab Services](capacity-limits.md).
lab-services Quick Create Lab Plan Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-portal.md
When no longer needed, you can delete the resource group, lab plan, and all rela
## Next steps
-In this quickstart, you created a resource group and a lab plan. To learn more about advanced options for lab plans, see [Tutorial: Create a lab plan with Azure Lab Services](tutorial-setup-lab-plan.md).
+In this quickstart, you created a resource group and a lab plan.
+
+To learn more about advanced options for lab plans, see:
+- [Tutorial: Create a lab plan with Azure Lab Services](tutorial-setup-lab-plan.md).
+- [Request a capacity increase](how-to-request-capacity-increase.md)
Advance to the next article to learn how to create a lab. > [!div class="nextstepaction"]
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
In this section, you'll create a virtual network, subnet, and Azure Bastion host
| Resource Group | Select **Create new**. </br> In **Name** enter **CreatePubLBQS-rg**. </br> Select **OK**. | | **Instance details** | | | Name | Enter **myVNet** |
- | Region | Select **West US** |
+ | Region | Select **East US** |
4. Select the **IP Addresses** tab or select **Next: IP Addresses** at the bottom of the page.
During the creation of the load balancer, you'll configure:
| Resource group | Select **CreatePubLBQS-rg**. | | **Instance details** | | | Name | Enter **myLoadBalancer** |
- | Region | Select **West US**. |
+ | Region | Select **East US**. |
| SKU | Leave the default **Standard**. | | Type | Select **Public**. | | Tier | Leave the default **Regional**. |
In this section, you'll create a NAT gateway for outbound internet access for re
| Resource group | Select **CreatePubLBQS-rg**. | | **Instance details** | | | NAT gateway name | Enter **myNATgateway**. |
- | Region | Select **West US**. |
+ | Region | Select **East US**. |
| Availability zone | Select **None**. | | Idle timeout (minutes) | Enter **15**. |
These VMs are added to the backend pool of the load balancer that was created ea
| Resource Group | Select **CreatePubLBQS-rg** | | **Instance details** | | | Virtual machine name | Enter **myVM1** |
- | Region | Select **(US) West US)** |
+ | Region | Select **(US) East US** |
| Availability Options | Select **Availability zones** | | Availability zone | Select **Zone 1** | | Security type | Select **Standard**. |
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
These resources are ephemeral and exist only for the duration of the load test r
## Prerequisites

- An existing virtual network and a subnet to use with Azure Load Testing.
-- The virtual network must be in the same subscription as the Azure Load Testing resource.
+- The virtual network must be in the same subscription and the same region as the Azure Load Testing resource.
- The subnet you use for Azure Load Testing must have enough unassigned IP addresses to accommodate the number of load test engines for your test. Learn more about [configuring your test for high-scale load](./how-to-high-scale-load.md).
- The subnet shouldn't be delegated to any other Azure service. For example, it shouldn't be delegated to Azure Container Instances (ACI). Learn more about [subnet delegation](/azure/virtual-network/subnet-delegation-overview).
- Azure CLI version 2.2.0 or later (if you're using CI/CD). Run `az --version` to find the version that's installed on your computer. If you need to install or upgrade the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
Previously updated : 09/23/2021 Last updated : 08/30/2022 ms.devlang: azurecli # Train models with Azure Machine Learning
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
+> * [v1](v1/concept-train-machine-learning-model-v1.md)
+> * [v2 (preview)](concept-train-machine-learning-model.md)
+ Azure Machine Learning provides several ways to train your models, from code-first solutions using the SDK to low-code solutions such as automated machine learning and the visual designer. Use the following list to determine which training method is right for you: + [Azure Machine Learning SDK for Python](#python-sdk): The Python SDK provides several ways to train models, each with different capabilities. | Training method | Description | | -- | -- |
- | [Run configuration](#run-configuration) | A **typical way to train models** is to use a training script and job configuration. The job configuration provides the information needed to configure the training environment used to train your model. You can specify your training script, compute target, and Azure ML environment in your job configuration and run a training job. |
+ | [command()](#submit-a-command) | A **typical way to train models** is to submit a command() that includes a training script, environment, and compute information. |
| [Automated machine learning](#automated-machine-learning) | Automated machine learning allows you to **train models without extensive data science or programming knowledge**. For people with a data science and programming background, it provides a way to save time and resources by automating algorithm selection and hyperparameter tuning. You don't have to worry about defining a job configuration when using automated machine learning. |
- | [Machine learning pipeline](#machine-learning-pipeline) | Pipelines are not a different training method, but a **way of defining a workflow using modular, reusable steps**, that can include training as part of the workflow. Machine learning pipelines support using automated machine learning and run configuration to train models. Since pipelines are not focused specifically on training, the reasons for using a pipeline are more varied than the other training methods. Generally, you might use a pipeline when:<br>* You want to **schedule unattended processes** such as long running training jobs or data preparation.<br>* Use **multiple steps** that are coordinated across heterogeneous compute resources and storage locations.<br>* Use the pipeline as a **reusable template** for specific scenarios, such as retraining or batch scoring.<br>* **Track and version data sources, inputs, and outputs** for your workflow.<br>* Your workflow is **implemented by different teams that work on specific steps independently**. Steps can then be joined together in a pipeline to implement the workflow. |
+ | [Machine learning pipeline](#machine-learning-pipeline) | Pipelines are not a different training method, but a **way of defining a workflow using modular, reusable steps** that can include training as part of the workflow. Machine learning pipelines support using automated machine learning and run configuration to train models. Since pipelines are not focused specifically on training, the reasons for using a pipeline are more varied than the other training methods. Generally, you might use a pipeline when:<br>* You want to **schedule unattended processes** such as long running training jobs or data preparation.<br>* Use **multiple steps** that are coordinated across heterogeneous compute resources and storage locations.<br>* Use the pipeline as a **reusable template** for specific scenarios, such as retraining or batch scoring.<br>* **Track and version data sources, inputs, and outputs** for your workflow.<br>* Your workflow is **implemented by different teams that work on specific steps independently**. Steps can then be joined together in a pipeline to implement the workflow. |
+ **Designer**: Azure Machine Learning designer provides an easy entry-point into machine learning for building proof of concepts, or for users with little coding experience. It allows you to train models using a drag and drop web-based UI. You can use Python code as part of the design, or train models without writing any code. + **Azure CLI**: The machine learning CLI provides commands for common tasks with Azure Machine Learning, and is often used for **scripting and automating tasks**. For example, once you've created a training script or pipeline, you might use the Azure CLI to start a training job on a schedule or when the data files used for training are updated. For training models, it provides commands that submit training jobs. It can submit jobs using run configurations or pipelines.
-Each of these training methods can use different types of compute resources for training. Collectively, these resources are referred to as [__compute targets__](v1/concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine.
+Each of these training methods can use different types of compute resources for training. Collectively, these resources are referred to as [__compute targets__](concept-compute-target.md). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine.
## Python SDK The Azure Machine Learning SDK for Python allows you to build and run machine learning workflows with Azure Machine Learning. You can interact with the service from an interactive Python session, Jupyter Notebooks, Visual Studio Code, or other IDE.
-* [What is the Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro)
-* [Install/update the SDK](/python/api/overview/azure/ml/install)
+* [Install/update the SDK](/python/api/overview/azure/ml/installv2)
* [Configure a development environment for Azure Machine Learning](how-to-configure-environment.md)
-### Run configuration
+### Submit a command
-A generic training job with Azure Machine Learning can be defined using the [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig). The script run configuration is then used, along with your training script(s) to train a model on a compute target.
+A generic training job with Azure Machine Learning can be defined using the [command()](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command). The command is then used, along with your training script(s), to train a model on the specified compute target.
-You may start with a run configuration for your local computer, and then switch to one for a cloud-based compute target as needed. When changing the compute target, you only change the run configuration you use. A run also logs information about the training job, such as the inputs, outputs, and logs.
+You may start with a command for your local computer, and then switch to one for a cloud-based compute target as needed. When changing the compute target, you only change the compute parameter in the command that you use. A run also logs information about the training job, such as the inputs, outputs, and logs.
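+
+For illustration, here's a minimal sketch of submitting a training job with `command()`; the workspace identifiers, compute cluster name, environment name, and `./src` folder are placeholders, not values from this article:
+
+```python
+from azure.ai.ml import MLClient, command
+from azure.identity import DefaultAzureCredential
+
+# Connect to the workspace (placeholder identifiers).
+ml_client = MLClient(
+    DefaultAzureCredential(),
+    subscription_id="<subscription-id>",
+    resource_group_name="<resource-group>",
+    workspace_name="<workspace-name>",
+)
+
+# Describe the training job: script folder, command line, environment, and compute.
+job = command(
+    code="./src",                             # folder that contains train.py
+    command="python train.py --epochs 10",
+    environment="<environment-name>@latest",  # an environment registered in the workspace
+    compute="<compute-cluster-name>",
+    display_name="train-model-example",
+)
+
+# Submit the job and print a link for monitoring it in studio.
+returned_job = ml_client.jobs.create_or_update(job)
+print(returned_job.studio_url)
+```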
-* [What is a run configuration?](v1/concept-azure-machine-learning-architecture.md#run-configurations)
* [Tutorial: Train your first ML model](tutorial-1st-experiment-sdk-train.md) * [Examples: Jupyter Notebook and Python examples of training models](https://github.com/Azure/azureml-examples)
-* [How to: Configure a training run](v1/how-to-set-up-training-targets.md)
### Automated Machine Learning
Define the iterations, hyperparameter settings, featurization, and other setting
* [What is automated machine learning?](concept-automated-ml.md) * [Tutorial: Create your first classification model with automated machine learning](tutorial-first-experiment-automated-ml.md)
-* [Examples: Jupyter Notebook examples for automated machine learning](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning)
* [How to: Configure automated ML experiments in Python](how-to-configure-auto-train.md)
-* [How to: Autotrain a time-series forecast model](how-to-auto-train-forecast.md)
* [How to: Create, explore, and deploy automated machine learning experiments with Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md) ### Machine learning pipeline
-Machine learning pipelines can use the previously mentioned training methods. Pipelines are more about creating a workflow, so they encompass more than just the training of models. In a pipeline, you can train a model using automated machine learning or run configurations.
+Machine learning pipelines can use the previously mentioned training methods. Pipelines are more about creating a workflow, so they encompass more than just the training of models.
* [What are ML pipelines in Azure Machine Learning?](concept-ml-pipelines.md)
-* [Create and run machine learning pipelines with Azure Machine Learning SDK](v1/how-to-create-machine-learning-pipelines.md)
-* [Tutorial: Use Azure Machine Learning Pipelines for batch scoring](tutorial-pipeline-batch-scoring-classification.md)
-* [Examples: Jupyter Notebook examples for machine learning pipelines](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines)
-* [Examples: Pipeline with automated machine learning](https://aka.ms/pl-automl)
+* [Tutorial: Create production ML pipelines with Python SDK v2 (preview) in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
+ ### Understand what happens when you submit a training job
The Azure training lifecycle consists of:
1. The system calculates a hash of: - The base image - Custom docker steps (see [Deploy a model using a custom Docker base image](./how-to-deploy-custom-container.md))
- - The conda definition YAML (see [Create & use software environments in Azure Machine Learning](./how-to-use-environments.md))
+ - The conda definition YAML (see [Manage Azure Machine Learning environments with the CLI (v2)](how-to-manage-environments-v2.md))
1. The system uses this hash as the key in a lookup of the workspace Azure Container Registry (ACR) 1. If it is not found, it looks for a match in the global ACR 1. If it is not found, the system builds a new image (which will be cached and registered with the workspace ACR)
The Azure training lifecycle consists of:
1. Saving logs, model files, and other files written to `./outputs` to the storage account associated with the workspace 1. Scaling down compute, including removing temporary storage
-If you choose to train on your local machine ("configure as local run"), you do not need to use Docker. You may use Docker locally if you choose (see the section [Configure ML pipeline](v1/how-to-debug-pipelines.md) for an example).
## Azure Machine Learning designer
The machine learning CLI is an extension for the Azure CLI. It provides cross-pl
* [Use the CLI extension for Azure Machine Learning](how-to-configure-cli.md) * [MLOps on Azure](https://github.com/microsoft/MLOps)
+* [Train models with the CLI (v2)](how-to-train-cli.md)
## VS Code
You can use the VS Code extension to run and manage your training jobs. See the
## Next steps
-Learn how to [Configure a training run](v1/how-to-set-up-training-targets.md).
+Next, try the [Tutorial: Create production ML pipelines with Python SDK v2 (preview) in a Jupyter notebook](tutorial-pipeline-python-sdk.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Where the file *create-instance.yml* is:
* Enable SSH access. Follow the [detailed SSH access instructions](#enable-ssh-access) below. * Enable virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). You can also select __No public IP__ (preview) to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
- * Assign the computer to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of-preview).inel
+ * Assign the compute instance to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of-preview).
* Provision with a setup script (preview) - for more information about how to create and use a setup script, see [Customize the compute instance with a script](how-to-customize-compute-instance.md). * Add schedule (preview). Schedule times for the compute instance to automatically start and/or shutdown. See [schedule details](#schedule-automatic-start-and-stop-preview) below.
+ * Enable auto-stop (preview). Configure a compute instance to automatically shut down if it's inactive. For details, see [Configure auto-stop](#configure-auto-stop-preview).
SSH access is disabled by default. SSH access can't be changed after creation.
+## Configure auto-stop (preview)
+To avoid getting charged for a compute instance that is switched on but inactive, you can configure auto-stop.
+
+A compute instance is considered inactive if all of the following conditions are met:
+* No active Jupyter kernel sessions (that is, no notebook usage via Jupyter, JupyterLab, or interactive notebooks)
+* No active Jupyter terminal sessions
+* No active AzureML runs or experiments
+* No SSH connections
+* No VS Code connections; you must close your VS Code connection for your compute instance to be considered inactive. Sessions are terminated automatically if VS Code detects no activity for 3 hours.
+
+Note that activity on custom applications installed on the compute instance isn't considered. There are also some basic bounds on the inactivity time period: the compute instance must be inactive for a minimum of 15 minutes and a maximum of 3 days.
+
+You can configure this setting during compute instance creation, or for existing compute instances, through the following interfaces:
+* AzureML Studio
+
+ :::image type="content" source="media/how-to-create-attach-studio/idle-shutdown-advanced-settings.jpg" alt-text="Screenshot of the Advanced Settings page for creating a compute instance":::
+ :::image type="content" source="media/how-to-create-attach-studio/idle-shutdown-update.jpg" alt-text="Screenshot of the compute instance details page showing how to update an existing compute instance with idle shutdown":::
+
+* REST API
+
+ Endpoint:
+ ```
+ POST https://management.azure.com/subscriptions/{SUB_ID}/resourceGroups/{RG_NAME}/providers/Microsoft.MachineLearningServices/workspaces/{WS_NAME}/computes/{CI_NAME}/updateIdleShutdownSetting?api-version=2021-07-01
+ ```
+ Body:
+ ```JSON
+ {
+ "idleTimeBeforeShutdown": "PT30M" // this must be a string in ISO 8601 format
+ }
+ ```
+
+* CLI v2 (YAML) -- only configurable during new compute instance creation
+
+ ```YAML
+ # Note that this is just a snippet for the idle shutdown property. Refer to the "Create" Azure CLI section for more information.
+ idle_time_before_shutdown_minutes: 30
+ ```
+
+* Python SDK v2 -- only configurable during new compute instance creation (a fuller sketch follows this list)
+
+ ```Python
+ ComputeInstance(name=ci_basic_name, size="STANDARD_DS3_v2", idle_time_before_shutdown_minutes="30")
+ ```
+
+* ARM templates -- only configurable during new compute instance creation
+ ```JSON
+ // Note that this is just a snippet for the idle shutdown property in an ARM template
+ {
+ "idleTimeBeforeShutdown":"PT30M" // this must be a string in ISO 8601 format
+ }
+ ```
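+
+For reference, a fuller version of the Python SDK v2 snippet above might look like the following sketch. The workspace identifiers and compute instance name are placeholders, and `idle_time_before_shutdown_minutes` is the same property shown in the snippets above:
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import ComputeInstance
+from azure.identity import DefaultAzureCredential
+
+# Connect to the workspace (placeholder identifiers).
+ml_client = MLClient(
+    DefaultAzureCredential(),
+    subscription_id="<subscription-id>",
+    resource_group_name="<resource-group>",
+    workspace_name="<workspace-name>",
+)
+
+# Create a compute instance that shuts down after 30 idle minutes.
+ci = ComputeInstance(
+    name="<compute-instance-name>",
+    size="STANDARD_DS3_v2",
+    idle_time_before_shutdown_minutes=30,
+)
+ml_client.compute.begin_create_or_update(ci)
+```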
+ ## Create on behalf of (preview) As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with:
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Custom container deployments can use web servers other than the default Python F
## Prerequisites
-* You must have an Azure resource group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
-* You must have an Azure Machine Learning workspace. You'll have such a workspace if you configured your ML extension per the above article.
+* You, or the service principal you use, must have `Contributor` access to the Azure Resource Group that contains your workspace. You'll have such a resource group if you configured your workspace using the quickstart article.
* To deploy locally, you must have [Docker engine](https://docs.docker.com/engine/install/) running locally. This step is **highly recommended**. It will help you debug issues.
-# [Azure CLI](#tab/cli)
-
-* Install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-
-* If you've not already set the defaults for Azure CLI, you should save your default settings. To avoid having to repeatedly pass in the values, run:
-
- ```azurecli
- az account set --subscription <subscription id>
- az configure --defaults workspace=<azureml workspace name> group=<resource group>
- ```
-
-# [Python SDK](#tab/python)
-
-* If you haven't installed Python SDK v2, please install with this command:
-
- ```azurecli
- pip install --pre azure-ai-ml
- ```
-
- For more information, see [Install the Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
--- ## Download source code To follow along with this tutorial, download the source code below.
machine-learning How To Deploy Managed Online Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoint-sdk-v2.md
In this article, you learn how to deploy your machine learning model to managed
## Prerequisites
-* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* The [Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
-* You must have an Azure resource group, and you (or the service principal you use) must have Contributor access to it.
-* You must have an Azure Machine Learning workspace.
+ * To deploy locally, you must install [Docker Engine](https://docs.docker.com/engine/) on your local computer. We highly recommend this option because it makes it easier to debug issues. ### Clone examples repository
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md
Title: Use REST to manage ML resources description: How to use REST APIs to create, run, and delete Azure Machine Learning resources, such as a workspace, or register models.--+++ Previously updated : 07/28/2022 Last updated : 09/14/2022
The response should provide an access token good for one hour:
} ```
-Make note of the token, as you'll use it to authenticate all additional administrative requests. You'll do so by setting an Authorization header in all requests:
+Make note of the token, as you'll use it to authenticate all administrative requests. You'll do so by setting an Authorization header in all requests:
```bash curl -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" ...more args...
providers/Microsoft.Storage/storageAccounts/<YOUR-STORAGE-ACCOUNT-NAME>"
## Create a workspace using customer-managed encryption keys
-By default, metadata for the workspace is stored in an Azure Cosmos DB instance that Microsoft maintains. This data is encrypted using Microsoft-managed keys. Instead of using the Microsoft-managed key, you can also provide your own key. Doing so creates an [additional set of resources](./concept-data-encryption.md#azure-cosmos-db) in your Azure subscription to store your data.
+By default, metadata for the workspace is stored in an Azure Cosmos DB instance that Microsoft maintains. This data is encrypted using Microsoft-managed keys. Instead of using the Microsoft-managed key, you can also provide your own key. Doing so creates [another set of resources](./concept-data-encryption.md#azure-cosmos-db) in your Azure subscription to store your data.
-To create a workspaces that uses your keys for encryption, you need to meet the following prerequisites:
+To create a workspace that uses your keys for encryption, you need to meet the following prerequisites:
* The Azure Machine Learning service principal must have contributor access to your Azure subscription. * You must have an existing Azure Key Vault that contains an encryption key.
-* The Azure Key Vault must exist in the same Azure region where you will create the Azure Machine Learning workspace.
-* The Azure Key Vault must have soft delete and purge protection enabled to protect against data loss in case of accidental deletion.
+* The Azure Key Vault must exist in the same Azure region where you'll create the Azure Machine Learning workspace.
+* The Azure Key Vault must have soft delete and purge protection enabled to protect against data loss in case of accidental deletion.
* You must have an access policy in Azure Key Vault that grants get, wrap, and unwrap access to the Azure Cosmos DB application.
-To create a workspaces that uses a user-assigned managed identity and customer-managed keys for encryption, use the below request body. When using an user-assigned managed identity for the workspace, also set the `userAssignedIdentity` property to the resource ID of the managed identity.
+To create a workspace that uses a user-assigned managed identity and customer-managed keys for encryption, use the below request body. When using a user-assigned managed identity for the workspace, also set the `userAssignedIdentity` property to the resource ID of the managed identity.
```bash curl -X PUT \
machine-learning How To Workspace Diagnostic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-workspace-diagnostic-api.md
Previously updated : 11/18/2021 Last updated : 09/14/2022 # How to use workspace diagnostics
+> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
+> * [v1](v1/how-to-workspace-diagnostic-api.md)
+> * [v2 (current version)](how-to-workspace-diagnostic-api.md)
+ Azure Machine Learning provides a diagnostic API that can be used to identify problems with your workspace. Errors returned in the diagnostics report include information on how to resolve the problem. You can use the workspace diagnostics from the Azure Machine Learning studio or Python SDK. ## Prerequisites
-* An Azure Machine learning workspace. If you don't have one, see [Create a workspace](quickstart-create-resources.md).
-* The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml).
+ ## Diagnostics from studio From [Azure Machine Learning studio](https://ml.azure.com) or the Python SDK, you can run diagnostics on your workspace to check your setup. To run diagnostics, select the '__?__' icon from the upper right corner of the page. Then select __Run workspace diagnostics__.
After diagnostics run, a list of any detected problems is returned. This list in
The following snippet demonstrates how to use workspace diagnostics from Python ```python
-from azureml.core import Workspace
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import Workspace
+from azure.identity import DefaultAzureCredential
-ws = Workspace.from_config()
-
-diag_param = {
- "value": {
- }
- }
+subscription_id = '<your-subscription-id>'
+resource_group = '<your-resource-group-name>'
+workspace = '<your-workspace-name>'
-resp = ws.diagnose_workspace(diag_param)
+ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group)
+resp = ml_client.workspaces.begin_diagnose(workspace).result()
print(resp) ```
The response is a JSON document that contains information on any problems detect
If no problems are detected, an empty JSON document is returned.
-For more information, see the [Workspace.diagnose_workspace()](/python/api/azureml-core/azureml.core.workspace(class)#diagnose-workspace-diagnose-parameters-) reference.
+For more information, see the [Workspace](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace) reference.
## Next steps
-* [Workspace.diagnose_workspace()](/python/api/azureml-core/azureml.core.workspace(class)#diagnose-workspace-diagnose-parameters-)
* [How to manage workspaces in portal or SDK](how-to-manage-workspace.md)
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
For more information on creating a compute cluster and compute cluster, includin
When Azure Container Registry is behind the virtual network, Azure Machine Learning can't use it to directly build Docker images (used for training and deployment). Instead, configure the workspace to use the compute cluster you created earlier. Use the following steps to create a compute cluster and configure the workspace to use it to build images: 1. Navigate to [https://shell.azure.com/](https://shell.azure.com/) to open the Azure Cloud Shell.
-1. From the Cloud Shell, use the following command to install the 1.0 CLI for Azure Machine Learning:
+1. From the Cloud Shell, use the following command to install the 2.0 CLI for Azure Machine Learning:
```azurecli-interactive az extension add -n ml
machine-learning Concept Train Machine Learning Model V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-train-machine-learning-model-v1.md
+
+ Title: 'Build & train models (v1)'
+
+description: Learn how to train models with Azure Machine Learning (v1). Explore the different training methods and choose the right one for your project.
++++++ Last updated : 08/30/2022+
+ms.devlang: azurecli
++
+# Train models with Azure Machine Learning (v1)
+
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
+> * [v1](concept-train-machine-learning-model-v1.md)
+> * [v2 (preview)](../concept-train-machine-learning-model.md)
+
+Azure Machine Learning provides several ways to train your models, from code-first solutions using the SDK to low-code solutions such as automated machine learning and the visual designer. Use the following list to determine which training method is right for you:
+++ [Azure Machine Learning SDK for Python](#python-sdk): The Python SDK provides several ways to train models, each with different capabilities.+
+ | Training method | Description |
+ | -- | -- |
+ | [Run configuration](#run-configuration) | A **typical way to train models** is to use a training script and job configuration. The job configuration provides the information needed to configure the training environment used to train your model. You can specify your training script, compute target, and Azure ML environment in your job configuration and run a training job. |
+ | [Automated machine learning](#automated-machine-learning) | Automated machine learning allows you to **train models without extensive data science or programming knowledge**. For people with a data science and programming background, it provides a way to save time and resources by automating algorithm selection and hyperparameter tuning. You don't have to worry about defining a job configuration when using automated machine learning. |
+ | [Machine learning pipeline](#machine-learning-pipeline) | Pipelines are not a different training method, but a **way of defining a workflow using modular, reusable steps** that can include training as part of the workflow. Machine learning pipelines support using automated machine learning and run configuration to train models. Since pipelines are not focused specifically on training, the reasons for using a pipeline are more varied than the other training methods. Generally, you might use a pipeline when:<br>* You want to **schedule unattended processes** such as long running training jobs or data preparation.<br>* Use **multiple steps** that are coordinated across heterogeneous compute resources and storage locations.<br>* Use the pipeline as a **reusable template** for specific scenarios, such as retraining or batch scoring.<br>* **Track and version data sources, inputs, and outputs** for your workflow.<br>* Your workflow is **implemented by different teams that work on specific steps independently**. Steps can then be joined together in a pipeline to implement the workflow. |
+++ **Designer**: Azure Machine Learning designer provides an easy entry-point into machine learning for building proof of concepts, or for users with little coding experience. It allows you to train models using a drag and drop web-based UI. You can use Python code as part of the design, or train models without writing any code.+++ **Azure CLI**: The machine learning CLI provides commands for common tasks with Azure Machine Learning, and is often used for **scripting and automating tasks**. For example, once you've created a training script or pipeline, you might use the Azure CLI to start a training job on a schedule or when the data files used for training are updated. For training models, it provides commands that submit training jobs. It can submit jobs using run configurations or pipelines.+
+Each of these training methods can use different types of compute resources for training. Collectively, these resources are referred to as [__compute targets__](concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine.
+
+## Python SDK
+
+The Azure Machine Learning SDK for Python allows you to build and run machine learning workflows with Azure Machine Learning. You can interact with the service from an interactive Python session, Jupyter Notebooks, Visual Studio Code, or other IDE.
+
+* [What is the Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro)
+* [Install/update the SDK](/python/api/overview/azure/ml/install)
+* [Configure a development environment for Azure Machine Learning](../how-to-configure-environment.md)
+
+### Run configuration
+
+A generic training job with Azure Machine Learning can be defined using the [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig). The script run configuration is then used, along with your training script(s) to train a model on a compute target.
+
+You may start with a run configuration for your local computer, and then switch to one for a cloud-based compute target as needed. When changing the compute target, you only change the run configuration you use. A run also logs information about the training job, such as the inputs, outputs, and logs.
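+
+As a minimal sketch (v1 SDK), configuring and submitting a run might look like the following; the compute target and environment names are placeholders:
+
+```python
+from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig
+
+ws = Workspace.from_config()
+
+# Pick an environment registered in the workspace (placeholder name).
+env = Environment.get(workspace=ws, name="<curated-environment-name>")
+
+# Point the run configuration at the training script, compute, and environment.
+src = ScriptRunConfig(
+    source_directory="./src",
+    script="train.py",
+    compute_target="<compute-cluster-name>",  # name of an existing compute target
+    environment=env,
+)
+
+# Submit the run and stream logs until it finishes.
+run = Experiment(ws, "train-example").submit(src)
+run.wait_for_completion(show_output=True)
+```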
+
+* [What is a run configuration?](concept-azure-machine-learning-architecture.md#run-configurations)
+* [Tutorial: Train your first ML model](tutorial-1st-experiment-sdk-train.md)
+* [Examples: Jupyter Notebook and Python examples of training models](https://github.com/Azure/azureml-examples)
+* [How to: Configure a training run](how-to-set-up-training-targets.md)
+
+### Automated Machine Learning
+
+Define the iterations, hyperparameter settings, featurization, and other settings. During training, Azure Machine Learning tries different algorithms and parameters in parallel. Training stops once it hits the exit criteria you defined.
+
+> [!TIP]
+> In addition to the Python SDK, you can also use Automated ML through [Azure Machine Learning studio](https://ml.azure.com).
+
+* [What is automated machine learning?](../concept-automated-ml.md)
+* [Tutorial: Create your first classification model with automated machine learning](../tutorial-first-experiment-automated-ml.md)
+* [Examples: Jupyter Notebook examples for automated machine learning](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning)
+* [How to: Configure automated ML experiments in Python](how-to-configure-auto-train-v1.md)
+* [How to: Autotrain a time-series forecast model](../how-to-auto-train-forecast.md)
+* [How to: Create, explore, and deploy automated machine learning experiments with Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md)
+
+### Machine learning pipeline
+
+Machine learning pipelines can use the previously mentioned training methods. Pipelines are more about creating a workflow, so they encompass more than just the training of models. In a pipeline, you can train a model using automated machine learning or run configurations.
+
+* [What are ML pipelines in Azure Machine Learning?](../concept-ml-pipelines.md)
+* [Create and run machine learning pipelines with Azure Machine Learning SDK](how-to-create-machine-learning-pipelines.md)
+* [Tutorial: Use Azure Machine Learning Pipelines for batch scoring](tutorial-pipeline-python-sdk.md)
+* [Examples: Jupyter Notebook examples for machine learning pipelines](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines)
+* [Examples: Pipeline with automated machine learning](https://aka.ms/pl-automl)
+
+### Understand what happens when you submit a training job
+
+The Azure training lifecycle consists of:
+
+1. Zipping the files in your project folder, ignoring those specified in _.amlignore_ or _.gitignore_
+1. Scaling up your compute cluster
+1. Building or downloading the dockerfile to the compute node
+ 1. The system calculates a hash of:
+ - The base image
+ - Custom docker steps (see [Deploy a model using a custom Docker base image](how-to-deploy-package-models.md))
+ - The conda definition YAML (see [Create & use software environments in Azure Machine Learning](how-to-use-environments.md))
+ 1. The system uses this hash as the key in a lookup of the workspace Azure Container Registry (ACR)
+ 1. If it is not found, it looks for a match in the global ACR
+ 1. If it is not found, the system builds a new image (which will be cached and registered with the workspace ACR)
+1. Downloading your zipped project file to temporary storage on the compute node
+1. Unzipping the project file
+1. The compute node executing `python <entry script> <arguments>`
+1. Saving logs, model files, and other files written to `./outputs` to the storage account associated with the workspace
+1. Scaling down compute, including removing temporary storage
+
+If you choose to train on your local machine ("configure as local run"), you do not need to use Docker. You may use Docker locally if you choose (see the section [Configure ML pipeline](how-to-debug-pipelines.md) for an example).
+
+## Azure Machine Learning designer
+
+The designer lets you train models using a drag and drop interface in your web browser.
+++ [What is the designer?](../concept-designer.md)++ [Tutorial: Predict automobile price](../tutorial-designer-automobile-price-train-score.md)+
+## Azure CLI
+
+The machine learning CLI is an extension for the Azure CLI. It provides cross-platform CLI commands for working with Azure Machine Learning. Typically, you use the CLI to automate tasks, such as training a machine learning model.
+
+* [Use the CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md)
+* [MLOps on Azure](https://github.com/microsoft/MLOps)
++
+## Next steps
+
+Learn how to [Configure a training run](how-to-set-up-training-targets.md).
machine-learning How To Migrate From Estimators To Scriptrunconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-migrate-from-estimators-to-scriptrunconfig.md
+
+ Title: Migrate from Estimators to ScriptRunConfig
+
+description: Migration guide for migrating from Estimators to ScriptRunConfig for configuring training jobs.
++++++ Last updated : 09/14/2022++++
+# Migrating from Estimators to ScriptRunConfig
++
+Up until now, there have been multiple methods for configuring a training job in Azure Machine Learning via the SDK, including Estimators, ScriptRunConfig, and the lower-level RunConfiguration. To address this ambiguity and inconsistency, we are simplifying the job configuration process in Azure ML. You should now use ScriptRunConfig as the recommended option for configuring training jobs.
+
+Estimators are deprecated with the 1.19 release of the Python SDK. You should also generally avoid explicitly instantiating a RunConfiguration object yourself, and instead configure your job using the ScriptRunConfig class.
+
+This article covers common considerations when migrating from Estimators to ScriptRunConfig.
+
+> [!IMPORTANT]
+> To migrate to ScriptRunConfig from Estimators, make sure you are using >= 1.15.0 of the Python SDK.
+
+## ScriptRunConfig documentation and samples
+Azure Machine Learning documentation and samples have been updated to use [ScriptRunConfig](/python/api/azureml-core/azureml.core.script_run_config.scriptrunconfig) for job configuration and submission.
+
+For information on using ScriptRunConfig, refer to the following documentation:
+* [Configure and submit training jobs](how-to-set-up-training-targets.md)
+* [Configuring PyTorch training jobs](how-to-train-pytorch.md)
+* [Configuring TensorFlow training jobs](how-to-train-tensorflow.md)
+* [Configuring scikit-learn training jobs](how-to-train-scikit-learn.md)
+
+In addition, refer to the following samples & tutorials:
+* [Azure/MachineLearningNotebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks)
+* [Azure/azureml-examples](https://github.com/Azure/azureml-examples)
+
+## Defining the training environment
+While the various framework estimators have preconfigured environments that are backed by Docker images, the Dockerfiles for these images are private. Therefore, you don't have much visibility into what these environments contain. In addition, the estimators take in environment-related configurations as individual parameters (such as `pip_packages`, `custom_docker_image`) on their respective constructors.
+
+When using ScriptRunConfig, all environment-related configurations are encapsulated in the `Environment` object that gets passed into the `environment` parameter of the ScriptRunConfig constructor. To configure a training job, provide an environment that has all the dependencies required for your training script. If no environment is provided, Azure ML will use one of the Azure ML base images, specifically the one defined by `azureml.core.environment.DEFAULT_CPU_IMAGE`, as the default environment. There are a couple of ways to provide an environment:
+
+* [Use a curated environment](../how-to-use-environments.md#use-a-curated-environment) - curated environments are predefined environments available in your workspace by default. There is a corresponding curated environment for each of the preconfigured framework/version Docker images that backed each framework estimator.
+* [Define your own custom environment](how-to-use-environments.md)
+
+Here is an example of using the curated environment for training:
+
+```python
+from azureml.core import Workspace, ScriptRunConfig, Environment
+
+curated_env_name = '<add Pytorch curated environment name here>'
+pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
+
+compute_target = ws.compute_targets['my-cluster']
+src = ScriptRunConfig(source_directory='.',
+ script='train.py',
+ compute_target=compute_target,
+ environment=pytorch_env)
+```
+
+> [!TIP]
+> For a list of curated environments, see [curated environments](../resource-curated-environments.md).
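+
+If you'd rather define your own custom environment than use a curated one, a minimal sketch looks like the following; the environment name and conda file path are placeholders:
+
+```python
+from azureml.core import Environment
+
+# Build an environment from a conda specification file that lists the
+# training script's dependencies (placeholder file name).
+my_env = Environment.from_conda_specification(
+    name="my-training-env",
+    file_path="./conda_dependencies.yml",
+)
+```
+
+You'd then pass `my_env` to the `environment` parameter of `ScriptRunConfig`, just as `pytorch_env` is used in the example above.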
+
+If you want to specify **environment variables** that will get set on the process where the training script is executed, use the Environment object:
+```python
+myenv.environment_variables = {"MESSAGE":"Hello from Azure Machine Learning"}
+```
+
+For information on configuring and managing Azure ML environments, see:
+* [How to use environments](how-to-use-environments.md)
+* [Curated environments](../resource-curated-environments.md)
+* [Train with a custom Docker image](../how-to-train-with-custom-image.md)
+
+## Using data for training
+### Datasets
+If you are using an Azure ML dataset for training, pass the dataset as an argument to your script using the `arguments` parameter. By doing so, you will get the data path (mounting point or download path) in your training script via arguments.
+
+The following example configures a training job where the FileDataset, `mnist_ds`, will get mounted on the remote compute.
+```python
+src = ScriptRunConfig(source_directory='.',
+ script='train.py',
+ arguments=['--data-folder', mnist_ds.as_mount()], # or mnist_ds.as_download() to download
+ compute_target=compute_target,
+ environment=pytorch_env)
+```
+
+### DataReference (old)
+While we recommend using Azure ML Datasets over the older DataReference approach, if you're still using DataReferences for any reason, you must configure your job as follows:
+```python
+# if you want to pass a DataReference object, such as the below:
+datastore = ws.get_default_datastore()
+data_ref = datastore.path('./foo').as_mount()
+
+src = ScriptRunConfig(source_directory='.',
+ script='train.py',
+ arguments=['--data-folder', str(data_ref)], # cast the DataReference object to str
+ compute_target=compute_target,
+ environment=pytorch_env)
+src.run_config.data_references = {data_ref.data_reference_name: data_ref.to_config()} # assign a dict of the DataReference(s) you want to use to the `data_references` attribute of the ScriptRunConfig's underlying RunConfiguration object
+```
+
+For more information on using data for training, see:
+* [Train with datasets in Azure ML](how-to-train-with-datasets.md)
+
+## Distributed training
+If you need to configure a distributed job for training, do so by specifying the `distributed_job_config` parameter in the ScriptRunConfig constructor. Pass in an [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration), [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration), or [TensorflowConfiguration](/python/api/azureml-core/azureml.core.runconfig.tensorflowconfiguration) for distributed jobs of the respective types.
+
+The following example configures a PyTorch training job to use distributed training with MPI/Horovod:
+```python
+from azureml.core.runconfig import MpiConfiguration
+
+src = ScriptRunConfig(source_directory='.',
+ script='train.py',
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=MpiConfiguration(node_count=2, process_count_per_node=2))
+```
+
+For more information, see:
+* [Distributed training with PyTorch](how-to-train-pytorch.md#distributed-training)
+* [Distributed training with TensorFlow](how-to-train-tensorflow.md#distributed-training)
+
+## Miscellaneous
+If you need to access the underlying RunConfiguration object for a ScriptRunConfig for any reason, you can do so as follows:
+```python
+src.run_config
+```
+
+## Next steps
+
+* [Configure and submit training jobs](how-to-set-up-training-targets.md)
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-distributed-gpu.md
+
+ Title: Distributed GPU training guide (SDK v1)
+
+description: Learn the best practices for performing distributed training with Azure Machine Learning SDK (v1) supported frameworks, such as MPI, Horovod, DeepSpeed, PyTorch, PyTorch Lightning, Hugging Face Transformers, TensorFlow, and InfiniBand.
++++++ Last updated : 10/21/2021+++
+# Distributed GPU training guide (SDK v1)
++
+Learn more about how to use distributed GPU training code in Azure Machine Learning (ML). This article will not teach you about distributed training. It will help you run your existing distributed training code on Azure Machine Learning. It offers tips and examples for you to follow for each framework:
+
+* Message Passing Interface (MPI)
+ * Horovod
+ * DeepSpeed
+ * Environment variables from Open MPI
+* PyTorch
+ * Process group initialization
+ * Launch options
+ * DistributedDataParallel (per-process-launch)
+ * Using `torch.distributed.launch` (per-node-launch)
+ * PyTorch Lightning
+ * Hugging Face Transformers
+* TensorFlow
+ * Environment variables for TensorFlow (TF_CONFIG)
+* Accelerate GPU training with InfiniBand
+
+## Prerequisites
+
+Review these [basic concepts of distributed GPU training](../concept-distributed-training.md) such as _data parallelism_, _distributed data parallelism_, and _model parallelism_.
+
+> [!TIP]
+> If you don't know which type of parallelism to use, more than 90% of the time you should use __Distributed Data Parallelism__.
+
+## MPI
+
+Azure ML offers an [MPI job](https://www.mcs.anl.gov/research/projects/mpi/) to launch a given number of processes in each node. You can adopt this approach to run distributed training using either per-process-launcher or per-node-launcher, depending on whether `process_count_per_node` is set to 1 (the default) for per-node-launcher, or equal to the number of devices/GPUs for per-process-launcher. Azure ML constructs the full MPI launch command (`mpirun`) behind the scenes. You can't provide your own full head-node-launcher commands like `mpirun` or `DeepSpeed launcher`.
+
+> [!TIP]
+> The base Docker image used by an Azure Machine Learning MPI job needs to have an MPI library installed. [Open MPI](https://www.open-mpi.org/) is included in all the [AzureML GPU base images](https://github.com/Azure/AzureML-Containers). When you use a custom Docker image, you are responsible for making sure the image includes an MPI library. Open MPI is recommended, but you can also use a different MPI implementation such as Intel MPI. Azure ML also provides [curated environments](../resource-curated-environments.md) for popular frameworks.
+
+To run distributed training using MPI, follow these steps:
+
+1. Use an Azure ML environment with the preferred deep learning framework and MPI. Azure ML provides [curated environments](../resource-curated-environments.md) for popular frameworks.
+1. Define `MpiConfiguration` with `process_count_per_node` and `node_count`. `process_count_per_node` should be equal to the number of GPUs per node for per-process-launch, or set to 1 (the default) for per-node-launch if the user script will be responsible for launching the processes per node.
+1. Pass the `MpiConfiguration` object to the `distributed_job_config` parameter of `ScriptRunConfig`.
+
+```python
+from azureml.core import Workspace, ScriptRunConfig, Environment, Experiment
+from azureml.core.runconfig import MpiConfiguration
+
+curated_env_name = 'AzureML-PyTorch-1.6-GPU'
+pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
+distr_config = MpiConfiguration(process_count_per_node=4, node_count=2)
+
+run_config = ScriptRunConfig(
+ source_directory= './src',
+ script='train.py',
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=distr_config,
+)
+
+# submit the run configuration to start the job
+run = Experiment(ws, "experiment_name").submit(run_config)
+```
+
+### Horovod
+
+Use the MPI job configuration when you use [Horovod](https://horovod.readthedocs.io/en/stable/https://docsupdatetracker.net/index.html) for distributed training with the deep learning framework.
+
+Make sure your code follows these tips:
+
+* The training code is instrumented correctly with Horovod before adding the Azure ML parts
+* Your Azure ML environment contains Horovod and MPI. The PyTorch and TensorFlow curated GPU environments come pre-configured with Horovod and its dependencies.
+* Create an `MpiConfiguration` with your desired distribution.
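+
+For orientation, Horovod instrumentation inside a PyTorch training script typically includes the following steps. This sketch isn't Azure ML-specific and assumes Horovod and a GPU are available in the environment:
+
+```python
+import horovod.torch as hvd
+import torch
+
+hvd.init()                               # start Horovod
+torch.cuda.set_device(hvd.local_rank())  # pin each process to a single GPU
+
+# After building `model` and `optimizer`, wrap the optimizer and broadcast the
+# initial state from rank 0, for example:
+#   optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
+#   hvd.broadcast_parameters(model.state_dict(), root_rank=0)
+```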
+
+### Horovod example
+
+* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod)
+
+### DeepSpeed
+
+Don't use DeepSpeed's custom launcher to run distributed training with the [DeepSpeed](https://www.deepspeed.ai/) library on Azure ML. Instead, configure an MPI job to launch the training job [with MPI](https://www.deepspeed.ai/getting-started/#mpi-and-azureml-compatibility).
+
+Make sure your code follows these tips:
+
+* Your Azure ML environment contains DeepSpeed and its dependencies, Open MPI, and mpi4py.
+* Create an `MpiConfiguration` with your distribution.
+
+### DeepSpeed example
+
+* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/deepspeed/cifar)
+
+### Environment variables from Open MPI
+
+When running MPI jobs with Open MPI images, the following environment variables are set for each process launched:
+
+1. `OMPI_COMM_WORLD_RANK` - the rank of the process
+2. `OMPI_COMM_WORLD_SIZE` - the world size
+3. `AZ_BATCH_MASTER_NODE` - primary address with port, `MASTER_ADDR:MASTER_PORT`
+4. `OMPI_COMM_WORLD_LOCAL_RANK` - the local rank of the process on the node
+5. `OMPI_COMM_WORLD_LOCAL_SIZE` - number of processes on the node
+
+> [!TIP]
+> Despite the name, the environment variable `OMPI_COMM_WORLD_NODE_RANK` does not correspond to `NODE_RANK`. To use per-node-launcher, set `process_count_per_node=1` and use `OMPI_COMM_WORLD_RANK` as the `NODE_RANK`.
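+
+To make these variables concrete, the following sketch (not one of the article's samples) shows how a training script might map them onto the names that PyTorch's `env://` initialization expects; note that `AZ_BATCH_MASTER_NODE` is only set on multi-node jobs:
+
+```python
+import os
+
+# Open MPI variables set for each launched process.
+rank = os.environ["OMPI_COMM_WORLD_RANK"]
+world_size = os.environ["OMPI_COMM_WORLD_SIZE"]
+local_rank = os.environ["OMPI_COMM_WORLD_LOCAL_RANK"]
+
+# AZ_BATCH_MASTER_NODE has the form "<ip>:<port>" on multi-node jobs.
+master_addr, master_port = os.environ["AZ_BATCH_MASTER_NODE"].split(":")
+
+# Map onto the variables that torch.distributed's env:// init method reads.
+os.environ.setdefault("RANK", rank)
+os.environ.setdefault("WORLD_SIZE", world_size)
+os.environ.setdefault("LOCAL_RANK", local_rank)
+os.environ.setdefault("MASTER_ADDR", master_addr)
+os.environ.setdefault("MASTER_PORT", master_port)
+```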
+
+## PyTorch
+
+Azure ML supports running distributed jobs using PyTorch's native distributed training capabilities (`torch.distributed`).
+
+> [!TIP]
+> For data parallelism, the [official PyTorch guidance](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#comparison-between-dataparallel-and-distributeddataparallel) is to use DistributedDataParallel (DDP) over DataParallel for both single-node and multi-node distributed training. PyTorch also [recommends using DistributedDataParallel over the multiprocessing package](https://pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel). Azure Machine Learning documentation and examples will therefore focus on DistributedDataParallel training.
+
+### Process group initialization
+
+The backbone of any distributed training is based on a group of processes that know each other and can communicate with each other using a backend. For PyTorch, the process group is created by calling [torch.distributed.init_process_group](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) in __all distributed processes__ to collectively form a process group.
+
+```python
+torch.distributed.init_process_group(backend='nccl', init_method='env://', ...)
+```
+
+The most common communication backends used are `mpi`, `nccl`, and `gloo`. For GPU-based training, `nccl` is recommended for best performance and should be used whenever possible.
+
+`init_method` tells how each process can discover the others and how the process group is initialized and verified using the communication backend. By default, if `init_method` isn't specified, PyTorch uses the environment variable initialization method (`env://`). This is the recommended initialization method to use in your training code to run distributed PyTorch on Azure ML. With this method, PyTorch looks for the following environment variables for initialization:
+
+- **`MASTER_ADDR`** - IP address of the machine that will host the process with rank 0.
+- **`MASTER_PORT`** - A free port on the machine that will host the process with rank 0.
+- **`WORLD_SIZE`** - The total number of processes. Should be equal to the total number of devices (GPU) used for distributed training.
+- **`RANK`** - The (global) rank of the current process. The possible values are 0 to (world size - 1).
+
+For more information on process group initialization, see the [PyTorch documentation](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group).
+
+Beyond these, many applications will also need the following environment variables:
+- **`LOCAL_RANK`** - The local (relative) rank of the process within the node. The possible values are 0 to (# of processes on the node - 1). This information is useful because many operations, such as data preparation, should be performed only once per node, usually on `local_rank` = 0.
+- **`NODE_RANK`** - The rank of the node for multi-node training. The possible values are 0 to (total # of nodes - 1).
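+
+Inside the training script, initialization then typically looks like the following minimal sketch, assuming the variables above have already been set by Azure ML or your launcher:
+
+```python
+import os
+
+import torch
+import torch.distributed as dist
+
+local_rank = int(os.environ["LOCAL_RANK"])
+torch.cuda.set_device(local_rank)  # one process per GPU
+
+# Reads MASTER_ADDR, MASTER_PORT, WORLD_SIZE, and RANK from the environment.
+dist.init_process_group(backend="nccl", init_method="env://")
+
+print(f"rank {dist.get_rank()} of {dist.get_world_size()} initialized")
+```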
+
+### PyTorch launch options
+
+The Azure ML PyTorch job supports two types of options for launching distributed training:
+
+- __Per-process-launcher__: The system will launch all distributed processes for you, with all the relevant information (such as environment variables) to set up the process group.
+- __Per-node-launcher__: You provide Azure ML with the utility launcher that will get run on each node. The utility launcher will handle launching each of the processes on a given node. Locally within each node, `RANK` and `LOCAL_RANK` are set up by the launcher. The **torch.distributed.launch** utility and PyTorch Lightning both belong in this category.
+
+There are no fundamental differences between these launch options. The choice is largely up to your preference or the conventions of the frameworks/libraries built on top of vanilla PyTorch (such as Lightning or Hugging Face).
+
+The following sections go into more detail on how to configure Azure ML PyTorch jobs for each of the launch options.
+
+### DistributedDataParallel (per-process-launch)
+
+You don't need to use a launcher utility like `torch.distributed.launch`. To run a distributed PyTorch job:
+
+1. Specify the training script and arguments
+1. Create a `PyTorchConfiguration` and specify the `process_count` and `node_count`. The `process_count` corresponds to the total number of processes you want to run for your job. `process_count` should typically equal `# GPUs per node x # nodes`. If `process_count` isn't specified, Azure ML will by default launch one process per node.
+
+Azure ML will set the `MASTER_ADDR`, `MASTER_PORT`, `WORLD_SIZE`, and `NODE_RANK` environment variables on each node, and set the process-level `RANK` and `LOCAL_RANK` environment variables.
+
+To use this option for multi-process-per-node training, use Azure ML Python SDK `>= 1.22.0`, as `process_count` was introduced in 1.22.0.
+
+```python
+from azureml.core import ScriptRunConfig, Environment, Experiment
+from azureml.core.runconfig import PyTorchConfiguration
+
+curated_env_name = 'AzureML-PyTorch-1.6-GPU'
+pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
+distr_config = PyTorchConfiguration(process_count=8, node_count=2)
+
+run_config = ScriptRunConfig(
+ source_directory='./src',
+ script='train.py',
+ arguments=['--epochs', 50],
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=distr_config,
+)
+
+run = Experiment(ws, 'experiment_name').submit(run_config)
+```
+
+> [!TIP]
+> If your training script passes information like local rank or rank as script arguments, you can reference the environment variable(s) in the arguments:
+>
+> ```python
+> arguments=['--epochs', 50, '--local_rank', $LOCAL_RANK]
+> ```
+
+### Pytorch per-process-launch example
+
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/pytorch/cifar-distributed)
+
+### <a name="per-node-launch"></a> Using torch.distributed.launch (per-node-launch)
+
+PyTorch provides a launch utility in [torch.distributed.launch](https://pytorch.org/docs/stable/distributed.html#launch-utility) that you can use to launch multiple processes per node. The `torch.distributed.launch` module spawns multiple training processes on each of the nodes.
+
+The following steps demonstrate how to configure a PyTorch job with a per-node-launcher on Azure ML. The job achieves the equivalent of running the following command:
+
+```shell
+python -m torch.distributed.launch --nproc_per_node <num processes per node> \
+ --nnodes <num nodes> --node_rank $NODE_RANK --master_addr $MASTER_ADDR \
+ --master_port $MASTER_PORT --use_env \
+ <your training script> <your script arguments>
+```
+
+1. Provide the `torch.distributed.launch` command to the `command` parameter of the `ScriptRunConfig` constructor. Azure ML runs this command on each node of your training cluster. `--nproc_per_node` should be less than or equal to the number of GPUs available on each node. MASTER_ADDR, MASTER_PORT, and NODE_RANK are all set by Azure ML, so you can just reference the environment variables in the command. Azure ML sets MASTER_PORT to `6105`, but you can pass a different value to the `--master_port` argument of torch.distributed.launch command if you wish. (The launch utility will reset the environment variables.)
+2. Create a `PyTorchConfiguration` and specify the `node_count`.
+
+```python
+from azureml.core import ScriptRunConfig, Environment, Experiment
+from azureml.core.runconfig import PyTorchConfiguration
+
+curated_env_name = 'AzureML-PyTorch-1.6-GPU'
+pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
+distr_config = PyTorchConfiguration(node_count=2)
+launch_cmd = "python -m torch.distributed.launch --nproc_per_node 4 --nnodes 2 --node_rank $NODE_RANK --master_addr $MASTER_ADDR --master_port $MASTER_PORT --use_env train.py --epochs 50".split()
+
+run_config = ScriptRunConfig(
+ source_directory='./src',
+ command=launch_cmd,
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=distr_config,
+)
+
+run = Experiment(ws, 'experiment_name').submit(run_config)
+```
+
+> [!TIP]
+> **Single-node multi-GPU training:**
+> If you are using the launch utility to run single-node multi-GPU PyTorch training, you do not need to specify the `distributed_job_config` parameter of ScriptRunConfig.
+>
+>```python
+> launch_cmd = "python -m torch.distributed.launch --nproc_per_node 4 --use_env train.py --epochs 50".split()
+>
+> run_config = ScriptRunConfig(
+> source_directory='./src',
+> command=launch_cmd,
+> compute_target=compute_target,
+> environment=pytorch_env,
+> )
+> ```
+
+### PyTorch per-node-launch example
+
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/pytorch/cifar-distributed)
+
+### PyTorch Lightning
+
+[PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/) is a lightweight open-source library that provides a high-level interface for PyTorch. Lightning abstracts away many of the lower-level distributed training configurations required for vanilla PyTorch. Lightning allows you to run your training scripts in single GPU, single-node multi-GPU, and multi-node multi-GPU settings. Behind the scenes, it launches multiple processes for you similar to `torch.distributed.launch`.
+
+For single-node training (including single-node multi-GPU), you can run your code on Azure ML without needing to specify a `distributed_job_config`.
+To run an experiment using multiple nodes with multiple GPUs, there are two options:
+
+- Using PyTorch configuration (recommended): Define `PyTorchConfiguration` and specify `communication_backend="Nccl"`, `node_count`, and `process_count` (note that this is the total number of processes, that is, `num_nodes * process_count_per_node`). In the Lightning Trainer module, specify both `num_nodes` and `gpus` to be consistent with `PyTorchConfiguration`. For example, `num_nodes = node_count` and `gpus = process_count_per_node`. (A sketch of this option follows this list.)
+
+- Using MPI Configuration:
+
+  - Define `MpiConfiguration` and specify both `node_count` and `process_count_per_node`. In the Lightning `Trainer`, set `num_nodes` and `gpus` to match `node_count` and `process_count_per_node` from `MpiConfiguration`, respectively.
+ - For multi-node training with MPI, Lightning requires the following environment variables to be set on each node of your training cluster:
+ - MASTER_ADDR
+ - MASTER_PORT
+ - NODE_RANK
+ - LOCAL_RANK
+
+  Manually set these environment variables in the main training script:
+
+ ```python
+ import os
+ from argparse import ArgumentParser
+
+ def set_environment_variables_for_mpi(num_nodes, gpus_per_node, master_port=54965):
+ if num_nodes > 1:
+ os.environ["MASTER_ADDR"], os.environ["MASTER_PORT"] = os.environ["AZ_BATCH_MASTER_NODE"].split(":")
+ else:
+ os.environ["MASTER_ADDR"] = os.environ["AZ_BATCHAI_MPI_MASTER_NODE"]
+ os.environ["MASTER_PORT"] = str(master_port)
+
+ try:
+ os.environ["NODE_RANK"] = str(int(os.environ.get("OMPI_COMM_WORLD_RANK")) // gpus_per_node)
+ # additional variables
+ os.environ["MASTER_ADDRESS"] = os.environ["MASTER_ADDR"]
+ os.environ["LOCAL_RANK"] = os.environ["OMPI_COMM_WORLD_LOCAL_RANK"]
+ os.environ["WORLD_SIZE"] = os.environ["OMPI_COMM_WORLD_SIZE"]
+ except:
+ # fails when used with pytorch configuration instead of mpi
+ pass
+
+ if __name__ == "__main__":
+ parser = ArgumentParser()
+ parser.add_argument("--num_nodes", type=int, required=True)
+ parser.add_argument("--gpus_per_node", type=int, required=True)
+ args = parser.parse_args()
+ set_environment_variables_for_mpi(args.num_nodes, args.gpus_per_node)
+
+ trainer = Trainer(
+ num_nodes=args.num_nodes,
+ gpus=args.gpus_per_node
+ )
+ ```
+
+ Lightning handles computing the world size from the Trainer flags `--gpus` and `--num_nodes`.
+
+ ```python
+ from azureml.core import ScriptRunConfig, Experiment
+ from azureml.core.runconfig import MpiConfiguration
+
+ nnodes = 2
+ gpus_per_node = 4
+ args = ['--max_epochs', 50, '--gpus_per_node', gpus_per_node, '--accelerator', 'ddp', '--num_nodes', nnodes]
+ distr_config = MpiConfiguration(node_count=nnodes, process_count_per_node=gpus_per_node)
+
+ run_config = ScriptRunConfig(
+ source_directory='./src',
+ script='train.py',
+ arguments=args,
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=distr_config,
+ )
+
+ run = Experiment(ws, 'experiment_name').submit(run_config)
+ ```
+
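+For the recommended `PyTorchConfiguration` option above, the job submission might look like the following minimal sketch. The script name `train.py`, the source directory, and the Trainer arguments are illustrative assumptions; adapt them to your own Lightning training script.
+
+```python
+from azureml.core import ScriptRunConfig, Experiment
+from azureml.core.runconfig import PyTorchConfiguration
+
+nnodes = 2
+gpus_per_node = 4
+
+# process_count is the total number of processes: node_count * process_count_per_node
+distr_config = PyTorchConfiguration(communication_backend='Nccl',
+                                    node_count=nnodes,
+                                    process_count=nnodes * gpus_per_node)
+
+# Keep the Lightning Trainer flags consistent with the PyTorchConfiguration:
+# num_nodes = node_count, gpus = process_count_per_node
+args = ['--max_epochs', 50, '--gpus', gpus_per_node, '--num_nodes', nnodes, '--accelerator', 'ddp']
+
+run_config = ScriptRunConfig(
+    source_directory='./src',
+    script='train.py',
+    arguments=args,
+    compute_target=compute_target,
+    environment=pytorch_env,
+    distributed_job_config=distr_config,
+)
+
+run = Experiment(ws, 'experiment_name').submit(run_config)
+```
+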
+### Hugging Face Transformers
+
+Hugging Face provides many [examples](https://github.com/huggingface/transformers/tree/master/examples) for using its Transformers library with `torch.distributed.launch` to run distributed training. To run these examples and your own custom training scripts using the Transformers Trainer API, follow the [Using `torch.distributed.launch`](#distributeddataparallel-per-process-launch) section.
+
+Sample job configuration code to fine-tune the BERT large model on the text classification MNLI task using the `run_glue.py` script on one node with 8 GPUs:
+
+```python
+from azureml.core import ScriptRunConfig
+from azureml.core.runconfig import PyTorchConfiguration
+
+distr_config = PyTorchConfiguration() # node_count defaults to 1
+launch_cmd = "python -m torch.distributed.launch --nproc_per_node 8 text-classification/run_glue.py --model_name_or_path bert-large-uncased-whole-word-masking --task_name mnli --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/mnli_output".split()
+
+run_config = ScriptRunConfig(
+ source_directory='./src',
+ command=launch_cmd,
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=distr_config,
+)
+```
+
+You can also use the [per-process-launch](#distributeddataparallel-per-process-launch) option to run distributed training without using `torch.distributed.launch`. One thing to keep in mind with this method is that the Transformers [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html?highlight=launch#trainingarguments) expect the local rank to be passed in as an argument (`--local_rank`). `torch.distributed.launch` takes care of this when `--use_env=False`, but if you're using per-process launch, you'll need to explicitly pass the local rank in as an argument to the training script, `--local_rank=$LOCAL_RANK`, because Azure ML only sets the `LOCAL_RANK` environment variable.
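+
+For example, a per-process-launch job for the same fine-tuning scenario might look like the following sketch. The `process_count` of 8 mirrors the 8-GPU single-node example above; the command line is otherwise an assumption based on that example.
+
+```python
+from azureml.core import ScriptRunConfig
+from azureml.core.runconfig import PyTorchConfiguration
+
+# Launch one process per GPU: 8 processes on a single node
+distr_config = PyTorchConfiguration(process_count=8, node_count=1)
+
+# Azure ML sets LOCAL_RANK for each process; pass it through to the script explicitly
+launch_cmd = "python text-classification/run_glue.py --model_name_or_path bert-large-uncased-whole-word-masking --task_name mnli --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/mnli_output --local_rank $LOCAL_RANK".split()
+
+run_config = ScriptRunConfig(
+    source_directory='./src',
+    command=launch_cmd,
+    compute_target=compute_target,
+    environment=pytorch_env,
+    distributed_job_config=distr_config,
+)
+```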
+
+## TensorFlow
+
+If you're using [native distributed TensorFlow](https://www.tensorflow.org/guide/distributed_training) in your training code, such as TensorFlow 2.x's `tf.distribute.Strategy` API, you can launch the distributed job via Azure ML using the `TensorflowConfiguration`.
+
+To do so, specify a `TensorflowConfiguration` object to the `distributed_job_config` parameter of the `ScriptRunConfig` constructor. If you're using `tf.distribute.experimental.MultiWorkerMirroredStrategy`, specify the `worker_count` in the `TensorflowConfiguration` corresponding to the number of nodes for your training job.
+
+```python
+from azureml.core import ScriptRunConfig, Environment, Experiment
+from azureml.core.runconfig import TensorflowConfiguration
+
+curated_env_name = 'AzureML-TensorFlow-2.3-GPU'
+tf_env = Environment.get(workspace=ws, name=curated_env_name)
+distr_config = TensorflowConfiguration(worker_count=2, parameter_server_count=0)
+
+run_config = ScriptRunConfig(
+ source_directory='./src',
+ script='train.py',
+ compute_target=compute_target,
+ environment=tf_env,
+ distributed_job_config=distr_config,
+)
+
+# submit the run configuration to start the job
+run = Experiment(ws, "experiment_name").submit(run_config)
+```
+
+If your training script uses the parameter server strategy for distributed training, such as for legacy TensorFlow 1.x, you'll also need to specify the number of parameter servers to use in the job, for example, `tf_config = TensorflowConfiguration(worker_count=2, parameter_server_count=1)`.
+
+### TF_CONFIG
+
+In TensorFlow, the **TF_CONFIG** environment variable is required for training on multiple machines. For TensorFlow jobs, Azure ML will configure and set the TF_CONFIG variable appropriately for each worker before executing your training script.
+
+You can access TF_CONFIG from your training script if you need to: `os.environ['TF_CONFIG']`.
+
+Example TF_CONFIG set on a chief worker node:
+```json
+TF_CONFIG='{
+ "cluster": {
+ "worker": ["host0:2222", "host1:2222"]
+ },
+ "task": {"type": "worker", "index": 0},
+ "environment": "cloud"
+}'
+```
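+
+As a minimal sketch, a training script could parse TF_CONFIG to determine its role in the cluster; the printed fields are just illustrative:
+
+```python
+import json
+import os
+
+# TF_CONFIG is set by Azure ML before the training script starts
+tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
+
+task = tf_config.get('task', {})
+print('Task type:', task.get('type'))    # for example, "worker"
+print('Task index:', task.get('index'))  # 0 on the chief worker
+print('Workers:', tf_config.get('cluster', {}).get('worker', []))
+```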
+
+### TensorFlow example
+
+- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/tensorflow/mnist-distributed)
+
+## <a name="infiniband"></a> Accelerating distributed GPU training with InfiniBand
+
+As the number of VMs training a model increases, the time required to train that model should decrease. The decrease in time, ideally, should be linearly proportional to the number of training VMs. For instance, if training a model on one VM takes 100 seconds, then training the same model on two VMs should ideally take 50 seconds. Training the model on four VMs should take 25 seconds, and so on.
+
+InfiniBand can be an important factor in attaining this linear scaling. InfiniBand enables low-latency, GPU-to-GPU communication across nodes in a cluster. InfiniBand requires specialized hardware to operate. Certain Azure VM series, specifically the NC, ND, and H-series, now have RDMA-capable VMs with SR-IOV and InfiniBand support. These VMs communicate over the low-latency, high-bandwidth InfiniBand network, which is much more performant than Ethernet-based connectivity. SR-IOV for InfiniBand enables near bare-metal performance for any MPI library (MPI is used by many distributed training frameworks and tooling, including NVIDIA's NCCL software). These SKUs are intended to meet the needs of computationally intensive, GPU-accelerated machine learning workloads. For more information, see [Accelerating Distributed Training in Azure Machine Learning with SR-IOV](https://techcommunity.microsoft.com/t5/azure-ai/accelerating-distributed-training-in-azure-machine-learning/ba-p/1059050).
+
+VM SKUs with an 'r' in their name typically contain the required InfiniBand hardware, and those without an 'r' typically do not. ('r' is a reference to RDMA, which stands for "remote direct memory access.") For instance, the VM SKU `Standard_NC24rs_v3` is InfiniBand-enabled, but the SKU `Standard_NC24s_v3` is not. Aside from the InfiniBand capabilities, the specs between these two SKUs are largely the same: both have 24 cores, 448 GB RAM, 4 GPUs of the same SKU, and so on. [Learn more about RDMA- and InfiniBand-enabled machine SKUs](../../virtual-machines/sizes-hpc.md#rdma-capable-instances).
+
+>[!WARNING]
+>The older-generation machine SKU `Standard_NC24r` is RDMA-enabled, but it does not contain SR-IOV hardware required for InfiniBand.
+
+If you create an `AmlCompute` cluster of one of these RDMA-capable, InfiniBand-enabled sizes, the OS image will come with the Mellanox OFED driver required to enable InfiniBand preinstalled and preconfigured.
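+
+For example, provisioning an InfiniBand-enabled `AmlCompute` cluster might look like the following sketch; the cluster name and node count are assumptions.
+
+```python
+from azureml.core.compute import AmlCompute, ComputeTarget
+
+# Standard_NC24rs_v3 is RDMA-capable and InfiniBand-enabled (note the 'r' in the SKU name)
+compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC24rs_v3',
+                                                       max_nodes=4)
+compute_target = ComputeTarget.create(ws, 'ib-gpu-cluster', compute_config)
+compute_target.wait_for_completion(show_output=True)
+```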
+
+## Next steps
+
+* [Deploy machine learning models to Azure](/azure/machine-learning/how-to-deploy-managed-online-endpoints)
+* [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-keras.md
+
+ Title: Train deep learning Keras models (SDK v1)
+
+description: Learn how to train and register a Keras deep neural network classification model running on TensorFlow using Azure Machine Learning SDK (v1).
++++++ Last updated : 09/28/2020++
+#Customer intent: As a Python Keras developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
++
+# Train Keras models at scale with Azure Machine Learning (SDK v1)
++
+In this article, learn how to run your Keras training scripts with Azure Machine Learning.
+
+The example code in this article shows you how to train and register a Keras classification model built using the TensorFlow backend with Azure Machine Learning. It uses the popular [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to classify handwritten digits using a deep neural network (DNN) built using the [Keras Python library](https://keras.io) running on top of [TensorFlow](https://www.tensorflow.org/overview).
+
+Keras is a high-level neural network API capable of running on top of other popular DNN frameworks to simplify development. With Azure Machine Learning, you can rapidly scale out training jobs using elastic cloud compute resources. You can also track your training runs, version models, deploy models, and much more.
+
+Whether you're developing a Keras model from the ground-up or you're bringing an existing model into the cloud, Azure Machine Learning can help you build production-ready models.
+
+> [!NOTE]
+> If you are using the Keras API **tf.keras** built into TensorFlow and not the standalone Keras package, refer instead to [Train TensorFlow models](how-to-train-tensorflow.md).
+
+## Prerequisites
+
+Run this code on either of these environments:
+
+- Azure Machine Learning compute instance - no downloads or installation necessary
+
+ - Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
+ - In the samples folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **how-to-use-azureml > ml-frameworks > keras > train-hyperparameter-tune-deploy-with-keras** folder.
+
+ - Your own Jupyter Notebook server
+
+ - [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) (>= 1.15.0).
+ - [Create a workspace configuration file](../how-to-configure-environment.md#workspace).
+ - [Download the sample script files](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras) `keras_mnist.py` and `utils.py`
+
+ You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
++
+## Set up the experiment
+
+This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the FileDataset for the input training data, creating the compute target, and defining the training environment.
+
+### Import packages
+
+First, import the necessary Python libraries.
+
+```Python
+import os
+import azureml
+from azureml.core import Experiment
+from azureml.core import Environment
+from azureml.core import Workspace, Run
+from azureml.core.compute import ComputeTarget, AmlCompute
+from azureml.core.compute_target import ComputeTargetException
+```
+
+### Initialize a workspace
+
+The [Azure Machine Learning workspace](../concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create. In the Python SDK, you can access the workspace artifacts by creating a [`workspace`](/python/api/azureml-core/azureml.core.workspace.workspace) object.
+
+Create a workspace object from the `config.json` file created in the [prerequisites section](#prerequisites).
+
+```Python
+ws = Workspace.from_config()
+```
+
+### Create a file dataset
+
+A `FileDataset` object references one or multiple files in your workspace datastore or public urls. The files can be of any format, and the class provides you with the ability to download or mount the files to your compute. By creating a `FileDataset`, you create a reference to the data source location. If you applied any transformations to the data set, they will be stored in the data set as well. The data remains in its existing location, so no extra storage cost is incurred. See the [how-to](how-to-create-register-datasets.md) guide on the `Dataset` package for more information.
+
+```python
+from azureml.core.dataset import Dataset
+
+web_paths = [
+ 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
+ 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
+ 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
+ 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
+ ]
+dataset = Dataset.File.from_files(path=web_paths)
+```
+
+You can use the `register()` method to register the data set to your workspace so it can be shared with others, reused across various experiments, and referred to by name in your training script.
+
+```python
+dataset = dataset.register(workspace=ws,
+ name='mnist-dataset',
+ description='training and test dataset',
+ create_new_version=True)
+```
+
+### Create a compute target
+
+Create a compute target for your training job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
++
+```Python
+cluster_name = "gpu-cluster"
+
+try:
+ compute_target = ComputeTarget(workspace=ws, name=cluster_name)
+ print('Found existing compute target')
+except ComputeTargetException:
+ print('Creating a new compute target...')
+ compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
+ max_nodes=4)
+
+ compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
+
+ compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
+```
++
+For more information on compute targets, see the [what is a compute target](../concept-compute-target.md) article.
+
+### Define your environment
+
+Define the Azure ML [Environment](../concept-environments.md) that encapsulates your training script's dependencies.
+
+First, define your conda dependencies in a YAML file; in this example the file is named `conda_dependencies.yml`.
+
+```yaml
+channels:
+- conda-forge
+dependencies:
+- python=3.6.2
+- pip:
+ - azureml-defaults
+ - tensorflow-gpu==2.0.0
+ - keras<=2.3.1
+ - matplotlib
+```
+
+Create an Azure ML environment from this conda environment specification. The environment will be packaged into a Docker container at runtime.
+
+By default if no base image is specified, Azure ML will use a CPU image `azureml.core.environment.DEFAULT_CPU_IMAGE` as the base image. Since this example runs training on a GPU cluster, you will need to specify a GPU base image that has the necessary GPU drivers and dependencies. Azure ML maintains a set of base images published on Microsoft Container Registry (MCR) that you can use; see the [Azure/AzureML-Containers](https://github.com/Azure/AzureML-Containers) GitHub repo for more information.
+
+```python
+keras_env = Environment.from_conda_specification(name='keras-env', file_path='conda_dependencies.yml')
+
+# Specify a GPU base image
+keras_env.docker.enabled = True
+keras_env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu18.04'
+```
+
+For more information on creating and using environments, see [Create and use software environments in Azure Machine Learning](how-to-use-environments.md).
+
+## Configure and submit your training run
+
+### Create a ScriptRunConfig
+First get the data from the workspace datastore using the `Dataset` class.
+
+```python
+dataset = Dataset.get_by_name(ws, 'mnist-dataset')
+
+# list the files referenced by mnist-dataset
+dataset.to_path()
+```
+
+Create a ScriptRunConfig object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on.
+
+Any arguments to your training script will be passed via command line if specified in the `arguments` parameter. The DatasetConsumptionConfig for our FileDataset is passed to the training script as the value of the `--data-folder` argument. Azure ML will resolve this DatasetConsumptionConfig to the mount point of the backing datastore, which can then be accessed from the training script.
+
+```python
+from azureml.core import ScriptRunConfig
+
+args = ['--data-folder', dataset.as_mount(),
+ '--batch-size', 50,
+ '--first-layer-neurons', 300,
+ '--second-layer-neurons', 100,
+ '--learning-rate', 0.001]
+
+src = ScriptRunConfig(source_directory=script_folder,
+ script='keras_mnist.py',
+ arguments=args,
+ compute_target=compute_target,
+ environment=keras_env)
+```
+
+For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
+
+> [!WARNING]
+> If you were previously using the TensorFlow estimator to configure your Keras training jobs, please note that Estimators have been deprecated as of the 1.19.0 SDK release. With Azure ML SDK >= 1.15.0, ScriptRunConfig is the recommended way to configure training jobs, including those using deep learning frameworks. For common migration questions, see the [Estimator to ScriptRunConfig migration guide](how-to-migrate-from-estimators-to-scriptrunconfig.md).
+
+### Submit your run
+
+The [Run object](/python/api/azureml-core/azureml.core.run%28class%29) provides the interface to the run history while the job is running and after it has completed.
+
+```Python
+run = Experiment(workspace=ws, name='Tutorial-Keras-Mnist').submit(src)
+run.wait_for_completion(show_output=True)
+```
+
+### What happens during run execution
+As the run is executed, it goes through the following stages:
+
+- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the run history and can be viewed to monitor progress. If a curated environment is specified instead, the cached image backing that curated environment will be used.
+
+- **Scaling**: The cluster attempts to scale up if the Batch AI cluster requires more nodes to execute the run than are currently available.
+
+- **Running**: All scripts in the script folder are uploaded to the compute target, data stores are mounted or copied, and the `script` is executed. Outputs from stdout and the **./logs** folder are streamed to the run history and can be used to monitor the run.
+
+- **Post-Processing**: The **./outputs** folder of the run is copied over to the run history.
+
+## Register the model
+
+Once you've trained the model, you can register it to your workspace. Model registration lets you store and version your models in your workspace to simplify [model management and deployment](concept-model-management-and-deployment.md).
+
+```Python
+model = run.register_model(model_name='keras-mnist', model_path='outputs/model')
+```
+
+> [!TIP]
+> The deployment how-to contains a section on registering models, but you can skip directly to [creating a compute target](how-to-deploy-and-where.md#choose-a-compute-target) for deployment, since you already have a registered model.
+
+You can also download a local copy of the model. This can be useful for doing additional model validation work locally. In the training script, `keras_mnist.py`, a TensorFlow saver object persists the model to a local folder (local to the compute target). You can use the Run object to download a copy from the run history.
+
+```Python
+# Create a model folder in the current directory
+os.makedirs('./model', exist_ok=True)
+
+for f in run.get_file_names():
+ if f.startswith('outputs/model'):
+ output_file_path = os.path.join('./model', f.split('/')[-1])
+ print('Downloading from {} to {} ...'.format(f, output_file_path))
+ run.download_file(name=f, output_file_path=output_file_path)
+```
+
+## Next steps
+
+In this article, you trained and registered a Keras model on Azure Machine Learning. To learn how to deploy a model, continue on to our model deployment article.
+
+* [How and where to deploy models](how-to-deploy-and-where.md)
+* [Track run metrics during training](how-to-log-view-metrics.md)
+* [Tune hyperparameters](../how-to-tune-hyperparameters.md)
+* [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md
+
+ Title: Train deep learning PyTorch models (SDK v1)
+
+description: Learn how to run your PyTorch training scripts at enterprise scale using Azure Machine Learning SDK (v1).
+++++ Last updated : 02/28/2022++
+#Customer intent: As a Python PyTorch developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
++
+# Train PyTorch models at scale with Azure Machine Learning SDK (v1)
++
+In this article, learn how to run your [PyTorch](https://pytorch.org/) training scripts at enterprise scale using Azure Machine Learning.
+
+The example scripts in this article are used to classify chicken and turkey images to build a deep learning neural network (DNN) based on [PyTorch's transfer learning tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. Transfer learning shortens the training process by requiring less data, time, and compute resources than training from scratch. To learn more about transfer learning, see the [deep learning vs machine learning](../concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning) article.
+
+Whether you're training a deep learning PyTorch model from the ground-up or you're bringing an existing model into the cloud, you can use Azure Machine Learning to scale out open-source training jobs using elastic cloud compute resources. You can build, deploy, version, and monitor production-grade models with Azure Machine Learning.
+
+## Prerequisites
+
+Run this code on either of these environments:
+
+- Azure Machine Learning compute instance - no downloads or installation necessary
+
+ - Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
+ - In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **how-to-use-azureml > ml-frameworks > pytorch > train-hyperparameter-tune-deploy-with-pytorch** folder.
+
+ - Your own Jupyter Notebook server
+ - [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) (>= 1.15.0).
+ - [Create a workspace configuration file](../how-to-configure-environment.md#workspace).
+ - [Download the sample script files](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch) `pytorch_train.py`
+
+ You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
++
+## Set up the experiment
+
+This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the compute target, and defining the training environment.
+
+### Import packages
+
+First, import the necessary Python libraries.
+
+```Python
+import os
+import shutil
+
+from azureml.core.workspace import Workspace
+from azureml.core import Experiment
+from azureml.core import Environment
+
+from azureml.core.compute import ComputeTarget, AmlCompute
+from azureml.core.compute_target import ComputeTargetException
+```
+
+### Initialize a workspace
+
+The [Azure Machine Learning workspace](../concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create. In the Python SDK, you can access the workspace artifacts by creating a [`workspace`](/python/api/azureml-core/azureml.core.workspace.workspace) object.
+
+Create a workspace object from the `config.json` file created in the [prerequisites section](#prerequisites).
+
+```Python
+ws = Workspace.from_config()
+```
+
+### Get the data
+
+The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
+
+### Prepare training script
+
+In this tutorial, the training script, `pytorch_train.py`, is already provided. In practice, you can take any custom training script, as is, and run it with Azure Machine Learning.
+
+Create a folder for your training script(s).
+
+```python
+project_folder = './pytorch-birds'
+os.makedirs(project_folder, exist_ok=True)
+shutil.copy('pytorch_train.py', project_folder)
+```
+
+### Create a compute target
+
+Create a compute target for your PyTorch job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
++
+```Python
+
+# Choose a name for your GPU cluster
+cluster_name = "gpu-cluster"
+
+# Verify that cluster does not exist already
+try:
+ compute_target = ComputeTarget(workspace=ws, name=cluster_name)
+ print('Found existing compute target')
+except ComputeTargetException:
+ print('Creating a new compute target...')
+ compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
+ max_nodes=4)
+
+ # Create the cluster with the specified name and configuration
+ compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
+
+ # Wait for the cluster to complete, show the output log
+ compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
+```
+
+If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`.
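+
+For example, a minimal sketch of provisioning a CPU cluster instead; the cluster name here is an assumption.
+
+```python
+cpu_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
+                                                   max_nodes=4)
+cpu_target = ComputeTarget.create(ws, 'cpu-cluster', cpu_config)
+cpu_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
+```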
++
+For more information on compute targets, see the [what is a compute target](../concept-compute-target.md) article.
+
+### Define your environment
+
+To define the [Azure ML Environment](../concept-environments.md) that encapsulates your training script's dependencies, you can either define a custom environment or use an Azure ML curated environment.
+
+#### Use a curated environment
+
+Azure ML provides prebuilt, [curated environments](../resource-curated-environments.md) if you don't want to define your own environment. There are several CPU and GPU curated environments for PyTorch corresponding to different versions of PyTorch.
+
+If you want to use a curated environment, you can run the following command instead:
+
+```python
+curated_env_name = 'AzureML-PyTorch-1.6-GPU'
+pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
+```
+
+To see the packages included in the curated environment, you can write out the conda dependencies to disk:
+
+```python
+pytorch_env.save_to_directory(path=curated_env_name)
+```
+
+Make sure the curated environment includes all the dependencies required by your training script. If not, you'll have to modify the environment to include the missing dependencies. If the environment is modified, you'll have to give it a new name, as the 'AzureML' prefix is reserved for curated environments. If you modified the conda dependencies YAML file, you can create a new environment from it with a new name, for example:
+
+```python
+pytorch_env = Environment.from_conda_specification(name='pytorch-1.6-gpu', file_path='./conda_dependencies.yml')
+```
+
+If you had instead modified the curated environment object directly, you can clone that environment with a new name:
+
+```python
+pytorch_env = pytorch_env.clone(new_name='pytorch-1.6-gpu')
+```
+
+#### Create a custom environment
+
+You can also create your own Azure ML environment that encapsulates your training script's dependencies.
+
+First, define your conda dependencies in a YAML file; in this example the file is named `conda_dependencies.yml`.
+
+```yaml
+channels:
+- conda-forge
+dependencies:
+- python=3.6.2
+- pip=21.3.1
+- pip:
+ - azureml-defaults
+ - torch==1.6.0
+ - torchvision==0.7.0
+ - future==0.17.1
+ - pillow
+```
+
+Create an Azure ML environment from this conda environment specification. The environment will be packaged into a Docker container at runtime.
+
+By default if no base image is specified, Azure ML will use a CPU image `azureml.core.environment.DEFAULT_CPU_IMAGE` as the base image. Since this example runs training on a GPU cluster, you'll need to specify a GPU base image that has the necessary GPU drivers and dependencies. Azure ML maintains a set of base images published on Microsoft Container Registry (MCR) that you can use. For more information, see [AzureML-Containers GitHub repo](https://github.com/Azure/AzureML-Containers).
+
+```python
+pytorch_env = Environment.from_conda_specification(name='pytorch-1.6-gpu', file_path='./conda_dependencies.yml')
+
+# Specify a GPU base image
+pytorch_env.docker.enabled = True
+pytorch_env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04'
+```
+
+> [!TIP]
+> Optionally, you can just capture all your dependencies directly in a custom Docker image or Dockerfile, and create your environment from that. For more information, see [Train with custom image](../how-to-train-with-custom-image.md).
+
+For more information on creating and using environments, see [Create and use software environments in Azure Machine Learning](how-to-use-environments.md).
+
+## Configure and submit your training run
+
+### Create a ScriptRunConfig
+
+Create a [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on. Any arguments to your training script will be passed via command line if specified in the `arguments` parameter. The following code will configure a single-node PyTorch job.
+
+```python
+from azureml.core import ScriptRunConfig
+
+src = ScriptRunConfig(source_directory=project_folder,
+ script='pytorch_train.py',
+ arguments=['--num_epochs', 30, '--output_dir', './outputs'],
+ compute_target=compute_target,
+ environment=pytorch_env)
+```
+
+> [!WARNING]
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](../how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
+
+For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
+
+> [!WARNING]
+> If you were previously using the PyTorch estimator to configure your PyTorch training jobs, please note that Estimators have been deprecated as of the 1.19.0 SDK release. With Azure ML SDK >= 1.15.0, ScriptRunConfig is the recommended way to configure training jobs, including those using deep learning frameworks. For common migration questions, see the [Estimator to ScriptRunConfig migration guide](how-to-migrate-from-estimators-to-scriptrunconfig.md).
+
+## Submit your run
+
+The [Run object](/python/api/azureml-core/azureml.core.run%28class%29) provides the interface to the run history while the job is running and after it has completed.
+
+```Python
+run = Experiment(ws, name='Tutorial-pytorch-birds').submit(src)
+run.wait_for_completion(show_output=True)
+```
+
+### What happens during run execution
+
+As the run is executed, it goes through the following stages:
+
+- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the run history and can be viewed to monitor progress. If a curated environment is specified instead, the cached image backing that curated environment will be used.
+
+- **Scaling**: The cluster attempts to scale up if the Batch AI cluster requires more nodes to execute the run than are currently available.
+
+- **Running**: All scripts in the script folder are uploaded to the compute target, data stores are mounted or copied, and the `script` is executed. Outputs from stdout and the **./logs** folder are streamed to the run history and can be used to monitor the run.
+
+- **Post-Processing**: The **./outputs** folder of the run is copied over to the run history.
+
+## Register or download a model
+
+Once you've trained the model, you can register it to your workspace. Model registration lets you store and version your models in your workspace to simplify [model management and deployment](concept-model-management-and-deployment.md).
+
+```Python
+model = run.register_model(model_name='pytorch-birds', model_path='outputs/model.pt')
+```
+
+> [!TIP]
+> The deployment how-to contains a section on registering models, but you can skip directly to [creating a compute target](how-to-deploy-and-where.md#choose-a-compute-target) for deployment, since you already have a registered model.
+
+You can also download a local copy of the model by using the Run object. In the training script `pytorch_train.py`, a PyTorch save object persists the model to a local folder (local to the compute target). You can use the Run object to download a copy.
+
+```Python
+# Create a model folder in the current directory
+os.makedirs('./model', exist_ok=True)
+
+# Download the model from run history
+run.download_file(name='outputs/model.pt', output_file_path='./model/model.pt')
+```
+
+## Distributed training
+
+Azure Machine Learning also supports multi-node distributed PyTorch jobs so that you can scale your training workloads. You can easily run distributed PyTorch jobs and Azure ML will manage the orchestration for you.
+
+Azure ML supports running distributed PyTorch jobs with both Horovod and PyTorch's built-in DistributedDataParallel module.
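+
+For example, a multi-node DistributedDataParallel job could be configured with a `PyTorchConfiguration`, as in the following sketch. The node and process counts are assumptions, and the sketch assumes `pytorch_train.py` has been adapted for distributed training.
+
+```python
+from azureml.core import ScriptRunConfig
+from azureml.core.runconfig import PyTorchConfiguration
+
+# 2 nodes x 4 GPUs per node = 8 total processes, one per GPU
+distr_config = PyTorchConfiguration(process_count=8, node_count=2)
+
+src = ScriptRunConfig(source_directory=project_folder,
+                      script='pytorch_train.py',
+                      arguments=['--num_epochs', 30, '--output_dir', './outputs'],
+                      compute_target=compute_target,
+                      environment=pytorch_env,
+                      distributed_job_config=distr_config)
+```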
+
+For more information about distributed training, see the [Distributed GPU training guide](how-to-train-distributed-gpu.md).
+
+## Export to ONNX
+
+To optimize inference with the [ONNX Runtime](../concept-onnx.md), convert your trained PyTorch model to the ONNX format. Inference, or model scoring, is the phase where the deployed model is used for prediction, most commonly on production data. For an example, see the [Exporting model from PyTorch to ONNX tutorial](https://github.com/onnx/tutorials/blob/master/tutorials/PytorchOnnxExport.ipynb).
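+
+As a minimal sketch, exporting a PyTorch model to ONNX uses `torch.onnx.export`; the model and input shape below are illustrative placeholders for your own trained model.
+
+```python
+import torch
+import torchvision
+
+# Illustrative model; replace with your trained model
+model = torchvision.models.resnet18(pretrained=True)
+model.eval()
+
+# Dummy input with the shape the model expects (batch, channels, height, width)
+dummy_input = torch.randn(1, 3, 224, 224)
+torch.onnx.export(model, dummy_input, 'model.onnx')
+```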
+
+## Next steps
+
+In this article, you trained and registered a deep learning neural network using PyTorch on Azure Machine Learning. To learn how to deploy a model, continue on to our model deployment article.
+
+- [How and where to deploy models](how-to-deploy-and-where.md)
+- [Track run metrics during training](../how-to-log-view-metrics.md)
+- [Tune hyperparameters](../how-to-tune-hyperparameters.md)
+- [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-scikit-learn.md
+
+ Title: Train scikit-learn machine learning models (SDK v1)
+
+description: Learn how Azure Machine Learning SDK (v1) enables you to scale out a scikit-learn training job using elastic cloud compute resources.
+++++ Last updated : 03/21/2022++
+#Customer intent: As a Python scikit-learn developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my machine learning models at scale.
++
+# Train scikit-learn models at scale with Azure Machine Learning (SDK v1)
++
+In this article, learn how to run your scikit-learn training scripts with Azure Machine Learning.
+
+The example scripts in this article are used to classify iris flower images to build a machine learning model based on scikit-learn's [iris dataset](https://archive.ics.uci.edu/ml/datasets/iris).
+
+Whether you're training a machine learning scikit-learn model from the ground-up or you're bringing an existing model into the cloud, you can use Azure Machine Learning to scale out open-source training jobs using elastic cloud compute resources. You can build, deploy, version, and monitor production-grade models with Azure Machine Learning.
+
+## Prerequisites
+
+You can run this code in either an Azure Machine Learning compute instance, or your own Jupyter Notebook:
+
+ - Azure Machine Learning compute instance
+ - Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) to create a compute instance. Every compute instance includes a dedicated notebook server pre-loaded with the SDK and the notebooks sample repository.
+ - Select the notebook tab in the Azure Machine Learning studio. In the samples training folder, find a completed and expanded notebook by navigating to this directory: **how-to-use-azureml > ml-frameworks > scikit-learn > train-hyperparameter-tune-deploy-with-sklearn** folder.
+ - You can use the pre-populated code in the sample training folder to complete this tutorial.
+
+ - Create a Jupyter Notebook server and run the code in the following sections.
+
+ - [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) (>= 1.13.0).
+ - [Create a workspace configuration file](../how-to-configure-environment.md#workspace).
+
+## Set up the experiment
+
+This section sets up the training experiment by loading the required Python packages, initializing a workspace, defining the training environment, and preparing the training script.
+
+### Initialize a workspace
+
+The [Azure Machine Learning workspace](../concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create. In the Python SDK, you can access the workspace artifacts by creating a [`workspace`](/python/api/azureml-core/azureml.core.workspace.workspace) object.
+
+Create a workspace object from the `config.json` file created in the [prerequisites section](#prerequisites).
+
+```Python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+```
+
+### Prepare scripts
+
+In this tutorial, the [training script **train_iris.py**](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/scikit-learn/train-hyperparameter-tune-deploy-with-sklearn/train_iris.py) is already provided for you. In practice, you should be able to take any custom training script as is and run it with Azure ML without having to modify your code.
+
+> [!NOTE]
+> - The provided training script shows how to log some metrics to your Azure ML run using the `Run` object within the script.
+> - The provided training script uses example data from the `iris = datasets.load_iris()` function. To use and access your own data, see [how to train with datasets](how-to-train-with-datasets.md) to make data available during training.
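+
+As a minimal sketch of the metric logging the note above refers to, a training script can log values through the run context; the metric name and value here are illustrative.
+
+```python
+from azureml.core.run import Run
+
+# Get the run context inside the training script submitted to Azure ML
+run = Run.get_context()
+run.log('accuracy', 0.92)  # illustrative metric name and value
+```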
+
+### Define your environment
+
+To define the Azure ML [Environment](../concept-environments.md) that encapsulates your training script's dependencies, you can either define a custom environment or use an Azure ML curated environment.
+
+#### Use a curated environment
+Optionally, Azure ML provides prebuilt, [curated environments](../resource-curated-environments.md) if you don't want to define your own environment.
+
+If you want to use a curated environment, you can run the following command instead:
+
+```python
+from azureml.core import Environment
+
+sklearn_env = Environment.get(workspace=ws, name='AzureML-Tutorial')
+```
+
+#### Create a custom environment
+
+You can also create your own custom environment. Define your conda dependencies in a YAML file; in this example the file is named `conda_dependencies.yml`.
+
+```yaml
+dependencies:
+ - python=3.6.2
+ - scikit-learn
+ - numpy
+ - pip:
+ - azureml-defaults
+```
+
+Create an Azure ML environment from this Conda environment specification. The environment will be packaged into a Docker container at runtime.
+```python
+from azureml.core import Environment
+
+sklearn_env = Environment.from_conda_specification(name='sklearn-env', file_path='conda_dependencies.yml')
+```
+
+For more information on creating and using environments, see [Create and use software environments in Azure Machine Learning](how-to-use-environments.md).
+
+## Configure and submit your training run
+
+### Create a ScriptRunConfig
+Create a ScriptRunConfig object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on.
+Any arguments to your training script will be passed via command line if specified in the `arguments` parameter.
+
+The following code will configure a ScriptRunConfig object for submitting your job for execution on your local machine.
+
+```python
+from azureml.core import ScriptRunConfig
+
+src = ScriptRunConfig(source_directory='.',
+ script='train_iris.py',
+ arguments=['--kernel', 'linear', '--penalty', 1.0],
+ environment=sklearn_env)
+```
+
+If you want to instead run your job on a remote cluster, you can specify the desired compute target to the `compute_target` parameter of ScriptRunConfig.
+
+```python
+from azureml.core import ScriptRunConfig
+
+compute_target = ws.compute_targets['<my-cluster-name>']
+src = ScriptRunConfig(source_directory='.',
+ script='train_iris.py',
+ arguments=['--kernel', 'linear', '--penalty', 1.0],
+ compute_target=compute_target,
+ environment=sklearn_env)
+```
+
+### Submit your run
+```python
+from azureml.core import Experiment
+
+run = Experiment(ws,'Tutorial-TrainIRIS').submit(src)
+run.wait_for_completion(show_output=True)
+```
+
+> [!WARNING]
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](../how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
+
+### What happens during run execution
+As the run is executed, it goes through the following stages:
+
+- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the run history and can be viewed to monitor progress. If a curated environment is specified instead, the cached image backing that curated environment will be used.
+
+- **Scaling**: The cluster attempts to scale up if the Batch AI cluster requires more nodes to execute the run than are currently available.
+
+- **Running**: All scripts in the script folder are uploaded to the compute target, data stores are mounted or copied, and the `script` is executed. Outputs from stdout and the **./logs** folder are streamed to the run history and can be used to monitor the run.
+
+- **Post-Processing**: The **./outputs** folder of the run is copied over to the run history.
+
+## Save and register the model
+
+Once you've trained the model, you can save and register it to your workspace. Model registration lets you store and version your models in your workspace to simplify [model management and deployment](concept-model-management-and-deployment.md).
+
+Add the following code to your training script, train_iris.py, to save the model.
+
+``` Python
+import joblib
+
+joblib.dump(svm_model_linear, 'model.joblib')
+```
+
+Register the model to your workspace with the following code. By specifying the parameters `model_framework`, `model_framework_version`, and `resource_configuration`, no-code model deployment becomes available. No-code model deployment allows you to directly deploy your model as a web service from the registered model, and the [`ResourceConfiguration`](/python/api/azureml-core/azureml.core.resource_configuration.resourceconfiguration) object defines the compute resource for the web service.
+
+```Python
+from azureml.core import Model
+from azureml.core.resource_configuration import ResourceConfiguration
+
+model = run.register_model(model_name='sklearn-iris',
+ model_path='outputs/model.joblib',
+ model_framework=Model.Framework.SCIKITLEARN,
+ model_framework_version='0.19.1',
+ resource_configuration=ResourceConfiguration(cpu=1, memory_in_gb=0.5))
+```
+
+## Deployment
+
+The model you just registered can be deployed the exact same way as any other registered model in Azure ML. The deployment how-to contains a section on registering models, but you can skip directly to [creating a compute target](how-to-deploy-and-where.md#choose-a-compute-target) for deployment, since you already have a registered model.
+
+### (Preview) No-code model deployment
+
+Instead of the traditional deployment route, you can also use the no-code deployment feature (preview) for scikit-learn. No-code model deployment is supported for all built-in scikit-learn model types. By registering your model as shown above with the `model_framework`, `model_framework_version`, and `resource_configuration` parameters, you can simply use the [`deploy()`](/python/api/azureml-core/azureml.core.model%28class%29#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) static function to deploy your model.
+
+```python
+web_service = Model.deploy(ws, "scikit-learn-service", [model])
+```
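+
+As a follow-up sketch, you can wait for the deployment to finish and retrieve the scoring endpoint; printing the URI here is just for illustration.
+
+```python
+web_service.wait_for_deployment(show_output=True)
+print(web_service.scoring_uri)
+```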
+
+> [!NOTE]
+> These dependencies are included in the pre-built scikit-learn inference container.
+
+```yaml
+ - azureml-defaults
+ - inference-schema[numpy-support]
+ - scikit-learn
+ - numpy
+```
+
+The full [how-to](how-to-deploy-and-where.md) covers deployment in Azure Machine Learning in greater depth.
++
+## Next steps
+
+In this article, you trained and registered a scikit-learn model, and learned about deployment options. See these other articles to learn more about Azure Machine Learning.
+
+* [Track run metrics during training](../how-to-log-view-metrics.md)
+* [Tune hyperparameters](../how-to-tune-hyperparameters.md)
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-tensorflow.md
+
+ Title: Train and deploy a TensorFlow model (SDK v1)
+
+description: Learn how Azure Machine Learning SDK (v1) enables you to scale out a TensorFlow training job using elastic cloud compute resources.
+++++ Last updated : 02/23/2022++
+#Customer intent: As a TensorFlow developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
++
+# Train TensorFlow models at scale with Azure Machine Learning SDK (v1)
++
+In this article, learn how to run your [TensorFlow](https://www.tensorflow.org/overview) training scripts at scale using Azure Machine Learning.
+
+This example trains and registers a TensorFlow model to classify handwritten digits using a deep neural network (DNN).
+
+Whether you're developing a TensorFlow model from the ground-up or you're bringing an [existing model](how-to-deploy-and-where.md) into the cloud, you can use Azure Machine Learning to scale out open-source training jobs to build, deploy, version, and monitor production-grade models.
+
+## Prerequisites
+
+Run this code on either of these environments:
+
+- Azure Machine Learning compute instance - no downloads or installation necessary
+ - Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
+ - In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **how-to-use-azureml > ml-frameworks > tensorflow > train-hyperparameter-tune-deploy-with-tensorflow** folder.
+
+- Your own Jupyter Notebook server
+ - [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) (>= 1.15.0).
+ - [Create a workspace configuration file](../how-to-configure-environment.md#workspace).
+ - [Download the sample script files](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow) `tf_mnist.py` and `utils.py`
+
+ You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning, model deployment, and notebook widgets.
++
+## Set up the experiment
+
+This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating the compute target, and defining the training environment.
+
+### Import packages
+
+First, import the necessary Python libraries.
+
+```Python
+import os
+import urllib
+import shutil
+import azureml
+
+from azureml.core import Experiment
+from azureml.core import Workspace, Run
+from azureml.core import Environment
+
+from azureml.core.compute import ComputeTarget, AmlCompute
+from azureml.core.compute_target import ComputeTargetException
+```
+
+### Initialize a workspace
+
+The [Azure Machine Learning workspace](../concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create. In the Python SDK, you can access the workspace artifacts by creating a [`workspace`](/python/api/azureml-core/azureml.core.workspace.workspace) object.
+
+Create a workspace object from the `config.json` file created in the [prerequisites section](#prerequisites).
+
+```Python
+ws = Workspace.from_config()
+```
+
+### Create a file dataset
+
+A `FileDataset` object references one or multiple files in your workspace datastore or public urls. The files can be of any format, and the class provides you with the ability to download or mount the files to your compute. By creating a `FileDataset`, you create a reference to the data source location. If you applied any transformations to the data set, they'll be stored in the data set as well. The data remains in its existing location, so no extra storage cost is incurred. For more information on the `Dataset` package, see the [How to create register datasets article](how-to-create-register-datasets.md).
+
+```python
+from azureml.core.dataset import Dataset
+
+web_paths = [
+ 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
+ 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
+ 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
+ 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
+ ]
+dataset = Dataset.File.from_files(path = web_paths)
+```
+
+Use the `register()` method to register the data set to your workspace so it can be shared with others, reused across various experiments, and referred to by name in your training script.
+
+```python
+dataset = dataset.register(workspace=ws,
+ name='mnist-dataset',
+ description='training and test dataset',
+ create_new_version=True)
+
+# list the files referenced by dataset
+dataset.to_path()
+```
+
+### Create a compute target
+
+Create a compute target for your TensorFlow job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.
++
+```Python
+cluster_name = "gpu-cluster"
+
+try:
+ compute_target = ComputeTarget(workspace=ws, name=cluster_name)
+ print('Found existing compute target')
+except ComputeTargetException:
+ print('Creating a new compute target...')
+ compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
+ max_nodes=4)
+
+ compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
+
+ compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
+```
++
+For more information on compute targets, see the [what is a compute target](../concept-compute-target.md) article.
+
+### Define your environment
+
+To define the Azure ML [Environment](../concept-environments.md) that encapsulates your training script's dependencies, you can either define a custom environment or use an Azure ML curated environment.
+
+#### Use a curated environment
+
+Azure ML provides prebuilt, curated environments if you don't want to define your own environment. Azure ML has several CPU and GPU curated environments for TensorFlow corresponding to different versions of TensorFlow. For more info, see [Azure ML Curated Environments](../resource-curated-environments.md).
+
+If you want to use a curated environment, you can run the following command instead:
+
+```python
+curated_env_name = 'AzureML-TensorFlow-2.2-GPU'
+tf_env = Environment.get(workspace=ws, name=curated_env_name)
+```
+
+To see the packages included in the curated environment, you can write out the conda dependencies to disk:
+
+```python
+
+tf_env.save_to_directory(path=curated_env_name)
+```
+
+Make sure the curated environment includes all the dependencies required by your training script. If not, you'll have to modify the environment to include the missing dependencies. If the environment is modified, you'll have to give it a new name, as the 'AzureML' prefix is reserved for curated environments. If you modified the conda dependencies YAML file, you can create a new environment from it with a new name, for example:
+
+```python
+
+tf_env = Environment.from_conda_specification(name='tensorflow-2.2-gpu', file_path='./conda_dependencies.yml')
+```
+
+If you had instead modified the curated environment object directly, you can clone that environment with a new name:
+
+```python
+
+tf_env = tf_env.clone(new_name='tensorflow-2.2-gpu')
+```
+
+#### Create a custom environment
+
+You can also create your own Azure ML environment that encapsulates your training script's dependencies.
+
+First, define your conda dependencies in a YAML file; in this example the file is named `conda_dependencies.yml`.
+
+```yaml
+channels:
+- conda-forge
+dependencies:
+- python=3.6.2
+- pip:
+ - azureml-defaults
+ - tensorflow-gpu==2.2.0
+```
+
+Create an Azure ML environment from this conda environment specification. The environment will be packaged into a Docker container at runtime.
+
+By default, if no base image is specified, Azure ML will use a CPU image `azureml.core.environment.DEFAULT_CPU_IMAGE` as the base image. Since this example runs training on a GPU cluster, you'll need to specify a GPU base image that has the necessary GPU drivers and dependencies. Azure ML maintains a set of base images published on Microsoft Container Registry (MCR) that you can use. For more information, see the [Azure/AzureML-Containers GitHub repo](https://github.com/Azure/AzureML-Containers).
+
+```python
+tf_env = Environment.from_conda_specification(name='tensorflow-2.2-gpu', file_path='./conda_dependencies.yml')
+
+# Specify a GPU base image
+tf_env.docker.enabled = True
+tf_env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04'
+```
+
+> [!TIP]
+> Optionally, you can just capture all your dependencies directly in a custom Docker image or Dockerfile, and create your environment from that. For more information, see [Train with custom image](../how-to-train-with-custom-image.md).
+
+For more information on creating and using environments, see [Create and use software environments in Azure Machine Learning](how-to-use-environments.md).
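+
+Optionally, if you expect to reuse the same environment across multiple training runs, you can register it to your workspace and fetch it back by name later. A minimal sketch, assuming the custom `tf_env` environment and the workspace object `ws` from the steps above (curated environments are already available by name):
+
+```python
+from azureml.core import Environment
+
+# Register the environment in the workspace so it can be reused and versioned
+tf_env.register(workspace=ws)
+
+# Later, retrieve the registered environment by name
+restored_env = Environment.get(workspace=ws, name='tensorflow-2.2-gpu')
+```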
+
+## Configure and submit your training run
+
+### Create a ScriptRunConfig
+
+Create a [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on. Any arguments to your training script will be passed via command line if specified in the `arguments` parameter.
+
+```python
+from azureml.core import ScriptRunConfig
+
+args = ['--data-folder', dataset.as_mount(),
+ '--batch-size', 64,
+ '--first-layer-neurons', 256,
+ '--second-layer-neurons', 128,
+ '--learning-rate', 0.01]
+
+src = ScriptRunConfig(source_directory=script_folder,
+ script='tf_mnist.py',
+ arguments=args,
+ compute_target=compute_target,
+ environment=tf_env)
+```
+
+> [!WARNING]
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](../how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
+
+For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
+
+> [!WARNING]
+> If you were previously using the TensorFlow estimator to configure your TensorFlow training jobs, please note that Estimators have been deprecated as of the 1.19.0 SDK release. With Azure ML SDK >= 1.15.0, ScriptRunConfig is the recommended way to configure training jobs, including those using deep learning frameworks. For common migration questions, see the [Estimator to ScriptRunConfig migration guide](how-to-migrate-from-estimators-to-scriptrunconfig.md).
+
+### Submit a run
+
+The [Run object](/python/api/azureml-core/azureml.core.run%28class%29) provides the interface to the run history while the job is running and after it has completed.
+
+```Python
+run = Experiment(workspace=ws, name='Tutorial-TF-Mnist').submit(src)
+run.wait_for_completion(show_output=True)
+```
+
+### What happens during run execution
+
+As the run is executed, it goes through the following stages:
+
+- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the run history and can be viewed to monitor progress. If a curated environment is specified instead, the cached image backing that curated environment will be used.
+
+- **Scaling**: If the compute cluster requires more nodes to execute the run than are currently available, the cluster attempts to scale up.
+
+- **Running**: All scripts in the script folder are uploaded to the compute target, data stores are mounted or copied, and the `script` is executed. Outputs from stdout and the **./logs** folder are streamed to the run history and can be used to monitor the run.
+
+- **Post-Processing**: The **./outputs** folder of the run is copied over to the run history.
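+
+At any point during or after these stages, you can inspect the run from the `Run` object returned at submission. A minimal sketch, assuming the `run` object created in the previous step:
+
+```python
+# Check the current status of the run
+print(run.get_status())
+
+# List the log and output files captured in the run history
+print(run.get_file_names())
+
+# Show any metrics logged by the training script
+print(run.get_metrics())
+```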
+
+## Register or download a model
+
+Once you've trained the model, you can register it to your workspace. Model registration lets you store and version your models in your workspace to simplify [model management and deployment](concept-model-management-and-deployment.md).
+
+Optional: by specifying the parameters `model_framework`, `model_framework_version`, and `resource_configuration`, no-code model deployment becomes available. This allows you to directly deploy your model as a web service from the registered model, and the `ResourceConfiguration` object defines the compute resource for the web service.
+
+```Python
+from azureml.core import Model
+from azureml.core.resource_configuration import ResourceConfiguration
+
+model = run.register_model(model_name='tf-mnist',
+ model_path='outputs/model',
+ model_framework=Model.Framework.TENSORFLOW,
+ model_framework_version='2.0',
+ resource_configuration=ResourceConfiguration(cpu=1, memory_in_gb=0.5))
+```
+
+You can also download a local copy of the model by using the Run object. In the training script `tf_mnist.py`, a TensorFlow saver object persists the model to a local folder (local to the compute target). You can use the Run object to download a copy.
+
+```Python
+import os
+
+# Create a model folder in the current directory
+os.makedirs('./model', exist_ok=True)
+run.download_files(prefix='outputs/model', output_directory='./model', append_prefix=False)
+```
+
+## Distributed training
+
+Azure Machine Learning also supports multi-node distributed TensorFlow jobs so that you can scale your training workloads. You can easily run distributed TensorFlow jobs and Azure ML will manage the orchestration for you.
+
+Azure ML supports running distributed TensorFlow jobs with both Horovod and TensorFlow's built-in distributed training API.
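+
+For illustration, the following sketch shows one way to configure a run that uses TensorFlow's built-in (parameter server) distributed training API, by passing a `TensorflowConfiguration` as the `distributed_job_config` of the `ScriptRunConfig`. The worker and parameter server counts are example values, and the training script itself must implement the matching distribution strategy:
+
+```python
+from azureml.core import ScriptRunConfig
+from azureml.core.runconfig import TensorflowConfiguration
+
+# Example values only; script_folder, compute_target, and tf_env come from the earlier steps
+distr_config = TensorflowConfiguration(worker_count=2, parameter_server_count=1)
+
+src = ScriptRunConfig(source_directory=script_folder,
+                      script='tf_mnist.py',
+                      compute_target=compute_target,
+                      environment=tf_env,
+                      distributed_job_config=distr_config)
+```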
+
+For more information about distributed training, see the [Distributed GPU training guide](how-to-train-distributed-gpu.md).
+
+## Deploy a TensorFlow model
+
+The deployment how-to contains a section on registering models, but you can skip directly to [creating a compute target](how-to-deploy-and-where.md#choose-a-compute-target) for deployment, since you already have a registered model.
+
+### (Preview) No-code model deployment
+
+Instead of the traditional deployment route, you can also use the no-code deployment feature (preview) for TensorFlow. By registering your model as shown above with the `model_framework`, `model_framework_version`, and `resource_configuration` parameters, you can use the `deploy()` static function to deploy your model.
+
+```python
+service = Model.deploy(ws, "tensorflow-web-service", [model])
+```
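+
+Optionally, you can wait for the deployment to finish and then print the state and scoring endpoint of the resulting web service. A short sketch, assuming the `service` object returned above:
+
+```python
+# Wait for the deployment to complete, then inspect the endpoint
+service.wait_for_deployment(show_output=True)
+print(service.state)
+print(service.scoring_uri)
+```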
+
+The full [how-to](how-to-deploy-and-where.md) covers deployment in Azure Machine Learning in greater depth.
+
+## Next steps
+
+In this article, you trained and registered a TensorFlow model, and learned about options for deployment. See these other articles to learn more about Azure Machine Learning.
+
+- [Track run metrics during training](how-to-log-view-metrics.md)
+- [Tune hyperparameters](../how-to-tune-hyperparameters.md)
+- [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Workspace Diagnostic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-workspace-diagnostic-api.md
+
+ Title: Workspace diagnostics (v1)
+
+description: Learn how to use Azure Machine Learning workspace diagnostics with the Python SDK v1.
+ Last updated : 09/14/2022
+# How to use workspace diagnostics (SDK v1)
+
+> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
+> * [v1](how-to-workspace-diagnostic-api.md)
+> * [v2 (current version)](../how-to-workspace-diagnostic-api.md)
+
+Azure Machine Learning provides a diagnostic API that can be used to identify problems with your workspace. Errors returned in the diagnostics report include information on how to resolve the problem.
+
+In this article, learn how to use the workspace diagnostics from the Azure Machine Learning Python SDK v1.
+
+## Prerequisites
+
+* An Azure Machine learning workspace. If you don't have one, see [Create a workspace](../quickstart-create-resources.md).
+* The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml).
+
+## Diagnostics from Python
+
+The following snippet demonstrates how to use workspace diagnostics from Python:
++
+```python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+
+diag_param = {
+ "value": {
+ }
+ }
+
+resp = ws.diagnose_workspace(diag_param)
+print(resp)
+```
+
+The response is a JSON document that contains information on any problems detected with the workspace. The following JSON is an example response:
+
+```json
+{
+ 'value': {
+ 'user_defined_route_results': [],
+ 'network_security_rule_results': [],
+ 'resource_lock_results': [],
+ 'dns_resolution_results': [{
+ 'code': 'CustomDnsInUse',
+ 'level': 'Warning',
+ 'message': "It is detected VNet '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>' of private endpoint '/subscriptions/<subscription-id>/resourceGroups/larrygroup0916/providers/Microsoft.Network/privateEndpoints/<workspace-private-endpoint>' is not using Azure default dns. You need to configure your DNS server and check https://docs.microsoft.com/azure/machine-learning/how-to-custom-dns to make sure the custom dns is set up correctly."
+ }],
+ 'storage_account_results': [],
+ 'key_vault_results': [],
+ 'container_registry_results': [],
+ 'application_insights_results': [],
+ 'other_results': []
+ }
+}
+```
+
+If no problems are detected, an empty JSON document is returned.
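+
+If you prefer to process the results programmatically rather than reading the raw output, you can loop over the result categories. A minimal sketch, assuming `resp` has the dictionary structure shown above:
+
+```python
+# Print only the categories that reported problems
+for category, results in resp.get('value', {}).items():
+    for result in results:
+        print(f"{category}: [{result.get('level')}] {result.get('code')}")
+        print(f"    {result.get('message')}")
+```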
+
+For more information, see the [Workspace.diagnose_workspace()](/python/api/azureml-core/azureml.core.workspace(class)#diagnose-workspace-diagnose-parameters-) reference.
+
+## Next steps
+
+* [Workspace.diagnose_workspace()](/python/api/azureml-core/azureml.core.workspace(class)#diagnose-workspace-diagnose-parameters-)
+* [How to manage workspaces in portal or SDK](../how-to-manage-workspace.md)
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
Managed Grafana has the following known limitations:
* All users must have accounts in an Azure Active Directory. Microsoft (also known as MSA) and 3rd-party accounts aren't supported. As a workaround, use the default tenant of your Azure subscription with your Grafana instance and add other users as guests.
-* Installing, uninstalling and upgrading plugins from the Grafana Catalog aren't allowed.
+* Installing, uninstalling and upgrading plugins from the Grafana Catalog isn't possible.
* Data source query results are capped at 80 MB. To mitigate this constraint, reduce the size of the query, for example, by shortening the time duration.
marketplace Azure App Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-solution.md
Previously updated : 07/05/2021 Last updated : 9/14/2022 # Configure a solution template plan
marketplace Azure App Test Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-test-publish.md
Previously updated : 09/27/2021 Last updated : 9/14/2022 # Test and publish an Azure application offer
marketplace Commercial Marketplace Lead Management Instructions Dynamics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md
Previously updated : 03/30/2020 Last updated : 9/14/2022 # Configure lead management for Dynamics 365 Customer Engagement
migrate Add Server Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/add-server-credentials.md
ms. Previously updated : 03/18/2021 Last updated : 09/12/2022 # Provide server credentials to discover software inventory, dependencies, web apps, and SQL Server instances and databases
-Follow this article to learn how to add multiple server credentials on the appliance configuration manager to perform software inventory (discover installed applications), agentless dependency analysis and discover web apps, and SQL Server instances and databases.
+Follow this article to learn how to add multiple server credentials on the appliance configuration manager to perform software inventory (discover installed applications), agentless dependency analysis, and discover web apps, SQL Server instances and databases.
The [Azure Migrate appliance](migrate-appliance.md) is a lightweight appliance used by Azure Migrate: Discovery and assessment to discover on-premises servers and send server configuration and performance metadata to Azure. The appliance can also be used to perform software inventory, agentless dependency analysis, and discovery of web apps, SQL Server instances, and databases. > [!Note]
-> Currently the discovery of web apps and SQL Server instances and databases is only available in appliance used for discovery and assessment of servers running in VMware environment.
+> Currently, the discovery of ASP.NET web apps is only available in the appliance used for discovery and assessment of servers running in a VMware environment.
-If you want to use these features, you can provide server credentials by following the steps below. In case of servers running on vCenter Server(s) and Hyper-V host(s)/cluster(s), the appliance will attempt to automatically map the credentials to the servers to perform the discovery features.
+If you want to use these features, you can provide server credentials by following the steps below. For servers running on vCenter Server(s) and Hyper-V host(s)/cluster(s), the appliance will attempt to automatically map the credentials to the servers to perform the discovery features.
## Add server credentials
The types of server credentials supported are listed in the table below:
Type of credentials | Description |
-**Domain credentials** | You can add **Domain credentials** by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> To provide domain credentials, you need to specify the **Domain name** which must be provided in the FQDN format (for example, prod.corp.contoso.com). <br/><br/> You also need to specify a friendly name for credentials, username, and password. It is recommended to provide the credentials in the UPN format, for example, user1@contoso.com. <br/><br/> The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. <br/><br/>For the appliance to validate the domain credentials with the domain controller, it should be able to resolve the domain name. Ensure that you have provided the correct domain name while adding the credentials else the validation will fail.<br/><br/> The appliance will not attempt to map the domain credentials that have failed validation. You need to have at least one successfully validated domain credential or at least one non-domain credential to start the discovery.<br/><br/>The domain credentials mapped automatically against the Windows servers will be used to perform software inventory and can also be used to discover web apps, and SQL Server instances and databases _(if you have configured Windows authentication mode on your SQL Servers)_.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.
+**Domain credentials** | You can add **Domain credentials** by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> To provide domain credentials, you need to specify the **Domain name** which must be provided in the FQDN format (for example, prod.corp.contoso.com). <br/><br/> You also need to specify a friendly name for credentials, username, and password. It's recommended to provide the credentials in the UPN format, for example, user1@contoso.com. <br/><br/> The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. <br/><br/> To validate the domain credentials with the domain controller, the appliance should be able to resolve the domain name. Ensure that you've provided the correct domain name while adding the credentials else the validation will fail.<br/><br/> The appliance won't attempt to map the domain credentials that have failed validation. You need to have at least one successfully validated domain credential or at least one non-domain credential to start the discovery.<br/><br/>The domain credentials mapped automatically against the Windows servers will be used to perform software inventory and can also be used to discover web apps, and SQL Server instances and databases _(if you've configured Windows authentication mode on your SQL Servers)_.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.
**Non-domain credentials (Windows/Linux)** | You can add **Windows (Non-domain)** or **Linux (Non-domain)** by selecting the required option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password.
-**SQL Server Authentication credentials** | You can add **SQL Server Authentication** credentials by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password. <br/><br/> You can add this type of credentials to discover SQL Server instances and databases running in your VMware environment, if you have configured SQL Server authentication mode on your SQL Servers.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.<br/><br/> You need to provide at least one successfully validated domain credential or at least one Windows (Non-domain) credential so that the appliance can complete the software inventory to discover SQL installed on the servers before it uses the SQL Server authentication credentials to discover the SQL Server instances and databases.
+**SQL Server Authentication credentials** | You can add **SQL Server Authentication** credentials by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password. <br/><br/> You can add this type of credentials to discover SQL Server instances and databases running in your VMware environment, if you've configured SQL Server authentication mode on your SQL Servers.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.<br/><br/> You need to provide at least one successfully validated domain credential or at least one Windows (Non-domain) credential so that the appliance can complete the software inventory to discover SQL installed on the servers before it uses the SQL Server authentication credentials to discover the SQL Server instances and databases.
> [!Note]
-> Currently the SQL Server authentication credentials can only be provided in appliance used for discovery and assessment of servers running in VMware environment.
+> Currently, the SQL Server authentication credentials can only be provided in the appliance used for discovery and assessment of servers running in a VMware environment.
Check the permissions required on the Windows/Linux credentials to perform the software inventory, agentless dependency analysis, and discovery of web apps, SQL Server instances, and databases.
Feature | Windows credentials | Linux credentials
### Recommended practices to provide credentials -- It is recommended to create a dedicated domain user account with the [required permissions](add-server-credentials.md#required-permissions), which is scoped to perform software inventory, agentless dependency analysis and discovery of web app, and SQL Server instances and databases on the desired servers.-- It is recommended to provide at least one successfully validated domain credential or at least one non-domain credential to initiate software inventory.-- To discover SQL Server instances and databases, you can provide domain credentials, if you have configured Windows authentication mode on your SQL Servers.-- You can also provide SQL Server authentication credentials if you have configured SQL Server authentication mode on your SQL Servers but it is recommended to provide at least one successfully validated domain credential or at least one Windows (Non-domain) credential so that the appliance can first complete the software inventory.
+- It's recommended to create a dedicated domain user account with the [required permissions](add-server-credentials.md#required-permissions), which is scoped to perform software inventory, agentless dependency analysis and discovery of web app, and SQL Server instances and databases on the desired servers.
+- It's recommended to provide at least one successfully validated domain credential or at least one non-domain credential to initiate software inventory.
+- To discover SQL Server instances and databases, you can provide domain credentials, if you've configured Windows authentication mode on your SQL Servers.
+- You can also provide SQL Server authentication credentials if you've configured SQL Server authentication mode on your SQL Servers but it's recommended to provide at least one successfully validated domain credential or at least one Windows (Non-domain) credential so that the appliance can first complete the software inventory.
## Credentials handling on appliance - All the credentials provided on the appliance configuration manager are stored locally on the appliance server and not sent to Azure. - The credentials stored on the appliance server are encrypted using Data Protection API (DPAPI).-- After you have added credentials, appliance attempts to automatically map the credentials to perform discovery on the respective servers.
+- After you've added credentials, the appliance attempts to automatically map the credentials to perform discovery on the respective servers.
- The appliance uses the credentials automatically mapped on a server for all the subsequent discovery cycles until the credentials are able to fetch the required discovery data. If the credentials stop working, appliance again attempts to map from the list of added credentials and continue the ongoing discovery on the server.-- The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. The appliance will not attempt to map the domain credentials that have failed validation.-- If the appliance cannot map any domain or non-domain credentials against a server, you will see "Credentials not available" status against the server in your project.
+- The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. The appliance won't attempt to map the domain credentials that have failed validation.
+- If the appliance can't map any domain or non-domain credentials against a server, you'll see "Credentials not available" status against the server in your project.
## Next steps
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
To add server credentials:
1. Select **Add Credentials**. 1. In the dropdown menu, select **Credentials type**.
- You can provide domain/, Windows(non-domain)/, Linux(non-domain)/, and SQL Server authentication credentials. Learn how to [provide credentials](add-server-credentials.md) and how we handle them.
+ You can provide domain, Windows(non-domain), Linux(non-domain), and SQL Server authentication credentials. Learn how to [provide credentials](add-server-credentials.md) and how we handle them.
1. For each type of credentials, enter: * A friendly name. * A username.
mysql Azure Pipelines Mysql Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/azure-pipelines-mysql-deploy.md
+
+ Title: Azure Pipelines task for Azure Database for MySQL Single Server
+description: Enable the Azure Database for MySQL Deployment task for use with Azure Pipelines.
+ Last updated : 09/14/2022
+# Azure Pipelines for Azure Database for MySQL Single Server
+
+Get started with Azure Database for MySQL by deploying a database update with Azure Pipelines. Azure Pipelines lets you build, test, and deploy with continuous integration (CI) and continuous delivery (CD) using [Azure DevOps](/azure/devops/).
+
+You'll use the [Azure Database for MySQL Deployment task](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment). The Azure Database for MySQL Deployment task only works with Azure Database for MySQL Single Server.
+
+## Prerequisites
+
+Before you begin, you need:
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An active Azure DevOps organization. [Sign up for Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-sign-up).
+- A GitHub repository that you can use for your pipeline. If you don't have an existing repository, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline).
+
+This quickstart uses the resources created in either of these guides as a starting point:
+- [Create an Azure Database for MySQL server using Azure portal](/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal)
+- [Create an Azure Database for MySQL server using Azure CLI](/azure/mysql/quickstart-create-mysql-server-database-using-azure-cli)
++
+## Create your pipeline
+
+You'll use the basic starter pipeline as a basis for your pipeline.
+
+1. Sign in to your Azure DevOps organization and go to your project.
+
+2. In your project, navigate to the **Pipelines** page. Then choose the action to create a new pipeline.
+
+3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
+
+4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
+
+5. When the list of repositories appears, select your desired repository.
+
+6. Azure Pipelines will analyze your repository and offer configuration options. Select **Starter pipeline**.
+
+ :::image type="content" source="media/azure-pipelines-mysql-task/configure-pipeline-option.png" alt-text="Screenshot of Select Starter pipeline.":::
+
+## Create a secret
+
+You'll need to know your database server name, SQL username, and SQL password to use with the [Azure Database for MySQL Deployment task](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment).
+
+For security, you'll want to save your SQL password as a secret variable in the pipeline settings UI for your pipeline.
+
+1. Go to the **Pipelines** page, select the appropriate pipeline, and then select **Edit**.
+1. Select **Variables**.
+1. Add a new variable named `SQLpass` and select **Keep this value secret** to encrypt and save the variable.
+
+ :::image type="content" source="media/azure-pipelines-mysql-task/save-secret-variable.png" alt-text="Screenshot of adding a secret variable.":::
+
+1. Select **Ok** and **Save** to add the variable.
+
+## Verify permissions for your database
+
+To access your MySQL database with Azure Pipelines, you need to set your database to accept connections from all Azure resources.
+
+1. In the Azure portal, open your database resource.
+1. Select **Connection security**.
+1. Toggle **Allow access to Azure services** to **Yes**.
+
+ :::image type="content" source="media/azure-pipelines-mysql-task/allow-azure-access-mysql.png" alt-text="Screenshot of setting MySQL to allow Azure connections.":::
+
+## Add the Azure Database for MySQL Deployment task
+
+In this example, we'll create a new database named `quickstartdb` and add an inventory table. The inline SQL script will:
+
+- Delete `quickstartdb` if it exists and create a new `quickstartdb` database.
+- Delete the table `inventory` if it exists and create a new `inventory` table.
+- Insert three rows into `inventory`.
+- Show all the rows.
+- Update the value of the first row in `inventory`.
+- Delete the second row in `inventory`.
+
+You'll need to replace the following values in your deployment task.
+
+|Input |Description |Example |
+||||
+|`azureSubscription` | Authenticate with your Azure Subscription with a [service connection](/azure/devops/pipelines/library/connect-to-azure). | `My Subscription` |
+|`ServerName` | The name of your Azure Database for MySQL server. | `fabrikam.mysql.database.azure.com` |
+|`SqlUsername` | The user name of your Azure Database for MySQL. | `mysqladmin@fabrikam` |
+|`SqlPassword` | The password for the username. This should be defined as a secret variable. | `$(SQLpass)` |
+
+```yaml
+
+trigger:
+- main
+
+pool:
+ vmImage: ubuntu-latest
+
+steps:
+- task: AzureMysqlDeployment@1
+ inputs:
+ azureSubscription: '<your-subscription>'
+ ServerName: '<db>.mysql.database.azure.com'
+ SqlUsername: '<username>@<db>'
+ SqlPassword: '$(SQLpass)'
+ TaskNameSelector: 'InlineSqlTask'
+ SqlInline: |
+ DROP DATABASE IF EXISTS quickstartdb;
+ CREATE DATABASE quickstartdb;
+ USE quickstartdb;
+
+ -- Create a table and insert rows
+ DROP TABLE IF EXISTS inventory;
+ CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);
+ INSERT INTO inventory (name, quantity) VALUES ('banana', 150);
+ INSERT INTO inventory (name, quantity) VALUES ('orange', 154);
+ INSERT INTO inventory (name, quantity) VALUES ('apple', 100);
+
+ -- Read
+ SELECT * FROM inventory;
+
+ -- Update
+ UPDATE inventory SET quantity = 200 WHERE id = 1;
+ SELECT * FROM inventory;
+
+ -- Delete
+ DELETE FROM inventory WHERE id = 2;
+ SELECT * FROM inventory;
+ IpDetectionMethod: 'AutoDetect'
+```
+
+## Deploy and verify resources
+
+Select **Save and run** to deploy your pipeline. The pipeline job will be launched, and after a few minutes the job status should indicate `Success`.
+
+You can verify that your pipeline ran successfully within the `AzureMysqlDeployment` task in the pipeline run.
+
+Open the task and verify that the last two entries show two rows in `inventory`. There are two rows because the second row has been deleted.
+++
+## Clean up resources
+
+When you're done working with your pipeline, delete `quickstartdb` in your Azure Database for MySQL. You can also delete the deployment pipeline you created.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Build an ASP.NET Core and Azure SQL Database app in Azure App Service](/azure/app-service/tutorial-dotnetcore-sqldb-app)
openshift Howto Gpu Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-gpu-workloads.md
ARO supports the following GPU workers:
* NC4as T4 v3 * NC8as T4 v3 * NC16as T4 v3
-* NC464as T4 v3
+* NC64as T4 v3
> [!NOTE] > When requesting quota, remember that Azure is per core. To request a single NC4as T4 v3 node, you will need to request quota in groups of 4. If you wish to request an NC16as T4 v3, you will need to request quota of 16.
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|G*|Standard_G5|32|448| |G|Standard_GS5|32|448| |Mms|Standard_M128ms|128|3892|
-|NC4asT4v3|Standard_NC4as_T4_v3|4|28|
-|NC8asT4v3|Standard_NC8as_T4_v3|8|56|
-|NC16asT4v3|Standard_NC16as_T4_v3|16|110|
-|NC64asT4v3|Standard_NC64as_T4_v3|64|440|
\*Does not support Premium_LRS OS Disk, StandardSSD_LRS is used instead
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Mms|Standard_M128ms|128|3892| ### Storage optimized- |Series|Size|vCPU|Memory: GiB| |-|-|-|-| |L4s|Standard_L4s|4|32|
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|L48s_v2|Standard_L48s_v2|32|384| |L64s_v2|Standard_L64s_v2|64|512|
+### GPU workload
+|Series|Size|vCPU|Memory: GiB|
+|-|-|-|-|
+|NC4asT4v3|Standard_NC4as_T4_v3|4|28|
+|NC8asT4v3|Standard_NC8as_T4_v3|8|56|
+|NC16asT4v3|Standard_NC16as_T4_v3|16|110|
+|NC64asT4v3|Standard_NC64as_T4_v3|64|440|
+ ### Memory and storage optimized |Series|Size|vCPU|Memory: GiB|
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|G*|Standard_G5|32|448| |G|Standard_GS5|32|448|
-\*Does not support Premium_LRS OS Disk, StandardSSD_LRS is used instead
+\*Does not support Premium_LRS OS Disk, StandardSSD_LRS is used instead
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[dict_xsyn](https://www.postgresql.org/docs/13/dict-xsyn.html) | 1.0 | text search dictionary template for extended synonym processing| > |[earthdistance](https://www.postgresql.org/docs/13/earthdistance.html) | 1.1 | calculate great-circle distances on the surface of the Earth| > |[fuzzystrmatch](https://www.postgresql.org/docs/13/fuzzystrmatch.html) | 1.1 | determine similarities and distance between strings|
+>|[hypopg](https://github.com/HypoPG/hypopg) | 1.3.1 | extension adding support for hypothetical indexes |
> |[hstore](https://www.postgresql.org/docs/13/hstore.html) | 1.7 | data type for storing sets of (key, value) pairs| > |[intagg](https://www.postgresql.org/docs/13/intagg.html) | 1.1 | integer aggregator and enumerator. (Obsolete)| > |[intarray](https://www.postgresql.org/docs/13/intarray.html) | 1.3 | functions, operators, and index support for 1-D arrays of integers|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[dict_xsyn](https://www.postgresql.org/docs/13/dict-xsyn.html) | 1.0 | text search dictionary template for extended synonym processing| > |[earthdistance](https://www.postgresql.org/docs/13/earthdistance.html) | 1.1 | calculate great-circle distances on the surface of the Earth| > |[fuzzystrmatch](https://www.postgresql.org/docs/13/fuzzystrmatch.html) | 1.1 | determine similarities and distance between strings|
+>|[hypopg](https://github.com/HypoPG/hypopg) | 1.3.1 | extension adding support for hypothetical indexes |
> |[hstore](https://www.postgresql.org/docs/13/hstore.html) | 1.7 | data type for storing sets of (key, value) pairs| > |[intagg](https://www.postgresql.org/docs/13/intagg.html) | 1.1 | integer aggregator and enumerator. (Obsolete)| > |[intarray](https://www.postgresql.org/docs/13/intarray.html) | 1.3 | functions, operators, and index support for 1-D arrays of integers|
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-version-policy.md
Previously updated : 06/29/2022 Last updated : 09/14/2022
Azure Database for PostgreSQL supports the following database versions.
## Major version support
-Each major version of PostgreSQL will be supported by Azure Database for PostgreSQL from the date on which Azure begins supporting the version until the version is retired by the PostgreSQL community, as provided in the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
+Each major version of PostgreSQL will be supported by Azure Database for PostgreSQL from the date on which Azure begins supporting the version until the version is retired by the PostgreSQL community. Refer to [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
## Minor version support
Azure Database for PostgreSQL automatically performs minor version upgrades to t
The table below provides the retirement details for PostgreSQL major versions. The dates follow the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
-| Version | What's New | Azure support start date | Retirement date|
+| Version | What's New | Azure support start date | Retirement date (Azure)|
| -- | -- | | -- | | [PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021 | [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021 | [PostgreSQL 10](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022
-| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2023
+| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2024 [Single Server, Flexible Server] <br> Nov 9, 2023 [Hyperscale Citus]
| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024 | [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025 | [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | October 1, 2021 (Hyperscale Citus) <br> June 29, 2022 (Flexible Server)| November 12, 2026
+## PostgreSQL 11 support in Single Server and Flexible Server
+
+Azure is extending support for PostgreSQL 11 in Single Server and Flexible Server by one more year until **November 9, 2024**.
+
+- You will be able to create and use your PostgreSQL 11 servers until November 9, 2024 without any restrictions. This extended support gives you more time to plan and [migrate to Flexible Server](../migrate/concepts-single-to-flexible.md) on a higher PostgreSQL version.
+- Until November 9, 2023, Azure will continue to update your PostgreSQL 11 server with PostgreSQL community provided minor versions.
+- Between November 9, 2023 and November 9, 2024, you can continue to use your PostgreSQL 11 servers and create new PostgreSQL servers without any restrictions. However, other retired PostgreSQL engine [restrictions](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) apply.
+- Beyond Nov 9 2024, all retired PostgreSQL engine [restrictions](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) apply.
+
## Retired PostgreSQL engine versions not supported in Azure Database for PostgreSQL
-You may continue to run the retired version in Azure Database for PostgreSQL. However, please note the following restrictions after the retirement date for each PostgreSQL database version:
-- As the community will not be releasing any further bug fixes or security fixes, Azure Database for PostgreSQL will not patch the retired database engine for any bugs or security issues or otherwise take security measures with regard to the retired database engine. You may experience security vulnerabilities or other issues as a result. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
+You may continue to run the retired version in Azure Database for PostgreSQL. However, note the following restrictions after the retirement date for each PostgreSQL database version:
+- As the community will not be releasing any further bug fixes or security fixes, Azure Database for PostgreSQL will not patch the retired database engine for any bugs or security issues, or otherwise take security measures with regard to the retired database engine. You may experience security vulnerabilities or other issues as a result. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
- If any support issue you may experience relates to the PostgreSQL engine itself, as the community no longer provides the patches, we may not be able to provide you with support. In such cases, you will have to upgrade your database to one of the supported versions. - You will not be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers. - New service capabilities developed by Azure Database for PostgreSQL may only be available to supported database server versions. - Uptime SLAs will apply solely to Azure Database for PostgreSQL service-related issues and not to any downtime caused by database engine-related bugs. - In the extreme event of a serious threat to the service caused by the PostgreSQL database engine vulnerability identified in the retired database version, Azure may choose to stop your database server to secure the service. In such case, you will be notified to upgrade the server before bringing the server online.
+
## PostgreSQL version syntax Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number. For example, 9.5 to 9.6 was considered a _major_ version upgrade. As of version 10, only a change in the first number is considered a major version upgrade. For example, 10.0 to 10.1 is a _minor_ release upgrade. Version 10 to 11 is a _major_ version upgrade.
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table to define the packet core instance
|The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. | **N6 gateway** (for 5G) or **SGi gateway** (for 4G). | | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**| | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
- | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses). We recommend that you collect these addresses to allow the UEs to resolve domain names. </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network (for example, if you want to use this data network for local [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) only). | **DNS Addresses** |
+ | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses). You must collect these addresses to allow the UEs to resolve domain names. </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network. In this case, UEs in this data network will be unable to access the public internet. | **DNS Addresses** |
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses.</br></br>If you want to use [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) in this data network, keep NAPT disabled. |**NAPT**| ## Next steps
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
For each of these networks, allocate a subnet and then identify the listed IP ad
- Default gateway. - One IP address for port 6 on the Azure Stack Edge Pro device. - One IP address for the user plane interface. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface.-- Optionally, one or more Domain Name System (DNS) server addresses. ## Allocate user equipment (UE) IP address pools
For each site you're deploying, do the following:
- Decide whether you want to enable Network Address and Port Translation (NAPT) for the data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses.
+## Configure Domain Name System (DNS) servers
+
+> [!IMPORTANT]
+> If you don't configure DNS servers for a data network, all UEs using that network will be unable to resolve domain names and access the public internet.
+
+DNS allows the translation between human-readable domain names and their associated machine-readable IP addresses. Depending on your requirements, you have the following options for configuring a DNS server for your data network:
+
+- If you need the UEs connected to this data network to resolve domain names, you must configure one or more DNS servers. You must use a private DNS server if you need DNS resolution of internal hostnames. If you're only providing internet access to public DNS names, you can use a public or private DNS server.
+- If you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers (instead of the DNS servers signalled to them by the packet core), you can omit this configuration.
+ ## Prepare your networks For each site you're deploying, do the following.
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
Four Azure resources are defined in the template.
|**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. | | **Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. | | **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. |
- | **Dns Addresses** | Enter the DNS server addresses. You can omit this if you don't want to configure a DNS server for the UEs in this data network. |
+ | **Dns Addresses** | Enter the DNS server addresses. You should only omit this if the UEs in this data network don't need to access the public internet. |
| **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. | 1. Select **Review + create**.
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
The following Azure resources are defined in the template.
|**Data Network Name** | Enter the name of the data network. | |**Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. | |**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.|
- | **Dns Addresses** | Enter the DNS server addresses. You can omit this if you don't want to configure a DNS server for the UEs in this data network. |
+ | **Dns Addresses** | Enter the DNS server addresses. You should only omit this if the UEs in this data network don't need to access the public internet. |
|**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.| 1. Select **Review + create**.
purview Concept Best Practices Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-scanning.md
Previously updated : 10/08/2021 Last updated : 09/14/2022
purview Create Microsoft Purview Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-dotnet.md
+
+ Title: 'Quickstart: Create Microsoft Purview (formerly Azure Purview) account using .NET SDK'
+description: This article will guide you through creating a Microsoft Purview (formerly Azure Purview) account using .NET SDK.
+++
+ms.devlang: csharp
+ Last updated : 06/17/2022
+# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using .NET SDK
+
+In this quickstart, you'll use the [.NET SDK](/dotnet/api/overview/azure/purviewresourceprovider) to create a Microsoft Purview (formerly Azure Purview) account.
+
+The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure the right use of your data.
+
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
++
+### Visual Studio
+
+The walkthrough in this article uses Visual Studio 2019. The procedures for Visual Studio 2013, 2015, or 2017 may differ slightly.
+
+### Azure .NET SDK
+
+Download and install [Azure .NET SDK](https://azure.microsoft.com/downloads/) on your machine.
+
+## Create an application in Azure Active Directory
+
+1. In [Create an Azure Active Directory application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal), create an application that represents the .NET application you're creating in this tutorial. For the sign-on URL, you can provide a dummy URL as shown in the article (`https://contoso.org/exampleapp`).
+1. In [Get values for signing in](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in), get the **application ID** and **tenant ID**, and note down these values that you use later in this tutorial.
+1. In [Certificates and secrets](../active-directory/develop/howto-create-service-principal-portal.md#authentication-two-options), get the **authentication key**, and note down this value that you use later in this tutorial.
+1. In [Assign the application to a role](../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application), assign the application to the **Contributor** role at the subscription level so that the application can create Microsoft Purview accounts in the subscription.
+
+## Create a Visual Studio project
+
+Next, create a C# .NET console application in Visual Studio:
+
+1. Launch **Visual Studio**.
+2. In the Start window, select **Create a new project** > **Console App (.NET Framework)**. .NET version 4.5.2 or above is required.
+3. In **Project name**, enter **PurviewQuickStart**.
+4. Select **Create** to create the project.
+
+## Install NuGet packages
+
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console**.
+2. In the **Package Manager Console** pane, run the following commands to install packages. For more information, see the [Microsoft.Azure.Management.Purview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Management.Purview/).
+
+ ```powershell
+ Install-Package Microsoft.Azure.Management.Purview
+ Install-Package Microsoft.Azure.Management.ResourceManager -IncludePrerelease
+ Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
+ ```
+>[!TIP]
+> If you are getting an error that reads: **Package \<package name> is not found in the following primary source(s):** and it is listing a local folder, you need to update your package sources in Visual Studio to include the NuGet site as an online source.
+> 1. Go to **Tools**
+> 1. Select **NuGet Package Manager**
+> 1. Select **Package Manager Settings**
+> 1. Select **Package Sources**
+> 1. Add https://nuget.org/api/v2/ as a source.
+
+## Create a Microsoft Purview client
+
+1. Open **Program.cs**, include the following statements to add references to namespaces.
+
+ ```csharp
+ using System;
+ using System.Collections.Generic;
+ using System.Linq;
+ using Microsoft.Rest;
+ using Microsoft.Rest.Serialization;
+ using Microsoft.Azure.Management.ResourceManager;
+ using Microsoft.Azure.Management.Purview;
+ using Microsoft.Azure.Management.Purview.Models;
+ using Microsoft.IdentityModel.Clients.ActiveDirectory;
+ ```
+
+2. Add the following code to the **Main** method that sets the variables. Replace the placeholders with your own values. For a list of Azure regions in which Microsoft Purview is currently available, search on **Microsoft Purview** and select the regions that interest you on the following page: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+
+ ```csharp
+ // Set variables
+ string tenantID = "<your tenant ID>";
+ string applicationId = "<your application ID>";
+ string authenticationKey = "<your authentication key for the application>";
+    string subscriptionId = "<your subscription ID>";
+    string resourceGroup = "<your resource group name>";
+ string region = "<the location of your resource group>";
+ string purviewAccountName =
+ "<specify the name of purview account to create. It must be globally unique.>";
+ ```
+
+3. Add the following code to the **Main** method that creates an instance of the **PurviewManagementClient** class. You'll use this object to create a Microsoft Purview account.
+
+ ```csharp
+ // Authenticate and create a purview management client
+ var context = new AuthenticationContext("https://login.windows.net/" + tenantID);
+ ClientCredential cc = new ClientCredential(applicationId, authenticationKey);
+ AuthenticationResult result = context.AcquireTokenAsync(
+ "https://management.azure.com/", cc).Result;
+ ServiceClientCredentials cred = new TokenCredentials(result.AccessToken);
+ var client = new PurviewManagementClient(cred)
+ {
+ SubscriptionId = subscriptionId
+ };
+ ```
+
+## Create an account
+
+Add the following code to the **Main** method that creates the **Microsoft Purview account**.
+
+```csharp
+// Create a purview Account
+Console.WriteLine("Creating Microsoft Purview Account " + purviewAccountName + "...");
+Account account = new Account()
+{
+    Location = region,
+    Identity = new Identity(type: "SystemAssigned"),
+    Sku = new AccountSku(name: "Standard", capacity: 4)
+};
+try
+{
+    client.Accounts.CreateOrUpdate(resourceGroup, purviewAccountName, account);
+    Console.WriteLine(client.Accounts.Get(resourceGroup, purviewAccountName).ProvisioningState);
+}
+catch (ErrorResponseModelException purviewException)
+{
+    Console.WriteLine(purviewException.StackTrace);
+}
+Console.WriteLine(
+    SafeJsonConvert.SerializeObject(account, client.SerializationSettings));
+while (client.Accounts.Get(resourceGroup, purviewAccountName).ProvisioningState ==
+    "PendingCreation")
+{
+    System.Threading.Thread.Sleep(1000);
+}
+Console.WriteLine("\nPress any key to exit...");
+Console.ReadKey();
+```
+
+## Run the code
+
+Build and start the application, then verify the execution.
+
+The console prints the progress of creating the Microsoft Purview account.
+
+### Sample output
+
+```console
+Creating Microsoft Purview Account testpurview...
+Succeeded
+{
+ "sku": {
+ "capacity": 4,
+ "name": "Standard"
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "location": "southcentralus"
+}
+
+Press any key to exit...
+```
+
+## Verify the output
+
+Go to the **Microsoft Purview accounts** page in the [Azure portal](https://portal.azure.com) and verify the account created using the above code.
+
+## Delete Microsoft Purview account
+
+To programmatically delete a Microsoft Purview account, add the following lines of code to the program:
+
+```csharp
+Console.WriteLine("Deleting the Microsoft Purview Account");
+client.Accounts.Delete(resourceGroup, purviewAccountName);
+```
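+
+As a minimal sketch (not part of the original sample), you can wrap the call in the same try/catch pattern used in the create step so a failed delete, for example one caused by insufficient permissions, is reported instead of crashing the program:
+
+```csharp
+// Sketch: report a failed delete instead of letting the program crash.
+// Assumes the 'client', 'resourceGroup', and 'purviewAccountName' variables defined earlier in this article.
+try
+{
+    Console.WriteLine("Deleting the Microsoft Purview Account");
+    client.Accounts.Delete(resourceGroup, purviewAccountName);
+}
+catch (ErrorResponseModelException deleteException)
+{
+    Console.WriteLine(deleteException.Message);
+}
+```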
+
+## Check if Microsoft Purview account name is available
+
+To check the availability of a Microsoft Purview account name, use the following code:
+
+```csharp
+CheckNameAvailabilityRequest checkNameAvailabilityRequest = new CheckNameAvailabilityRequest()
+{
+ Name = purviewAccountName,
+ Type = "Microsoft.Purview/accounts"
+};
+Console.WriteLine("Check Microsoft Purview account name");
+Console.WriteLine(client.Accounts.CheckNameAvailability(checkNameAvailabilityRequest).NameAvailable);
+```
+
+The preceding code prints `True` if the name is available and `False` if it isn't.
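+
+As a usage example, the following sketch (not part of the original sample) runs the availability check before the create call from the earlier section, and only creates the account when the name is free:
+
+```csharp
+// Sketch: create the account only when the name is available.
+// Assumes the 'client', 'resourceGroup', 'purviewAccountName', and 'account' variables defined earlier in this article.
+var availability = client.Accounts.CheckNameAvailability(new CheckNameAvailabilityRequest()
+{
+    Name = purviewAccountName,
+    Type = "Microsoft.Purview/accounts"
+});
+
+if (availability.NameAvailable == true)
+{
+    client.Accounts.CreateOrUpdate(resourceGroup, purviewAccountName, account);
+}
+else
+{
+    Console.WriteLine("The name '" + purviewAccountName + "' isn't available. Choose a different name.");
+}
+```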
+
+## Next steps
+
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account, delete the account, and check for name availability. You can now download the .NET SDK and learn about other resource provider actions you can perform for a Microsoft Purview account.
+
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to the Microsoft Purview governance portal.
+
+* [How to use the Microsoft Purview governance portal](use-azure-purview-studio.md)
+* [Grant users permissions to the governance portal](catalog-permissions.md)
+* [Create a collection](quickstart-create-collection.md)
purview Create Microsoft Purview Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-portal-faq.md
+
+ Title: Create an exception to deploy Microsoft Purview
+description: This article describes how to create an exception to deploy Microsoft Purview while leaving existing Azure policies in place to maintain security.
++++ Last updated : 08/26/2021++
+# Create an exception to deploy Microsoft Purview
+
+Many subscriptions have [Azure Policies](../governance/policy/overview.md) in place that restrict the creation of some resources to maintain subscription security and cleanliness. However, Microsoft Purview accounts deploy two other Azure resources when they're created: an Azure Storage account and, optionally, an Event Hubs namespace. When you [create a Microsoft Purview account](create-catalog-portal.md), these resources are deployed. They'll be managed by Azure, so you don't need to maintain them, but they do need to be deployed. Existing policies may block this deployment, and you may receive an error when attempting to create a Microsoft Purview account.
+
+To maintain your policies in your subscription, but still allow the creation of these managed resources, you can create an exception.
+
+## Create an Azure policy exception for Microsoft Purview
+
+1. Navigate to the [Azure portal](https://portal.azure.com) and search for **Policy**
+
+ :::image type="content" source="media/create-purview-portal-faq/search-for-policy.png" alt-text="Screenshot showing the Azure portal search bar, searching for Policy keyword.":::
+
+1. Follow [Create a custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md) or modify an existing policy to add two exceptions using the `not` operator and a `resourceBypass` tag:
+
+ ```json
+ {
+ "mode": "All",
+ "policyRule": {
+ "if": {
+ "anyOf": [
+ {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.Storage/storageAccounts"
+ },
+ {
+ "not": {
+ "field": "tags['<resourceBypass>']",
+ "exists": true
+ }
+ }]
+ },
+ {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.EventHub/namespaces"
+ },
+ {
+ "not": {
+ "field": "tags['<resourceBypass>']",
+ "exists": true
+ }
+ }]
+ }]
+ },
+ "then": {
+ "effect": "deny"
+ }
+ },
+ "parameters": {}
+ }
+ ```
+
+ > [!Note]
+    > You can use any tag name besides `resourceBypass`, and you choose its value when creating the Microsoft Purview account in later steps, as long as the policy can detect the tag.
+
+ :::image type="content" source="media/create-catalog-portal/policy-definition.png" alt-text="Screenshot showing how to create policy definition.":::
+
+1. [Create a policy assignment](../governance/policy/assign-policy-portal.md) using the custom policy created.
+
+ :::image type="content" source="media/create-catalog-portal/policy-assignment.png" alt-text="Screenshot showing how to create policy assignment" lightbox="./media/create-catalog-portal/policy-assignment.png":::
+
+> [!Note]
+> If you have **Azure Policy** in place and need to add an exception as described in the **Prerequisites**, you need to add the correct tag. For example, you can add the `resourceBypass` tag:
+> :::image type="content" source="media/create-catalog-portal/add-purview-tag.png" alt-text="Add tag to Microsoft Purview account.":::
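+
+If you create the account programmatically instead of through the portal, the same idea applies: add the tag to the Microsoft Purview account at creation time. The following C# sketch (not part of the original article) extends the `Account` object from the .NET quickstart; it assumes the `Account` model exposes a `Tags` dictionary, as Azure management SDK tracked resources typically do, and the `resourceBypass` tag name and value are only examples.
+
+```csharp
+// Sketch: tag the Microsoft Purview account at creation time so the policy exception can detect it.
+// Assumes the Microsoft.Azure.Management.Purview models and the 'client', 'region',
+// 'resourceGroup', and 'purviewAccountName' variables from the .NET quickstart.
+// Dictionary<,> requires System.Collections.Generic, already imported in that quickstart.
+Account account = new Account()
+{
+    Location = region,
+    Identity = new Identity(type: "SystemAssigned"),
+    Sku = new AccountSku(name: "Standard", capacity: 4),
+    Tags = new Dictionary<string, string> { { "resourceBypass", "true" } }
+};
+client.Accounts.CreateOrUpdate(resourceGroup, purviewAccountName, account);
+```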
+
+## Next steps
+
+To set up Microsoft Purview by using Private Link, see [Use private endpoints for your Microsoft Purview account](./catalog-private-link.md).
purview Create Microsoft Purview Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-portal.md
+
+ Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account'
+description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account and configure permissions to begin using it.
++ Last updated : 06/20/2022++++
+# Quickstart: Create an account in the Microsoft Purview governance portal
+
+This quickstart describes the steps to create a Microsoft Purview (formerly Azure Purview) account through the Azure portal. Then we'll get started on the process of classifying, securing, and discovering your data in the Microsoft Purview Data Map!
+
+The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your data estate. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview governance services across your organization, [see our deployment best practices](deployment-best-practices.md).
++
+## Create an account
+
+1. Search for **Microsoft Purview** in the [Azure portal](https://portal.azure.com).
+
+ :::image type="content" source="media/create-catalog-portal/purview-accounts-page.png" alt-text="Screenshot showing the purview accounts page in the Azure portal":::
+
+1. Select **Create** to create a new Microsoft Purview account.
+
+ :::image type="content" source="media/create-catalog-portal/select-create.png" alt-text="Screenshot of the Microsoft Purview accounts page with the create button highlighted in the Azure portal.":::
+
+ Or instead, you can go to the marketplace, search for **Microsoft Purview**, and select **Create**.
+
+ :::image type="content" source="media/create-catalog-portal/search-marketplace.png" alt-text="Screenshot showing Microsoft Purview in the Azure Marketplace, with the create button highlighted.":::
+
+1. On the new Create Microsoft Purview account page under the **Basics** tab, select the Azure subscription where you want to create your account.
+
+1. Select an existing **resource group** or create a new one to hold your account.
+
+ To learn more about resource groups, see our article on [using resource groups to manage your Azure resources](../azure-resource-manager/management/manage-resource-groups-portal.md#what-is-a-resource-group).
+
+1. Enter a **Microsoft Purview account name**. Spaces and symbols aren't allowed.
+    The name of the Microsoft Purview account must be globally unique. If you see the following error, change the name of the Microsoft Purview account and try creating it again.
+
+ :::image type="content" source="media/create-catalog-portal/name-error.png" alt-text="Screenshot showing the Create Microsoft Purview account screen with an account name that is already in use, and the error message highlighted.":::
+
+1. Choose a **location**.
+    The list shows only locations that support the Microsoft Purview governance portal. The location you choose will be the region where your Microsoft Purview account and metadata will be stored. Sources can be housed in other regions.
+
+ > [!Note]
+    > Microsoft Purview, formerly Azure Purview, doesn't support moving accounts across regions, so be sure to deploy to the correct region. You can find more information about this in [move operation support for resources](../azure-resource-manager/management/move-support-resources.md).
+
+1. You can choose to enable the optional Event Hubs namespace by selecting the toggle. It's disabled by default. Enable this option if you want to be able to programmatically monitor your Microsoft Purview account using Event Hubs and Atlas Kafka:
+ - [Use Event Hubs and .NET to send and receive Atlas Kafka topics messages](manage-kafka-dotnet.md)
+ - [Publish and consume events for Microsoft Purview with Atlas Kafka](concept-best-practices-automation.md#streaming-atlas-kafka)
+
+ :::image type="content" source="media/create-catalog-portal/event-hubs-namespace.png" alt-text="Screenshot showing the Event Hubs namespace toggle highlighted under the Managed resources section of the Create Microsoft Purview account page.":::
+
+ >[!NOTE]
+    > This option can be enabled or disabled after you've created your account: go to **Managed resources** under Settings on your Microsoft Purview account page in the Azure portal.
+ >
+ > :::image type="content" source="media/create-catalog-portal/enable-disable-event-hubs.png" alt-text="Screenshot showing the Event Hubs namespace toggle highlighted on the Managed resources page of the Microsoft Purview account page in the Azure Portal.":::
+
+1. Select **Review & Create**, and then select **Create**. It takes a few minutes to complete the creation. The newly created account will appear in the list on your **Microsoft Purview accounts** page.
+
+ :::image type="content" source="media/create-catalog-portal/create-resource.png" alt-text="Screenshot showing the Create Microsoft Purview account screen with the Review + Create button highlighted":::
+
+## Open the Microsoft Purview governance portal
+
+After your account is created, you'll use the Microsoft Purview governance portal to access and manage it. There are two ways to open the Microsoft Purview governance portal:
+
+* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview governance portal" tile on the overview page.
+ :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview governance portal tile highlighted.":::
+
+* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account name, and sign in to your workspace.
+
+## Next steps
+
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account, and how to access it.
+
+Next, you can create a user-assigned managed identity (UAMI) that will enable your new Microsoft Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication.
+
+To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
+
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to the Microsoft Purview Data Map:
+
+* [Using the Microsoft Purview governance portal](use-azure-purview-studio.md)
+* [Create a collection](quickstart-create-collection.md)
+* [Add users to your Microsoft Purview account](catalog-permissions.md)
purview Create Microsoft Purview Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-powershell.md
+
+ Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account with PowerShell/Azure CLI'
+description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account using Azure PowerShell/Azure CLI.
++ Last updated : 06/17/2022+++
+ms.devlang: azurecli
+#Customer intent: As a data steward, I want create a new Microsoft Purview Account so that I can scan and classify my data.
+
+# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Azure PowerShell/Azure CLI
+
+In this Quickstart, you'll create a Microsoft Purview account using Azure PowerShell/Azure CLI. [PowerShell reference for Microsoft Purview](/powershell/module/az.purview/) is available, but this article will take you through all the steps needed to create an account with PowerShell.
+
+The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview governance services across your organization, [see our deployment best practices](deployment-best-practices.md).
++
+### Install PowerShell
+
+ Install either Azure PowerShell or Azure CLI on your client machine to deploy the template: [Command-line deployment](../azure-resource-manager/templates/template-tutorial-create-first-template.md?tabs=azure-cli#command-line-deployment)
+
+## Create an account
+
+1. Sign in with your Azure credentials:
+
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az login
+ ```
+
+
+
+1. If you have multiple Azure subscriptions, select the subscription you want to use:
+
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ Set-AzContext [SubscriptionID/SubscriptionName]
+ ```
+
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az account set --subscription [SubscriptionID/SubscriptionName]
+ ```
+
+
+
+1. Create a resource group for your account. You can skip this step if you already have one:
+
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name myResourceGroup -Location 'East US'
+ ```
+
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az group create \
+ --name myResourceGroup \
+ --location "East US"
+ ```
+
+
+
+1. Create or deploy the account:
+
+ # [PowerShell](#tab/azure-powershell)
+
+ Use the [New-AzPurviewAccount](/powershell/module/az.purview/new-azpurviewaccount) cmdlet to create the Microsoft Purview account:
+
+ ```azurepowershell
+ New-AzPurviewAccount -Name yourPurviewAccountName -ResourceGroupName myResourceGroup -Location eastus -IdentityType SystemAssigned -SkuCapacity 4 -SkuName Standard -PublicNetworkAccess Enabled
+ ```
+
+ # [Azure CLI](#tab/azure-cli)
+
+ 1. Create a Microsoft Purview template file such as `purviewtemplate.json`. You can update `name`, `location`, and `capacity` (`4` or `16`):
+
+ ```json
+ {
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "name": "<yourPurviewAccountName>",
+ "type": "Microsoft.Purview/accounts",
+ "apiVersion": "2020-12-01-preview",
+ "location": "EastUs",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "networkAcls": {
+ "defaultAction": "Allow"
+ }
+ },
+ "dependsOn": [],
+ "sku": {
+ "name": "Standard",
+ "capacity": "4"
+ },
+ "tags": {}
+ }
+ ],
+ "outputs": {}
+ }
+ ```
+
+ 1. Deploy Microsoft Purview template
+
+ To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+
+ ```azurecli
+ az deployment group create --resource-group "<myResourceGroup>" --template-file "<PATH TO purviewtemplate.json>"
+ ```
+
+
+
+1. The deployment command returns results. Look for `ProvisioningState` to see whether the deployment succeeded.
+
+1. If you deployed the account using a service principal, instead of a user account, you'll also need to run the following command in the Azure CLI:
+
+ ```azurecli
+ az purview account add-root-collection-admin --account-name [Microsoft Purview Account Name] --resource-group [Resource Group Name] --object-id [User Object Id]
+ ```
+
+ This command will grant the user account [collection admin](catalog-permissions.md#roles) permissions on the root collection in your Microsoft Purview account. This allows the user to access the Microsoft Purview governance portal and add permission for other users. For more information about permissions in Microsoft Purview, see our [permissions guide](catalog-permissions.md). For more information about collections, see our [manage collections article](how-to-create-and-manage-collections.md).
+
+## Next steps
+
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account.
+
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to the Microsoft Purview governance portal.
+
+* [How to use the Microsoft Purview governance portal](use-azure-purview-studio.md)
+* [Grant users permissions to the governance portal](catalog-permissions.md)
+* [Create a collection](quickstart-create-collection.md)
purview Create Microsoft Purview Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-python.md
+
+ Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Python'
+description: This article will guide you through creating a Microsoft Purview (formerly Azure Purview) account using Python.
+++
+ms.devlang: python
+ Last updated : 06/17/2022+++
+# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Python
+
+In this quickstart, you'll create a Microsoft Purview (formerly Azure Purview) account programmatically using Python. [The Python reference for Microsoft Purview](/python/api/azure-mgmt-purview/) is available, but this article will take you through all the steps needed to create an account with Python.
+
+The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
++
+## Install the Python package
+
+1. Open a terminal or command prompt with administrator privileges.
+2. First, install the Python package for Azure management resources:
+
+    ```console
+ pip install azure-mgmt-resource
+ ```
+
+3. To install the Python package for Microsoft Purview, run the following command:
+
+    ```console
+ pip install azure-mgmt-purview
+ ```
+
+ The [Python SDK for Microsoft Purview](https://github.com/Azure/azure-sdk-for-python) supports Python 2.7, 3.3, 3.4, 3.5, 3.6 and 3.7.
+
+4. To install the Python package for Azure Identity authentication, run the following command:
+
+    ```console
+ pip install azure-identity
+ ```
+
+ > [!NOTE]
+ > The "azure-identity" package might have conflicts with "azure-cli" on some common dependencies. If you meet any authentication issue, remove "azure-cli" and its dependencies, or use a clean machine without installing "azure-cli" package.
+
+## Create a purview client
+
+1. Create a file named **purview.py**. Add the following import statements to reference the required namespaces.
+
+ ```python
+ from azure.identity import ClientSecretCredential
+ from azure.mgmt.resource import ResourceManagementClient
+ from azure.mgmt.purview import PurviewManagementClient
+ from azure.mgmt.purview.models import *
+ from datetime import datetime, timedelta
+ import time
+ ```
+
+2. Add the following code to the **main** method that creates an instance of the PurviewManagementClient class. You'll use this object to create a purview account, delete purview accounts, check name availability, and perform other resource provider operations.
+
+ ```python
+ def main():
+
+ # Azure subscription ID
+ subscription_id = '<subscription ID>'
+
+ # This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group
+ rg_name = '<resource group>'
+
+ # The purview name. It must be globally unique.
+ purview_name = '<purview account name>'
+
+ # Location name, where Microsoft Purview account must be created.
+ location = '<location name>'
+
+ # Specify your Active Directory client ID, client secret, and tenant ID
+ credentials = ClientSecretCredential(client_id='<service principal ID>', client_secret='<service principal key>', tenant_id='<tenant ID>')
+        resource_client = ResourceManagementClient(credentials, subscription_id)  # needed only if this program creates the resource group
+ purview_client = PurviewManagementClient(credentials, subscription_id)
+ ```
+
+## Create a purview account
+
+1. Add the following code to the **Main** method that creates a **purview account**. If your resource group already exists, comment out the first `create_or_update` statement.
+
+ ```python
+ # create the resource group
+    # comment out if the resource group already exists
+    resource_client.resource_groups.create_or_update(rg_name, {"location": location})
+
+ #Create a purview
+ identity = Identity(type= "SystemAssigned")
+ sku = AccountSku(name= 'Standard', capacity= 4)
+ purview_resource = Account(identity=identity,sku=sku,location =location )
+
+ try:
+ pa = (purview_client.accounts.begin_create_or_update(rg_name, purview_name, purview_resource)).result()
+ print("location:", pa.location, " Microsoft Purview Account Name: ", pa.name, " Id: " , pa.id ," tags: " , pa.tags)
+    except:
+        print("Error")
+
+    while (getattr(pa,'provisioning_state')) != "Succeeded" :
+        pa = (purview_client.accounts.get(rg_name, purview_name))
+        print(getattr(pa,'provisioning_state'))
+        if getattr(pa,'provisioning_state') == "Failed" :
+            print("Error in creating Microsoft Purview account")
+            break
+        time.sleep(30)
+ ```
+
+2. Now, add the following statement to invoke the **main** method when the program is run:
+
+ ```python
+ # Start the main method
+ main()
+ ```
+
+## Full script
+
+Here's the full Python code:
+
+```python
+
+from azure.identity import ClientSecretCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.purview import PurviewManagementClient
+from azure.mgmt.purview.models import *
+from datetime import datetime, timedelta
+import time
+
+def main():
+
+ # Azure subscription ID
+ subscription_id = '<subscription ID>'
+
+ # This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group
+ rg_name = '<resource group>'
+
+ # The purview name. It must be globally unique.
+ purview_name = '<purview account name>'
+
+ # Specify your Active Directory client ID, client secret, and tenant ID
+ credentials = ClientSecretCredential(client_id='<service principal ID>', client_secret='<service principal key>', tenant_id='<tenant ID>')
+    resource_client = ResourceManagementClient(credentials, subscription_id)  # needed only if this script creates the resource group
+ purview_client = PurviewManagementClient(credentials, subscription_id)
+
+ # create the resource group
+    # comment out if the resource group already exists
+    resource_client.resource_groups.create_or_update(rg_name, {"location": "southcentralus"})
+
+ #Create a purview
+ identity = Identity(type= "SystemAssigned")
+ sku = AccountSku(name= 'Standard', capacity= 4)
+ purview_resource = Account(identity=identity,sku=sku,location ="southcentralus" )
+
+ try:
+ pa = (purview_client.accounts.begin_create_or_update(rg_name, purview_name, purview_resource)).result()
+ print("location:", pa.location, " Microsoft Purview Account Name: ", purview_name, " Id: " , pa.id ," tags: " , pa.tags)
+    except:
+        print("Error in submitting job to create account")
+
+    while (getattr(pa,'provisioning_state')) != "Succeeded" :
+        pa = (purview_client.accounts.get(rg_name, purview_name))
+        print(getattr(pa,'provisioning_state'))
+        if getattr(pa,'provisioning_state') == "Failed" :
+            print("Error in creating Microsoft Purview account")
+            break
+        time.sleep(30)
+
+# Start the main method
+main()
+```
+
+## Run the code
+
+Build and start the application. The console prints the progress of Microsoft Purview account creation. Wait until it's completed.
+Here's the sample output:
+
+```console
+location: southcentralus Microsoft Purview Account Name: purviewpython7 Id: /subscriptions/8c2c7b23-848d-40fe-b817-690d79ad9dfd/resourceGroups/Demo_Catalog/providers/Microsoft.Purview/accounts/purviewpython7 tags: None
+Creating
+Creating
+Succeeded
+```
+
+## Verify the output
+
+Go to the **Microsoft Purview accounts** page in the Azure portal and verify the account created using the above code.
+
+## Delete Microsoft Purview account
+
+To delete the Microsoft Purview account, add the following code to the program, then run it:
+
+```python
+pa = purview_client.accounts.begin_delete(rg_name, purview_name).result()
+```
+
+## Next steps
+
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account and delete it. You can now download the Python SDK and learn about other resource provider actions, such as checking name availability, that you can perform for a Microsoft Purview account.
+
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to the Microsoft Purview governance portal.
+
+* [How to use the Microsoft Purview governance portal](use-azure-purview-studio.md)
+* [Grant users permissions to the governance portal](catalog-permissions.md)
+* [Create a collection](quickstart-create-collection.md)
+
purview Quickstart ARM Create Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-ARM-create-microsoft-purview.md
+
+ Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using an ARM Template'
+description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account using an ARM Template.
++ Last updated : 04/05/2022+++++
+# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using an ARM template
+
+This quickstart describes the steps to deploy a Microsoft Purview (formerly Azure Purview) account using an Azure Resource Manager (ARM) template.
+
+After you've created the account, you can begin registering your data sources and using the Microsoft Purview governance portal to understand and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end data lineage. Data consumers are able to discover data across your organization and data administrators are able to audit, secure, and ensure right use of your data.
+
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+
+To deploy a Microsoft Purview account to your subscription using an ARM template, follow the guide below.
++
+## Deploy a custom template
+
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal where you can customize values and deploy.
+The template will deploy a Microsoft Purview account into a new or existing resource group in your subscription.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.azurepurview%2Fazure-purview-deployment%2Fazuredeploy.json)
++
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-purview-deployment/).
+
+
+The following resources are defined in the template:
+
+* [Microsoft.Purview/accounts](/azure/templates/microsoft.purview/accounts?pivots=deployment-language-arm-template)
+
+The template performs the following tasks:
+
+* Creates a Microsoft Purview account in a specified resource group.
+
+## Open Microsoft Purview governance portal
+
+After your Microsoft Purview account is created, you'll use the Microsoft Purview governance portal to access and manage it. There are two ways to open Microsoft Purview governance portal:
+
+* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview governance portal" tile on the overview page.
+ :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview governance portal tile highlighted.":::
+
+* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account, and sign in to your workspace.
+
+## Get started with your Purview resource
+
+After deployment, the first activities are usually:
+
+* [Create a collection](quickstart-create-collection.md)
+* [Register a resource](azure-purview-connector-overview.md)
+* [Scan the resource](concept-scans-and-ingestion.md)
+
+At this time, these actions can't be performed through an Azure Resource Manager template. Follow the guides above to get started!
+
+## Clean up resources
+
+To clean up the resources deployed in this quickstart, delete the resource group, which deletes all resources in the group.
+You can delete the resources either through the Azure portal, or using the PowerShell script below.
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the resource group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+Write-Host "Press [ENTER] to continue..."
+```
+
+## Next steps
+
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account and how to access the Microsoft Purview governance portal.
+
+Next, you can create a user-assigned managed identity (UAMI) that will enable your new Microsoft Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication.
+
+To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
+
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview:
+
+> [!div class="nextstepaction"]
+> [Using the Microsoft Purview governance portal](use-azure-purview-studio.md)
+> [Create a collection](quickstart-create-collection.md)
+> [Add users to your Microsoft Purview account](catalog-permissions.md)
purview Quickstart Bicep Create Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-bicep-create-microsoft-purview.md
+
+ Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using a Bicep file'
+description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account using a Bicep file.
++ Last updated : 09/12/2022++++
+# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using a Bicep file
+
+This quickstart describes the steps to deploy a Microsoft Purview (formerly Azure Purview) account using a Bicep file.
+
+After you've created the account, you can begin registering your data sources, and using the Microsoft Purview governance portal to understand and govern your data landscape. By connecting to data across your on-premises, multicloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end data lineage. Data consumers are able to discover data across your organization and data administrators are able to audit, secure, and ensure right use of your data.
+
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+
+To deploy a Microsoft Purview account to your subscription using a Bicep file, follow the guide below.
++
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-purview-deployment/).
+
+
+The following resources are defined in the Bicep file:
+
+* [Microsoft.Purview/accounts](/azure/templates/microsoft.purview/accounts?pivots=deployment-language-bicep)
+
+The Bicep file performs the following tasks:
+
+* Creates a Microsoft Purview account in a specified resource group.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+You'll be prompted to enter the following values:
+
+* **Purview name**: enter a name for the Microsoft Purview account.
+
+When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Open Microsoft Purview governance portal
+
+After your Microsoft Purview account is created, you'll use the Microsoft Purview governance portal to access and manage it. There are two ways to open Microsoft Purview governance portal:
+
+* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview governance portal" tile on the overview page.
+ :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview governance portal tile highlighted.":::
+
+* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account, and sign in to your workspace.
+
+## Get started with your Purview resource
+
+After deployment, the first activities are usually:
+
+* [Create a collection](quickstart-create-collection.md)
+* [Register a resource](azure-purview-connector-overview.md)
+* [Scan the resource](concept-scans-and-ingestion.md)
+
+At this time, these actions can't be performed through a Bicep file. Follow the guides above to get started!
+
+## Clean up resources
+
+When you no longer need them, use the Azure portal, Azure CLI, or Azure PowerShell to remove the resource group and all related resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account and how to access the Microsoft Purview governance portal.
+
+Next, you can create a user-assigned managed identity (UAMI) that will enable your new Microsoft Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication.
+
+To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
+
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview:
+
+> [!div class="nextstepaction"]
+> [Using the Microsoft Purview governance portal](use-azure-purview-studio.md)
+> [Create a collection](quickstart-create-collection.md)
+> [Add users to your Microsoft Purview account](catalog-permissions.md)
purview Reference Microsoft Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-microsoft-purview-glossary.md
+
+ Title: Microsoft Purview governance portal product glossary
+description: A glossary defining the terminology used throughout the Microsoft Purview governance portal
+++++ Last updated : 06/17/2022+
+# Microsoft Purview governance portal product glossary
+
+Below is a glossary of terminology used throughout the Microsoft Purview governance portal and documentation.
+
+## Advanced resource sets
+A set of features activated at the Microsoft Purview instance level that, when enabled, enrich resource set assets by computing extra aggregations on the metadata to provide information such as partition counts, total size, and schema counts. Resource set pattern rules are also included.
+## Annotation
+Information that is associated with data assets in the Microsoft Purview Data Map, for example, glossary terms and classifications. After they're applied, annotations can be used within Search to aid in the discovery of the data assets.
+## Approved
+The state given to any request that has been accepted as satisfactory by the designated individual or group who has authority to change the state of the request.
+## Asset
+Any single object that is stored within a Microsoft Purview Data Catalog.
+> [!NOTE]
+> A single object in the catalog could potentially represent many objects in storage, for example, a resource set is an asset but it's made up of many partition files in storage.
+## Azure Information Protection
+A cloud solution that supports labeling of documents and emails to classify and protect information. Labeled items can be protected by encryption, marked with a watermark, or restricted to specific actions or users, and is bound to the item. This cloud-based solution relies on Azure Rights Management Service (RMS) for enforcing restrictions.
+## Business glossary
+A searchable list of specialized terms that an organization uses to describe key business words and their definitions. Using a business glossary can provide consistent data usage across the organization.
+## Capacity unit
+A measure of data map usage. All Microsoft Purview Data Maps include one capacity unit by default, which provides up to 2 GB of metadata storage and has a throughput of 25 data map operations/second.
+## Classification report
+A report that shows key classification details about the scanned data.
+## Classification
+A type of annotation used to identify an attribute of an asset or a column such as "Age", "Email Address", and "Street Address". These attributes can be assigned during scans or added manually.
+## Classification rule
+A classification rule is a set of conditions that determine how scanned data should be classified when content matches the specified pattern.
+## Classified asset
+An asset where Microsoft Purview extracts schema and applies classifications during an automated scan. The scan rule set determines which assets get classified. If the asset is considered a candidate for classification and no classifications are applied during scan time, it's still considered a classified asset.
+## Collection
+An organization-defined grouping of assets, terms, annotations, and sources. Collections allow for easier fine-grained access control and discoverability of assets within a data catalog.
+## Collection admin
+A role that can assign roles in the Microsoft Purview governance portal. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
+## Column pattern
+A regular expression included in a classification rule that represents the column names that you want to match.
+## Contact
+An individual who is associated with an entity in the data catalog.
+## Control plane operation
+An operation that manages resources in your subscription, such as role-based access control and Azure Policy, that is sent to the Azure Resource Manager endpoint. Control plane operations can also apply to resources outside of Azure across on-premises, multicloud, and SaaS sources.
+## Credential
+A verification of identity or tool used in an access control system. Credentials can be used to authenticate an individual or group to grant access to a data asset.
+## Data Catalog
+A searchable inventory of assets and their associated metadata that allows users to find and curate data across a data estate. The Data Catalog also includes a business glossary where subject matter experts can provide terms and definitions to add a business context to an asset.
+## Data curator
+A role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets.
+## Data map
+A metadata repository that is the foundation of the Microsoft Purview governance portal. The data map is a graph that describes assets across a data estate and is populated through scans and other data ingestion processes. This graph helps organizations understand and govern their data by providing rich descriptions of assets, representing data lineage, classifying assets, storing relationships between assets, and housing information at both the technical and semantic layers. The data map is an open platform that can be interacted with and accessed through Apache Atlas APIs or the Microsoft Purview governance portal.
+## Data map operation
+A create, read, update, or delete action performed on an entity in the data map. For example, creating an asset in the data map is considered a data map operation.
+## Data owner
+An individual or group responsible for managing a data asset.
+## Data pattern
+A regular expression that represents the data that is stored in a data field. For example, a data pattern for employee ID could be Employee{GUID}.
+## Data plane operation
+An operation within a specific Microsoft Purview instance, such as editing an asset or creating a glossary term. Each instance has predefined roles, such as "data reader" and "data curator" that control which data plane operations a user can perform.
+## Data reader
+A role that provides read-only access to data assets, classifications, classification rules, collections, glossary terms, and insights.
+## Data Sharing
+Microsoft Purview Data Sharing is a set of features in Microsoft Purview that enables you to securely share data across organizations.
+## Data Share contributor
+A role that can share data within an organization and with other organizations using data share capabilities in Microsoft Purview. Data share contributors can view, create, update, and delete sent and received shares.
+## Data source admin
+A role that can manage data sources and scans. A user in the Data source admin role doesn't have access to Microsoft Purview governance portal. Combining this role with the Data reader or Data curator roles at any collection scope provides Microsoft Purview governance portal access.
+## Data steward
+An individual or group responsible for maintaining nomenclature, data quality standards, security controls, compliance requirements, and rules for the associated object.
+## Data dictionary
+A list of canonical names of database columns and their corresponding data types. It's often used to describe the format and structure of a database, and the relationship between its elements.
+## Discovered asset
+An asset that the Microsoft Purview Data Map identifies in a data source during the scanning process. The number of discovered assets includes all files or tables before resource set grouping.
+## Distinct match threshold
+The total number of distinct data values that need to be found in a column before the scanner runs the data pattern on it. For example, a distinct match threshold of eight for employee ID requires that there are at least eight unique data values among the sampled values in the column that match the data pattern set for employee ID.
+## Expert
+An individual within an organization who understands the full context of a data asset or glossary term.
+## Full scan
+A scan that processes all assets within a selected scope of a data source.
+## Fully Qualified Name (FQN)
+A path that defines the location of an asset within its data source.
+## Glossary term
+An entry in the Business glossary that defines a concept specific to an organization. Glossary terms can contain information on synonyms, acronyms, and related terms.
+## Incremental scan
+A scan that detects and processes assets that have been created, modified, or deleted since the previous successful scan. To run an incremental scan, at least one full scan must be completed on the source.
+## Ingested asset
+An asset that has been scanned, classified (when applicable), and added to the Microsoft Purview Data Map. Ingested assets are discoverable and consumable within the data catalog through automated scanning or external connections, such as Azure Data Factory and Azure Synapse.
+## Insight reader
+A role that provides read-only access to insights reports for collections where the insights reader also has the **Data reader** role.
+## Data Estate Insights
+An area of the Microsoft Purview governance portal that provides up-to-date reports and actionable insights about the data estate.
+## Integration runtime
+The compute infrastructure used to scan in a data source.
+## Lineage
+How data transforms and flows as it moves from its origin to its destination. Understanding this flow across the data estate helps organizations see the history of their data, and aid in troubleshooting or impact analysis.
+## Management
+An area within the Microsoft Purview Governance Portal where you can manage connections, users, roles, and credentials. Also referred to as "Management center."
+## Minimum match threshold
+The minimum percentage of matches among the distinct data values in a column that must be found by the scanner for a classification to be applied.
+
+For example, a minimum match threshold of 60% for employee ID requires that 60% of all distinct values among the sampled data in a column match the data pattern set for employee ID. If the scanner samples 128 values in a column and finds 60 distinct values in that column, then at least 36 of the distinct values (60%) must match the employee ID data pattern for the classification to be applied.
+## Policy
+A statement or collection of statements that controls how access to data and data sources should be authorized.
+## Object type
+A categorization of assets based upon common data structures. For example, an Azure SQL Server table and Oracle database table both have an object type of table.
+## On-premises data
+Data that is in a data center controlled by a customer, for example, not in the cloud or software as a service (SaaS).
+## Owner
+An individual or group in charge of managing a data asset.
+## Pattern rule
+A configuration that overrides how the Microsoft Purview Data Map groups assets as resource sets and displays them within the catalog.
+## Microsoft Purview instance
+A single Microsoft Purview (formerly Azure Purview) account.
+## Registered source
+A source that has been added to a Microsoft Purview instance and is now managed as a part of the Data catalog.
+## Related terms
+Glossary terms that are linked to other terms within the organization.
+## Resource set
+A single asset that represents many partitioned files or objects in storage. For example, the Microsoft Purview Data Map stores partitioned Apache Spark output as a single resource set instead of unique assets for each individual file.
+## Role
+Permissions assigned to a user within a Microsoft Purview instance. Roles, such as Microsoft Purview Data Curator or Microsoft Purview Data Reader, determine what can be done within the product.
+## Root collection
+A system-generated collection that has the same friendly name as the Microsoft Purview account. All assets belong to the root collection by default.
+## Scan
+A Microsoft Purview Data Map process that discovers and examines metadata in a source or set of sources to populate the data map. A scan automatically connects to a source, extracts metadata, captures lineage, and applies classifications. Scans can be run manually or on a schedule.
+## Scan rule set
+A set of rules that define which data types and classifications a scan ingests into a catalog.
+## Scan trigger
+A schedule that determines the recurrence of when a scan runs.
+## Schema classification
+A classification applied to one of the columns in an asset schema.
+## Search
+A feature that allows users to find items in the data catalog by entering in a set of keywords.
+## Search relevance
+The scoring of data assets that determine the order search results are returned. Multiple factors determine an asset's relevance score.
+## Self-hosted integration runtime
+An integration runtime installed on an on-premises machine or virtual machine inside a private network that is used to connect to data on-premises or in a private network.
+## Sensitivity label
+Annotations that classify and protect an organization's data. The Microsoft Purview Data Map integrates with Microsoft Purview Information Protection for creation of sensitivity labels.
+## Sensitivity label report
+A summary of which sensitivity labels are applied across the data estate.
+## Service
+A product that provides standalone functionality and is available to customers by subscription or license.
+## Share
+A group of assets that are shared as a single entity.
+## Source
+A system where data is stored. Sources can be hosted in various places such as a cloud or on-premises. You register and scan sources so that you can manage them in the Microsoft Purview governance portal.
+## Source type
+A categorization of the registered sources used in the Microsoft Purview Data Map, for example, Azure SQL Database, Azure Blob Storage, Amazon S3, or SAP ECC.
+## Steward
+An individual who defines the standards for a glossary term. They're responsible for maintaining quality standards, nomenclature, and rules for the assigned entity.
+## Term template
+A definition of attributes included in a glossary term. Users can either use the system-defined term template or create their own to include custom attributes.
+## Workflow
+An automated process that coordinates the creation and modification of catalog entities, including validation and approval. Workflows define repeatable business processes to achieve high quality data, policy compliance, and user collaboration across an organization.
+
+## Next steps
+
+To get started with other Microsoft Purview governance services, see [Quickstart: Create a Microsoft Purview (formerly Azure Purview) account](create-catalog-portal.md).
purview Register Scan Adls Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen1.md
Previously updated : 11/10/2021 Last updated : 09/14/2022 # Connect to Azure Data Lake Gen1 in Microsoft Purview
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md
Previously updated : 11/02/2021 Last updated : 09/14/2022 # Connect to Azure Cosmos database (SQL API) in Microsoft Purview
purview Use Microsoft Purview Governance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/use-microsoft-purview-governance-portal.md
+
+ Title: Use the Microsoft Purview governance portal
+description: This article describes how to use the Microsoft Purview governance portal.
++++ Last updated : 02/12/2022++
+# Use the Microsoft Purview governance portal
+
+This article gives an overview of some of the main features of Microsoft Purview.
+
+## Prerequisites
+
+* An active Microsoft Purview account already created in the Azure portal, and permissions for your user to access [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
+
+## Launch Microsoft Purview account
+
+* To launch your Microsoft Purview account, go to **Microsoft Purview accounts** in the Azure portal, select the account you want, and then launch it.
+
+ :::image type="content" source="./media/use-purview-studio/open-purview-studio.png" alt-text="Screenshot of Microsoft Purview window in Azure portal, with the Microsoft Purview governance portal button highlighted." border="true":::
+
+* Another way to launch your Microsoft Purview account is to go to `https://web.purview.azure.com`, then select your **Azure Active Directory** tenant and an account name to launch the account.
+
+## Home page
+
+**Home** is the starting page for the Microsoft Purview client.
++
+The following list summarizes the main features of the **Home** page. Each number in the list corresponds to a highlighted number in the preceding screenshot.
+
+1. The friendly name of the catalog. You can set the catalog name in **Management** > **Account information**.
+
+2. Catalog analytics shows the number of:
+
+ * Data sources
+ * Assets
+ * Glossary terms
+
+3. The search box allows you to search for data assets across the data catalog.
+
+4. The quick access buttons give access to frequently used functions of the application. The buttons presented depend on the role assigned to your user account at the root collection.
+
+ * For *collection admin*, the available button is **Knowledge center**.
+ * For *data curator*, the buttons are **Browse assets**, **Manage glossary**, and **Knowledge center**.
+ * For *data reader*, the buttons are **Browse assets**, **View glossary**, and **Knowledge center**.
+ * For *data source admin* + *data curator*, the buttons are **Browse assets**, **Manage glossary**, and **Knowledge center**.
+ * For *data source admin* + *data reader*, the buttons are **Browse assets**, **View glossary**, and **Knowledge center**.
+
+ > [!NOTE]
+ > For more information about Microsoft Purview roles, see [Access control in Microsoft Purview](catalog-permissions.md).
+
+5. The left navigation bar helps you locate the main pages of the application.
+6. The **Recently accessed** tab shows a list of recently accessed data assets. For information about accessing assets, see [Search the Data Catalog](how-to-search-catalog.md) and [Browse by asset type](how-to-browse-catalog.md). The **My items** tab shows a list of data assets owned by the signed-in user.
+7. **Links** contains links to region status, documentation, pricing, the overview, and Microsoft Purview status.
+8. The top navigation bar contains release notes and updates, the option to switch Microsoft Purview accounts, notifications, help, and feedback.
+
+## Knowledge center
+
+Knowledge center is where you can find all the videos and tutorials related to Microsoft Purview.
+
+## Localization
+
+Microsoft Purview is localized in 18 languages. To change the language, go to **Settings** on the top bar and select the desired language from the dropdown.
++
+> [!NOTE]
+> Only generally available features are localized. Features still in preview are in English regardless of which language is selected.
++
+## Guided tours
+
+Each page in the Microsoft Purview governance portal has a guided tour that gives an overview of the page. To start the guided tour, select **Help** on the top bar, and then select **Guided tours**.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Add a security principal](tutorial-scan-data.md)
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-develop-parsers.md
In many cases, the original value extracted needs to be normalized. For example,
Also, ensuring that parser output fields match the types defined in the schema is critical for parsers to work. For example, you may need to convert a string representing date and time to a datetime field. Functions such as `todatetime` and `tohex` are helpful in these cases.
-For example, the original unique event ID may be sent as an integer, but ASIM requires the value to be a string, to ensure broad compatibility among data sources. Therefore, when assigning the source field use `extned` and `tostring` instead of `project-rename`:
+For example, the original unique event ID may be sent as an integer, but ASIM requires the value to be a string, to ensure broad compatibility among data sources. Therefore, when assigning the source field use `extend` and `tostring` instead of `project-rename`:
```KQL | extend EventOriginalUid = tostring(ReportId),
Learn more about the ASIM in general:
- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM) - [Advanced Security Information Model (ASIM) overview](normalization.md) - [Advanced Security Information Model (ASIM) schemas](normalization-about-schemas.md)-- [Advanced Security Information Model (ASIM) content](normalization-content.md)
+- [Advanced Security Information Model (ASIM) content](normalization-content.md)
spring-apps Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md
The following list shows the resource requirements for Azure Spring Apps service
| \*:1194 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:1194 | UDP:1194 | Underlying Kubernetes Cluster management. | | | \*:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Apps Service Management. | Information of service instance "requiredTraffics" could be known in resource payload, under "networkProfile" section. | | \*:123 *or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | |
-| \*.azure.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
+| \*.azurecr.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
| \*.core.windows.net:443 and \*.core.windows.net:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling *Azure Storage* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
static-web-apps Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/custom-domain.md
The following table includes links to articles that demonstrate how to configure
Setting up an apex domain is a common scenario to configure once your domain name is set up. Creating an apex domain is achieved by configuring an `ALIAS` or `ANAME` record or through `CNAME` flattening. Some domain registrars like GoDaddy and Google don't support these DNS records. If your domain registrar doesn't support all the DNS records you need, consider using [Azure DNS to configure your domain](custom-domain-azure-dns.md).
-The following are terms you'll encounter as your set up a custom domain.
+The following are terms you'll encounter as you set up a custom domain.
* **Apex or root domains**: Given the domain `www.example.com`, the `www` prefix is known as the subdomain, while the remaining segment of `example.com` is referred to as the apex domain.
storage Object Replication Prevent Cross Tenant Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-prevent-cross-tenant-policies.md
Title: Prevent object replication across Azure Active Directory tenants (preview)
+ Title: Prevent object replication across Azure Active Directory tenants
description: Prevent cross-tenant object replication
Previously updated : 09/02/2021 Last updated : 09/13/2022
-# Prevent object replication across Azure Active Directory tenants (preview)
+# Prevent object replication across Azure Active Directory tenants
Object replication asynchronously copies block blobs from a container in one storage account to a container in another storage account. When you configure an object replication policy, you specify the source account and container and the destination account and container. After the policy is configured, Azure Storage automatically copies the results of create, update, and delete operations on a source object to the destination object. For more information about object replication in Azure Storage, see [Object replication for block blobs](object-replication-overview.md).
-By default, an authorized user is permitted to configure an object replication policy where the source account is in one Azure Active Directory (Azure AD) tenant, and the destination account is in a different tenant. If your security policies require that you restrict object replication to storage accounts that reside within the same tenant only, you can disallow the creation of policies where the source and destination accounts are in different tenants (preview). By default, cross-tenant object replication is enabled for a storage account unless you explicitly disallow it.
+By default, an authorized user is permitted to configure an object replication policy where the source account is in one Azure Active Directory (Azure AD) tenant, and the destination account is in a different tenant. If your security policies require that you restrict object replication to storage accounts that reside within the same tenant only, you can disallow the creation of policies where the source and destination accounts are in different tenants. By default, cross-tenant object replication is enabled for a storage account unless you explicitly disallow it.
This article describes how to remediate cross-tenant object replication for your storage accounts. It also describes how to create policies to enforce a prohibition on cross-tenant object replication for new and existing storage accounts.
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
Previously updated : 03/04/2022 Last updated : 09/13/2022
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support pr
When you connect to Blob Storage by using an SFTP client, you might be prompted to trust a host key. During the public preview, you can verify the host key by finding that key in the list presented in this article. > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
+> SFTP support is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-> After testing your end-to-end scenarios with SFTP, please share your experience via [this form](https://forms.office.com/r/MgjezFV1NR).
->
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
## Valid host keys
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
Previously updated : 06/23/2022 Last updated : 09/13/2022
This article describes limitations and known issues of SFTP support for Azure Blob Storage. > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
+> SFTP support is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-> After testing your end-to-end scenarios with SFTP, please share your experience via [this form](https://forms.office.com/r/MgjezFV1NR).
->
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
> [!IMPORTANT] > Because you must enable hierarchical namespace for your account to use SFTP, all of the known issues that are described in the Known issues with [Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md) article also apply to your account.
storage Secure File Transfer Protocol Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-performance.md
Previously updated : 03/28/2022 Last updated : 09/13/2022
Blob storage now supports the SSH File Transfer Protocol (SFTP). This article contains recommendations that will help you to optimize the performance of your storage requests. To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md). > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
+> SFTP support is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-> After testing your end-to-end scenarios with SFTP, please share your experience via [this form](https://forms.office.com/r/MgjezFV1NR).
->
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
## Use concurrent connections to increase throughput
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Previously updated : 06/14/2022 Last updated : 09/13/2022
You can securely connect to the Blob Storage endpoint of an Azure Storage accoun
To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) in Azure Blob Storage](secure-file-transfer-protocol-support.md). > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
+> SFTP support is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-> After testing your end-to-end scenarios with SFTP, please share your experience via [this form](https://forms.office.com/r/MgjezFV1NR).
->
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
## Prerequisites
When using a private endpoint the connection string is `myaccount.myuser@myaccou
## Networking considerations
-SFTP is a platform level service, so port 22 will be open even if the account option is disabled. If SFTP access is not configured then all requests will receive a disconnect from the service. When using SFTP, may want to limit public access through configuration of a firewall, virtual network, or private endpoint. These settings are enforced at the application layer, which means they are not specific to SFTP and will impact connectivity to all Azure Storage Endpoints. For more information on firewalls and network configuration, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md).
+SFTP is a platform level service, so port 22 will be open even if the account option is disabled. If SFTP access is not configured then all requests will receive a disconnect from the service. When using SFTP, you may want to limit public access through configuration of a firewall, virtual network, or private endpoint. These settings are enforced at the application layer, which means they are not specific to SFTP and will impact connectivity to all Azure Storage Endpoints. For more information on firewalls and network configuration, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md).
> [!NOTE] > Audit tools that attempt to determine TLS support at the protocol layer may return TLS versions in addition to the minimum required version when run directly against the storage account endpoint. For more information, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account](../common/transport-layer-security-configure-minimum-version.md).
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Previously updated : 06/03/2022 Last updated : 09/13/2022
# SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
-Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management.
+Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management.
> [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
+> SFTP support is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-> After testing your end-to-end scenarios with SFTP, please share your experience via [this form](https://forms.office.com/r/MgjezFV1NR).
->
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
Azure allows secure data transfer to Blob Storage accounts using Azure Blob service REST API, Azure SDKs, and tools such as AzCopy. However, legacy workloads often use traditional file transfer protocols such as SFTP. You could update custom applications to use the REST API and Azure SDKs, but only by making significant code changes.
storage Upgrade To Data Lake Storage Gen2 How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
Title: Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities | Microsoft Docs
-description: Shows you how to use Resource Manager templates to upgrade from Azure Blob storage to Data Lake Storage.
+description: Shows you how to use Resource Manager templates to upgrade from Azure Blob Storage to Data Lake Storage.
storage Customer Managed Keys Configure Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-existing-account.md
Azure Storage encrypts all data in a storage account at rest. By default, data i
This article shows how to configure encryption with customer-managed keys for an existing storage account. The customer-managed keys are stored in a key vault.
-To learn how to configure customer-managed keys for a new storage account, see [Configure customer-managed keys in an Azure key vault for an existing storage account](customer-managed-keys-configure-existing-account.md).
+To learn how to configure customer-managed keys for a new storage account, see [Configure customer-managed keys in an Azure key vault for a new storage account](customer-managed-keys-configure-new-account.md).
To learn how to configure encryption with customer-managed keys stored in a managed HSM, see [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md).
storage File Sync Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-overview.md
description: Learn how to configure networking to use Azure File Sync to cache f
Previously updated : 04/13/2021 Last updated : 09/14/2022
# Azure File Sync networking considerations You can connect to an Azure file share in two ways: -- Accessing the share directly via the SMB or FileREST protocols. This access pattern is primarily employed when to eliminate as many on-premises servers as possible.-- Creating a cache of the Azure file share on an on-premises server (or on an Azure VM) with Azure File Sync, and accessing the file share's data from the on-premises server with your protocol of choice (SMB, NFS, FTPS, etc.) for your use case. This access pattern is handy because it combines the best of both on-premises performance and cloud scale and serverless attachable services, such as Azure Backup.
+- Accessing the share directly via the SMB or FileREST protocols. This access pattern is primarily employed to eliminate as many on-premises servers as possible.
+- Creating a cache of the Azure file share on an on-premises server (or Azure VM) with Azure File Sync, and accessing the file share's data from the on-premises server with your protocol of choice (SMB, NFS, FTPS, etc.) for your use case. This access pattern is handy because it combines the best of both on-premises performance and cloud scale and serverless attachable services, such as Azure Backup.
-This article focuses on how to configure networking for when your use case calls for using Azure File Sync to cache files on-premises rather than directly mounting the Azure file share over SMB. For more information about networking considerations for an Azure Files deployment, see [Azure Files networking considerations](../files/storage-files-networking-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+This article focuses on how to configure networking when your use case calls for using Azure File Sync to cache files on-premises rather than directly mounting the Azure file share over SMB. For more information about networking considerations for an Azure Files deployment, see [Azure Files networking considerations](../files/storage-files-networking-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
-Networking configuration for Azure File Sync spans two different Azure objects: a Storage Sync Service and an Azure storage account. A storage account is a management construct that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues. A Storage Sync Service is a management construct that represents registered servers, which are Windows file servers with an established trust relationship with Azure File Sync, and sync groups, which define the topology of the sync relationship.
+Networking configuration for Azure File Sync spans two different Azure objects: a Storage Sync Service and an Azure storage account. A storage account is a management construct that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues. A Storage Sync Service is a management construct that represents registered servers, which are Windows file servers with an established trust relationship with Azure File Sync, and sync groups, which define the topology of the sync relationship.
> [!Important]
-> Azure File Sync does not support internet routing. The default network routing option, Microsoft routing, is supported by Azure File Sync.
+> Azure File Sync doesn't support internet routing. The default network routing option, Microsoft routing, is supported by Azure File Sync.
## Connecting Windows file server to Azure with Azure File Sync To set up and use Azure Files and Azure File Sync with an on-premises Windows file server, no special networking to Azure is required beyond a basic internet connection. To deploy Azure File Sync, you install the Azure File Sync agent on the Windows file server you would like to sync with Azure. The Azure File Sync agent achieves synchronization with an Azure file share via two channels: -- The FileREST protocol, which is an HTTPS-based protocol for accessing your Azure file share. Because the FileREST protocol is uses standard HTTPS for data transfer, only port 443 must be accessible outbound. Azure File Sync does not use the SMB protocol to transfer data from your on-premises Windows Servers to your Azure file share.-- The Azure File Sync sync protocol, which is an HTTPS-based protocol for exchanging synchronization knowledge, i.e. the version information about the files and folders in your environment, between endpoints in your environment. This protocol is also used to exchange metadata about the files and folders in your environment, such as timestamps and access control lists (ACLs).
+- The FileREST protocol, which is an HTTPS-based protocol used for accessing your Azure file share. Because the FileREST protocol uses standard HTTPS for data transfer, only port 443 must be accessible outbound. Azure File Sync does not use the SMB protocol to transfer data between your on-premises Windows Servers and your Azure file share.
+- The Azure File Sync sync protocol, which is an HTTPS-based protocol used for exchanging synchronization knowledge, i.e. the version information about the files and folders between endpoints in your environment. This protocol is also used to exchange metadata about the files and folders in your environment, such as timestamps and access control lists (ACLs).
-Because Azure Files offers direct SMB protocol access on Azure file shares, customers often wonder if they need to configure special networking to mount the Azure file shares with SMB for the Azure File Sync agent to access. This is not only not required, but is also discouraged, except for administrator scenarios, due to the lack of quick change detection on changes made directly to the Azure file share (changes may not be discovered for more than 24 hours depending on the size and number of items in the Azure file share). If you desire to use the Azure file share directly, i.e. not use Azure File Sync to cache on-premises, you can learn more about the networking considerations for direct access by consulting [Azure Files networking overview](../files/storage-files-networking-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+Because Azure Files offers direct SMB protocol access on Azure file shares, customers often wonder if they need to configure special networking to mount the Azure file shares using SMB for the Azure File Sync agent to access. This is not required and is discouraged except in administrator scenarios, due to the lack of quick change detection on changes made directly to the Azure file share (changes may not be discovered for more than 24 hours depending on the size and number of items in the Azure file share). If you want to use the Azure file share directly, i.e. not use Azure File Sync to cache on-premises, see [Azure Files networking overview](../files/storage-files-networking-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
Although Azure File Sync does not require any special networking configuration, some customers may wish to configure advanced networking settings to enable the following scenarios: -- Interoperating with your organization's proxy server configuration.
+- Interoperate with your organization's proxy server configuration.
- Open your organization's on-premises firewall to the Azure Files and Azure File Sync services.-- Tunnel Azure Files and Azure File Sync over ExpressRoute or a VPN connection.
+- Tunnel Azure Files and Azure File Sync traffic over ExpressRoute or a VPN connection.
### Configuring proxy servers
-Many organizations use a proxy server as an intermediary between resources inside their on-premises network and resources outside their network, such as in Azure. Proxy servers are useful for many applications such as network isolation and security, and monitoring and logging. Azure File Sync can interoperate fully with a proxy server, however you must manually configure the proxy endpoint settings for your environment with Azure File Sync. This must be done via PowerShell using the Azure File Sync server cmdlets.
+Many organizations use a proxy server as an intermediary between resources inside their on-premises network and resources outside their network, such as in Azure. Proxy servers are useful for many applications such as network isolation and security, and monitoring and logging. Azure File Sync can interoperate fully with a proxy server, however you must manually configure the proxy endpoint settings for your environment with Azure File Sync. This must be done via PowerShell using the Azure File Sync server cmdlet `Set-StorageSyncProxyConfiguration`.
For more information on how to configure Azure File Sync with a proxy server, see [Configuring Azure File Sync with a proxy server](file-sync-firewall-and-proxy.md). ### Configuring firewalls and service tags
-You may isolate your file servers from most internet location for security purposes. To use Azure File Sync in your environment, you need to adjust your firewall to open it up to select Azure services. You can do this by retrieving the IP address ranges for the services you required through a mechanism called [service tags](../../virtual-network/service-tags-overview.md).
+Many organizations isolate their file servers from most internet locations for security purposes. To use Azure File Sync in such an environment, you need to adjust your firewall to open it up to select Azure services. You can do this by retrieving the IP address ranges for the services you required through a mechanism called [service tags](../../virtual-network/service-tags-overview.md).
Azure File Sync requires the IP address ranges for the following services, as identified by their service tags:
Azure File Sync requires the IP address ranges for the following services, as id
| Azure Resource Manager | The Azure Resource Manager is the management interface for Azure. All management calls, including Azure File Sync server registration and ongoing sync server tasks, are made through the Azure Resource Manager. | `AzureResourceManager` | | Azure Active Directory | Azure Active Directory, or Azure AD, contains the user principals required to authorize server registration against a Storage Sync Service, and the service principals required for Azure File Sync to be authorized to access your cloud resources. | `AzureActiveDirectory` |
-If you are using Azure File Sync within Azure, even if its a different region, you can use the name of the service tag directly in your network security group to allow traffic to that service. To learn more about how to do this, see [Network security groups](../../virtual-network/network-security-groups-overview.md).
+If you're using Azure File Sync within Azure, even if it's in a different region, you can use the name of the service tag directly in your network security group to allow traffic to that service. To learn more, see [Network security groups](../../virtual-network/network-security-groups-overview.md).
-If you are using Azure File Sync on-premises, you can use the service tag API to get specific IP address ranges for your firewall's allow list. There are two methods for getting this information:
+If you're using Azure File Sync on-premises, you can use the service tag API to get specific IP address ranges for your firewall's allowlist. There are two methods for getting this information:
- The current list of IP address ranges for all Azure services supporting service tags are published weekly on the Microsoft Download Center in the form of a JSON document. Each Azure cloud has its own JSON document with the IP address ranges relevant for that cloud: - [Azure Public](https://www.microsoft.com/download/details.aspx?id=56519)
If you are using Azure File Sync on-premises, you can use the service tag API to
- [Azure PowerShell](/powershell/module/az.network/Get-AzNetworkServiceTag) - [Azure CLI](/cli/azure/network#az-network-list-service-tags)
-To learn more about how to use the service tag API to retrieve the addresses of your services, see [Allow list for Azure File Sync IP addresses](file-sync-firewall-and-proxy.md#allow-list-for-azure-file-sync-ip-addresses).
+To learn more about how to use the service tag API to retrieve the addresses of your services, see [Allowlist for Azure File Sync IP addresses](file-sync-firewall-and-proxy.md#allow-list-for-azure-file-sync-ip-addresses).
### Tunneling traffic over a virtual private network or ExpressRoute
-Some organizations require communication with Azure to go over a network tunnel, such as a virtual private private network (VPN) or ExpressRoute, for an additional layer of security or to ensure communication with Azure follows a deterministic route.
+Some organizations require communication with Azure to go over a network tunnel, such as a virtual private network (VPN) or ExpressRoute, for an additional layer of security or to ensure communication with Azure follows a deterministic route.
-When you establish a network tunnel between your on-premises network and Azure, you are peering your on-premises network with one or more virtual networks in Azure. A [virtual network](../../virtual-network/virtual-networks-overview.md), or VNet, is similar to a traditional network that you'd operate on-premises. Like an Azure storage account or an Azure VM, a VNet is an Azure resource that is deployed in a resource group.
+When you establish a network tunnel between your on-premises network and Azure, you are peering your on-premises network with one or more virtual networks in Azure. A [virtual network](../../virtual-network/virtual-networks-overview.md), or VNet, is similar to a traditional network that you'd operate on-premises. Like an Azure storage account or an Azure VM, a VNet is an Azure resource that is deployed in a resource group.
-Azure Files and File Sync support the following mechanisms to tunnel traffic between your on-premises servers and Azure:
+Azure Files and Azure File Sync support the following mechanisms to tunnel traffic between your on-premises servers and Azure:
- [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md): A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an alternate location (such as on-premises) over the internet. An Azure VPN Gateway is an Azure resource that can be deployed in a resource group along side of a storage account or other Azure resources. Because Azure File Sync is meant to be used with an on-premises Windows file server, you would normally use a [Site-to-Site (S2S) VPN](../../vpn-gateway/design.md#s2smulti), although it is technically possible to use a [Point-to-Site (P2S) VPN](../../vpn-gateway/point-to-site-about.md). Site-to-Site (S2S) VPN connections connect your Azure virtual network and your organization's on-premises network. A S2S VPN connection enables you to configure a VPN connection once, for a VPN server or device hosted on your organization's network, rather than doing for every client device that needs to access your Azure file share. To simplify the deployment of a S2S VPN connection, see [Configure a Site-to-Site (S2S) VPN for use with Azure Files](../files/storage-files-configure-s2s-vpn.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json). -- [ExpressRoute](../../expressroute/expressroute-introduction.md), which enables you to create a defined route between Azure and your on-premises network that doesn't traverse the internet. Because ExpressRoute provides a dedicated path between your on-premises datacenter and Azure, ExpressRoute may be useful when network performance is a consideration. ExpressRoute is also a good option when your organization's policy or regulatory requirements require a deterministic path to your resources in the cloud.
+- [ExpressRoute](../../expressroute/expressroute-introduction.md), which enables you to create a defined route (private connection) between Azure and your on-premises network that doesn't traverse the internet. Because ExpressRoute provides a dedicated path between your on-premises datacenter and Azure, ExpressRoute may be useful when network performance is a key consideration. ExpressRoute is also a good option when your organization's policy or regulatory requirements require a deterministic path to your resources in the cloud.
### Private endpoints
-In addition to the default public endpoints Azure Files and File Sync provide through the storage account and Storage Sync Service, Azure Files and File Sync provides the option to have one or more private endpoints per resource. When you create a private endpoint for an Azure resource, it gets a private IP address from within the address space of your virtual network, much like how your on-premises Windows file server has an IP address within the dedicated address space of your on-premises network.
+In addition to the default public endpoints Azure Files and Azure File Sync provide through the storage account and Storage Sync Service, they provide the option to have one or more private endpoints per resource to privately and securely connect to Azure file shares from on-premises using VPN or ExpressRoute and from within an Azure VNet. When you create a private endpoint for an Azure resource, it gets a private IP address from within the address space of your virtual network, much like how your on-premises Windows file server has an IP address within the dedicated address space of your on-premises network.
> [!Important] > In order to use private endpoints on the Storage Sync Service resource, you must use Azure File Sync agent version 10.1 or greater. Agent versions prior to 10.1 do not support private endpoints on the Storage Sync Service. All prior agent versions support private endpoints on the storage account resource.
In addition to the default public endpoints Azure Files and File Sync provide th
An individual private endpoint is associated with a specific Azure virtual network subnet. Storage accounts and Storage Sync Services may have private endpoints in more than one virtual network. Using private endpoints enables you to:-- Securely connect to your Azure resources from on-premises networks using a VPN or ExpressRoute connection with private-peering.-- Secure your Azure resources by disabling the the public endpoints for Azure Files and File Sync. By default, creating a private endpoint does not block connections to the public endpoint.
+- Securely connect to your Azure resources from on-premises networks using a VPN or ExpressRoute connection with private peering.
+- Secure your Azure resources by disabling the public endpoints for Azure Files and File Sync. By default, creating a private endpoint does not block connections to the public endpoint.
- Increase security for the virtual network by enabling you to block exfiltration of data from the virtual network (and peering boundaries). To create a private endpoint, see [Configuring private endpoints for Azure File Sync](file-sync-networking-endpoints.md). ### Private endpoints and DNS
-When you create a private endpoint, by default we also create a (or update an existing) private DNS zone corresponding to the `privatelink` subdomain. For public cloud regions, these DNS zones are `privatelink.file.core.windows.net` for Azure Files and `privatelink.afs.azure.net` for Azure File Sync.
+When you create a private endpoint, by default we also create (or update an existing) private DNS zone corresponding to the `privatelink` subdomain. For public cloud regions, these DNS zones are `privatelink.file.core.windows.net` for Azure Files and `privatelink.afs.azure.net` for Azure File Sync.
> [!Note]
-> This article uses the storage account DNS suffix for the Azure Public regions, `core.windows.net`. This commentary also applies to Azure Sovereign clouds such as the Azure US Government cloud and the Azure China cloud - just substitute the the appropriate suffixes for your environment.
+> This article uses the storage account DNS suffix for the Azure Public regions, `core.windows.net`. This commentary also applies to Azure Sovereign clouds such as the Azure US Government cloud and the Azure China cloud - just substitute the appropriate suffixes for your environment.
-When you create private endpoints for a storage account and a Storage Sync Service, we create A records for them in their respective private DNS zones. We also update the public DNS entry such that the regular fully qualified domain names are CNAMEs for the relevant privatelink name. This enables the fully qualified domain names to point at the private endpoint IP address(es) when the requester is inside of the virtual network and to point at the public endpoint IP address(es) when the requester is outside of the virtual network.
+When you create private endpoints for a storage account and a Storage Sync Service, we create A records for them in their respective private DNS zones. We also update the public DNS entry such that the regular fully qualified domain names are CNAMEs for the relevant `privatelink` name. This enables the fully qualified domain names to point at the private endpoint IP address(es) when the requester is inside of the virtual network and to point at the public endpoint IP address(es) when the requester is outside of the virtual network.
-For Azure Files, each private endpoint has a single fully qualified domain name, following the pattern `storageaccount.privatelink.file.core.windows.net`, mapped to one private IP address for the private endpoint. For Azure File Sync, each private endpoint has four fully qualified domain names, for the four different endpoints that Azure File Sync exposes: management, sync (primary), sync (secondary), and monitoring. The fully qualified domain names for these endpoints will normally follow the the name of the Storage Sync Service unless the name contains non-ASCII characters. For example, if your Storage Sync Service name is `mysyncservice` in the West US 2 region, the equivalent endpoints would be `mysyncservicemanagement.westus2.afs.azure.net`, `mysyncservicesyncp.westus2.afs.azure.net`, `mysyncservicesyncs.westus2.afs.azure.net`, and `mysyncservicemonitoring.westus2.afs.azure.net`. Each private endpoint for a Storage Sync Service will contain 4 distinct IP addresses.
+For Azure Files, each private endpoint has a single fully qualified domain name, following the pattern `storageaccount.privatelink.file.core.windows.net`, mapped to one private IP address for the private endpoint. For Azure File Sync, each private endpoint has four fully qualified domain names, for the four different endpoints that Azure File Sync exposes: management, sync (primary), sync (secondary), and monitoring. The fully qualified domain names for these endpoints will normally follow the name of the Storage Sync Service unless the name contains non-ASCII characters. For example, if your Storage Sync Service name is `mysyncservice` in the West US 2 region, the equivalent endpoints would be `mysyncservicemanagement.westus2.afs.azure.net`, `mysyncservicesyncp.westus2.afs.azure.net`, `mysyncservicesyncs.westus2.afs.azure.net`, and `mysyncservicemonitoring.westus2.afs.azure.net`. Each private endpoint for a Storage Sync Service will contain 4 distinct IP addresses.
Since your Azure private DNS zone is connected to the virtual network containing the private endpoint, you can observe the DNS configuration by calling the `Resolve-DnsName` cmdlet from PowerShell in an Azure VM (alternatively, `nslookup` on Windows and Linux):
IP4Address : 52.239.194.40
This reflects the fact that the Azure Files and Azure File Sync can expose both their public endpoints and one or more private endpoints per resource. To ensure that the fully qualified domain names for your resources resolve to the private endpoints private IP addresses, you must change the configuration on your on-premises DNS servers. This can be accomplished in several ways: -- Modifying the hosts file on your clients to make the fully qualified domain names for your storage accounts and Storage Sync Services resolve to the desired private IP addresses. This is strongly discouraged for production environments, since you will need make these changes to every client that needs to access your private endpoints. Changes to your private endpoints/resources (deletions, modifications, etc.) will not be automatically handled.
+- Modifying the hosts file on your clients to make the fully qualified domain names for your storage accounts and Storage Sync Services resolve to the desired private IP addresses. This is strongly discouraged for production environments, since you will need to make these changes to every client that needs to access your private endpoints. Changes to your private endpoints/resources (deletions, modifications, etc.) will not be automatically handled.
- Creating DNS zones on your on-premises servers for `privatelink.file.core.windows.net` and `privatelink.afs.azure.net` with A records for your Azure resources. This has the advantage that clients in your on-premises environment will be able to automatically resolve Azure resources without needing to configure each client, however this solution is similarly brittle to modifying the hosts file because changes are not reflected. Although this solution is brittle, it may be the best choice for some environments.-- Forward the `core.windows.net` and `afs.azure.net` zones from your on-premises DNS servers to your Azure private DNS zone. The Azure private DNS host can be reached through a special IP address (`168.63.129.16`) that is only accessible inside virtual networks that are linked to the Azure private DNS zone. To workaround this limitation, you can run additional DNS servers within your virtual network that will forward `core.windows.net` and `afs.azure.net` on to the equivalent Azure private DNS zones. To simplify this set up, we have provided PowerShell cmdlets that will auto-deploy DNS servers in your Azure virtual network and configure them as desired. To learn how to set up DNS forwarding, see [Configuring DNS with Azure Files](../files/storage-files-networking-dns.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
+- Forward the `core.windows.net` and `afs.azure.net` zones from your on-premises DNS servers to your Azure private DNS zone. The Azure private DNS host can be reached through a special IP address (`168.63.129.16`) that is only accessible inside virtual networks that are linked to the Azure private DNS zone. To work around this limitation, you can run additional DNS servers within your virtual network that will forward `core.windows.net` and `afs.azure.net` to the equivalent Azure private DNS zones. To simplify this set up, we have provided PowerShell cmdlets that will auto-deploy DNS servers in your Azure virtual network and configure them as desired. To learn how to set up DNS forwarding, see [Configuring DNS with Azure Files](../files/storage-files-networking-dns.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
-## Encryption in-transit
-Connections made from the Azure File Sync agent to your Azure file share or Storage Sync Service are always encrypted. Although Azure storage accounts have a setting to disable requiring encryption in transit for communications to Azure Files (and the other Azure storage services that are managed out of the storage account), disabling this setting will not affect Azure File Sync's encryption when communicating with the Azure Files. By default, all Azure storage accounts have encryption in transit enabled.
+## Encryption in transit
+Connections made from the Azure File Sync agent to your Azure file share or Storage Sync Service are always encrypted. Although Azure storage accounts have a setting to disable requiring encryption in transit for communications to Azure Files (and the other Azure storage services that are managed out of the storage account), disabling this setting will not affect Azure File Sync's encryption when communicating with the Azure Files. By default, all Azure storage accounts have encryption in transit enabled.
For more information about encryption in transit, see [requiring secure transfer in Azure storage](../common/storage-require-secure-transfer.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json). ## See also - [Planning for an Azure File Sync deployment](file-sync-planning.md)-- [Deploy Azure File Sync](file-sync-deployment-guide.md)
+- [Deploy Azure File Sync](file-sync-deployment-guide.md)
synapse-analytics Design Elt Data Loading https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/design-elt-data-loading.md
Use the following SQL data type mapping when loading Parquet files:
| INT64 | TIME (MILLIS) | time | | INT64 | TIMESTAMP (MILLIS) | datetime2 | | [Complex type](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md) | LIST | varchar(max) |
-| [Complex type](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md | MAP | varchar(max) |
+| [Complex type](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md) | MAP | varchar(max) |
>[!IMPORTANT] >- SQL dedicated pools do not currently support Parquet data types with MICROS and NANOS precision.
->- You may experience the following error if types are mismatched between Parquet and SQL or if you have unsupported Parquet data types:
->**"HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: ClassCastException: ..."**
+>- You may experience the following error if types are mismatched between Parquet and SQL or if you have unsupported Parquet data types: `HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: ClassCastException:...`
>- Loading a value outside the range of 0-127 into a tinyint column for Parquet and ORC file format is not supported. For an example of creating external objects, see [Create external tables](../sql/develop-tables-external-tables.md?tabs=sql-pool).
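As a rough illustration of the mapping table above, the following sketch shows how a Parquet-backed external table might declare its columns so the SQL types line up with the Parquet types. The object names (`ParquetFormat`, `dbo.SalesExternal`, `MyAzureStorage`) and the column list are illustrative assumptions, and the external data source is assumed to already exist.

```sql
-- A minimal sketch; all object names and columns are illustrative assumptions.
CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

CREATE EXTERNAL TABLE dbo.SalesExternal
(
    SaleKey        int            NOT NULL,  -- Parquet INT32
    SaleTimestamp  datetime2      NOT NULL,  -- Parquet INT64 TIMESTAMP (MILLIS)
    Amount         decimal(18, 2) NOT NULL,  -- Parquet DECIMAL
    Notes          varchar(4000)             -- Parquet UTF8 (string)
)
WITH
(
    LOCATION = '/sales/2022/',
    DATA_SOURCE = MyAzureStorage,   -- assumed to have been created beforehand
    FILE_FORMAT = ParquetFormat
);
```

Declaring the columns with the mapped SQL types up front helps avoid the `ClassCastException` error described in the note above.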
synapse-analytics Sql Data Warehouse Develop Ctas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-ctas.md
Title: CREATE TABLE AS SELECT (CTAS)
-description: Explanation and examples of the CREATE TABLE AS SELECT (CTAS) statement in Synapse SQL for developing solutions.
+description: Explanation and examples of the CREATE TABLE AS SELECT (CTAS) statement in dedicated SQL pool (formerly SQL DW) for developing solutions.
Previously updated : 03/26/2019 Last updated : 06/09/2022
# CREATE TABLE AS SELECT (CTAS)
-This article explains the CREATE TABLE AS SELECT (CTAS) T-SQL statement in Synapse SQL for developing solutions. The article also provides code examples.
+This article explains the CREATE TABLE AS SELECT (CTAS) T-SQL statement in dedicated SQL pool (formerly SQL DW) for developing solutions. The article also provides code examples.
## CREATE TABLE AS SELECT
FROM [dbo].[FactInternetSales];
Perhaps one of the most common uses of CTAS is creating a copy of a table in order to change the DDL. Let's say you originally created your table as `ROUND_ROBIN`, and now want to change it to a table distributed on a column. CTAS is how you would change the distribution column. You can also use CTAS to change partitioning, indexing, or column types.
-Let's say you created this table by using the default distribution type of `ROUND_ROBIN`, not specifying a distribution column in the `CREATE TABLE`.
+Let's say you created this table by specifying HEAP and using the default distribution type of `ROUND_ROBIN`.
```sql CREATE TABLE FactInternetSales
CREATE TABLE FactInternetSales
TaxAmt money NOT NULL, Freight money NOT NULL, CarrierTrackingNumber nvarchar(25),
- CustomerPONumber nvarchar(25));
+ CustomerPONumber nvarchar(25)
+)
+WITH(
+ HEAP,
+ DISTRIBUTION = ROUND_ROBIN
+);
``` Now you want to create a new copy of this table, with a `Clustered Columnstore Index`, so you can take advantage of the performance of Clustered Columnstore tables. You also want to distribute this table on `ProductKey`, because you're anticipating joins on this column and want to avoid data movement during joins on `ProductKey`. Lastly, you also want to add partitioning on `OrderDateKey`, so you can quickly delete old data by dropping old partitions. Here is the CTAS statement, which copies your old table into a new table.
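A minimal sketch of what that CTAS statement can look like (the hash distribution column, index, and partition boundaries below are illustrative assumptions rather than required values):

```sql
CREATE TABLE FactInternetSales_new
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH(ProductKey),
    PARTITION
    (
        OrderDateKey RANGE RIGHT FOR VALUES (20030101, 20040101, 20050101)
    )
)
AS
SELECT * FROM FactInternetSales;
```

After the copy completes, the usual pattern is to rename the original table out of the way, rename the new table to the original name, and drop the old table, as shown next.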
RENAME OBJECT FactInternetSales_new TO FactInternetSales;
DROP TABLE FactInternetSales_old; ```
-## Use CTAS to work around unsupported features
-
-You can also use CTAS to work around a number of the unsupported features listed below. This method can often prove helpful, because not only will your code be compliant, but it will often run faster on Synapse SQL. This performance is a result of its fully parallelized design. Scenarios include:
-
-* ANSI JOINS on UPDATEs
-* ANSI JOINs on DELETEs
-* MERGE statement
-
-> [!TIP]
-> Try to think "CTAS first." Solving a problem by using CTAS is generally a good approach, even if you're writing more data as a result.
-
-## ANSI join replacement for update statements
-
-You might find that you have a complex update. The update joins more than two tables together by using ANSI join syntax to perform the UPDATE or DELETE.
-
-Imagine you had to update this table:
-
-```sql
-CREATE TABLE [dbo].[AnnualCategorySales]
-( [EnglishProductCategoryName] NVARCHAR(50) NOT NULL
-, [CalendarYear] SMALLINT NOT NULL
-, [TotalSalesAmount] MONEY NOT NULL
-)
-WITH
-(
- DISTRIBUTION = ROUND_ROBIN
-);
-```
-
-The original query might have looked something like this example:
-
-```sql
-UPDATE acs
-SET [TotalSalesAmount] = [fis].[TotalSalesAmount]
-FROM [dbo].[AnnualCategorySales] AS acs
-JOIN (
- SELECT [EnglishProductCategoryName]
- , [CalendarYear]
- , SUM([SalesAmount]) AS [TotalSalesAmount]
- FROM [dbo].[FactInternetSales] AS s
- JOIN [dbo].[DimDate] AS d ON s.[OrderDateKey] = d.[DateKey]
- JOIN [dbo].[DimProduct] AS p ON s.[ProductKey] = p.[ProductKey]
- JOIN [dbo].[DimProductSubCategory] AS u ON p.[ProductSubcategoryKey] = u.[ProductSubcategoryKey]
- JOIN [dbo].[DimProductCategory] AS c ON u.[ProductCategoryKey] = c.[ProductCategoryKey]
- WHERE [CalendarYear] = 2004
- GROUP BY
- [EnglishProductCategoryName]
- , [CalendarYear]
- ) AS fis
-ON [acs].[EnglishProductCategoryName] = [fis].[EnglishProductCategoryName]
-AND [acs].[CalendarYear] = [fis].[CalendarYear];
-```
-
-Synapse SQL doesn't support ANSI joins in the `FROM` clause of an `UPDATE` statement, so you can't use the previous example without modifying it.
-
-You can use a combination of a CTAS and an implicit join to replace the previous example:
-
-```sql
Create an interim table
-CREATE TABLE CTAS_acs
-WITH (DISTRIBUTION = ROUND_ROBIN)
-AS
-SELECT ISNULL(CAST([EnglishProductCategoryName] AS NVARCHAR(50)),0) AS [EnglishProductCategoryName]
-, ISNULL(CAST([CalendarYear] AS SMALLINT),0) AS [CalendarYear]
-, ISNULL(CAST(SUM([SalesAmount]) AS MONEY),0) AS [TotalSalesAmount]
-FROM [dbo].[FactInternetSales] AS s
-JOIN [dbo].[DimDate] AS d ON s.[OrderDateKey] = d.[DateKey]
-JOIN [dbo].[DimProduct] AS p ON s.[ProductKey] = p.[ProductKey]
-JOIN [dbo].[DimProductSubCategory] AS u ON p.[ProductSubcategoryKey] = u.[ProductSubcategoryKey]
-JOIN [dbo].[DimProductCategory] AS c ON u.[ProductCategoryKey] = c.[ProductCategoryKey]
-WHERE [CalendarYear] = 2004
-GROUP BY [EnglishProductCategoryName]
-, [CalendarYear];
- Use an implicit join to perform the update
-UPDATE AnnualCategorySales
-SET AnnualCategorySales.TotalSalesAmount = CTAS_ACS.TotalSalesAmount
-FROM CTAS_acs
-WHERE CTAS_acs.[EnglishProductCategoryName] = AnnualCategorySales.[EnglishProductCategoryName]
-AND CTAS_acs.[CalendarYear] = AnnualCategorySales.[CalendarYear] ;
-Drop the interim table
-DROP TABLE CTAS_acs;
-```
-
-## ANSI join replacement for MERGE
-
-In Azure Synapse Analytics, [MERGE](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true) (preview) with NOT MATCHED BY TARGET requires the target to be a HASH distributed table. Users can use the ANSI JOIN with [UPDATE](/sql/t-sql/queries/update-transact-sql?view=azure-sqldw-latest&preserve-view=true) or [DELETE](/sql/t-sql/statements/delete-transact-sql?view=azure-sqldw-latest&preserve-view=true) as a workaround to modify target table data based on the result from joining with another table. Here is an example.
-
-```sql
-CREATE TABLE dbo.Table1
- (ColA INT NOT NULL, ColB DECIMAL(10,3) NOT NULL);
-GO
-CREATE TABLE dbo.Table2
- (ColA INT NOT NULL, ColB DECIMAL(10,3) NOT NULL);
-GO
-INSERT INTO dbo.Table1 VALUES(1, 10.0);
-INSERT INTO dbo.Table2 VALUES(1, 0.0);
-GO
-UPDATE dbo.Table2
-SET dbo.Table2.ColB = dbo.Table2.ColB + dbo.Table1.ColB
-FROM dbo.Table2
- INNER JOIN dbo.Table1
- ON (dbo.Table2.ColA = dbo.Table1.ColA);
-GO
-SELECT ColA, ColB
-FROM dbo.Table2;
-
-```
- ## Explicitly state data type and nullability of output When migrating code, you might find you run across this type of coding pattern:
CTAS is one of the most important statements in Synapse SQL. Make sure you thoro
## Next steps
-For more development tips, see the [development overview](sql-data-warehouse-overview-develop.md).
+For more development tips, see the [development overview](sql-data-warehouse-overview-develop.md).
synapse-analytics Sql Data Warehouse Monitor Workload Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md
Title: Monitor workload - Azure portal
+ Title: Monitor workload - Azure portal
description: Monitor Synapse SQL using the Azure portal ---- Previously updated : 02/04/2020 Last updated : 09/13/2022+++ # Monitor workload - Azure portal
This article describes how to use the Azure portal to monitor your workload. Thi
## Create a Log Analytics workspace
-Navigate to the browse blade for Log Analytics workspaces and create a workspace
+In the Azure portal, go to the **Log Analytics workspaces** page (or find it through the Azure services search box) and create a new Log Analytics workspace.
-![Log Analytics workspaces](./media/sql-data-warehouse-monitor-workload-portal/log_analytics_workspaces.png)
-![Screenshot shows the Log Analytics workspaces where you can select Add.](./media/sql-data-warehouse-monitor-workload-portal/add_analytics_workspace.png)
-![Screenshot shows the Log Analytics workspace where you can enter values.](./media/sql-data-warehouse-monitor-workload-portal/add_analytics_workspace_2.png)
-
-For more details on workspaces, visit the following [documentation](../../azure-monitor/logs/quick-create-workspace.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.jsond#create-a-workspace).
+For more information on workspaces, see [Create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
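If you prefer scripting to the portal, the workspace can also be created with the Azure CLI. This is a sketch only; the resource group, workspace name, and region are placeholders:

```azurecli-interactive
az monitor log-analytics workspace create \
    --resource-group myResourceGroup \
    --workspace-name myLogAnalyticsWorkspace \
    --location eastus
```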
## Turn on Resource logs
Configure diagnostic settings to emit logs from your SQL pool. Logs consist of t
- [sys.dm_pdw_waits](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-waits-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) - [sys.dm_pdw_sql_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-sql-requests-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
-![Enabling resource logs](./media/sql-data-warehouse-monitor-workload-portal/enable_diagnostic_logs.png)
+
+Logs can be emitted to Azure Storage, Stream Analytics, or Log Analytics. For this tutorial, select Log Analytics. Select all desired categories and metrics and choose **Send to Log Analytics workspace**.
-Logs can be emitted to Azure Storage, Stream Analytics, or Log Analytics. For this tutorial, select Log Analytics.
-![Specify logs](./media/sql-data-warehouse-monitor-workload-portal/specify_logs.png)
+Select **Save** to create the new diagnostic setting. It may take a few minutes for data to appear in queries.
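You can also script the same diagnostic setting with the Azure CLI. The sketch below uses placeholder resource IDs, and the log categories shown (`ExecRequests`, `Waits`) are assumptions; use the categories listed for your dedicated SQL pool in the portal:

```azurecli-interactive
az monitor diagnostic-settings create \
    --name SendToLogAnalytics \
    --resource "<resource ID of your dedicated SQL pool>" \
    --workspace "<resource ID of your Log Analytics workspace>" \
    --logs '[{"category":"ExecRequests","enabled":true},{"category":"Waits","enabled":true}]'
```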
## Run queries against Log Analytics
-Navigate to your Log Analytics workspace where you can do the following:
+Navigate to your Log Analytics workspace where you can:
- Analyze logs using log queries - Save queries for reuse - Create log alerts - Pin query results to a dashboard
-For details on the capabilities of log queries, visit the following [documentation](/azure/data-explorer/kusto/query/?bc=%2fazure%2fsynapse-analytics%2fsql-data-warehouse%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2fsql-data-warehouse%2ftoc.json).
+For details on the capabilities of log queries using Kusto, see [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/).
-![Log Analytics workspace editor](./media/sql-data-warehouse-monitor-workload-portal/log_analytics_workspace_editor.png)
-![Log Analytics workspace queries](./media/sql-data-warehouse-monitor-workload-portal/log_analytics_workspace_queries.png)
## Sample log queries
+Set the [scope of your queries](../../azure-monitor/logs/scope.md) to the Log Analytics workspace resource.
+ ```Kusto //List all queries AzureDiagnostics
AzureDiagnostics
//Count of all queued queries AzureDiagnostics | where Category contains "waits"
-| where Type_s == "UserConcurrencyResourceType"
+| where Type == "UserConcurrencyResourceType"
| summarize totalQueuedQueries = dcount(RequestId_s) ``` ## Next steps
-Now that you have set up and configured Azure monitor logs, [customize Azure dashboards](../../azure-portal/azure-portal-dashboards.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) to share across your team.
+- Now that you've set up and configured Azure Monitor logs, [customize Azure dashboards](../../azure-portal/azure-portal-dashboards.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) to share across your team.
virtual-desktop Screen Capture Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/screen-capture-protection.md
description: How to set up screen capture protection for Azure Virtual Desktop. Previously updated : 08/30/2021 Last updated : 09/14/2022
The screen capture protection feature prevents sensitive information from being
## Prerequisites The screen capture protection feature is configured on the session host level and enforced on the client. Only clients that support this feature can connect to the remote session.
-Following clients currently support screen capture protection:
-* Windows Desktop client supports screen capture protection for full desktops only.
-* macOS client version 10.7.0 or later supports screen capture protection for both RemoteApp and full desktops.
+The following clients currently support screen capture protection:
-Suppose the user attempts to use an unsupported client to connect to the protected session host. In that case, the connection will fail with error 0x1151.
+- The Windows Desktop client supports screen capture protection for full desktops only.
+- The macOS client (version 10.7.0 or later) supports screen capture protection for both RemoteApps and full desktops.
+
+If a user tries to connect to a capture-protected session host with an unsupported client, the connection fails with error 0x1151.
## Configure screen capture protection
-1. To configure screen capture protection, you need to install administrative templates that add rules and settings for Azure Virtual Desktop.
-2. Download the [Azure Virtual Desktop policy templates file](https://aka.ms/avdgpo) (AVDGPTemplate.cab) and extract the contents of the cab file and zip archive.
-3. Copy the **terminalserver-avd.admx** file to **%windir%\PolicyDefinitions** folder.
-4. Copy the **en-us\terminalserver-avd.adml** file to **%windir%\PolicyDefinitions\en-us** folder.
-5. To confirm the files copied correctly, open the Group Policy Editor and navigate to **Computer Configuration** -> **Administrative Templates** -> **Windows Components** -> **Remote Desktop Services** -> **Remote Desktop Session Host** -> **Azure Virtual Desktop**
-6. You should see one or more Azure Virtual Desktop policies, as shown below.
+To configure screen capture protection:
+
+1. Download the [Azure Virtual Desktop policy templates file](https://aka.ms/avdgpo) (AVDGPTemplate.cab) and extract the contents of the cab file and zip archive.
+2. Copy the **terminalserver-avd.admx** file to the **%windir%\PolicyDefinitions** folder.
+3. Copy the **en-us\terminalserver-avd.adml** file to the **%windir%\PolicyDefinitions\en-us** folder. (A scripted sketch of these two copy steps follows this list.)
+4. To confirm the files copied correctly, open the Group Policy Editor and go to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**. You should see one or more Azure Virtual Desktop policies, as shown in the following screenshot.
:::image type="content" source="media/azure-virtual-desktop-gpo.png" alt-text="Screenshot of the group policy editor" lightbox="media/azure-virtual-desktop-gpo.png"::: > [!TIP] > You can also install administrative templates to the group policy Central Store in your Active Directory domain.
- > For more information about Central Store for Group Policy Administrative Templates, see [How to create and manage the Central Store for Group Policy Administrative Templates in Windows](/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
+ > For more information, see [How to create and manage the Central Store for Group Policy Administrative Templates in Windows](/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
-7. Open the **"Enable screen capture protection"** policy and set it to **"Enabled"**.
+5. Finally, open the **"Enable screen capture protection"** policy and set it to **"Enabled"**.
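If you'd rather script steps 2 and 3, a minimal PowerShell sketch (assuming the extracted template files are in the current directory) looks like this:

```powershell
# Copy the Azure Virtual Desktop administrative template files to the local policy definitions store
Copy-Item -Path .\terminalserver-avd.admx -Destination "$env:windir\PolicyDefinitions\"
Copy-Item -Path .\en-us\terminalserver-avd.adml -Destination "$env:windir\PolicyDefinitions\en-us\"
```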
## Limitations and known issues
-* This feature protects the Remote Desktop window from being captured through a specific set of public operating system features and APIs. However, there's no guarantee that this feature will strictly protect content, for example, where someone takes photography of the screen.
-* Customers should use the feature together with disabling clipboard, drive, and printer redirection. Disabling redirection will help to prevent the user from copying the captured screen content from the remote session.
-* Users can't share the Remote Desktop window using local collaboration software, such as Microsoft Teams, when the feature is enabled. If Microsoft Teams is used, both the local Teams app and Teams running with media optimizations can't share the protected content.
+- This feature protects the Remote Desktop window from being captured through a specific set of public operating system features and application programming interfaces (APIs). However, there's no guarantee that this feature will strictly protect content, for example, if a user takes a photo of the screen with a physical camera.
+- For maximum security, customers should use this feature while also disabling clipboard, drive, and printer redirection. Disabling redirection prevents users from copying any captured screen content from the remote session.
+- Users can't share their Remote Desktop window using local collaboration software, such as Microsoft Teams, while this feature is enabled. When they use Microsoft Teams, neither the local Teams app nor Teams with media optimization can share protected content.
## Next steps
-* To learn about Azure Virtual Desktop security best practices, see [Azure Virtual Desktop security best practices](security-guide.md).
+Learn how to secure your Azure Virtual Desktop deployment in [Security best practices](security-guide.md).
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
This update was released in August 2022 and includes the following changes:
This update was released in August 2022 and includes the following changes: -- Agent first-party extensions architecture completed-- Fixed Teams error related to Azure Virtual Desktop telemetry-- RDAgentBootloader - revision update to 1.0.4.0-- SessionHostHealthCheckReport is now centralized in a NuGet package to be shared with first-party Teams-- Fixes to AppAttach
+- Agent first-party extensions architecture completed.
+- Fixed Teams error related to Azure Virtual Desktop telemetry.
+- RDAgentBootloader - revision update to 1.0.4.0.
+- SessionHostHealthCheckReport is now centralized in a NuGet package to be shared with first-party Teams.
+- Fixes to AppAttach.
+
+## Version 1.0.4739.1000
+
+This update was released in July 2022 and includes the following changes:
+
+- Report session load to Log Analytics for admins to get information on when MaxSessionLimit is reached.
+- Adding AADTenant ID claim to the registration token.
+- Report closing errors to diagnostics explicitly.
## Version 1.0.4574.1600
virtual-machines Disks Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-cross-tenant-customer-managed-keys.md
+
+ Title: Use a disk encryption set across Azure AD tenants (preview)
+description: Learn how to use customer-managed keys with your Azure disks in different Azure AD tenants.
+++ Last updated : 09/13/2022++++
+# Encrypt managed disks with cross-tenant customer-managed keys (preview)
+
+> [!IMPORTANT]
+> Cross-tenant encryption with customer-managed keys (CMK) is currently in public preview.
+> This preview version is provided without a service level agreement, and isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This article covers building a solution where you encrypt managed disks with customer-managed keys by using Azure Key Vaults stored in a different Azure Active Directory (Azure AD) tenant. This configuration can be ideal for several scenarios. For example, a service provider can offer bring-your-own-key (BYOK) encryption to its customers, where resources in the service provider's tenant are encrypted with keys from the customer's tenant.
+
+A disk encryption set with federated identity in a cross-tenant CMK workflow spans service provider/ISV tenant resources (disk encryption set, managed identities, and app registrations) and customer tenant resources (enterprise apps, user role assignments, and key vault). In this case, the source Azure resource is the service provider's disk encryption set.
+
+If you have any questions about cross-tenant customer-managed keys with managed disks, email <crosstenantcmkvteam@service.microsoft.com>.
+
+## Prerequisites
+- Install the latest [Azure PowerShell module](/powershell/azure/install-az-ps).
+- You must enable the preview on your subscription. Use the following command to enable the preview:
+ ```azurepowershell
+ Register-AzProviderFeature -FeatureName "EncryptionAtRestWithCrossTenantKey" -ProviderNamespace "Microsoft.Compute"
+ ```
+
+ It may take some time for the feature registration to complete. You can confirm if it has with the following command:
+
+ ```azurepowershell
+ Get-AzProviderFeature -FeatureName "EncryptionAtRestWithCrossTenantKey" -ProviderNamespace "Microsoft.Compute"
+ ```
+
+## Limitations
+
+Currently this feature is only available in the West Central US region. Managed Disks and the customer's Key Vault must be in the same Azure region, but they can be in different subscriptions. This feature doesn't support Ultra Disks or Azure Premium SSD v2 managed disks.
+++
+## Create a disk encryption set
+
+Now that you've created your Azure Key Vault and performed the required Azure AD configurations, deploy a disk encryption set configured to work across tenants and associate it with a key in the key vault. You can do this using an ARM template, REST API, Azure PowerShell, or Azure CLI.
+
+# [ARM/REST](#tab/azure-portal)
+
+Use an ARM template or REST API.
+
+### ARM
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "desname": {
+ "defaultValue": "<Enter ISV disk encryption set name>",
+ "type": "String"
+ },
+ "region": {
+ "defaultValue": "WestCentralUS",
+ "type": "String"
+ },
+ "userassignedmicmk": {
+ "defaultValue": "/subscriptions/<Enter ISV Subscription Id>/resourceGroups/<Enter ISV resource group name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<Enter ISV User Assigned Identity Name>",
+ "type": "String"
+ },
+ "cmkfederatedclientId": {
+ "defaultValue": "<Enter ISV Multi-Tenant App Id>",
+ "type": "String"
+ },
+ "keyVaultURL": {
+ "defaultValue": "<Enter Client Key URL>",
+ "type": "String"
+ },
+ "encryptionType": {
+ "defaultValue": "EncryptionAtRestWithCustomerKey",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.Compute/diskEncryptionSets",
+ "apiVersion": "2021-12-01",
+ "name": "[parameters('desname')]",
+ "location": "[parameters('region')]",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "[parameters('userassignedmicmk')]": {}
+ }
+ },
+ "properties": {
+ "activeKey": {
+ "keyUrl": "[parameters('keyVaultURL')]"
+ },
+ "federatedClientId": "[parameters('cmkfederatedclientId')]",
+ "encryptionType": "[parameters('encryptionType')]"
+ }
+ }
+ ]
+}
+```
+
+### REST API
+
+Use a bearer token as the authorization header and application/json as the content type in the request body. (One way to capture a token is from the browser's network tab: filter to management.azure while performing any ARM request in the portal.)
+
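Alternatively, the Azure CLI can issue a token. This is a sketch and assumes you're signed in to the service provider's subscription:

```azurecli-interactive
az account get-access-token --resource https://management.azure.com/ --query accessToken --output tsv
```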
+```rest
+PUT https://management.azure.com/subscriptions/<Enter ISV Subscription Id>/resourceGroups/<Enter ISV Resource Group Name>/providers/Microsoft.Compute/diskEncryptionSets/<Enter ISV Disk Encryption Set Name>?api-version=2021-12-01
+Authorization: Bearer ...
+Content-Type: application/json
+
+{
+ "name": "<Enter ISV disk encryption set name>",
+ "id": "/subscriptions/<Enter ISV Subscription Id>/resourceGroups/<Enter ISV resource group name>/providers/Microsoft.Compute/diskEncryptionSets/<Enter ISV disk encryption set name>/",
+ "type": "Microsoft.Compute/diskEncryptionSets",
+ "location": "westcentralus",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+      "/subscriptions/<Enter ISV Subscription Id>/resourceGroups/<Enter ISV resource group name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<Enter ISV User Assigned Identity Name>": {}
+ }
+ },
+ "properties": {
+ "activeKey": {
+ "keyUrl": "<Enter Client Key URL>"
+ },
+ "encryptionType": "EncryptionAtRestWithCustomerKey",
+ "federatedClientId": "<Enter ISV Multi-Tenant App Id>"
+ }
+}
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+To use Azure PowerShell, install the latest Az module or the Az.Compute module. For more information about installing Azure PowerShell, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-Az-ps).
++
+In the script below, `-FederatedClientId` should be the application ID (client ID) of the multi-tenant application. You'll also need to provide the subscription ID, resource group name, and identity name.
+
+```azurepowershell-interactive
+$userAssignedIdentities = @{"/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identityName" = @{}};
+
+$config = New-AzDiskEncryptionSetConfig `
+ -Location 'westcentralus' `
+ -KeyUrl "https://vault1.vault.azure.net:443/keys/key1/mykey" `
+ -IdentityType 'UserAssigned' `
+ -RotationToLatestKeyVersionEnabled $True `
+ -UserAssignedIdentity $userAssignedIdentities `
+    -FederatedClientId "00000000-0000-0000-0000-000000000000"
+
+$config | New-AzDiskEncryptionSet -ResourceGroupName 'rg1' -Name 'enc1'
+```
+
+# [Azure CLI](#tab/azure-cli)
++
+In the command below, `myAssignedId` should be the resource ID of the user-assigned managed identity that you created earlier, and `myFederatedClientId` should be the application ID (client ID) of the multi-tenant application.
+
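If you need to look up those values first, the following sketch shows one way to do it. The resource group, identity, and app registration names are placeholders:

```azurecli-interactive
# Resource ID of the user-assigned managed identity (value for --mi-user-assigned)
az identity show --resource-group MyResourceGroup --name MyManagedIdentity --query id --output tsv

# Application (client) ID of the multi-tenant app registration (value for --federated-client-id)
az ad app list --display-name "MyMultiTenantApp" --query "[0].appId" --output tsv
```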
+```azurecli-interactive
+az disk-encryption-set create --resource-group MyResourceGroup --name MyDiskEncryptionSet --key-url MyKey --mi-user-assigned myAssignedId --federated-client-id myFederatedClientId --location westcentralus
+```
+++
+## Next steps
+
+See also:
+
+- [Encrypt disks using customer-managed keys in Azure DevTest Labs](../devtest-labs/encrypt-disks-customer-managed-keys.md)
+- [Use the Azure portal to enable server-side encryption with customer-managed keys for managed disks](disks-enable-customer-managed-keys-portal.md)
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
Samples of the metrics specified in the `performanceCounters` section are collec
"unit": "Percent", "annotation": [ {
- "displayName" : "Aggregate CPU %idle time",
+ "displayName" : "cpu idle time",
"locale" : "en-us" } ]
In a two-vCPU VM, if one vCPU is 100 percent busy and the other is 100 percent i
Counter | `azure.vm.linux.guestmetrics` Display Name | Meaning | - | -
-`PercentIdleTime` | `cpu/usage_idle` | Percentage of time during the aggregation window that processors ran the kernel idle loop
-`PercentProcessorTime` | `cpu/usage_active` | Percentage of time running a non-idle thread
-`PercentIOWaitTime` | `cpu/usage_iowait` | Percentage of time waiting for IO operations to finish
-`PercentInterruptTime` | `cpu/usage_irq` | Percentage of time running hardware or software interrupts and DPCs (deferred procedure calls)
-`PercentUserTime` | `cpu/usage_user` | Of non-idle time during the aggregation window, the percentage of time spent in user mode at normal priority
-`PercentNiceTime` | `cpu/usage_nice` | Of non-idle time, the percentage spent at lowered (nice) priority
-`PercentPrivilegedTime` | `cpu/usage_system` | Of non-idle time, the percentage spent in privileged (kernel) mode
+`PercentIdleTime` | `cpu idle time` | Percentage of time during the aggregation window that processors ran the kernel idle loop
+`PercentProcessorTime` | `cpu percentage guest os` | Percentage of time running a non-idle thread
+`PercentIOWaitTime` | `cpu io wait time` | Percentage of time waiting for IO operations to finish
+`PercentInterruptTime` | `cpu interrupt time` | Percentage of time running hardware or software interrupts and DPCs (deferred procedure calls)
+`PercentUserTime` | `cpu user time` | Of non-idle time during the aggregation window, the percentage of time spent in user mode at normal priority
+`PercentNiceTime` | `cpu nice time` | Of non-idle time, the percentage spent at lowered (nice) priority
+`PercentPrivilegedTime` | `cpu privileged time` | Of non-idle time, the percentage spent in privileged (kernel) mode
The first four counters should sum to 100 percent. The last three counters also sum to 100 percent. These three counters subdivide the sum of `PercentProcessorTime`, `PercentIOWaitTime`, and `PercentInterruptTime`.
The Memory class of metrics provides information about memory use, paging, and s
Counter | `azure.vm.linux.guestmetrics` Display Name | Meaning | - | -
-`AvailableMemory` | `mem/available` | Available physical memory in MiB
-`PercentAvailableMemory` | `mem/available_percent` | Available physical memory as a percentage of total memory
-`UsedMemory` | `mem/used` | In-use physical memory (MiB)
-`PercentUsedMemory` | `mem/used_percent` | In-use physical memory as a percentage of total memory
-`PagesPerSec` | `kernel_vmstat/total_pages` | Total paging (read/write)
-`PagesReadPerSec` | `kernel_vmstat/pgpgin` | Pages read from the backing store, such as swap file, program file, and mapped file
-`PagesWrittenPerSec` | `kernel_vmstat/pgpgout` | Pages written to the backing store, such as swap file and mapped file
-`AvailableSwap` | `swap/free` | Unused swap space (MiB)
-`PercentAvailableSwap` | `swap/free_percent` | Unused swap space as a percentage of the total swap
-`UsedSwap` | `swap/used` | In-use swap space (MiB)
-`PercentUsedSwap` | `swap/used_percent` | In-use swap space as a percentage of the total swap
+`AvailableMemory` | `memory available` | Available physical memory in MiB
+`PercentAvailableMemory` | `mem. percent available` | Available physical memory as a percentage of total memory
+`UsedMemory` | `memory used` | In-use physical memory (MiB)
+`PercentUsedMemory` | `memory percentage` | In-use physical memory as a percentage of total memory
+`PagesPerSec` | `pages` | Total paging (read/write)
+`PagesReadPerSec` | `page reads` | Pages read from the backing store, such as swap file, program file, and mapped file
+`PagesWrittenPerSec` | `page writes` | Pages written to the backing store, such as swap file and mapped file
+`AvailableSwap` | `swap available` | Unused swap space (MiB)
+`PercentAvailableSwap` | `swap percent available` | Unused swap space as a percentage of the total swap
+`UsedSwap` | `swap used` | In-use swap space (MiB)
+`PercentUsedSwap` | `swap percent used` | In-use swap space as a percentage of the total swap
This class of metrics has only one instance. The `"condition"` attribute has no useful settings and should be omitted.
LAD doesn't expose bandwidth metrics. You can get these metrics from host metric
Counter | `azure.vm.linux.guestmetrics` Display Name | Meaning | - | -
-`BytesTransmitted` | `net/bytes_sent` | Total bytes sent since startup
-`BytesReceived` | `net/bytes_recv` | Total bytes received since startup
-`BytesTotal` | `net/bytes_total` | Total bytes sent or received since startup
-`PacketsTransmitted` | `net/packets_sent` | Total packets sent since startup
-`PacketsReceived` | `net/packets_recv` | Total packets received since startup
-`TotalRxErrors` | `net/err_in` | Number of receive errors since startup
-`TotalTxErrors` | `net/err_out` | Number of transmit errors since startup
-`TotalCollisions` | `net/drop_total` | Number of collisions reported by the network ports since startup
+`BytesTransmitted` | `network out guest os` | Total bytes sent since startup
+`BytesReceived` | `network in guest os` | Total bytes received since startup
+`BytesTotal` | `network total bytes` | Total bytes sent or received since startup
+`PacketsTransmitted` | `packets sent` | Total packets sent since startup
+`PacketsReceived` | `packets received` | Total packets received since startup
+`TotalRxErrors` | `packets received errors` | Number of receive errors since startup
+`TotalTxErrors` | `packets sent errors` | Number of transmit errors since startup
+`TotalCollisions` | `network collisions` | Number of collisions reported by the network ports since startup
### builtin metrics for the File system class
The File system class of metrics provides information about file system usage. A
Counter | `azure.vm.linux.guestmetrics` Display Name | Meaning | - | -
-`FreeSpace` | `disk/free` | Available disk space in bytes
-`UsedSpace` | `disk/used` | Used disk space in bytes
-`PercentFreeSpace` | `disk/free_percent` | Percentage of free space
-`PercentUsedSpace` | `disk/used_percent` | Percentage of used space
-`PercentFreeInodes` | `disk/inodes_free_percent` | Percentage of unused index nodes (inodes)
-`PercentUsedInodes` | `disk/inodes_used_percent` | Percentage of allocated (in use) inodes summed across all file systems
-`BytesReadPerSecond` | `diskio/read_bytes_filesystem` | Bytes read per second
-`BytesWrittenPerSecond` | `diskio/write_bytes_filesystem` | Bytes written per second
-`BytesPerSecond` | `diskio/total_bytes_filesystem` | Bytes read or written per second
-`ReadsPerSecond` | `diskio/reads_filesystem` | Read operations per second
-`WritesPerSecond` | `diskio/writes_filesystem` | Write operations per second
-`TransfersPerSecond` | `diskio/total_transfers_filesystem` | Read or write operations per second
+`FreeSpace` | `filesystem free space` | Available disk space in bytes
+`UsedSpace` | `filesystem used space` | Used disk space in bytes
+`PercentFreeSpace` | `filesystem % free space` | Percentage of free space
+`PercentUsedSpace` | `filesystem % used space` | Percentage of used space
+`PercentFreeInodes` | `filesystem % free inodes` | Percentage of unused index nodes (inodes)
+`PercentUsedInodes` | `filesystem % used inodes` | Percentage of allocated (in use) inodes summed across all file systems
+`BytesReadPerSecond` | `filesystem read bytes/sec` | Bytes read per second
+`BytesWrittenPerSecond` | `filesystem write bytes/sec` | Bytes written per second
+`BytesPerSecond` | `filesystem bytes/sec` | Bytes read or written per second
+`ReadsPerSecond` | `filesystem reads/sec` | Read operations per second
+`WritesPerSecond` | `filesystem writes/sec` | Write operations per second
+`TransfersPerSecond` | `filesystem transfers/sec` | Read or write operations per second
### builtin metrics for the Disk class
When a device has multiple file systems, the counters for that device are, effec
Counter | `azure.vm.linux.guestmetrics` Display Name | Meaning | - | -
-`ReadsPerSecond` | `diskio/reads` | Read operations per second
-`WritesPerSecond` | `diskio/writes` | Write operations per second
-`TransfersPerSecond` | `diskio/total_transfers` | Total operations per second
-`AverageReadTime` | `diskio/read_time` | Average seconds per read operation
-`AverageWriteTime` | `diskio/write_time` | Average seconds per write operation
-`AverageTransferTime` | `diskio/io_time` | Average seconds per operation
-`AverageDiskQueueLength` | `diskio/iops_in_progress` | Average number of queued disk operations
-`ReadBytesPerSecond` | `diskio/read_bytes` | Number of bytes read per second
-`WriteBytesPerSecond` | `diskio/write_bytes` | Number of bytes written per second
-`BytesPerSecond` | `diskio/total_bytes` | Number of bytes read or written per second
+`ReadsPerSecond` | `disk reads` | Read operations per second
+`WritesPerSecond` | `disk writes` | Write operations per second
+`TransfersPerSecond` | `disk transfers` | Total operations per second
+`AverageReadTime` | `disk read time` | Average seconds per read operation
+`AverageWriteTime` | `disk write time` | Average seconds per write operation
+`AverageTransferTime` | `disk transfer time` | Average seconds per operation
+`AverageDiskQueueLength` | `disk queue length` | Average number of queued disk operations
+`ReadBytesPerSecond` | `disk read guest os` | Number of bytes read per second
+`WriteBytesPerSecond` | `disk write guest os` | Number of bytes written per second
+`BytesPerSecond` | `disk total bytes` | Number of bytes read or written per second
## Example LAD 4.0 configuration
In each case, data is also uploaded to:
"annotation": [ { "locale": "en-us",
- "displayName": "Aggregate CPU %utilization"
+ "displayName": "cpu percentage guest os"
} ], "condition": "IsAggregate=TRUE",
In each case, data is also uploaded to:
"annotation": [ { "locale": "en-us",
- "displayName": "Used disk space on /"
+          "displayName": "filesystem used space"
} ], "condition": "Name=\"/\"",
virtual-machines Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/proximity-placement-groups.md
A proximity placement group is a logical grouping used to make sure that Azure c
Create a proximity placement group using [`az ppg create`](/cli/azure/ppg#az-ppg-create). ```azurecli-interactive
-az group create --name myPPGGroup --location westus
+az group create --name myPPGGroup --location eastus
az ppg create \ -n myPPG \ -g myPPGGroup \
- -l westus \
+ -l eastus \
-t standard \
+ --intent-vm-sizes Standard_E64s_v4 Standard_M416ms_v2 \
+ -z 1
``` ## List proximity placement groups
You can list all of your proximity placement groups using [az ppg list](/cli/azu
```azurecli-interactive az ppg list -o table ```
+## Show proximity placement group
+
+You can see the proximity placement group details and resources using [az ppg show](/cli/azure/ppg#az-ppg-show)
+
+```azurecli-interactive
+az ppg show --name myPPG --resource-group myPPGGroup
+{  "availabilitySets": [],  
+ "colocationStatus": null,  
+ "id": "/subscriptions/[subscriptionId]/resourceGroups/myPPGGroup/providers/Microsoft.Compute/proximityPlacementGroups/MyPPG",  
+ "intent": {    
+ "vmSizes": [      
+ "Standard_E64s_v4",      
+ "Standard_M416ms_v2"    
+ ]  
+ },  
+ "location": "eastus",  
+ "name": "MyPPG",  
+ "proximityPlacementGroupType": "Standard",  
+ "resourceGroup": "myPPGGroup",  
+ "tags": {},  
+ "type": "Microsoft.Compute/proximityPlacementGroups",  
+ "virtualMachineScaleSets": [],  
+ "virtualMachines": [],  
+ "zones": [    
+ "1" 
+ ]
+}
+```
## Create a VM
az vm create \
--image UbuntuLTS \ --ppg myPPG \ --generate-ssh-keys \
- --size Standard_D1_v2 \
- -l westus
+ --size Standard_E64s_v4 \
+ -l eastus
``` You can see the VM in the proximity placement group using [az ppg show](/cli/azure/ppg#az-ppg-show).
You can also create a scale set in your proximity placement group. Use the same
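For example, a sketch of creating a scale set in the same proximity placement group (the VM size, image, and instance count are illustrative):

```azurecli-interactive
az vmss create \
    --resource-group myPPGGroup \
    --name myScaleSet \
    --image UbuntuLTS \
    --ppg myPPG \
    --vm-sku Standard_E64s_v4 \
    --instance-count 2 \
    -l eastus
```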
## Next steps
-Learn more about the [Azure CLI](/cli/azure/ppg) commands for proximity placement groups.
+Learn more about the [Azure CLI](/cli/azure/ppg) commands for proximity placement groups.
virtual-machines Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/proximity-placement-groups.md
Create a proximity placement group using the [New-AzProximityPlacementGroup](/po
$resourceGroup = "myPPGResourceGroup" $location = "East US" $ppgName = "myPPG"
+$zone = "1"
+$vmSize1 = "Standard_E64s_v4"
+$vmSize2 = "Standard_M416ms_v2"
New-AzResourceGroup -Name $resourceGroup -Location $location $ppg = New-AzProximityPlacementGroup ` -Location $location ` -Name $ppgName ` -ResourceGroupName $resourceGroup ` -ProximityPlacementGroupType Standard `
+   -Zone $zone `
+   -IntentVMSizeList $vmSize1, $vmSize2
``` ## List proximity placement groups
$ppg = New-AzProximityPlacementGroup `
You can list all of the proximity placement groups using the [Get-AzProximityPlacementGroup](/powershell/module/az.compute/get-azproximityplacementgroup) cmdlet. ```azurepowershell-interactive
-Get-AzProximityPlacementGroup
+Get-AzProximityPlacementGroup -ResourceGroupName $resourceGroup -Name $ppgName
+
+ResourceGroupName : myPPGResourceGroup
+ProximityPlacementGroupType : Standard
+Id : /subscriptions/[subscriptionId]/resourceGroups/myPPGResourceGroup/providers/Microsoft.Compute/proximityPlacementGroups/myPPG
+Name : myPPG
+Type : Microsoft.Compute/proximityPlacementGroups
+Location : eastus
+Tags : {}
+Intent :
+ VmSizes[0] : Standard_E64s_v4
+ VmSizes[1] : Standard_M416ms_v2
+Zones[0] : 1
```
virtual-machines Automation Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-control-plane.md
The control plane for the [SAP deployment automation framework on Azure](automat
## Deployer
-The [deployer](automation-deployment-framework.md#deployment-components) is the execution engine of the [SAP automation framework](automation-deployment-framework.md). It's a pre-configured virtual machine (VM) that is used for executing Terraform and Ansible commands.
+The [deployer](automation-deployment-framework.md#deployment-components) is the execution engine of the [SAP automation framework](automation-deployment-framework.md). It's a pre-configured virtual machine (VM) that is used for executing Terraform and Ansible commands.
The configuration of the deployer is performed in a Terraform tfvars variable file.
The table below contains the Terraform parameters, these parameters need to be
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type |
-> | -- | | - |
-> | `tfstate_resource_id` | Azure resource identifier for the storage account in the SAP Library that contains the Terraform state files | Required |
+> | -- | | - |
+> | `tfstate_resource_id` | Azure resource identifier for the storage account in the SAP Library that contains the Terraform state files | Required |
### Environment Parameters
The table below contains the parameters that define the resource naming.
The table below contains the parameters that define the resource group. > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | -- | - |
+> | Variable | Description | Type |
+> | -- | -- | - |
> | `resource_group_name` | Name of the resource group to be created | Optional |
-> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
-> | `resourcegroup_tags` | Tags to be associated with the resource group | Optional |
+> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
+> | `resourcegroup_tags` | Tags to be associated with the resource group | Optional |
### Network Parameters The automation framework supports both creating the virtual network and the subnets (green field) or using an existing virtual network and existing subnets (brown field) or a combination of green field and brown field.
+ - For the green field scenario, the virtual network address space and the subnet address prefixes must be specified
- For the brown field scenario, the Azure resource identifier for the virtual network and the subnets must be specified
-The recommended CIDR of the virtual network address space is /27, which allows space for 32 IP addresses. A CIDR value of /28 only allows 16 IP addresses. If you want to include Azure Firewall, use a CIDR value of /25, because Azure Firewall requires a range of /26.
+The recommended CIDR of the virtual network address space is /27, which allows space for 32 IP addresses. A CIDR value of /28 only allows 16 IP addresses. If you want to include Azure Firewall, use a CIDR value of /25, because Azure Firewall requires a range of /26.
The recommended CIDR value for the management subnet is /28 that allows 16 IP addresses. The recommended CIDR value for the firewall subnet is /26 that allows 64 IP addresses.
The table below contains the networking parameters.
> | `management_subnet_arm_id` | The Azure resource identifier for the subnet | Mandatory | For brown field deployments. | > | `management_subnet_nsg_name` | The name of the Network Security Group name | Optional | | > | `management_subnet_nsg_arm_id` | The Azure resource identifier for the Network Security Group | Mandatory | Mandatory For brown field deployments. |
-> | `management_subnet_nsg_allowed_ips` | Range of allowed IP addresses to add to Azure Firewall | Optional | |
+> | `management_subnet_nsg_allowed_ips` | Range of allowed IP addresses to add to Azure Firewall | Optional | |
> | | | | | > | `management_firewall_subnet_arm_id` | The Azure resource identifier for the Firewall subnet | Mandatory | For brown field deployments. |
-> | `management_firewall_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. |
+> | `management_firewall_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. |
> | | | | | > | `management_bastion_subnet_arm_id` | The Azure resource identifier for the Bastion subnet | Mandatory | For brown field deployments. | > | `management_bastion_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. | > | | | | | > | `webapp_subnet_arm_id` | The Azure resource identifier for the web app subnet | Mandatory | For brown field deployments using the web app |
-> | `webapp_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments using the web app |
+> | `webapp_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments using the web app |
> [!NOTE] > When using an existing subnet for the web app, the subnet must be empty, in the same region as the resource group being deployed, and delegated to Microsoft.Web/serverFarms
-
+ ### Deployer Virtual Machine Parameters
-The table below contains the parameters related to the deployer virtual machine.
+The table below contains the parameters related to the deployer virtual machine.
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type |
The table below contains the parameters related to the deployer virtual machine.
> | `deployer_size` | Defines the Virtual machine SKU to use, for example Standard_D4s_v3 | Optional | > | `deployer_count` | Defines the number of Deployers | Optional | > | `deployer_image` | Defines the Virtual machine image to use, see below | Optional |
+> | `plan` | Defines the plan associated with the Virtual machine image, see below | Optional |
> | `deployer_disk_type` | Defines the disk type, for example Premium_LRS | Optional | > | `deployer_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) true | Optional | > | `deployer_private_ip_address` | Defines the Private IP address to use | Optional |
The table below contains the parameters related to the deployer virtual machine.
> | `auto_configure_deployer` | Defines deployer will be configured with the required software (Terraform and Ansible) | Optional |
-The Virtual Machine image is defined using the following structure:
-```python
-{
- os_type=""
- source_image_id=""
- publisher="Canonical"
- offer="0001-com-ubuntu-server-focal"
- sku="20_04-lts"
- version="latest"
+The Virtual Machine image is defined using the following structure:
+```terraform
+{
+ "os_type" = ""
+ "source_image_id" = ""
+ "publisher" = "Canonical"
+ "offer" = "0001-com-ubuntu-server-focal"
+ "sku" = "20_04-lts"
+ "version" = "latest"
} ```
+The plan is defined using the following structure:
+```terraform
+{
+ "use" = false
+ "name" = "0001-com-ubuntu-server-focal"
+ "publisher" = "Canonical"
+ "product" = "20_04-lts"
+ }
+```
+
+> [!NOTE]
+> Using the plan attribute requires that the image in question has been used at least once in the subscription. This is because the first usage prompts the user to accept the license terms, and the automation has no means to approve them.
+++ ### Authentication Parameters The table below defines the parameters used for defining the Virtual Machine authentication > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | | | |
+> | Variable | Description | Type |
+> | | | |
> | `deployer_vm_authentication_type` | Defines the default authentication for the Deployer | Optional | > | `deployer_authentication_username` | Administrator account name | Optional | > | `deployer_authentication_password` | Administrator password | Optional |
The table below defines the parameters used for defining the Virtual Machine aut
The table below defines the parameters used for defining the Key Vault information > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | | | - |
+> | Variable | Description | Type |
+> | | | - |
> | `user_keyvault_id` | Azure resource identifier for the user key vault | Optional | > | `spn_keyvault_id` | Azure resource identifier for the user key vault containing the SPN details | Optional | > | `deployer_private_key_secret_name` | The Azure Key Vault secret name for the deployer private key | Optional |
The table below defines the parameters used for defining the Key Vault informati
> | Variable | Description | Type | Notes | > | | - | -- | -- | > | `firewall_deployment` | Boolean flag controlling if an Azure firewall is to be deployed | Optional | |
-> | `bastion_deployment` | Boolean flag controlling if Azure Bastion host is to be deployed | Optional | |
+> | `bastion_deployment` | Boolean flag controlling if Azure Bastion host is to be deployed | Optional | |
> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments | > | `use_private_endpoint` | Are private endpoints created for storage accounts and key vaults. | Optional | | > | `use_service_endpoint` | Are service endpoints defined for the subnets. | Optional | |
bastion_deployment=true
## SAP Library
-The [SAP Library](automation-deployment-framework.md#deployment-components) provides the persistent storage of the Terraform state files and the downloaded SAP installation media for the control plane.
+The [SAP Library](automation-deployment-framework.md#deployment-components) provides the persistent storage of the Terraform state files and the downloaded SAP installation media for the control plane.
The configuration of the SAP Library is performed in a Terraform tfvars variable file.
The configuration of the SAP Library is performed in a Terraform tfvars variable
The table below contains the Terraform parameters, these parameters need to be entered manually when not using the deployment scripts > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | - | - |
-> | `deployer_tfstate_key` | The state file name for the deployer | Required |
+> | Variable | Description | Type |
+> | -- | - | - |
+> | `deployer_tfstate_key` | The state file name for the deployer | Required |
### Environment Parameters
The table below contains the parameters that define the resource naming.
The table below contains the parameters that define the resource group. > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | -- | - |
+> | Variable | Description | Type |
+> | -- | -- | - |
> | `resource_group_name` | Name of the resource group to be created | Optional |
-> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
-> | `resourcegroup_tags` | Tags to be associated with the resource group | Optional |
-
+> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
+> | `resourcegroup_tags` | Tags to be associated with the resource group | Optional |
-### Deployer Parameters
-The table below contains the parameters that define the resource group and the resource naming.
-
-> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | | - | | - |
-> | `deployer_environment` | Identifier for the control plane (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. |
-> | `deployer_location` | The Azure region in which to deploy. | Mandatory | |
-> | `deployer_vnet` | The logical name for the deployer VNet | Mandatory | |
### SAP Installation media storage account > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | | - |
+> | Variable | Description | Type |
+> | - | | - |
> | `library_sapmedia_arm_id` | Azure resource identifier | Optional | ### Terraform remote state storage account > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | -- | - |
+> | Variable | Description | Type |
+> | -- | -- | - |
> | `library_terraform_state_arm_id` | Azure resource identifier | Optional | ### Extra parameters > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | -- | -- |
-> | `dns_label` | DNS name of the private DNS zone | Optional |
-> | `use_private_endpoint` | Use private endpoints | Optional |
+> | Variable | Description | Type |
+> | - | -- | -- |
+> | `dns_label` | DNS name of the private DNS zone | Optional |
+> | `use_private_endpoint` | Use private endpoints | Optional |
### Example parameters file for sap library (required parameters only)
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
Replace MGMT with your environment as necessary.
```powershell Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]'
-$TF_VAR_app_registration_app_id=(az ad app create --display-name MGMT-webapp-registration --enable-id-token-issuance true --sign-in-audience AzureADMyOrg --required-resource-access ./manifest.json --query "appId").Replace('"',"")
+$TF_VAR_app_registration_app_id=(az ad app create --display-name MGMT-webapp-registration --enable-id-token-issuance true --sign-in-audience AzureADMyOrg --required-resource-access .\manifest.json --query "appId").Replace('"',"")
echo $TF_VAR_app_registration_app_id az ad app credential reset --id $TF_VAR_app_registration_app_id --append --query "password"
-rm ./manifest.json
+del manifest.json
```
Create the SAP system deployment pipeline by choosing _New Pipeline_ from the Pi
Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP system deployment (infrastructure)' by choosing 'Rename/Move' from the three-dot menu on the right.
-## SAP web app deployment pipeline
-
-Create the SAP web app deployment pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipeline YAML File. Specify the pipeline with the following settings:
-
-| Setting | Value |
-| - | |
-| Branch | main |
-| Path | `deploy/pipelines/21-deploy-web-app.yaml` |
-| Name | Web app deployment |
-
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Web app deployment' by choosing 'Rename/Move' from the three-dot menu on the right.
-
-> [!NOTE]
-> In order for the web app to function correctly, the SAP workload zone deployment and SAP system deployment pipelines must be named as specified.
- ## SAP software acquisition pipeline Create the SAP software acquisition pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
Create the Configuration Web App pipeline by choosing _New Pipeline_ from the Pi
Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Configuration Web App' by choosing 'Rename/Move' from the three-dot menu on the right.
+> [!NOTE]
+> In order for the web app to function correctly, the SAP workload zone deployment and SAP system deployment pipelines must be named as specified.
+ ## Deployment removal pipeline Create the deployment removal pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
Create the deployment removal pipeline by choosing _New Pipeline_ from the Pipel
Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Deployment removal' by choosing 'Rename/Move' from the three-dot menu on the right.
+## Control plane removal pipeline
+
+Create the control plane deployment removal pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
+
+| Setting | Value |
+| - | -- |
+| Branch | main |
+| Path | `deploy/pipelines/12-remove-control-plane.yaml` |
+| Name | Control plane removal |
+
+Save the pipeline. To see the Save option, select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Control plane removal' by choosing 'Rename/Move' from the three-dot menu on the right.
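If you prefer scripting over the portal, the same pipeline can be created with the Azure DevOps CLI extension. This is a sketch that assumes `az devops configure --defaults organization=... project=...` has already been run and that the repository placeholder is replaced with your Azure Repos Git repository name:
```
az pipelines create \
  --name "Control plane removal" \
  --repository "<your Azure Repos Git repository>" \
  --repository-type tfsgit \
  --branch main \
  --yml-path deploy/pipelines/12-remove-control-plane.yaml \
  --skip-first-run true
```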
+ ## Deployment removal pipeline using Azure Resource Manager Create the deployment removal ARM pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
Create a new variable group 'SDAF-General' using the Library page in the Pipelin
| Branch | main | | | S-Username | `<SAP Support user account name>` | | | S-Password | `<SAP Support user password>` | Change variable type to secret by clicking the lock icon. |
-| `tf_version` | 1.2.6 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) |
+| `tf_version` | 1.2.8 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) |
Save the variables.
s-password="<SAP Support user password>"
az devops login
-az pipelines variable-group create --name SDAF-General --variables ANSIBLE_HOST_KEY_CHECKING=false Deployment_Configuration_Path=WORKSPACES Branch=main S-Username=$s-user S-Password=$s-password --output yaml
+az pipelines variable-group create --name SDAF-General --variables ANSIBLE_HOST_KEY_CHECKING=false Deployment_Configuration_Path=WORKSPACES Branch=main S-Username=$s-user S-Password=$s-password tf_version=1.2.8 --output yaml
```
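To change a single value later (for example, bumping `tf_version`) without recreating the whole group, a sketch along these lines works; the JMESPath lookup of the group ID is illustrative:
```
group_id=$(az pipelines variable-group list --query "[?name=='SDAF-General'].id | [0]" --output tsv)
az pipelines variable-group variable update --group-id "$group_id" --name tf_version --value 1.2.8
```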
Create a new variable group 'SDAF-MGMT' for the control plane environment using
| ARM_TENANT_ID | Enter the Tenant ID for the service principal. | | | AZURE_CONNECTION_NAME | Previously created connection name. | | | sap_fqdn | SAP Fully Qualified Domain Name, for example 'sap.contoso.net'. | Only needed if Private DNS isn't used. |
-| FENCING_SPN_ID | Enter the service principal application ID for the fencing agent. | Required for highly available deployments. |
-| FENCING_SPN_PWD | Enter the service principal password for the fencing agent. | Required for highly available deployments. |
-| FENCING_SPN_TENANT | Enter the service principal tenant ID for the fencing agent. | Required for highly available deployments. |
+| FENCING_SPN_ID | Enter the service principal application ID for the fencing agent. | Required for highly available deployments that use a service principal for the fencing agent. |
+| FENCING_SPN_PWD | Enter the service principal password for the fencing agent. | Required for highly available deployments that use a service principal for the fencing agent. |
+| FENCING_SPN_TENANT | Enter the service principal tenant ID for the fencing agent. | Required for highly available deployments that use a service principal for the fencing agent. |
| `PAT` | `<Personal Access Token>` | Use the Personal Token defined in the previous | | `POOL` | `<Agent Pool name>` | Use the Agent pool defined in the previous | | APP_REGISTRATION_APP_ID | App registration application ID | Required if deploying the web app |
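The three fencing values in the table above map directly to the output of a service principal creation. A minimal sketch follows; the display name is an example, and any role assignments the fencing agent needs are not shown here:
```
az ad sp create-for-rbac --name "MGMT-fencing-agent" \
  --query "{FENCING_SPN_ID:appId, FENCING_SPN_PWD:password, FENCING_SPN_TENANT:tenant}"
```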
The agent will now be configured and started.
Checking the "deploy the web app infrastructure" parameter when running the Control plane deployment pipeline will provision the infrastructure necessary for hosting the web app. The "Deploy web app" pipeline will publish the application's software to that infrastructure.
-Wait for the deployment to finish. Once the deployment is complete, navigate to the Extensions tab and follow the instructions to finalize the configuration and update the reply-url values for the app registration.
+Wait for the deployment to finish. Once the deployment is complete, navigate to the Extensions tab and follow the instructions to finalize the configuration and update the 'reply-url' values for the app registration.
-As a result of running the SAP workload zone deployment pipeline, part of the web app URL needed will be stored in a variable named "WEBAPP_URL_BASE" in your environment-specific variable group. Copy this value, and use it in the following command:
+As a result of running the control plane pipeline, part of the web app URL is stored in a variable named "WEBAPP_URL_BASE" in your environment-specific variable group. You can update the URLs of the registered web app at any time by using the following command.
# [Linux](#tab/linux)
webapp_url_base="<WEBAPP_URL_BASE>"
az ad app update --id $TF_VAR_app_registration_app_id --web-home-page-url https://${webapp_url_base}.azurewebsites.net --web-redirect-uris https://${webapp_url_base}.azurewebsites.net/ https://${webapp_url_base}.azurewebsites.net/.auth/login/aad/callback ```
-After updating the reply-urls, run the pipeline.
-
-By default there will be no inbound public internet access to the web app apart from the deployer virtual network. To allow additional access to the web app, navigate to the Azure portal. In the deployer resource group, navigate to the app service resource. Then under settings on the left hand side, click on networking. From here, click Access restriction. Add any allow or deny rules you would like. For more information on configuring access restrictions, see [Set up Azure App Service access restrictions](../../../app-service/app-service-ip-restrictions.md).
- You will also need to grant reader permissions to the app service system-assigned managed identity. Navigate to the app service resource. On the left hand side, click "Identity". In the "system assigned" tab, click on "Azure role assignments" > "Add role assignment". Select "subscription" as the scope, and "reader" as the role. Then click save. Without this step, the web app dropdown functionality won't work. You should now be able to visit the web app, and use it to deploy SAP workload zones and SAP system infrastructure.
You should now be able to visit the web app, and use it to deploy SAP workload z
## Next step > [!div class="nextstepaction"]
-> [DevOps hands on lab](automation-devops-tutorial.md)
+> [DevOps hands on lab](automation-devops-tutorial.md)
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
By default the SAP System deployment uses the credentials from the SAP Workload
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | - | -- |
+> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster using managed identities | Optional |
> | `resource_offset` | Provides an offset for resource naming. The offset number for resource naming when creating multiple resources. The default value is 0, which creates a naming pattern of disk0, disk1, and so on. An offset of 1 creates a naming pattern of disk1, disk2, and so on. | Optional | > | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks using customer provided keys | Optional | > | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations | Optional |
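For illustration, the parameters above would appear in the SAP system tfvars file roughly as follows; the values are examples only, not recommendations:
```
use_msi_for_clusters                         = true
resource_offset                              = 1
use_loadbalancers_for_standalone_deployments = false
# disk_encryption_set_id                     = "<resource ID of the disk encryption set>"
```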
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
description: Overview of the SAP workload zone configuration process within the
Previously updated : 08/13/2022 Last updated : 09/13/2022
ANF_service_level = "Ultra"
```
+### DNS Support
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | | --| -- | |
+> | `use_custom_dns_a_registration` | Defines whether a custom DNS A record should be created when using private endpoints. | Optional | |
+> | `management_dns_subscription_id` | Custom DNS subscription ID. | Optional | |
+> | `management_dns_resourcegroup_name` | Custom DNS resource group name. | Optional | |
+> | | | | |
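As an illustration, these DNS parameters are typically set together in the workload zone tfvars file; the placeholder values below are examples only:
```
use_custom_dns_a_registration      = true
management_dns_subscription_id     = "<subscription ID that contains the private DNS zones>"
management_dns_resourcegroup_name  = "<resource group that contains the private DNS zones>"
```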
## Other Parameters > [!div class="mx-tdCol2BreakAll "]
virtual-machines Automation Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-control-plane.md
az ad app update `
# [Azure DevOps](#tab/devops) It is currently not possible to perform this action from Azure DevOps.+ > [!TIP]
You can log in and visit the web app by following the URL from earlier or clicki
## Next step > [!div class="nextstepaction"]
-> [Configure SAP Workload Zone](automation-configure-workload-zone.md)
+> [Configure SAP Workload Zone](automation-configure-workload-zone.md)
virtual-machines Dbms_Guide_Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_oracle.md
# Azure Virtual Machines Oracle DBMS deployment for SAP workload
-[767598]:https://launchpad.support.sap.com/#/notes/767598
-[773830]:https://launchpad.support.sap.com/#/notes/773830
-[826037]:https://launchpad.support.sap.com/#/notes/826037
-[965908]:https://launchpad.support.sap.com/#/notes/965908
-[1031096]:https://launchpad.support.sap.com/#/notes/1031096
-[1114181]:https://launchpad.support.sap.com/#/notes/1114181
-[1139904]:https://launchpad.support.sap.com/#/notes/1139904
-[1173395]:https://launchpad.support.sap.com/#/notes/1173395
-[1245200]:https://launchpad.support.sap.com/#/notes/1245200
-[1409604]:https://launchpad.support.sap.com/#/notes/1409604
-[1558958]:https://launchpad.support.sap.com/#/notes/1558958
-[1585981]:https://launchpad.support.sap.com/#/notes/1585981
-[1588316]:https://launchpad.support.sap.com/#/notes/1588316
-[1590719]:https://launchpad.support.sap.com/#/notes/1590719
-[1597355]:https://launchpad.support.sap.com/#/notes/1597355
-[1605680]:https://launchpad.support.sap.com/#/notes/1605680
-[1619720]:https://launchpad.support.sap.com/#/notes/1619720
-[1619726]:https://launchpad.support.sap.com/#/notes/1619726
-[1619967]:https://launchpad.support.sap.com/#/notes/1619967
-[1750510]:https://launchpad.support.sap.com/#/notes/1750510
-[1752266]:https://launchpad.support.sap.com/#/notes/1752266
-[1757924]:https://launchpad.support.sap.com/#/notes/1757924
-[1757928]:https://launchpad.support.sap.com/#/notes/1757928
-[1758182]:https://launchpad.support.sap.com/#/notes/1758182
-[1758496]:https://launchpad.support.sap.com/#/notes/1758496
-[1772688]:https://launchpad.support.sap.com/#/notes/1772688
-[1814258]:https://launchpad.support.sap.com/#/notes/1814258
-[1882376]:https://launchpad.support.sap.com/#/notes/1882376
-[1909114]:https://launchpad.support.sap.com/#/notes/1909114
-[1922555]:https://launchpad.support.sap.com/#/notes/1922555
-[1928533]:https://launchpad.support.sap.com/#/notes/1928533
-[1941500]:https://launchpad.support.sap.com/#/notes/1941500
-[1956005]:https://launchpad.support.sap.com/#/notes/1956005
-[1973241]:https://launchpad.support.sap.com/#/notes/1973241
-[1984787]:https://launchpad.support.sap.com/#/notes/1984787
-[1999351]:https://launchpad.support.sap.com/#/notes/1999351
-[2002167]:https://launchpad.support.sap.com/#/notes/2002167
-[2015553]:https://launchpad.support.sap.com/#/notes/2015553
-[2039619]:https://launchpad.support.sap.com/#/notes/2039619
-[2069760]:https://launchpad.support.sap.com/#/notes/2069760
-[2121797]:https://launchpad.support.sap.com/#/notes/2121797
-[2134316]:https://launchpad.support.sap.com/#/notes/2134316
-[2171857]:https://launchpad.support.sap.com/#/notes/2171857
-[2178632]:https://launchpad.support.sap.com/#/notes/2178632
-[2191498]:https://launchpad.support.sap.com/#/notes/2191498
-[2233094]:https://launchpad.support.sap.com/#/notes/2233094
-[2243692]:https://launchpad.support.sap.com/#/notes/2243692
-
-[azure-cli]:../../../cli-install-nodejs.md
-[azure-portal]:https://portal.azure.com
-[azure-ps]:/powershell/azure/
-[azure-quickstart-templates-github]:https://github.com/Azure/azure-quickstart-templates
-[azure-script-ps]:https://go.microsoft.com/fwlink/p/?LinkID=395017
-[azure-resource-manager/management/azure-subscription-service-limits]:../../../azure-resource-manager/management/azure-subscription-service-limits.md
-[azure-resource-manager/management/azure-subscription-service-limits-subscription]:../../../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits
-
-[dbms-guide]:dbms-guide.md
-[dbms-guide-2.1]:dbms-guide.md#c7abf1f0-c927-4a7c-9c1d-c7b5b3b7212f
-[dbms-guide-2.2]:dbms-guide.md#c8e566f9-21b7-4457-9f7f-126036971a91
-[dbms-guide-2.3]:dbms-guide.md#10b041ef-c177-498a-93ed-44b3441ab152
-[dbms-guide-2]:dbms-guide.md#65fa79d6-a85f-47ee-890b-22e794f51a64
-[dbms-guide-3]:dbms-guide.md#871dfc27-e509-4222-9370-ab1de77021c3
-[dbms-guide-5.5.1]:dbms-guide.md#0fef0e79-d3fe-4ae2-85af-73666a6f7268
-[dbms-guide-5.5.2]:dbms-guide.md#f9071eff-9d72-4f47-9da4-1852d782087b
-[dbms-guide-5.6]:dbms-guide.md#1b353e38-21b3-4310-aeb6-a77e7c8e81c8
-[dbms-guide-5.8]:dbms-guide.md#9053f720-6f3b-4483-904d-15dc54141e30
-[dbms-guide-5]:dbms-guide.md#3264829e-075e-4d25-966e-a49dad878737
-[dbms-guide-8.4.1]:dbms-guide.md#b48cfe3b-48e9-4f5b-a783-1d29155bd573
-[dbms-guide-8.4.2]:dbms-guide.md#23c78d3b-ca5a-4e72-8a24-645d141a3f5d
-[dbms-guide-8.4.3]:dbms-guide.md#77cd2fbb-307e-4cbf-a65f-745553f72d2c
-[dbms-guide-8.4.4]:dbms-guide.md#f77c1436-9ad8-44fb-a331-8671342de818
-[dbms-guide-900-sap-cache-server-on-premises]:dbms-guide.md#642f746c-e4d4-489d-bf63-73e80177a0a8
-[dbms-guide-managed-disks]:dbms-guide.md#f42c6cb5-d563-484d-9667-b07ae51bce29
-
-[dbms-guide-figure-100]:media/virtual-machines-shared-sap-dbms-guide/100_storage_account_types.png
-[dbms-guide-figure-200]:media/virtual-machines-shared-sap-dbms-guide/200-ha-set-for-dbms-ha.png
-[dbms-guide-figure-300]:media/virtual-machines-shared-sap-dbms-guide/300-reference-config-iaas.png
-[dbms-guide-figure-400]:media/virtual-machines-shared-sap-dbms-guide/400-sql-2012-backup-to-blob-storage.png
-[dbms-guide-figure-500]:media/virtual-machines-shared-sap-dbms-guide/500-sql-2012-backup-to-blob-storage-different-containers.png
-[dbms-guide-figure-600]:media/virtual-machines-shared-sap-dbms-guide/600-iaas-maxdb.png
-[dbms-guide-figure-700]:media/virtual-machines-shared-sap-dbms-guide/700-livecach-prod.png
-[dbms-guide-figure-800]:media/virtual-machines-shared-sap-dbms-guide/800-azure-vm-sap-content-server.png
-[dbms-guide-figure-900]:media/virtual-machines-shared-sap-dbms-guide/900-sap-cache-server-on-premises.png
-
-[deployment-guide]:deployment-guide.md
-[deployment-guide-2.2]:deployment-guide.md#42ee2bdb-1efc-4ec7-ab31-fe4c22769b94
-[deployment-guide-3.1.2]:deployment-guide.md#3688666f-281f-425b-a312-a77e7db2dfab
-[deployment-guide-3.2]:deployment-guide.md#db477013-9060-4602-9ad4-b0316f8bb281
-[deployment-guide-3.3]:deployment-guide.md#54a1fc6d-24fd-4feb-9c57-ac588a55dff2
-[deployment-guide-3.4]:deployment-guide.md#a9a60133-a763-4de8-8986-ac0fa33aa8c1
-[deployment-guide-3]:deployment-guide.md#b3253ee3-d63b-4d74-a49b-185e76c4088e
-[deployment-guide-4.1]:deployment-guide.md#604bcec2-8b6e-48d2-a944-61b0f5dee2f7
-[deployment-guide-4.2]:deployment-guide.md#7ccf6c3e-97ae-4a7a-9c75-e82c37beb18e
-[deployment-guide-4.3]:deployment-guide.md#31d9ecd6-b136-4c73-b61e-da4a29bbc9cc
-[deployment-guide-4.4.2]:deployment-guide.md#6889ff12-eaaf-4f3c-97e1-7c9edc7f7542
-[deployment-guide-4.4]:deployment-guide.md#c7cbb0dc-52a4-49db-8e03-83e7edc2927d
-[deployment-guide-4.5.1]:deployment-guide.md#987cf279-d713-4b4c-8143-6b11589bb9d4
-[deployment-guide-4.5.2]:deployment-guide.md#408f3779-f422-4413-82f8-c57a23b4fc2f
-[deployment-guide-4.5]:deployment-guide.md#d98edcd3-f2a1-49f7-b26a-07448ceb60ca
-[deployment-guide-5.1]:deployment-guide.md#bb61ce92-8c5c-461f-8c53-39f5e5ed91f2
-[deployment-guide-5.2]:deployment-guide.md#e2d592ff-b4ea-4a53-a91a-e5521edb6cd1
-[deployment-guide-5.3]:deployment-guide.md#fe25a7da-4e4e-4388-8907-8abc2d33cfd8
-
-[deployment-guide-configure-monitoring-scenario-1]:deployment-guide.md#ec323ac3-1de9-4c3a-b770-4ff701def65b
-[deployment-guide-configure-proxy]:deployment-guide.md#baccae00-6f79-4307-ade4-40292ce4e02d
-[deployment-guide-figure-100]:media/virtual-machines-shared-sap-deployment-guide/100-deploy-vm-image.png
-[deployment-guide-figure-1000]:media/virtual-machines-shared-sap-deployment-guide/1000-service-properties.png
-[deployment-guide-figure-11]:deployment-guide.md#figure-11
-[deployment-guide-figure-1100]:media/virtual-machines-shared-sap-deployment-guide/1100-azperflib.png
-[deployment-guide-figure-1200]:medi-test-login.png
-[deployment-guide-figure-1300]:medi-test-executed.png
-[deployment-guide-figure-14]:deployment-guide.md#figure-14
-[deployment-guide-figure-1400]:media/virtual-machines-shared-sap-deployment-guide/1400-azperflib-error-servicenotstarted.png
-[deployment-guide-figure-300]:media/virtual-machines-shared-sap-deployment-guide/300-deploy-private-image.png
-[deployment-guide-figure-400]:media/virtual-machines-shared-sap-deployment-guide/400-deploy-using-disk.png
-[deployment-guide-figure-5]:deployment-guide.md#figure-5
-[deployment-guide-figure-50]:media/virtual-machines-shared-sap-deployment-guide/50-forced-tunneling-suse.png
-[deployment-guide-figure-500]:media/virtual-machines-shared-sap-deployment-guide/500-install-powershell.png
-[deployment-guide-figure-6]:deployment-guide.md#figure-6
-[deployment-guide-figure-600]:media/virtual-machines-shared-sap-deployment-guide/600-powershell-version.png
-[deployment-guide-figure-7]:deployment-guide.md#figure-7
-[deployment-guide-figure-700]:media/virtual-machines-shared-sap-deployment-guide/700-install-powershell-installed.png
-[deployment-guide-figure-760]:media/virtual-machines-shared-sap-deployment-guide/760-azure-cli-version.png
-[deployment-guide-figure-900]:medi-update-executed.png
-[deployment-guide-figure-azure-cli-installed]:deployment-guide.md#402488e5-f9bb-4b29-8063-1c5f52a892d0
-[deployment-guide-figure-azure-cli-version]:deployment-guide.md#0ad010e6-f9b5-4c21-9c09-bb2e5efb3fda
-[deployment-guide-install-vm-agent-windows]:deployment-guide.md#b2db5c9a-a076-42c6-9835-16945868e866
-[deployment-guide-troubleshooting-chapter]:deployment-guide.md#564adb4f-5c95-4041-9616-6635e83a810b
-
-[deploy-template-cli]:../../../resource-group-template-deploy-cli.md
-[deploy-template-portal]:../../../resource-group-template-deploy-portal.md
-[deploy-template-powershell]:../../../resource-group-template-deploy.md
-
-[dr-guide-classic]:https://go.microsoft.com/fwlink/?LinkID=521971
-
-[getting-started]:get-started.md
-[getting-started-dbms]:get-started.md#1343ffe1-8021-4ce6-a08d-3a1553a4db82
-[getting-started-deployment]:get-started.md#6aadadd2-76b5-46d8-8713-e8d63630e955
-[getting-started-planning]:get-started.md#3da0389e-708b-4e82-b2a2-e92f132df89c
-
-[getting-started-windows-classic]:../../virtual-machines-windows-classic-sap-get-started.md
-[getting-started-windows-classic-dbms]:../../virtual-machines-windows-classic-sap-get-started.md#c5b77a14-f6b4-44e9-acab-4d28ff72a930
-[getting-started-windows-classic-deployment]:../../virtual-machines-windows-classic-sap-get-started.md#f84ea6ce-bbb4-41f7-9965-34d31b0098ea
-[getting-started-windows-classic-dr]:../../virtual-machines-windows-classic-sap-get-started.md#cff10b4a-01a5-4dc3-94b6-afb8e55757d3
-[getting-started-windows-classic-ha-sios]:../../virtual-machines-windows-classic-sap-get-started.md#4bb7512c-0fa0-4227-9853-4004281b1037
-[getting-started-windows-classic-planning]:../../virtual-machines-windows-classic-sap-get-started.md#f2a5e9d8-49e4-419e-9900-af783173481c
-
-[ha-guide-classic]:https://go.microsoft.com/fwlink/?LinkId=613056
-
-[install-extension-cli]:virtual-machines-linux-enable-aem.md
-
-[Logo_Linux]:media/virtual-machines-shared-sap-shared/Linux.png
-[Logo_Windows]:media/virtual-machines-shared-sap-shared/Windows.png
-
-[msdn-set-azurermvmaemextension]:https://msdn.microsoft.com/library/azure/mt670598.aspx
-
-[planning-guide]:planning-guide.md
-[planning-guide-1.2]:planning-guide.md#e55d1e22-c2c8-460b-9897-64622a34fdff
-[planning-guide-11]:planning-guide.md#7cf991a1-badd-40a9-944e-7baae842a058
-[planning-guide-11.4.1]:planning-guide.md#5d9d36f9-9058-435d-8367-5ad05f00de77
-[planning-guide-11.5]:planning-guide.md#4e165b58-74ca-474f-a7f4-5e695a93204f
-[planning-guide-2.1]:planning-guide.md#1625df66-4cc6-4d60-9202-de8a0b77f803
-[planning-guide-2.2]:planning-guide.md#f5b3b18c-302c-4bd8-9ab2-c388f1ab3d10
-[planning-guide-3.1]:planning-guide.md#be80d1b9-a463-4845-bd35-f4cebdb5424a
-[planning-guide-3.2.1]:planning-guide.md#df49dc09-141b-4f34-a4a2-990913b30358
-[planning-guide-3.2.2]:planning-guide.md#fc1ac8b2-e54a-487c-8581-d3cc6625e560
-[planning-guide-3.2.3]:planning-guide.md#18810088-f9be-4c97-958a-27996255c665
-[planning-guide-3.2]:planning-guide.md#8d8ad4b8-6093-4b91-ac36-ea56d80dbf77
-[planning-guide-3.3.2]:planning-guide.md#ff5ad0f9-f7f4-4022-9102-af07aef3bc92
-[planning-guide-5.1.1]:planning-guide.md#4d175f1b-7353-4137-9d2f-817683c26e53
-[planning-guide-5.1.2]:planning-guide.md#e18f7839-c0e2-4385-b1e6-4538453a285c
-[planning-guide-5.2.1]:planning-guide.md#1b287330-944b-495d-9ea7-94b83aff73ef
-[planning-guide-5.2.2]:planning-guide.md#57f32b1c-0cba-4e57-ab6e-c39fe22b6ec3
-[planning-guide-5.2]:planning-guide.md#6ffb9f41-a292-40bf-9e70-8204448559e7
-[planning-guide-5.3.1]:planning-guide.md#6e835de8-40b1-4b71-9f18-d45b20959b79
-[planning-guide-5.3.2]:planning-guide.md#a43e40e6-1acc-4633-9816-8f095d5a7b6a
-[planning-guide-5.4.2]:planning-guide.md#9789b076-2011-4afa-b2fe-b07a8aba58a1
-[planning-guide-5.5.1]:planning-guide.md#4efec401-91e0-40c0-8e64-f2dceadff646
-[planning-guide-5.5.3]:planning-guide.md#17e0d543-7e8c-4160-a7da-dd7117a1ad9d
-[planning-guide-7.1]:planning-guide.md#3e9c3690-da67-421a-bc3f-12c520d99a30
-[planning-guide-7]:planning-guide.md#96a77628-a05e-475d-9df3-fb82217e8f14
-[planning-guide-9.1]:planning-guide.md#6f0a47f3-a289-4090-a053-2521618a28c3
-[planning-guide-azure-premium-storage]:planning-guide.md#ff5ad0f9-f7f4-4022-9102-af07aef3bc92
-
-[planning-guide-figure-100]:media/virtual-machines-shared-sap-planning-guide/100-single-vm-in-azure.png
-[planning-guide-figure-1300]:media/virtual-machines-shared-sap-planning-guide/1300-ref-config-iaas-for-sap.png
-[planning-guide-figure-1400]:media/virtual-machines-shared-sap-planning-guide/1400-attach-detach-disks.png
-[planning-guide-figure-1600]:media/virtual-machines-shared-sap-planning-guide/1600-firewall-port-rule.png
-[planning-guide-figure-1700]:media/virtual-machines-shared-sap-planning-guide/1700-single-vm-demo.png
-[planning-guide-figure-1900]:media/virtual-machines-shared-sap-planning-guide/1900-vm-set-vnet.png
-[planning-guide-figure-200]:media/virtual-machines-shared-sap-planning-guide/200-multiple-vms-in-azure.png
-[planning-guide-figure-2100]:media/virtual-machines-shared-sap-planning-guide/2100-s2s.png
-[planning-guide-figure-2200]:media/virtual-machines-shared-sap-planning-guide/2200-network-printing.png
-[planning-guide-figure-2300]:media/virtual-machines-shared-sap-planning-guide/2300-sapgui-stms.png
-[planning-guide-figure-2400]:media/virtual-machines-shared-sap-planning-guide/2400-vm-extension-overview.png
-[planning-guide-figure-2500]:media/virtual-machines-shared-sap-planning-guide/2500-vm-extension-details.png
-[planning-guide-figure-2600]:media/virtual-machines-shared-sap-planning-guide/2600-sap-router-connection.png
-[planning-guide-figure-2700]:media/virtual-machines-shared-sap-planning-guide/2700-exposed-sap-portal.png
-[planning-guide-figure-2800]:media/virtual-machines-shared-sap-planning-guide/2800-endpoint-config.png
-[planning-guide-figure-2900]:media/virtual-machines-shared-sap-planning-guide/2900-azure-ha-sap-ha.png
-[planning-guide-figure-300]:media/virtual-machines-shared-sap-planning-guide/300-vpn-s2s.png
-[planning-guide-figure-3000]:media/virtual-machines-shared-sap-planning-guide/3000-sap-ha-on-azure.png
-[planning-guide-figure-3200]:media/virtual-machines-shared-sap-planning-guide/3200-sap-ha-with-sql.png
-[planning-guide-figure-400]:media/virtual-machines-shared-sap-planning-guide/400-vm-services.png
-[planning-guide-figure-600]:media/virtual-machines-shared-sap-planning-guide/600-s2s-details.png
-[planning-guide-figure-700]:media/virtual-machines-shared-sap-planning-guide/700-decision-tree-deploy-to-azure.png
-[planning-guide-figure-800]:media/virtual-machines-shared-sap-planning-guide/800-portal-vm-overview.png
-[planning-guide-microsoft-azure-networking]:planning-guide.md#61678387-8868-435d-9f8c-450b2424f5bd
-[planning-guide-storage-microsoft-azure-storage-and-data-disks]:planning-guide.md#a72afa26-4bf4-4a25-8cf7-855d6032157f
-
-[resource-group-authoring-templates]:../../../resource-group-authoring-templates.md
-[resource-group-overview]:../../../azure-resource-manager/management/overview.md
-[resource-groups-networking]:../../../networking/networking-overview.md
-[sap-pam]:https://support.sap.com/pam
-[sap-templates-2-tier-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-2-tier-marketplace-image%2Fazuredeploy.json
-[sap-templates-2-tier-os-disk]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-2-tier-user-disk%2Fazuredeploy.json
-[sap-templates-2-tier-user-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-2-tier-user-image%2Fazuredeploy.json
-[sap-templates-3-tier-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image%2Fazuredeploy.json
-[sap-templates-3-tier-user-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-user-image%2Fazuredeploy.json
-[storage-azure-cli]:../../../storage/common/storage-azure-cli.md
-[storage-azure-cli-copy-blobs]:../../../storage/common/storage-azure-cli.md#copy-blobs
-[storage-introduction]:../../../storage/common/storage-introduction.md
-[storage-powershell-guide-full-copy-vhd]:../../../storage/common/storage-powershell-guide-full.md#how-to-copy-blobs-from-one-storage-container-to-another
-[storage-premium-storage-preview-portal]:../../disks-types.md
-[storage-redundancy]:../../../storage/common/storage-redundancy.md
-[storage-scalability-targets]:../../../storage/common/scalability-targets-standard-accounts.md
-[storage-use-azcopy]:../../../storage/common/storage-use-azcopy.md
-[template-201-vm-from-specialized-vhd]:https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-from-specialized-vhd
-[templates-101-simple-windows-vm]:https://github.com/Azure/azure-quickstart-templates/tree/master/101-simple-windows-vm
-[templates-101-vm-from-user-image]:https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-from-user-image
-[virtual-machines-linux-attach-disk-portal]:../../linux/attach-disk-portal.md
-[virtual-machines-azure-resource-manager-architecture]:../../../resource-manager-deployment-model.md
-[virtual-machines-azurerm-versus-azuresm]:../../../resource-manager-deployment-model.md
-[virtual-machines-windows-classic-configure-oracle-data-guard]:../../virtual-machines-windows-classic-configure-oracle-data-guard.md
-[virtual-machines-linux-cli-deploy-templates]:../../linux/cli-deploy-templates.md
-[virtual-machines-deploy-rmtemplates-powershell]:../../virtual-machines-windows-ps-manage.md
-[virtual-machines-linux-agent-user-guide]:../../linux/agent-user-guide.md
-[virtual-machines-linux-agent-user-guide-command-line-options]:../../linux/agent-user-guide.md#command-line-options
-[virtual-machines-linux-capture-image]:../../linux/capture-image.md
-[virtual-machines-linux-capture-image-resource-manager]:../../linux/capture-image.md
-[virtual-machines-linux-capture-image-resource-manager-capture]:../../linux/capture-image.md#step-2-capture-the-vm
-[virtual-machines-linux-configure-raid]:../../linux/configure-raid.md
-[virtual-machines-linux-configure-lvm]:../../linux/configure-lvm.md
-[virtual-machines-linux-classic-create-upload-vhd-step-1]:../../virtual-machines-linux-classic-create-upload-vhd.md#step-1-prepare-the-image-to-be-uploaded
-[virtual-machines-linux-create-upload-vhd-suse]:../../linux/suse-create-upload-vhd.md
-[virtual-machines-linux-redhat-create-upload-vhd]:../../linux/redhat-create-upload-vhd.md
-[virtual-machines-linux-how-to-attach-disk]:../../linux/add-disk.md
-[virtual-machines-linux-how-to-attach-disk-how-to-initialize-a-new-data-disk-in-linux]:../../linux/add-disk.md#connect-to-the-linux-vm-to-mount-the-new-disk
-[virtual-machines-linux-tutorial]:../../linux/quick-create-cli.md
-[virtual-machines-linux-update-agent]:../../linux/update-agent.md
-[virtual-machines-manage-availability-linux]:../../linux/manage-availability.md
-[virtual-machines-manage-availability-windows]:../../windows/manage-availability.md
-[virtual-machines-ps-create-preconfigure-windows-resource-manager-vms]:virtual-machines-windows-create-powershell.md
-[virtual-machines-sizes-linux]:../../linux/sizes.md
-[virtual-machines-sizes-windows]:../../windows/sizes.md
-[virtual-machines-windows-classic-ps-sql-alwayson-availability-groups]:./../../windows/sqlclassic/virtual-machines-windows-classic-ps-sql-alwayson-availability-groups.md
-[virtual-machines-windows-classic-ps-sql-int-listener]:./../../windows/sqlclassic/virtual-machines-windows-classic-ps-sql-int-listener.md
-[virtual-machines-sql-server-high-availability-and-disaster-recovery-solutions]:/azure/azure-sql/virtual-machines/windows/business-continuity-high-availability-disaster-recovery-hadr-overview
-[virtual-machines-sql-server-infrastructure-services]:/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview
-[virtual-machines-sql-server-performance-best-practices]:/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices
-[virtual-machines-upload-image-windows-resource-manager]:../../virtual-machines-windows-upload-image.md
-[virtual-machines-windows-tutorial]:../../virtual-machines-windows-hero-tutorial.md
-[virtual-machines-workload-template-sql-alwayson]:https://azure.microsoft.com/resources/templates/sql-server-2014-alwayson-existing-vnet-and-ad/
-[virtual-network-deploy-multinic-arm-cli]:../linux/multiple-nics.md
-[virtual-network-deploy-multinic-arm-ps]:../windows/multiple-nics.md
-[virtual-network-deploy-multinic-arm-template]:../../../virtual-network/template-samples.md
-[virtual-networks-configure-vnet-to-vnet-connection]:../../../vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md
-[virtual-networks-create-vnet-arm-pportal]:../../../virtual-network/manage-virtual-network.md#create-a-virtual-network
-[virtual-networks-manage-dns-in-vnet]:../../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
-[virtual-networks-multiple-nics]:../../../virtual-network/virtual-network-deploy-multinic-classic-ps.md
-[virtual-networks-nsg]:../../../virtual-network/security-overview.md
-[virtual-networks-reserved-private-ip]:../../../virtual-network/virtual-networks-static-private-ip-arm-ps.md
-[virtual-networks-static-private-ip-arm-pportal]:../../../virtual-network/virtual-networks-static-private-ip-arm-pportal.md
-[virtual-networks-udr-overview]:../../../virtual-network/virtual-networks-udr-overview.md
-[vpn-gateway-about-vpn-devices]:../../../vpn-gateway/vpn-gateway-about-vpn-devices.md
-[vpn-gateway-create-site-to-site-rm-powershell]:../../../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md
-[vpn-gateway-cross-premises-options]:../../../vpn-gateway/vpn-gateway-plan-design.md
-[vpn-gateway-site-to-site-create]:../../../vpn-gateway/vpn-gateway-site-to-site-create.md
-[vpn-gateway-vpn-faq]:../../../vpn-gateway/vpn-gateway-vpn-faq.md
-[xplat-cli]:../../../cli-install-nodejs.md
-[xplat-cli-azure-resource-manager]:../../../xplat-cli-azure-resource-manager.md
--
-This document covers several different areas to consider when you're deploying Oracle Database for SAP workload in Azure IaaS. Before you read this document, we recommend you read [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md). We also recommend that you read other guides in the [SAP workload on Azure documentation](./get-started.md).
-
-You can find information about Oracle versions and corresponding OS versions that are supported for running SAP on Oracle on Azure in SAP Note [2039619].
-
-General information about running SAP Business Suite on Oracle can be found at [SAP on Oracle](https://www.sap.com/community/topic/oracle.html).
-Oracle software is supported by Oracle to run on Microsoft Azure. For more information about general support for Windows Hyper-V and Azure, check the [Oracle and Microsoft Azure FAQ](https://www.oracle.com/technetwork/topics/cloud/faq-1963009.html).
-
-## SAP Notes relevant for Oracle, SAP, and Azure
-
-The following SAP Notes are related to SAP on Azure.
-
-| Note number | Title |
+This document covers several different areas to consider when deploying Oracle Database for SAP workload in Azure IaaS. Before you read this document, we recommend you read [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](/azure/virtual-machines/workloads/sap/dbms_guide_general).
+We also recommend that you read other guides in the [SAP workload on Azure documentation](/azure/virtual-machines/workloads/sap/get-started).
+
+You can find information about Oracle versions and corresponding OS versions that are supported for running SAP on Oracle on Azure in SAP Note [2039619](https://launchpad.support.sap.com/#/notes/2039619).
+
+General information about running SAP Business Suite on Oracle can be found at [SAP on Oracle](https://www.sap.com/community/topic/oracle.html). Oracle software is supported by Oracle to run on Microsoft Azure. For more information about general support for Windows Hyper-V and Azure, check the [Oracle and Microsoft Azure FAQ](https://www.oracle.com/technetwork/topics/cloud/faq-1963009.html).
+++
+### SAP notes relevant for an Oracle installation
+
+| Note number | Note title |
| | |
-| [1928533] |SAP Applications on Azure: Supported products and Azure VM types |
-| [2015553] |SAP on Microsoft Azure: Support prerequisites |
-| [1999351] |Troubleshooting enhanced Azure monitoring for SAP |
-| [2178632] |Key monitoring metrics for SAP on Microsoft Azure |
-| [2191498] |SAP on Linux with Azure: Enhanced monitoring |
-| [2039619] |SAP applications on Microsoft Azure using the Oracle database: Supported products and versions |
-| [2243692] |Linux on Microsoft Azure (IaaS) VM: SAP license issues |
-| [2069760] |Oracle Linux 7.x SAP installation and upgrade |
-| [1597355] |Swap-space recommendation for Linux |
-| [2171857] |Oracle Database 12c - file system support on Linux |
-| [1114181] |Oracle Database 11g - file system support on Linux |
+| 1738053 | [SAPinst for Oracle ASM installation SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/0001738053) |
+| 2896926 | [ASM disk group compatibility NetWeaver SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/0002896926) |
+| 1550133 | [Using Oracle Automatic Storage Management (ASM) with SAP NetWeaver based Products SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/0001550133) |
+| 888626 | [Redo log layout for high-end systems SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/0000888626) |
+| 105047 | [Support for Oracle functions in the SAP environment SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/0000105047) |
+| 2799920 | [Patches for 19c: Database SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/0002799920) |
+| 974876 | [Oracle Transparent Data Encryption (TDE) SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/0000974876) |
+| 2936683 | [Oracle Linux 8: SAP Installation and Upgrade SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2936683) |
+| 1672954 | [Oracle 11g, 12c, 18c and 19c: Usage of hugepages on Linux](https://launchpad.support.sap.com/#/notes/1672954) |
+| 1171650 | [Automated Oracle DB parameter check](https://launchpad.support.sap.com/#/notes/1171650) |
-The exact configurations and functionality that are supported by Oracle and SAP on Azure are documented in SAP Note [#2039619](https://launchpad.support.sap.com/#/notes/2039619).
+### Specifics for Oracle Database on Oracle Linux
-Windows and Oracle Linux are the only operating systems that are supported by Oracle and SAP on Azure. The widely used SLES and RHEL Linux distributions aren't supported for deploying Oracle components in Azure. Oracle components include the Oracle Database client, which is used by SAP applications to connect against the Oracle DBMS.
+Oracle software is supported by Oracle to run on Microsoft Azure with Oracle Linux as the guest OS. For more information about general support for Windows Hyper-V and Azure, see the [Azure and Oracle FAQ](https://www.oracle.com/technetwork/topics/cloud/faq-1963009.html).
-Exceptions, according to SAP Note [#2039619](https://launchpad.support.sap.com/#/notes/2039619), are SAP components that don't use the Oracle Database client. Such SAP components are SAP's stand-alone enqueue, message server, Enqueue replication services, WebDispatcher, and SAP Gateway.
+The specific scenario of SAP applications using Oracle Databases is supported as well. Details are discussed in the next part of the document.
-Even if you're running your Oracle DBMS and SAP application instances on Oracle Linux, you can run your SAP Central Services on SLES or RHEL and protect it with a Pacemaker-based cluster. Pacemaker as an high-availability framework has not been approved for support on Oracle Linux by SAP and Oracle.
+### General Recommendations for running SAP on Oracle on Azure
-## Specifics for Oracle Database on Windows
+When installing or migrating existing SAP on Oracle systems to Azure, the following deployment pattern should be followed:
-### Oracle Configuration guidelines for SAP installations in Azure VMs on Windows
+1. Use the most [recent Oracle Linux](https://docs.oracle.com/en/operating-systems/oracle-linux/8/) version available (Oracle Linux 8.6 or higher)
+2. Use the most recent Oracle Database version available with the latest SAP Bundle Patch (SBP) (Oracle 19 Patch 15 or higher) [2799920 - Patches for 19c: Database](https://launchpad.support.sap.com/#/notes/2799920)
+3. Use Automatic Storage Management (ASM) for small, medium and large sized databases on block storage
+4. Azure Premium Storage SSD should be used. Do not use Standard or other storage types.
+5. ASM removes the requirement for Mirror Log. Follow the guidance from Oracle in Note [888626 - Redo log layout for high-end systems](https://launchpad.support.sap.com/#/notes/888626)
+6. Use ASMLib and do not use udev
+7. Azure NetApp Files deployments should use Oracle dNFS (Oracle's own high performance Direct NFS solution)
+8. Large databases benefit greatly from large SGA sizes. Large customers should deploy on Azure M-series with 4 TB or more RAM size.
+ - Set Linux Huge Pages to 75% of Physical RAM size (see the sizing sketch after this list)
+ - Set SGA to 90% of Huge Page size
+9. Oracle Home should be located outside of the "root" volume or disk. Use a separate disk or ANF volume. The disk holding the Oracle Home should be 64 GB or larger
+10. The size of the boot disk for large high performance Oracle database servers is important. At a minimum, a P10 disk should be used for M-series or E-series VMs. Do not use small disks such as P4 or P6. A small disk can cause performance issues.
+11. Accelerated Networking must be enabled on all VMs. Upgrade to the latest OL release if there are any problems enabling Accelerated Networking
+12. Check for updates in this documentation and SAP note [2039619 - SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions - SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2039619)
+
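A quick way to derive the values from recommendation 8 above is to size Huge Pages from physical RAM. This is a sketch only, assuming 2 MB huge pages; review the numbers before applying them:
```
# 75% of physical RAM, expressed as a number of 2 MB (2048 KB) huge pages
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
hugepages=$(( mem_kb * 75 / 100 / 2048 ))
echo "vm.nr_hugepages = ${hugepages}"   # persist, for example, in /etc/sysctl.d/97-oracle.conf
# The SGA would then be sized at roughly 90% of (hugepages x 2 MB)
```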
+For information about which Oracle versions and corresponding OS versions are supported for running SAP on Oracle on Azure Virtual Machines, see SAP Note [2039619](https://launchpad.support.sap.com/#/notes/2039619).
+
+General information about running SAP Business Suite on Oracle can be found in the [SAP on Oracle community page](https://www.sap.com/community/topic/oracle.html). SAP on Oracle on Azure is only supported on Oracle Linux (and not SUSE or Red Hat). Oracle RAC is not supported on Azure because RAC would require multicast networking.
+
+## Storage configuration
+
+There are two recommended storage deployment patterns for SAP on Oracle on Azure:
+
+1. Oracle Automatic Storage Management (ASM)
+2. Azure NetApp Files (ANF) with Oracle dNFS (Direct NFS)
+
+Customers currently running Oracle databases on EXT4 or XFS file systems with LVM are encouraged to move to ASM. There are considerable performance, administration and reliability advantages to running on ASM compared to LVM. ASM reduces complexity, improves supportability and makes administration tasks simpler. This documentation contains links for Oracle DBAs to learn how to install and manage ASM.
+
+### Oracle Automatic Storage Management (ASM)
+
+Checklist for Oracle Automatic Storage Management:
+
+1. All SAP on Oracle on Azure systems run **ASM**, including Development, QAS and Production systems, and small, medium and large databases
+2. [**ASMLib**](https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/about-oracle-asm-with-oracle-asmlib.html)
+ is used and not UDEV. UDEV is required for multiple SANs, a scenario that does not exist on Azure
+3. ASM should be configured for **External Redundancy**. Azure Premium SSD storage has built in triple redundancy. Azure Premium SSD matches the reliability and integrity of any other storage solution. For optional safety customers can consider **Normal Redundancy** for the Log Disk Group
+4. No Mirror Log is required for ASM [888626 - Redo log layout for high-end systems](https://launchpad.support.sap.com/#/notes/888626)
+5. ASM Disk Groups configured as per Variant 1, 2 or 3 below
+6. ASM Allocation Unit size = 4MB (default). VLDB OLAP systems such as BW may benefit from larger ASM Allocation Unit size. Change only after confirming with Oracle support
+7. ASM Sector Size and Logical Sector Size = default (UDEV is not recommended but requires 4k)
+8. Appropriate ASM Variant is used. Production systems should use Variant 2 or 3
+
+### Oracle Automatic Storage Management Disk Groups
+
+Part II of the official Oracle Guide describes the installation and the management of ASM:
+
+- [Oracle Automatic Storage Management Administrator's Guide, 19c](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/index.html)
+- [Oracle Grid Infrastructure Installation and Upgrade Guide, 19c for Linux](https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/index.html)
+
+The following ASM limits exist for Oracle Database 12c or later:
+
+511 disk groups, 10,000 ASM disks in a Disk Group, 65,530 ASM disks in a storage system, 1 million files for each Disk Group. More info here: [Performance and Scalability Considerations for Disk Groups (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/performance-scability-diskgroup.html#GUID-5AC1176D-D331-4C1C-978F-0ECA43E0900F)
+
+Review the ASM documentation in the relevant SAP Installation Guide for Oracle available from <https://help.sap.com/viewer/nwguidefinder>
+
+### Variant 1 - small to medium data volumes up to 3 TB, restore time not critical
+
+Customer has small or medium sized databases where backup and/or restore + recovery of all databases can be accomplished by RMAN in a timely fashion. Example: When a complete Oracle ASM disk group, with data files, from one or more databases is broken and all data files from all databases need to be restored to a newly created Oracle ASM disk group using RMAN.
+
+Oracle ASM disk group recommendation:
-In accordance with the SAP installation manual, Oracle-related files shouldn't be installed or located in the OS disk of the VM (drive c:). Virtual machines of varying sizes can support a varying number of attached disks. Smaller virtual machine types can support a smaller number of attached disks.
+|ASM Disk Group Name |Stores | Azure Storage |
+|--|--|--|
+| **+DATA** |All data files |3-6 x P30 (1 TiB) |
+| |Control file (first copy) | To increase DB size, add extra P30 disks |
+| |Online redo logs (first copy) | |
+| **+ARCH** |Control file (second copy) | 2 x P20 (512 GiB) |
+| |Archived redo logs | |
+| **+RECO** |Control file (third copy) | 2 x P20 (512 GiB) |
+| |RMAN backups (optional) | |
+| | recovery area (optional) | |
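Creating the Variant 1 disk groups can be done with `asmca` in silent mode, similar to the add-disk example later in this article. This is a sketch only: the device paths are placeholders, and the exact flags should be verified against the ASMCA command reference for your release:
```
asmca -silent -createDiskGroup -diskGroupName DATA -redundancy EXTERNAL \
  -disk '/dev/oracleasm/disks/DATA01' -disk '/dev/oracleasm/disks/DATA02' -disk '/dev/oracleasm/disks/DATA03'
asmca -silent -createDiskGroup -diskGroupName ARCH -redundancy EXTERNAL \
  -disk '/dev/oracleasm/disks/ARCH01' -disk '/dev/oracleasm/disks/ARCH02'
asmca -silent -createDiskGroup -diskGroupName RECO -redundancy EXTERNAL \
  -disk '/dev/oracleasm/disks/RECO01' -disk '/dev/oracleasm/disks/RECO02'
```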
-If you have smaller VMs and would hit the limit of the number of disks you can attach to the VM, you can install/locate Oracle home, stage, `saptrace`, `saparch`, `sapbackup`, `sapcheck`, or `sapreorg` into the OS disk. These parts of Oracle DBMS components aren't too intense on I/O and I/O throughput. This means that the OS disk can handle the I/O requirements. The default size of the OS disk should be 127 GB.
+### Variant 2 - medium to large data volumes between 3 TB and 12 TB, restore time important
-Oracle Database and redo log files need to be stored on separate data disks. There's an exception for the Oracle temporary tablespace. `Tempfiles` can be created on D:/ (non-persistent drive). The non-persistent D:\ drive also offers better I/O latency and throughput (with the exception of A-Series VMs).
+Customer has medium to large sized databases where backup and/or restore + recovery of all databases cannot be accomplished in a timely fashion.
-To determine the right amount of space for the `tempfiles`, you can check the sizes of the `tempfiles` on existing systems.
-### Storage configuration
-Only single-instance Oracle using NTFS formatted disks is supported. All database files must be stored on the NTFS file system on Managed Disks (recommended) or on VHDs. These disks are mounted to the Azure VM and are based on [Azure page blob storage](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs) or [Azure Managed Disks](../../managed-disks-overview.md).
+Usually customers will use RMAN, Azure Backup for Oracle and/or disk snapshot techniques in combination.
-Check out the article [Azure Storage types for SAP workload](./planning-guide-storage.md) to get more details of the specific Azure block storage types suitable for DBMS workload.
+Major differences to Variant 1 are:
-We strongly recommend using [Azure Managed Disks](../../managed-disks-overview.md). We also strongly recommend using [Azure premium storage or Azure Ultra disk](../../disks-types.md) for your Oracle Database deployments.
+1. Separate Oracle ASM Disk Group for each database
+2. The database name followed by an underscore ("\<DBNAME\>\_") is used as a prefix for the name of the DATA disk group
+3. The number of the DATA disk group is appended if the database spans over more than one DATA disk group
+4. No online redo logs are located in the "data" disk groups. Instead, an extra disk group is used for the first member of each online redo log group.
-Network drives or remote shares like Azure file services aren't supported for Oracle Database files. For more information, see:
+| ASM Disk Group Name | Stores |Azure Storage |
+||-||
+| **+\<DBNAME\>\_DATA[#]** | All data files | 3-12 x P 30 (1 TiB) |
+| | All temp files | To increase DB size, add extra P30 disks |
+| |Control file (first copy) | |
+| **+OLOG** | Online redo logs (first copy) | 3 x P20 (512 GiB) |
+| **+ARCH** | Control file (second copy) |3 x P20 (512 GiB) |
+| | Archived redo logs | |
+| **+RECO** | Control file (third copy) | 3 x P20 (512 GiB) |
+| |RMAN backups (optional) | |
+| |Fast recovery area (optional) | |
-- [Introducing Microsoft Azure File Service](/archive/blogs/windowsazurestorage/introducing-microsoft-azure-file-service) -- [Persisting connections to Microsoft Azure Files](/archive/blogs/windowsazurestorage/persisting-connections-to-microsoft-azure-files)
+### Variant 3 - huge data and data change volumes more than 5 TB, restore time crucial
-If you're using disks that are based on Azure page blob storage or Managed Disks, the statements in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md) apply to deployments with Oracle Database as well.
+Customer has a huge database where backup and/or restore + recovery of a single database cannot be accomplished in a timely fashion.
-Quotas on IOPS throughput for Azure disks exist. This concept is explained in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md). The exact quotas depend on the VM type that you use. A list of VM types with their quotas can be found at [Sizes for Windows virtual machines in Azure][virtual-machines-sizes-windows].
+Usually customers will use RMAN, Azure Backup for Oracle and/or disk snapshot techniques in combination. In this variant, each relevant database file type is separated into different Oracle ASM disk groups.
-To identify the supported Azure VM types, see SAP Note [1928533].
+|ASM Disk Group Name | Stores | Azure Storage |
+||||
+| **+\<DBNAME\>\_DATA[#]** | All data files |5-30 or more x P30 (1 TiB) or P40 (2 TiB)
+| | All temp files To increase DB size, add extra P30 disks |
+| |Control file (first copy) | |
+| **+OLOG** | Online redo logs (first copy) |3-8 x P20 (512 GiB) or P30 (1 TiB) |
+| | | For more safety, "Normal Redundancy" can be selected for this ASM Disk Group |
+|**+ARCH** | Control file (second copy) |3-8 x P20 (512 GiB) or P30 (1 TiB) |
+| | Archived redo logs | |
+| **+RECO** | Control file (third copy) |3 x P30 (1 TiB), P40 (2 TiB) or P50 (4 TiB) |
+| |RMAN backups (optional) | |
+| | Fast recovery area (optional) | |
-The minimum configuration is as follows:
-| Component | Disk | Caching | Storage pool |
-| | | | |
-| \oracle\<SID>\origlogaA & mirrlogB | Premium or Ultra disk | None | Not needed |
-| \oracle\<SID>\origlogaB & mirrlogA | Premium or Ultra disk | None | Not needed |
-| \oracle\<SID>\sapdata1...n | Premium or Ultra disk | Read-only | Can be used for Premium |
-| \oracle\<SID>\oraarch | Standard | None | Not needed |
-| Oracle Home, `saptrace`, ... | OS disk (Premium) | | Not needed |
+> [!NOTE]
+> Azure Host Disk Cache for the DATA ASM Disk Group can be set to either Read Only or None. All other ASM Disk Groups should be set to None. On BW or SCM a separate ASM Disk Group for TEMP can be considered for large or busy systems.
-Disks selection for hosting online redo logs should be driven by IOPS requirements. It's possible to store all sapdata1...n (tablespaces) on one single mounted disk as long as the size, IOPS, and throughput satisfy the requirements.
+### Adding Space to ASM + Azure Disks
-The performance configuration is as follows:
+Oracle ASM Disk Groups can either be extended by adding extra disks or by extending current disks. It is recommended to add extra disks rather than extending existing disks. Review MOS Notes 1684112.1 and 2176737.1 for details.
-| Component | Disk | Caching | Storage pool |
-| | | | |
-| \oracle\<SID>\origlogaA | Premium or Ultra disk | None | Can be used for Premium |
-| \oracle\<SID>\origlogaB | Premium or Ultra disk | None | Can be used for Premium |
-| \oracle\<SID>\mirrlogAB | Premium or Ultra disk | None | Can be used for Premium |
-| \oracle\<SID>\mirrlogBA | Premium or Ultra disk | None | Can be used for Premium |
-| \oracle\<SID>\sapdata1...n | Premium or Ultra disk | Read-only | Recommended for premium |
-| \oracle\SID\sapdata(n+1)* | Premium or Ultra disk | None | Can be used for Premium |
-| \oracle\<SID>\oraarch* | Premium or Ultra disk | None | Not needed |
-| Oracle Home, `saptrace`, ... | OS disk (Premium) | Not needed |
+ASM will add a disk to the disk group:
+`asmca -silent -addDisk -diskGroupName DATA -disk '/dev/sdd1'`
-*(n+1): hosting SYSTEM, TEMP, and UNDO tablespaces. The I/O pattern of System and Undo tablespaces are different from other tablespaces hosting application data. No caching is the best option for performance of the System and Undo tablespaces.
+ASM will automatically rebalance the data.
+To check rebalancing run this command.
-*oraarch: storage pool isn't necessary from a performance point of view. It can be used to get more space.
+`ps -ef | grep rbal`
-If more IOPS are required in case of Azure premium storage, we recommend using Windows Storage Pools (only available in Windows Server 2012 and later) to create one large logical device over multiple mounted disks. This approach simplifies the administration overhead for managing the disk space, and helps you avoid the effort of manually distributing files across multiple mounted disks.
+`oraasm 4288 1 0 Jul28 ? 00:04:36 asm_rbal_oradb1`
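An alternative to checking the OS process list is to ask the ASM instance directly; `asmcmd lsop` lists active ASM operations, and a running rebalance shows up as a `REBAL` entry:
```
# Run as the Grid/ASM software owner with the ASM environment (ORACLE_SID=+ASM) set
asmcmd lsop
```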
-#### Write Accelerator
-For Azure M-Series VMs, the latency writing into the online redo logs can be reduced by factors when compared to Azure premium storage. Enable Azure Write Accelerator for the disks (VHDs) based on Azure Premium Storage that are used for online redo log files. For more information, see [Write Accelerator](../../how-to-enable-write-accelerator.md). Or use Azure Ultra disk for the online redo log volume.
+Documentation is available with:
+- [How to Resize ASM Disk Groups Between Multiple Zones (aemcorp.com)](https://www.aemcorp.com/managedservices/blog/resizing-asm-disk-groups-between-multiple-zones)
+- [RESIZING - Altering Disk Groups (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/21/ostmg/alter-diskgroups.html#GUID-6AEFFA72-7BDC-4AA8-8667-8417AAF3DAC8)
+### Monitoring SAP on Oracle ASM Systems on Azure
-### Backup/restore
-For backup/restore functionality, the SAP BR*Tools for Oracle are supported in the same way as they are on standard Windows Server operating systems. Oracle Recovery Manager (RMAN) is also supported for backups to disk and restores from disk.
+Run an Oracle AWR report as the first step when troubleshooting a performance problem. Disk performance metrics will be detailed in the AWR report.
-You can also use Azure Backup to run an application-consistent VM backup. The article [Plan your VM backup infrastructure in Azure](../../../backup/backup-azure-vms-introduction.md) explains how Azure Backup uses the Windows VSS functionality for executing application-consistent backups. The Oracle DBMS releases that are supported on Azure by SAP can leverage the VSS functionality for backups. For more information, see the Oracle documentation [Basic concepts of database backup and recovery with VSS](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ntqrf/basic-concepts-of-database-backup-and-recovery-with-vss.html#GUID-C085101B-237F-4773-A2BF-1C8FD040C701).
+Disk performance can be monitored from inside Oracle Enterprise Manager and via external tools. Documentation which might help is available here:
+- [Using Views to Display Oracle ASM Information](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/views-asm-info.html#GUID-23E1F0D8-ECF5-4A5A-8C9C-11230D2B4AD4)
+- [ASMCMD Disk Group Management Commands (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/asmcmd-diskgroup-commands.html#GUID-55F7A91D-2197-467C-9847-82A3308F0392)
+OS-level monitoring tools cannot monitor ASM disks because there is no recognizable file system. Free space monitoring must be done from within Oracle, as shown in the sketch below.
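For example, free and total space per disk group can be checked with `asmcmd`; a minimal sketch, run as the Grid/ASM software owner:
```
# Output columns include Total_MB and Free_MB per disk group
asmcmd lsdg
```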
+### Training Resources on Oracle Automatic Storage Management (ASM)
-### High availability
-Oracle Data Guard is supported for high availability and disaster recovery purposes. To achieve automatic failover in Data Guard, your need to use Fast-Start Failover (FSFA). The Observer (FSFA) triggers the failover. If you don't use FSFA, you can only use a manual failover configuration.
+Oracle DBAs who are not familiar with Oracle ASM can follow the training materials and resources here:
+- [Sap on Oracle with ASM on Microsoft Azure - Part1 - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-oracle-with-asm-on-microsoft-azure-part1/ba-p/1865024)
+- [Oracle19c DB \[ ASM \] installation on \[ Oracle Linux 8.3 \] \[ Grid \| ASM \| UDEV \| OEL 8.3 \] \[ VMware \] - YouTube](https://www.youtube.com/watch?v=pRJgiuT-S2M)
+- [ASM Administrator's Guide (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/automatic-storage-management-administrators-guide.pdf)
+- [Oracle for SAP Technology Update (April 2022)](https://www.oracle.com/a/ocom/docs/ora4sap-technology-update-5112158.pdf)
+- [Performance and Scalability Considerations for Disk Groups (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/performance-scability-diskgroup.html#GUID-BC6544D7-6D59-42B3-AE1F-4201D3459ADD)
+- [Migrating to Oracle ASM with Oracle Enterprise Manager](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/admin-asm-em.html#GUID-002546C0-7D5F-46E9-B3AD-CDCFF25AFEA0)
+- [Using RMAN to migrate to ASM \| The Oracle Mentor (wordpress.com)](https://theoraclementor.wordpress.com/2013/07/07/using-rman-to-migrate-to-asm/)
+- [What is Oracle ASM to Azure IaaS? - Simple Talk (red-gate.com)](https://www.red-gate.com/simple-talk/databases/oracle-databases/what-is-oracle-asm-to-azure-iaas/)
+- [ASM Command-Line Utility (ASMCMD) (oracle.com)](https://docs.oracle.com/cd/B19306_01/server.102/b14215/asm_util.htm)
+- [Useful asmcmd commands - DBACLASS DBACLASS](https://dbaclass.com/article/useful-asmcmd-commands-oracle-cluster/)
+- [Moving your SAP Database to Oracle Automatic Storage Management 11g Release 2 - A Best Practices Guide](https://www.sap.com/documents/2016/08/f2e8c029-817c-0010-82c7-eda71af511fa.html)
+- [Installing and Configuring Oracle ASMLIB Software](https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/installing-and-configuring-oracle-asmlib-software.html#GUID-79F9D58F-E5BB-45BD-A664-260C0502D876)
-For more information about disaster recovery for Oracle databases in Azure, see [Disaster recovery for an Oracle Database 12c database in an Azure environment](../oracle/oracle-disaster-recovery.md).
+## Azure NetApp Files (ANF) with Oracle dNFS (Direct NFS)
-### Accelerated networking
-For Oracle deployments on Windows, we strongly recommend accelerated networking as described in [Azure accelerated networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/). Also consider the recommendations that are made in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md).
-### Other
-[Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md) describes other important concepts related to deployments of VMs with Oracle Database, including Azure availability sets and SAP monitoring.
+The combination of Azure VMs and ANF is a robust and proven solution implemented by many customers on an exceptionally large scale.
-## Specifics for Oracle Database on Oracle Linux
-Oracle software is supported by Oracle to run on Microsoft Azure with Oracle Linux as the guest OS. For more information about general support for Windows Hyper-V and Azure, see the [Azure and Oracle FAQ](https://www.oracle.com/technetwork/topics/cloud/faq-1963009.html).
+Databases of 100+ TB are already running in production on this combination. To get started, see this detailed blog on setting up the combination:
-The specific scenario of SAP applications leveraging Oracle Databases is supported as well. Details are discussed in the next part of the document.
+- [Deploy SAP AnyDB (Oracle 19c) with Azure NetApp Files - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-anydb-oracle-19c-with-azure-netapp-files/ba-p/2064043)
-### Oracle version support
-For information about which Oracle versions and corresponding OS versions are supported for running SAP on Oracle on Azure Virtual Machines, see SAP Note [2039619].
+More general information is available here:
-General information about running SAP Business Suite on Oracle can be found in the [SAP on Oracle community page](https://www.sap.com/community/topic/oracle.html).
+- [TR-3633: Oracle Databases on NetApp ONTAP \| NetApp](https://www.netapp.com/pdf.html?item=/media/8744-tr3633pdf.pdf)
+- [NFS best practice and implementation guide \| TR-4067 (netapp.com)](https://www.netapp.com/media/10720-tr-4067.pdf)
-### Oracle configuration guidelines for SAP installations in Azure VMs on Linux
+A mirrored redo log (Mirror Log) is required on dNFS ANF production systems.
-In accordance with SAP installation manuals, Oracle-related files shouldn't be installed or located into system drivers for a VM's boot disk. Varying sizes of virtual machines support a varying number of attached disks. Smaller virtual machine types can support a smaller number of attached disks.
+Even though ANF is highly redundant, Oracle still requires a mirrored redo log file volume. The recommendation is to create two separate volumes and configure origlogA together with mirrlogB and origlogB together with mirrlogA. This way, the load of the redo log files is distributed across both volumes.
-In this case, we recommend installing/locating Oracle home, stage, `saptrace`, `saparch`, `sapbackup`, `sapcheck`, or `sapreorg` to boot disk. These parts of Oracle DBMS components aren't intense on I/O and I/O throughput. This means that the OS disk can handle the I/O requirements. The default size of the OS disk is 30 GB. You can expand the boot disk by using the Azure portal, PowerShell, or CLI. After the boot disk has been expanded, you can add an additional partition for Oracle binaries.
+The mount option `nconnect` is **not** recommended when the dNFS client is configured. dNFS manages the I/O channel and makes use of multiple sessions, so this option is unnecessary and can cause a variety of issues. The dNFS client ignores the mount options and handles the I/O directly.
+Both NFS versions (v3 and v4.1) with ANF are supported for the Oracle binaries, data files, and log files.
-### Storage configuration
+We highly recommend using the Oracle dNFS client for all Oracle volumes.
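+
+A minimal sketch of enabling the dNFS client (the server name, IP address, export path, and mount point in `oranfstab` are hypothetical placeholders):
+
+```bash
+# Relink the Oracle binary with the Direct NFS client enabled
+# (run as the oracle user with the database stopped)
+cd $ORACLE_HOME/rdbms/lib
+make -f ins_rdbms.mk dnfs_on
+
+# Optionally describe the ANF endpoint in oranfstab (values are placeholders)
+cat <<'EOF' >> $ORACLE_HOME/dbs/oranfstab
+server: anf-oracle-01
+path: 10.0.0.5
+export: /oradata mount: /oracle/SID/sapdata1
+EOF
+```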
-The filesystems of ext4, xfs, NFSv4.1 (only on Azure NetApp Files (ANF)) or Oracle ASM (see SAP Note [#2039619](https://launchpad.support.sap.com/#/notes/2039619) for release/version requirements) are supported for Oracle Database files on Azure. All database files must be stored on these file systems based on VHDs, Managed Disks, or ANF. These disks are mounted to the Azure VM and are based on [Azure page blob storage](/rest/api/storageservices/Understanding-Block-Blobs--Append-Blobs--and-Page-Blobs), [Azure Managed Disks](../../managed-disks-overview.md), or [Azure NetApp Files](https://azure.microsoft.com/services/netapp/).
+Recommended mount options are listed below; an example mount command follows the table:
-Minimum requirements list like:
+| NFS Version | Mount Options |
+|-------------|---------------|
+| **NFSv3** | rw,vers=3,rsize=262144,wsize=262144,hard,timeo=600,noatime |
+| **NFSv4.1** | rw,vers=4.1,rsize=262144,wsize=262144,hard,timeo=600,noatime |
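+
+For illustration, a mount command using the NFSv3 options above (the server IP, volume path, and mount point are placeholders):
+
+```bash
+# Mount an ANF volume over NFSv3 with the recommended options
+sudo mount -t nfs \
+  -o rw,vers=3,rsize=262144,wsize=262144,hard,timeo=600,noatime \
+  10.0.0.5:/oradata-vol /oracle/SID/sapdata1
+```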
-- For Oracle Linux UEK kernels, a minimum of UEK version 4 is required to support [Azure premium SSDs](../../premium-storage-performance.md#disk-caching).-- For Oracle with ANF the minimum supported Oracle Linux is 8.2.-- For Oracle with ANF the minimum supported Oracle version is 19c (19.8.0.0)
-Checkout the article [Azure Storage types for SAP workload](./planning-guide-storage.md) to get more details of the specific Azure block storage types suitable for DBMS workload.
+### ANF Backup
-Using Azure block storage, it is highly recommended to use [Azure managed disks](../../managed-disks-overview.md) and [Azure premium SSDs](../../disks-types.md) for your Oracle Database deployments.
+With ANF, some key features are available, like consistent snapshot-based backups, low latency, and remarkably high performance. From version 6 of the AzAcSnap tool, [Azure Application Consistent Snapshot tool for ANF](/azure/azure-netapp-files/azacsnap-get-started), Oracle databases can be configured for consistent database snapshots. Customers also value the option of resizing the volumes on the fly.
-Except for Azure NetApp Files, other shared disks, network drives, or remote shares like Azure File Services (AFS) aren't supported for Oracle Database files. For more information, see the following:
+Those snapshots remain on the actual data volume and must be copied away using ANF Cross Region Replication (CRR) ([Cross-region replication of ANF](/azure/azure-netapp-files/cross-region-replication-introduction)) or other backup tools.
-- [Introducing Microsoft Azure File Service](/archive/blogs/windowsazurestorage/introducing-microsoft-azure-file-service)
+## SAP on Oracle on Azure with LVM
-- [Persisting connections to Microsoft Azure Files](/archive/blogs/windowsazurestorage/persisting-connections-to-microsoft-azure-files)
+ASM is the default recommendation from Oracle for all SAP systems of any size on Azure. Performance, reliability, and support are better for customers using ASM. Oracle provides documentation and training for DBAs to transition to ASM, and every customer who has migrated to ASM has been pleased with the benefits. If the Oracle DBA team doesn't follow the recommendation from Oracle, Microsoft, and SAP to use ASM, the following LVM configuration should be used.
-If you're using disks based on Azure page blob storage or Managed Disks, the statements made in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md) apply to deployments with Oracle Database as well.
+Note: when creating the LVM, the `-i` option must be used to evenly distribute data across the number of disks in the LVM group.
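+
+A minimal sketch of creating a striped logical volume with the `-i` option (device names, volume group name, stripe size, and mount point are placeholders):
+
+```bash
+# Initialize four Premium SSD data disks and create a volume group
+sudo pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
+sudo vgcreate vg_sapdata /dev/sdc /dev/sdd /dev/sde /dev/sdf
+
+# Create a logical volume striped across all four disks
+# -i 4  : stripe across 4 physical volumes
+# -I 256: 256 KiB stripe size (example value)
+sudo lvcreate -i 4 -I 256 -l 100%FREE -n lv_sapdata vg_sapdata
+
+# Create a file system and mount it (xfs shown as an example)
+sudo mkfs.xfs /dev/vg_sapdata/lv_sapdata
+sudo mkdir -p /oracle/SID/sapdata1
+sudo mount /dev/vg_sapdata/lv_sapdata /oracle/SID/sapdata1
+```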
-Quotas on IOPS throughput for Azure disks exist. This concept is explained in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md).The exact quotas depend on the VM type that's used. For a list of VM types with their quotas, see [Sizes for Linux virtual machines in Azure][virtual-machines-sizes-linux].
+A mirrored redo log (Mirror Log) is required when running LVM.
-To identify the supported Azure VM types, see SAP Note [1928533].
+### Minimum configuration Linux:
-Minimum configuration:
+| **Component** | **Disk** | **Host Cache** | **Striping<sup>1</sup>** |
+|--|-|--|--|
+| /oracle/\<SID\>/origlogaA & mirrlogB | Premium | None | Not needed |
+| /oracle/\<SID\>/origlogaB & mirrlogA | Premium | None | Not needed |
+| /oracle/\<SID\>/sapdata1...n | Premium | Read-only<sup>2</sup> | Recommended |
+| /oracle/\<SID\>/oraarch<sup>3</sup> | Premium | None | Not needed |
+| Oracle Home, saptrace, ... | Premium | None | None |
-| Component | Disk | Caching | Stripping* |
-| | | | |
-| /oracle/\<SID>/origlogaA & mirrlogB | Premium, Ultra disk, or ANF | None | Not needed |
-| /oracle/\<SID>/origlogaB & mirrlogA | Premium, Ultra disk, or ANF | None | Not needed |
-| /oracle/\<SID>/sapdata1...n | Premium, Ultra disk, or ANF | Read-only | Can be used for Premium |
-| /oracle/\<SID>/oraarch | Standard or ANF | None | Not needed |
-| Oracle Home, `saptrace`, ... | OS disk (Premium) | | Not needed |
+1. Striping: LVM stripe using RAID0
+2. During R3load migrations, the Host Cache option for SAPDATA should be set to None
+3. oraarch: LVM is optional
-*Stripping: LVM stripe or MDADM using RAID0
+The disk selection for hosting Oracle's online redo logs should be driven by IOPS requirements. It's possible to store all sapdata1...n (tablespaces) on a single mounted disk as long as the volume, IOPS, and throughput satisfy the requirements.
-The disk selection for hosting Oracle's online redo logs should be driven by IOPS requirements. It's possible to store all sapdata1...n (tablespaces) on a single mounted disk as long as the volume, IOPS, and throughput satisfy the requirements.
+### Performance configuration Linux:
-Performance configuration:
+| **Component** | **Disk** | **Host Cache** | **Striping<sup>1</sup>** |
+|-|-|--|--|
+| /oracle/\<SID\>/origlogaA | Premium | None | Can be used |
+| /oracle/\<SID\>/origlogaB | Premium | None | Can be used |
+| /oracle/\<SID\>/mirrlogAB | Premium | None | Can be used |
+| /oracle/\<SID\>/mirrlogBA | Premium | None | Can be used |
+| /oracle/\<SID\>/sapdata1...n | Premium | Read-only<sup>2</sup> | Recommended |
+| /oracle/\<SID\>/oraarch<sup>3</sup> | Premium | None | Not needed |
+| Oracle Home, saptrace, ... | Premium | None | None |
-| Component | Disk | Caching | Stripping* |
-| | | | |
-| /oracle/\<SID>/origlogaA | Premium, Ultra disk, or ANF | None | Can be used for Premium |
-| /oracle/\<SID>/origlogaB | Premium, Ultra disk, or ANF | None | Can be used for Premium |
-| /oracle/\<SID>/mirrlogAB | Premium, Ultra disk, or ANF | None | Can be used for Premium |
-| /oracle/\<SID>/mirrlogBA | Premium, Ultra disk, or ANF | None | Can be used for Premium |
-| /oracle/\<SID>/sapdata1...n | Premium, Ultra disk, or ANF | Read-only | Recommended for Premium |
-| /oracle/\<SID>/sapdata(n+1)* | Premium, Ultra disk, or ANF | None | Can be used for Premium |
-| /oracle/\<SID>/oraarch* | Premium, Ultra disk, or ANF | None | Not needed |
-| Oracle Home, `saptrace`, ... | OS disk (Premium) | Not needed |
+1. Striping: LVM stripe using RAID0
+2. During R3load migrations, the Host Cache option for SAPDATA should be set to None
+3. oraarch: LVM is optional
-*Stripping: LVM stripe or MDADM using RAID0
+## Azure Infra: VM Throughput Limits & Azure Disk Storage Options
-*(n+1):hosting SYSTEM, TEMP, and UNDO tablespaces: The I/O pattern of System and Undo tablespaces are different from other tablespaces hosting application data. No caching is the best option for performance of the System and Undo tablespaces.
+### Oracle Automatic Storage Management (ASM)
+Customers running ASM on Azure can evaluate these storage technologies:
-*oraarch: storage pool isn't necessary from a performance point of view.
+1. Azure Premium Storage - currently the default choice
+2. Managed Disk Bursting - [Managed disk bursting - Azure Virtual Machines \| Microsoft Docs](/azure/virtual-machines/disk-bursting)
+3. Azure Write Accelerator
+4. Online disk extension for Azure Premium SSD storage (still in progress)
+Log write times can be improved on Azure M-Series VMs by enabling Write Accelerator. Enable Azure Write Accelerator for the Azure Premium Storage disks used by the ASM Disk Group for online redo log files. For more information, see [Write Accelerator](/azure/virtual-machines/how-to-enable-write-accelerator).
-If more IOPS are required when using Azure premium storage, we recommend using LVM (Logical Volume Manager) or MDADM to create one large logical volume over multiple mounted disks. For more information, see [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md) regarding guidelines and pointers on how to leverage LVM or MDADM. This approach simplifies the administration overhead of managing the disk space and helps you avoid the effort of manually distributing files across multiple mounted disks.
+Using Write Accelerator is optional but can be enabled if the AWR report indicates higher than expected log write times.
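+
+As a hedged example (resource group, VM name, and the data-disk index are placeholders), Write Accelerator can typically be enabled on an existing Premium SSD data disk with the Azure CLI:
+
+```bash
+# Enable Write Accelerator on the first data disk in the VM's storage profile
+az vm update \
+  --resource-group my-sap-rg \
+  --name my-oracle-vm \
+  --set storageProfile.dataDisks[0].writeAcceleratorEnabled=true
+```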
-If you plan to use Azure NetApp Files make sure the dNFS client is configured properly. Using dNFS is mandatory to have a supported environment. The configuration of dNFS is documented in the article [Creating an Oracle Database on Direct NFS](https://docs.oracle.com/en/database/oracle/oracle-database/19/ntdbi/creating-an-oracle-database-on-direct-nfs.html#GUID-2A0CCBAB-9335-45A8-B8E3-7E8C4B889DEA).
+### Azure VM Throughput Limits
-An example demonstrating the usage of Azure NetApp Files based NFS for Oracle databases is presented in the blog [Deploy SAP AnyDB (Oracle 19c) with Azure NetApp Files](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-anydb-oracle-19c-with-azure-netapp-files/ba-p/2064043).
+Each Azure VM type has documented limits for CPU, disk, network, and RAM. These limits are covered in the links below; a short CLI sketch for checking them follows the list.
+Follow these recommendations when selecting a VM type:
-#### Write Accelerator
-For Azure M-Series VMs, when you use Azure Write Accelerator, the latency writing into the online redo logs can be reduced by factors when using Azure premium storage. Enable Azure Write Accelerator for the disks (VHDs) based on Azure Premium Storage that are used for online redo log files. For more information, see [Write Accelerator](../../how-to-enable-write-accelerator.md). Or use Azure Ultra disk for the online redo log volume.
+1. Ensure the **Disk Throughput and IOPS** are sufficient for the workload and at least equal to the aggregate throughput of the disks
+2. Consider enabling paid **bursting**, especially for redo log disk(s)
+3. For ANF, the network throughput is important because all storage traffic is counted as "Network" rather than disk throughput
+4. Review this blog for Network tuning for M-series [Optimizing Network Throughput on Azure M-series VMs HCMT (microsoft.com)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/optimizing-network-throughput-on-azure-m-series-vms/ba-p/3581129)
+5. Review this [link](/azure/virtual-machines/workloads/oracle/oracle-design) that describes how to use an AWR report to select the correct Azure VM
+6. Azure Intel Ev5 [Edv5 and Edsv5-series - Azure Virtual Machines \|Microsoft Docs](/azure/virtual-machines/edv5-edsv5-series#edsv5-series)
+7. Azure AMD Eadsv5 [Easv5 and Eadsv5-series - Azure Virtual Machines \|Microsoft Docs](/azure/virtual-machines/easv5-eadsv5-series#eadsv5-series)
+8. Azure M-series/Msv2-series [M-series - Azure Virtual Machines \|Microsoft Docs](/azure/virtual-machines/m-series) and [Msv2/Mdsv2 Medium Memory Series - Azure Virtual Machines \| Microsoft Docs](/azure/virtual-machines/msv2-mdsv2-series)
+9. Azure Mv2 [Mv2-series - Azure Virtual Machines \| Microsoft Docs](/azure/virtual-machines/mv2-series)
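+
+As a quick sketch for checking the published limits (region and VM size are placeholders), the Azure CLI can list the capability values, including uncached disk IOPS and throughput, for a given VM size:
+
+```bash
+# Show documented capabilities (vCPUs, memory, max data disks,
+# uncached disk IOPS and throughput) for a VM size in a region
+az vm list-skus --location westeurope --size Standard_M64ms \
+  --query "[0].capabilities" --output table
+```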
+
+## Backup/restore
+
+For backup/restore functionality, the SAP BR\*Tools for Oracle are supported in the same way as they are on bare metal and Hyper-V. Oracle Recovery Manager (RMAN) is also supported for backups to disk and restores from disk.
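+
+As a minimal illustration (channel configuration and retention policies are omitted), an RMAN backup to disk could look like this:
+
+```bash
+# Full database backup to disk, including archived redo logs
+# (run as the oracle user; assumes OS authentication)
+rman target / <<'EOF'
+BACKUP DATABASE PLUS ARCHIVELOG;
+EOF
+```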
+
+For more information about how you can use Azure Backup and Recovery services for Oracle databases, see:
+- [Back up and recover an Oracle Database 12c database on an Azure Linux virtual machine](/azure/virtual-machines/workloads/oracle/oracle-overview)
+- The [Azure Backup service](/azure/backup/backup-overview) also supports Oracle backups, as described in [Back up and recover an Oracle Database 19c database on an Azure Linux VM using Azure Backup](/azure/virtual-machines/workloads/oracle/oracle-database-backup-azure-backup).
+
+## High availability
+
+Oracle Data Guard is supported for high availability and disaster recovery purposes. To achieve automatic failover in Data Guard, you need to use Fast-Start Failover (FSFA). The Observer functionality (FSFA) triggers the failover. If you don't use FSFA, you can only use a manual failover configuration. For more information, see [Implement Oracle Data Guard on an Azure Linux virtual machine](/azure/virtual-machines/workloads/oracle/configure-oracle-dataguard).
+
+Disaster Recovery aspects for Oracle databases in Azure are presented in the article [Disaster recovery for an Oracle Database 12c database in an Azure environment](/azure/virtual-machines/workloads/oracle/oracle-disaster-recovery).
+
+Another good resource is the Oracle whitepaper [Setting up Oracle 12c Data Guard for SAP Customers](https://www.sap.com/documents/2016/12/a67bac51-9a7c-0010-82c7-eda71af511fa.html).
+
+## Huge Pages & Large Oracle SGA Configurations
+
+VLDB SAP on Oracle on Azure deployments apply SGA sizes in excess of 3 TB. Modern versions of Oracle handle large SGA sizes well and significantly reduce I/O. Review the AWR report and increase the SGA size to reduce read I/O.
+
+As general guidance, Linux Huge Pages should be configured to approximately 75% of the VM RAM size. The SGA size can be set to 90% of the Huge Page size. As an approximate example, an M192ms VM with 4 TB of RAM would have Huge Pages set to approximately 3 TB. The SGA can be set to a value a little less than that, such as 2.95 TB.
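+
+A rough sizing sketch following the 75%/90% guidance above (the computed values are suggestions only and should be reviewed before use):
+
+```bash
+# Compute Linux Huge Pages for ~75% of VM RAM (2 MiB pages) and a
+# suggested SGA target of ~90% of the Huge Page pool
+ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
+hugepages=$(( ram_kb * 75 / 100 / 2048 ))       # number of 2 MiB pages
+sga_target_mb=$(( hugepages * 2 * 90 / 100 ))   # ~90% of the Huge Page pool, in MiB
+
+echo "vm.nr_hugepages = $hugepages"
+echo "Suggested SGA_TARGET ~ ${sga_target_mb} MiB"
+
+# Persist the setting (example only - review before applying in production)
+echo "vm.nr_hugepages = $hugepages" | sudo tee /etc/sysctl.d/90-oracle-hugepages.conf
+sudo sysctl --system
+```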
+
+Large SAP customers running on high-memory Azure VMs greatly benefit from Huge Pages, as described in this [article](https://www.carajandb.com/en/blog/2016/7-easy-steps-to-configure-hugepages-for-your-oracle-database-server/).
+
+On NUMA systems, vm.min_free_kbytes should be set to 524288 \* \<# of NUMA nodes\>. See [Oracle Linux: Recommended Value of vm.min_free_kbytes Kernel Tuning Parameter (Doc ID 2501269.1)](https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=79485198498171&parent=EXTERNAL_SEARCH&sourceId=HOWTO&id=2501269.1&_afrWindowMode=0&_adf.ctrl-state=mvhajwq3z_4).
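+
+A minimal sketch of applying that formula (the NUMA node count is read from `lscpu`; review before use):
+
+```bash
+# Set vm.min_free_kbytes to 524288 * <number of NUMA nodes>
+numa_nodes=$(lscpu | awk -F: '/NUMA node\(s\)/ {gsub(/ /,"",$2); print $2}')
+sudo sysctl -w vm.min_free_kbytes=$(( 524288 * numa_nodes ))
+```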
+
+## Links & other Oracle Linux Utilities
+
+Oracle Linux provides a useful GUI management utility, the Cockpit web console; an installation sketch follows the links below:
+- Oracle web console [Oracle Linux: Install Cockpit Web Console on Oracle Linux](https://docs.oracle.com/en/operating-systems/oracle-linux/8/obe-cockpit-install/index.html#want-to-learn-more)
+- Upstream [Cockpit Project — Cockpit Project (cockpit-project.org)](https://cockpit-project.org/)
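+
+A short installation sketch (assuming Oracle Linux 8 with firewalld active):
+
+```bash
+# Install and enable the Cockpit web console on Oracle Linux 8
+sudo dnf install -y cockpit
+sudo systemctl enable --now cockpit.socket
+
+# Open the firewall for the console (port 9090)
+sudo firewall-cmd --permanent --add-service=cockpit
+sudo firewall-cmd --reload
+```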
+
+Oracle Linux has a new package management tool, DNF:
+
+[Oracle Linux 8: Package Management made easy with free videos \| Oracle Linux Blog](https://blogs.oracle.com/linux/oracle-linux-8%3a-package-management-made-easy-with-free-videos)
+
+[Oracle® Linux 8 Managing Software on Oracle Linux - Chapter 1 Yum DNF](https://docs.oracle.com/en/operating-systems/oracle-linux/8/software-management/dnf.html)
+
+Memory and NUMA configurations can be tested and benchmarked with a useful tool, Oracle Real Application Testing (RAT):
+
+[Oracle Real Application Testing: What Is It and How Do You Use It? (aemcorp.com)](https://www.aemcorp.com/managedservices/blog/oracle-real-application-testing-rat-what-is-it-and-how-do-you-use-it)
+
+Information on the UDEV log corruption issue: [Oracle Redolog corruption on Azure \| Oracle in the field (wordpress.com)](https://bjornnaessens.wordpress.com/2021/07/29/oracle-redolog-corruption-on-azure/)
+
+[Oracle ASM in Azure corruption - follow up (dbaharrison.blogspot.com)](http://dbaharrison.blogspot.com/2017/07/oracle-asm-in-azure-corruption-follow-up.html)
+
+[Data corruption on Hyper-V or Azure when running Oracle ASM - Red Hat Customer Portal](https://access.redhat.com/solutions/3114361)
+
+[Set up Oracle ASM on an Azure Linux virtual machine - Azure Virtual Machines \| Microsoft Docs](/azure/virtual-machines/workloads/oracle/configure-oracle-asm)
+
+## Oracle configuration guidelines for SAP installations in Azure VMs on Windows
+SAP on Oracle on Azure also supports Windows. The recommendations for Windows deployments are summarized below:
-### Backup/restore
-For backup/restore functionality, the SAP BR*Tools for Oracle are supported in the same way as they are on bare metal and Hyper-V. Oracle Recovery Manager (RMAN) is also supported for backups to disk and restores from disk.
+1. The following Windows releases are recommended:
+   - Windows Server 2022 (only from Oracle Database 19.13.0 on)
+   - Windows Server 2019 (only from Oracle Database 19.5.0 on)
+2. There's no support for ASM on Windows. Windows Storage Spaces should be used to aggregate disks for optimal performance.
+3. Install the Oracle Home on a dedicated, independent disk (don't install Oracle Home on the C: drive).
+4. All disks must be formatted NTFS.
+5. Follow the Windows tuning guide from Oracle and enable large pages, lock pages in memory, and other Windows-specific settings.
-For more information about how you can use Azure Backup and Recovery services for backing up and recovering Oracle databases, see [Back up and recover an Oracle Database 12c database on an Azure Linux virtual machine](../oracle/oracle-overview.md).
+At the time of writing, ASM for Windows customers on Azure isn't supported. SWPM for Windows doesn't currently support ASM. VLDB SAP on Oracle migrations to Azure have required ASM and have therefore selected Oracle Linux.
-[Azure Backup service](../../../backup/backup-overview.md) is also supporting Oracle backups as described in the article [Back up and recover an Oracle Database 19c database on an Azure Linux VM using Azure Backup](../oracle/oracle-database-backup-azure-backup.md).
+## Storage Configurations for SAP on Oracle on Windows
+### Minimum configuration Windows:
-### High availability
-Oracle Data Guard is supported for high availability and disaster recovery purposes. To achieve automatic failover in Data Guard, you need to use Fast-Start Failover (FSFA). The Observer functionality (FSFA) triggers the failover. If you don't use FSFA, you can only use a manual failover configuration. For more information, see [Implement Oracle Data Guard on an Azure Linux virtual machine](../oracle/configure-oracle-dataguard.md).
+| **Component** | **Disk** | **Host Cache** | **Striping<sup>1</sup>** |
+|--|-|--|--|
+| E:\oracle\\\<SID\>\origlogaA & mirrlogB | Premium | None | Not needed |
+| F:\oracle\\\<SID\>\origlogaB & mirrlogA | Premium | None | Not needed |
+| G:\oracle\\\<SID\>\sapdata1...n | Premium | Read-only<sup>2</sup> | Recommended |
+| H:\oracle\\\<SID\>\oraarch<sup>3</sup> | Premium | None | Not needed |
+| I:\Oracle Home, saptrace, ... | Premium | None | None |
+1. Striping: Windows Storage Spaces
+2. During R3load migrations, the Host Cache option for SAPDATA should be set to None
+3. oraarch: Windows Storage Spaces is optional
-Disaster Recovery aspects for Oracle databases in Azure are presented in the article [Disaster recovery for an Oracle Database 12c database in an Azure environment](../oracle/oracle-disaster-recovery.md).
+The disk selection for hosting Oracle's online redo logs should be driven by IOPS requirements. It's possible to store all sapdata1...n (tablespaces) on a single mounted disk as long as the volume, IOPS, and throughput satisfy the requirements.
-### Accelerated networking
-Support for Azure Accelerated Networking in Oracle Linux is provided with Oracle Linux 7 Update 5 (Oracle Linux 7.5). If you can't upgrade to the latest Oracle Linux 7.5 release, there might be a workaround by using the RedHat Compatible Kernel (RHCK) instead of the Oracle UEK kernel.
+### Performance configuration Windows:
-Using the RHEL kernel within Oracle Linux is supported according to SAP Note [#1565179](https://launchpad.support.sap.com/#/notes/1565179). For Azure Accelerated Networking, the minimum RHCKL kernel release needs to be 3.10.0-862.13.1.el7. If you're using the UEK kernel in Oracle Linux in conjunction with [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/), you need to use Oracle UEK kernel version 5.
+| **Component** | **Disk** | **Host Cache** | **Striping<sup>1</sup>** |
+|-|-|--|--|
+| E:\oracle\\\<SID\>\origlogaA | Premium | None | Can be used |
+| F:\oracle\\\<SID\>\origlogaB | Premium | None | Can be used |
+| G:\oracle\\\<SID\>\mirrlogAB | Premium | None | Can be used |
+| H:\oracle\\\<SID\>\mirrlogBA | Premium | None | Can be used |
+| I:\oracle\\\<SID\>\sapdata1...n | Premium | Read-only<sup>2</sup> | Recommended |
+| J:\oracle\\\<SID\>\oraarch<sup>3</sup> | Premium | None | Not needed |
+| K:\Oracle Home, saptrace, ... | Premium | None | None |
-If you're deploying VMs from an image that's not based on Azure Marketplace, then you need to copy additional configuration files to the VM by running the following code:
-<pre><code># Copy settings from GitHub to the correct place in the VM
-sudo curl -so /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules https://raw.githubusercontent.com/LIS/lis-next/master/hv-rhel7.x/hv/tools/68-azure-sriov-nm-unmanaged.rules
-</code></pre>
+1. Striping: Windows Storage Spaces
+2. During R3load migrations, the Host Cache option for SAPDATA should be set to None
+3. oraarch: Windows Storage Spaces is optional
+### Links for Oracle on Windows
+- [Overview of Windows Tuning (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ntqrf/overview-of-windows-tuning.html#GUID-C0A0EC5D-65DD-4693-80B1-DA2AB6147AB9)
+- [Postinstallation Configuration Tasks on Windows (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ntqrf/postinstallation-configuration-tasks-on-windows.html#GUID-ECCA1626-A624-48E4-AB08-3D1F6419709E)
+- [SAP on Windows Presentation (oracle.com)](https://www.oracle.com/technetwork/topics/dotnet/tech-info/oow2015-windowsdb-bestpracticesperf-2757613.pdf)
+- [2823030 - Oracle on MS WINDOWS Large Pages](https://launchpad.support.sap.com/#/notes/2823030)
-## Next steps
+### Next steps
Read the article - [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md)
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 09/08/2022 Last updated : 09/14/2022
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- September 14, 2022 Release of updated SAP on Oracle guide with new and updated content [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms_guide_oracle.md)
- September 8, 2022: Change in [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md) to add instructions for deploying /hana/shared (only) on NFS on Azure Files - September 6, 2022: Add managed identity for pacemaker fence agent [Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure](high-availability-guide-suse-pacemaker.md) on SLES and [Setting up Pacemaker on RHEL in Azure](high-availability-guide-rhel-pacemaker.md) RHEL - August 22, 2022: Release of cost optimization scenario [Deploy PAS and AAS with SAP NetWeaver HA cluster](high-availability-guide-rhel-with-dialog-instance.md) on RHEL
In the SAP workload documentation space, you can find the following areas:
- June 23, 2020: Changes to [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md) guide and introduction of [Azure Storage types for SAP workload](./planning-guide-storage.md) guide - June 22, 2020: Add installation steps for new VM Extension for SAP to the [Deployment Guide](deployment-guide.md) - June 16, 2020: Change in [Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md) to add a link to SUSE Public Cloud Infrastructure 101 documentation -- June 10, 2020: Adding new HLI SKUs into [Available SKUs for HLI](./hana-available-skus.md) and [SAP HANA (Large Instances) storage architecture](./hana-storage-architecture.md)
+- June 10, 2020: Adding new HLI SKUs into [Available SKUs for HLI](./hana-available-skus.md) and [SAP HANA (Large Instances) storage architecture](./hana-storage-architecture.md)
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
For more information, see [remove components checklist](concept-remove-component
### Does Azure Virtual Network Manager store customer data? No. Azure Virtual Network Manager doesn't store any customer data.
+### Can an Azure Virtual Network Manager instance be moved?
+No. Moving a resource isn't currently supported. If you need to move an instance, consider deleting the existing AVNM instance and using an ARM template to create a new one in another location.
+ ### How can I see what configurations are applied to help me troubleshoot? You can view Azure Virtual Network Manager settings under **Network Manager** for a virtual network. You can see both the connectivity and security admin configurations that are applied. For more information, see [view applied configuration](how-to-view-applied-configurations.md).
virtual-wan About Nva Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-nva-hub.md
Title: 'About Network Virtual Appliances - Virtual WAN hub' description: Learn about Network Virtual Appliances in a Virtual WAN hub.- Previously updated : 06/02/2021 Last updated : 09/14/2022 # Customer intent: As someone with a networking background, I want to learn about Network Virtual Appliances in a Virtual WAN hub. # About NVAs in a Virtual WAN hub
-Customers can deploy select Network Virtual Appliances (NVAs) directly into a Virtual WAN hub in a solution that is jointly managed by Microsoft Azure and third-party Network Virtual Appliance vendors. Not all Network Virtual Appliances in Azure Marketplace can be deployed into a Virtual WAN hub. For a full list of available partners, see the [Partners](#partner) section of this article.
+Customers can deploy select Network Virtual Appliances (NVAs) directly into a Virtual WAN hub in a solution that is jointly managed by Microsoft Azure and third-party Network Virtual Appliance vendors. Not all Network Virtual Appliances in Azure Marketplace can be deployed into a Virtual WAN hub. For a full list of available partners, see the [Partners](#partners) section of this article.
## Key benefits
Deploying NVAs into a Virtual WAN hub provides the following benefits:
> [!IMPORTANT] > To ensure you get the best support for this integrated solution, make sure you have similar levels of support entitlement with both Microsoft and your Network Virtual Appliance provider.
-## <a name ="partner"></a> Partners
+## Partners
[!INCLUDE [NVA partners](../../includes/virtual-wan-nva-hub-partners.md)]
Customers can deploy an Azure Firewall alongside their connectivity-based NVAs.
Customers can also deploy NVAs into a Virtual WAN hub that perform both SD-WAN connectivity and Next-Generation Firewall capabilities. Customers can connect on-premises devices to the NVA in the hub and also use the same appliance to inspect all North-South, East-West, and Internet-bound traffic. Routing to enable these scenarios can be configured via [Routing Intent and Routing Policies](./how-to-routing-policies.md).
-Partners that support these traffic flows are listed as **dual-role SD-WAN connectivity and security (Next-Generation Firewall) Network Virtual Appliances** in the [Partners section](#partner).
+Partners that support these traffic flows are listed as **dual-role SD-WAN connectivity and security (Next-Generation Firewall) Network Virtual Appliances** in the [Partners section](#partners).
:::image type="content" source="./media/about-nva-hub/global-transit-ngfw.png" alt-text="Global transit architecture with third-party NVA." lightbox="./media/about-nva-hub/global-transit-ngfw.png":::
NVA Partners may create different resources depending on their appliance deploym
### Managed resource group permissions
-By default, all managed resource groups have an deny-all Azure Active Directory assignment. Deny-all assignments prevent customers from calling write operations on any resources in the managed resource group, including Network Virtual Appliance resources.
+By default, all managed resource groups have a deny-all Azure Active Directory assignment. Deny-all assignments prevent customers from calling write operations on any resources in the managed resource group, including Network Virtual Appliance resources.
However, partners may create exceptions for specific actions that customers are allowed to perform on resources deployed in managed resource groups.
-Permissions on resources in existing managed resource groups are not dynamically updated as new permitted actions are added by partners and require a manual refresh.
+Permissions on resources in existing managed resource groups aren't dynamically updated as new permitted actions are added by partners and require a manual refresh.
To refresh permissions on the managed resource groups, customers can leverage the [Refresh Permissions REST API ](/rest/api/managedapplications/applications/refresh-permissions).