Updates from: 06/08/2022 01:20:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Relyingparty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/relyingparty.md
The following example shows a **RelyingParty** element in the *B2C_1A_signup_sig
<UserJourneyBehaviors> <SingleSignOn Scope="Tenant" KeepAliveInDays="7"/> <SessionExpiryType>Rolling</SessionExpiryType>
- <SessionExpiryInSeconds>300</SessionExpiryInSeconds>
+ <SessionExpiryInSeconds>900</SessionExpiryInSeconds>
<JourneyInsights TelemetryEngine="ApplicationInsights" InstrumentationKey="your-application-insights-key" DeveloperMode="true" ClientEnabled="false" ServerEnabled="true" TelemetryVersion="1.0.0" /> <ContentDefinitionParameters> <Parameter Name="campaignId">{OAUTH-KV:campaignId}</Parameter>
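Read together, the diff above leaves the session-management portion of the **UserJourneyBehaviors** element looking roughly like this (a sketch assuming the rest of the policy is unchanged; the rolling session now expires after 900 seconds of inactivity, that is, 15 minutes instead of 5):

```xml
<UserJourneyBehaviors>
  <SingleSignOn Scope="Tenant" KeepAliveInDays="7" />
  <SessionExpiryType>Rolling</SessionExpiryType>
  <!-- Updated value: 900 seconds (15 minutes); previously 300 seconds (5 minutes) -->
  <SessionExpiryInSeconds>900</SessionExpiryInSeconds>
  <JourneyInsights TelemetryEngine="ApplicationInsights" InstrumentationKey="your-application-insights-key"
                   DeveloperMode="true" ClientEnabled="false" ServerEnabled="true" TelemetryVersion="1.0.0" />
</UserJourneyBehaviors>
```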
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Previously updated : 08/25/2021 Last updated : 05/25/2022
The provisioning service uses the concept of a job to operate against an applica
If you are using an application in the gallery, the job generally contains the name of the app (for example, zoom, snowFlake, dataBricks). You can skip this documentation when using a gallery application. This primarily applies to non-gallery applications with jobID SCIM or customAppSSO. ## SCIM 2.0 compliance issues and status
-In the table below, any item marked as fixed means that the proper behavior can be found on the SCIM job. We have worked to ensure backwards compatibility for the changes we have made. However, we do not recommend implementing old behavior. We recommend using the new behavior for any new implementations and updating existing implementations.
+In the table below, any item marked as fixed means that the proper behavior can be found on the SCIM job. We have worked to ensure backwards compatibility for the changes we have made. We recommend using the new behavior for any new implementations and updating existing implementations. Note that the customappsso behavior that was the default prior to December 2018 is no longer supported.
> [!NOTE]
-> For the changes made in 2018, you can revert back to the customappsso behavior. For the changes made since 2018, you can use the URLs to revert back to the older behavior. We have worked to ensure backwards compatibility for the changes we have made by allowing you to revert back to the old jobID or by using a flag. However, as previously mentioned, we do not recommend implementing old behavior. We recommend using the new behavior for any new implementations and updating existing implementations.
+> For the changes made in 2018, it is possible to revert to the customappsso behavior. For the changes made since 2018, you can use the URLs to revert to the older behavior. We have worked to ensure backwards compatibility by allowing you to revert to the old jobID or by using a flag. However, as previously mentioned, we don't recommend implementing the old behavior, as it is no longer supported. We recommend using the new behavior for any new implementations and updating existing implementations.
| **SCIM 2.0 compliance issue** | **Fixed?** | **Fix date** | **Backwards compatibility** | ||||
active-directory How To Migrate Mfa Server To Azure Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-with-federation.md
This section covers final steps before migrating user phone numbers.
### Set federatedIdpMfaBehavior to enforceMfaByFederatedIdp
-For federated domains, MFA may be enforced by Azure AD Conditional Access or by the on-premises federation provider. Each federated domain has a Microsoft Graph PowerShell security setting named **federatedIdpMfaBehavior**. You can set **federatedIdpMfaBehavior** to `enforceMfaByFederatedIdp` so Azure AD accepts MFA that's performed by the federated identity provider. If the federated identity provider didn't perform MFA, Azure AD redirects the request to the federated identity provider to perform MFA. For more information, see [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values).
+For federated domains, MFA may be enforced by Azure AD Conditional Access or by the on-premises federation provider. Each federated domain has a Microsoft Graph PowerShell security setting named **federatedIdpMfaBehavior**. You can set **federatedIdpMfaBehavior** to `enforceMfaByFederatedIdp` so Azure AD accepts MFA that's performed by the federated identity provider. If the federated identity provider didn't perform MFA, Azure AD redirects the request to the federated identity provider to perform MFA. For more information, see [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta&preserve-view=true#federatedidpmfabehavior-values).
>[!NOTE] > The **federatedIdpMfaBehavior** setting is an evolved version of the **SupportsMfa** property of the [Set-MsolDomainFederationSettings MSOnline v1 PowerShell cmdlet](/powershell/module/msonline/set-msoldomainfederationsettings).
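As a concrete illustration of the setting described above: the change amounts to a single PATCH against the beta `internalDomainFederation` resource. This sketch only builds the request URL and JSON body; the domain ID and configuration ID are placeholder assumptions, and a real call would also need an `Authorization` header with a Graph access token:

```python
import json

GRAPH_BETA = "https://graph.microsoft.com/beta"

def build_mfa_behavior_patch(domain_id: str, config_id: str):
    """Build the (url, body) pair for a PATCH that sets
    federatedIdpMfaBehavior to enforceMfaByFederatedIdp."""
    url = f"{GRAPH_BETA}/domains/{domain_id}/federationConfiguration/{config_id}"
    body = json.dumps({"federatedIdpMfaBehavior": "enforceMfaByFederatedIdp"})
    return url, body

# "CONFIG_ID" is a hypothetical placeholder for the federation configuration's object ID.
url, body = build_mfa_behavior_patch("contoso.com", "CONFIG_ID")
```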
active-directory Howto Mfa Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-adfs.md
If your organization is federated with Azure Active Directory, use Azure AD Multi-Factor Authentication or Active Directory Federation Services (AD FS) to secure resources that are accessed by Azure AD. Use the following procedures to secure Azure Active Directory resources with either Azure AD Multi-Factor Authentication or Active Directory Federation Services. >[!NOTE]
->Set the domain setting [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values) to `enforceMfaByFederatedIdp` (recommended) or **SupportsMFA** to `$True`. The **federatedIdpMfaBehavior** setting overrides **SupportsMFA** when both are set.
+>Set the domain setting [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta&preserve-view=true#federatedidpmfabehavior-values) to `enforceMfaByFederatedIdp` (recommended) or **SupportsMFA** to `$True`. The **federatedIdpMfaBehavior** setting overrides **SupportsMFA** when both are set.
## Secure Azure AD resources using AD FS
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
+adobe-target: true
# Customer intent: As an Azure AD Administrator, I want to learn how to enable and use password writeback so that when end users reset their password through a web browser, their updated password is synchronized back to my on-premises AD environment.
active-directory How To Create Custom Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md
This article describes how you can use the **Audit** dashboard in Permissions Ma
1. In the **Audit** dashboard, load the query you want to duplicate. 2. Select the ellipses menu **(…)** on the far right, and then select **Duplicate**.
- CloudKnox creates a copy of the query. Both the copy of the query and the original query display in the **Saved Queries** list.
+ Permissions Management creates a copy of the query. Both the copy of the query and the original query display in the **Saved Queries** list.
You can rename the original or copy of the query, change it, and save it without changing the other query.
active-directory Product Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-integrations.md
Title: View integration information about an authorization system in CloudKnox Permissions Management
-description: View integration information about an authorization system in CloudKnox Permissions Management.
+ Title: View integration information about an authorization system in Permissions Management
+description: View integration information about an authorization system in Permissions Management.
# View integration information about an authorization system > [!IMPORTANT]
-> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Entra Permissions Management is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-The **Integrations** dashboard in CloudKnox Permissions Management (CloudKnox) allows you to view all your authorization systems in one place, and to ensure all applications are functioning as one. This information helps improve quality and performance as a whole.
+The **Integrations** dashboard in Permissions Management allows you to view all your authorization systems in one place, and to ensure all applications are functioning as one. This information helps improve quality and performance as a whole.
## Display integration information about an authorization system
-Refer to the **Integration** subpages in CloudKnox for information about available authorization systems for integration.
+Refer to the **Integration** subpages in Permissions Management for information about available authorization systems for integration.
1. To display the **Integrations** dashboard, select **User** (your initials) in the upper right of the screen, and then select **Integrations.**
Refer to the **Integration** subpages in CloudKnox for information about availab
## Available integrated authorization systems
-The following authorization systems may be listed in the **Integrations** dashboard, depending on which systems are integrated into the CloudKnox application.
+The following authorization systems may be listed in the **Integrations** dashboard, depending on which systems are integrated into the Permissions Management application.
-- **ServiceNow**: Manages digital workflows for enterprise operations, and the CloudKnox integration allows you to request and approve permissions through the ServiceNow ticketing workflow.-- **Splunk**: Searches, monitors, and analyzes machine-generated data, and the CloudKnox integration enables exporting usage analytics data, alerts, and logs.-- **HashiCorp Terraform**: CloudKnox enables the generation of least-privilege policies through the Hashi Terraform provider.-- **CloudKnox API**: The CloudKnox application programming interface (API) provides access to CloudKnox features.
+- **ServiceNow**: Manages digital workflows for enterprise operations, and the Permissions Management integration allows you to request and approve permissions through the ServiceNow ticketing workflow.
+- **Splunk**: Searches, monitors, and analyzes machine-generated data, and the Permissions Management integration enables exporting usage analytics data, alerts, and logs.
+- **HashiCorp Terraform**: Permissions Management enables the generation of least-privilege policies through the HashiCorp Terraform provider.
+- **Permissions Management API**: The Permissions Management application programming interface (API) provides access to Permissions Management features.
- **Saviynt**: Enables you to view Identity entitlements and usage inside the Saviynt console. - **Securonix**: Enables exporting usage analytics data, alerts, and logs.
The following authorization systems may be listed in the **Integrations** dashbo
<!## Next steps>
-<![Installation overview](cloudknox-installation.md)>
-<![Configure integration with the CloudKnox API](cloudknox-integration-api.md)>
-<![Sign up and deploy FortSentry in your organization](cloudknox-fortsentry-registration.md)>
+<![Installation overview](installation.md)>
+<![Configure integration with the Permissions Management API](integration-api.md)>
+<![Sign up and deploy FortSentry in your organization](fortsentry-registration.md)>
active-directory Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md
Title: Generate and download the Permissions analytics report in CloudKnox Permissions Management
-description: How to generate and download the Permissions analytics report in CloudKnox Permissions Management.
+ Title: Generate and download the Permissions analytics report in Permissions Management
+description: How to generate and download the Permissions analytics report in Permissions Management.
# Generate and download the Permissions analytics report > [!IMPORTANT]
-> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Entra Permissions Management is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-This article describes how to generate and download the **Permissions analytics report** in CloudKnox Permissions Management (CloudKnox).
+This article describes how to generate and download the **Permissions analytics report** in Permissions Management.
> [!NOTE] > This topic applies only to Amazon Web Services (AWS) users. ## Generate the Permissions analytics report
-1. In the CloudKnox home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
+1. In the Permissions Management home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
The **Systems Reports** subtab displays a list of reports in the **Reports** table. 1. Find **Permissions Analytics Report** in the list, and to download the report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
active-directory Product Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-reports.md
Title: View system reports in the Reports dashboard in CloudKnox Permissions Management
-description: How to view system reports in the Reports dashboard in CloudKnox Permissions Management.
+ Title: View system reports in the Reports dashboard in Permissions Management
+description: How to view system reports in the Reports dashboard in Permissions Management.
# View system reports in the Reports dashboard > [!IMPORTANT]
-> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Entra Permissions Management is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-CloudKnox Permissions Management (CloudKnox) has various types of system report types available that capture specific sets of data. These reports allow management to:
+Permissions Management has various system report types available that capture specific sets of data. These reports allow management to:
- Make timely decisions. - Analyze trends and system/user performance.
The **Reports** dashboard provides a table of information with both system repor
## Available system reports
-CloudKnox offers the following reports for management associated with the authorization systems noted in parenthesis:
+Permissions Management offers the following reports for management associated with the authorization systems noted in parentheses:
- **Access Key Entitlements And Usage**: - **Summary of report**: Provides information about access keys, for example, permissions, usage, and rotation date.
active-directory Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/training-videos.md
Title: CloudKnox Permissions Management training videos
-description: CloudKnox Permissions Management training videos.
+ Title: Permissions Management training videos
+description: Permissions Management training videos.
Last updated 04/20/2022
-# CloudKnox Permissions Management training videos
+# Entra Permissions Management training videos
-To view step-by-step training videos on how to use CloudKnox Permissions Management (CloudKnox) features, select a link below.
+To view step-by-step training videos on how to use Permissions Management features, select a link below.
-## Onboard CloudKnox in your organization
+## Onboard Permissions Management in your organization
-### Enable CloudKnox in your Azure Active Directory (Azure AD) tenant
+### Enable Permissions Management in your Azure Active Directory (Azure AD) tenant
-To view a video on how to enable CloudKnox in your Azure AD tenant, select [Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+To view a video on how to enable Permissions Management in your Azure AD tenant, select [Enable Permissions Management in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
### Configure and onboard Amazon Web Services (AWS) accounts
-To view a video on how to configure and onboard Amazon Web Services (AWS) accounts in CloudKnox, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
+To view a video on how to configure and onboard Amazon Web Services (AWS) accounts in Permissions Management, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
### Configure and onboard Google Cloud Platform (GCP) accounts
-To view a video on how to configure and onboard Google Cloud Platform (GCP) accounts in CloudKnox, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
+To view a video on how to configure and onboard Google Cloud Platform (GCP) accounts in Permissions Management, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
## Next steps -- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](overview.md)-- For a list of frequently asked questions (FAQs) about CloudKnox, see [FAQs](faqs.md).-- For information on how to start viewing information about your authorization system in CloudKnox, see [View key statistics and data about your authorization system](ui-dashboard.md).
+- For an overview of Permissions Management, see [What's Permissions Management?](overview.md)
+- For a list of frequently asked questions (FAQs) about Permissions Management, see [FAQs](faqs.md).
+- For information on how to start viewing information about your authorization system in Permissions Management, see [View key statistics and data about your authorization system](ui-dashboard.md).
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
There are multiple scenarios that organizations can now enable using filter for
- **Restrict access to privileged resources**. For this example, let's say you want to allow access to Microsoft Azure Management from a user who is assigned the privileged role Global Administrator, has satisfied multifactor authentication, and is accessing from a device that is a [privileged or secure admin workstation](/security/compass/privileged-access-devices) and attested as compliant. For this scenario, organizations would create two Conditional Access policies: - Policy 1: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
- - Policy 2: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block. Learn how to [update extensionAttributes on an Azure AD device object](/graph/api/device-update?tabs=http&view=graph-rest-1.0).
+ - Policy 2: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block. Learn how to [update extensionAttributes on an Azure AD device object](/graph/api/device-update?view=graph-rest-1.0&tabs=http&preserve-view=true).
- **Block access to organization resources from devices running an unsupported Operating System**. For this example, let's say you want to block access to resources from devices running a Windows OS version older than Windows 10. For this scenario, organizations would create the following Conditional Access policy: - All users, accessing all cloud apps, excluding a filter for devices using rule expression device.operatingSystem equals Windows and device.operatingSystemVersion startsWith "10.0" and for Access controls, Block. - **Do not require multifactor authentication for specific accounts on specific devices**. For this example, let's say you don't want to require multifactor authentication when using service accounts on specific devices like Teams phones or Surface Hub devices. For this scenario, organizations would create the following two Conditional Access policies:
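For reference, the device filter rule expressions mentioned in these scenarios are written in the filter-for-devices rule syntax. A hedged sketch follows; the operator spellings assume the documented rule-builder syntax, so verify them against the policy editor:

```
device.extensionAttribute1 -eq "SAW"

device.operatingSystem -eq "Windows" -and device.operatingSystemVersion -startsWith "10.0"
```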
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-session.md
Organizations can use this control to require Azure AD to pass device informatio
For more information on the use and configuration of app-enforced restrictions, see the following articles: - [Enabling limited access with SharePoint Online](/sharepoint/control-access-from-unmanaged-devices)-- [Enabling limited access with Exchange Online](/microsoft-365/security/office-365-security/secure-email-recommended-policies?view=o365-worldwide#limit-access-to-exchange-online-from-outlook-on-the-web)
+- [Enabling limited access with Exchange Online](/microsoft-365/security/office-365-security/secure-email-recommended-policies?view=o365-worldwide&preserve-view=true#limit-access-to-exchange-online-from-outlook-on-the-web)
## Conditional Access application control
active-directory Msal Net Acquire Token Silently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-acquire-token-silently.md
Last updated 07/16/2019 -+ #Customer intent: As an application developer, I want to learn how to use the AcquireTokenSilent method so I can acquire tokens from the cache.
When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should first call the `AcquireTokenSilent` method to verify if an acceptable token is in the cache. In many cases, it's possible to acquire another token with more scopes based on a token in the cache. It's also possible to refresh a token when it's getting close to expiration (as the token cache also contains a refresh token).
-For authentication flows that require a user interaction, MSAL caches the access, refresh, and ID tokens, as well as the `IAccount` object, which represents information about a single account. Learn more about [IAccount](/dotnet/api/microsoft.identity.client.iaccount?view=azure-dotnet). For application flows, such as [client credentials](msal-authentication-flows.md#client-credentials), only access tokens are cached, because the `IAccount` object and ID token require a user, and the refresh token is not applicable.
+For authentication flows that require a user interaction, MSAL caches the access, refresh, and ID tokens, as well as the `IAccount` object, which represents information about a single account. Learn more about [IAccount](/dotnet/api/microsoft.identity.client.iaccount?view=azure-dotnet&preserve-view=true). For application flows, such as [client credentials](msal-authentication-flows.md#client-credentials), only access tokens are cached, because the `IAccount` object and ID token require a user, and the refresh token is not applicable.
The recommended pattern is to call the `AcquireTokenSilent` method first. If `AcquireTokenSilent` fails, then acquire a token using other methods.
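The recommended call order is easy to see as plain control flow. This language-neutral sketch uses hypothetical `acquire_silent`/`acquire_interactive` callables standing in for `AcquireTokenSilent` and the interactive MSAL methods:

```python
def get_token(acquire_silent, acquire_interactive):
    """Cache-first pattern: try the silent (cached) acquisition before
    falling back to an acquisition that may require user interaction."""
    token = acquire_silent()      # hits the token cache; may refresh near expiry
    if token is None:             # cache miss: fall back to another method
        token = acquire_interactive()
    return token

# Simulated cache miss followed by an interactive prompt:
token = get_token(lambda: None, lambda: "token-from-interactive-flow")
```

In MSAL.NET the silent failure surfaces as `MsalUiRequiredException` rather than a null return; the sketch collapses that detail into `None` for brevity.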
active-directory Scenario Desktop Acquire Token Wam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md
Previously updated : 08/25/2021 Last updated : 06/07/2022 #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
MSAL 4.25+ supports WAM on UWP, .NET Classic, .NET Core 3.1, and .NET 5.
For .NET Classic and .NET Core 3.1, WAM functionality is fully supported but you have to add a reference to [Microsoft.Identity.Client.Desktop](https://www.nuget.org/packages/Microsoft.Identity.Client.Desktop/) package, alongside MSAL, and instead of `WithBroker()`, call `.WithWindowsBroker()`.
-For .NET 5, target `net5.0-windows10.0.17763.0` (or higher) and not just `net5.0`. Your app will still run on older versions of Windows if you add `<SupportedOSPlatformVersion>7</SupportedOSPlatformVersion>` in the csproj. MSAL will use a browser when WAM is not available.
+For .NET 5, target `net5.0-windows10.0.17763.0` (or higher) and not just `net5.0`. Your app will still run on older versions of Windows if you add `<SupportedOSPlatformVersion>7</SupportedOSPlatformVersion>` in the csproj. MSAL will use a browser when WAM isn't available.
## WAM value proposition Using an authentication broker such as WAM has numerous benefits. -- Enhanced security (your app does not have to manage the powerful refresh token)
+- Enhanced security (your app doesn't have to manage the powerful refresh token)
- Better support for Windows Hello, Conditional Access and FIDO keys - Integration with Windows' "Email and Accounts" view - Better Single Sign-On (users don't have to reenter passwords)
Using an authentication broker such as WAM has numerous benefits.
## WAM limitations -- B2C and ADFS authorities are not supported. MSAL will fallback to a browser.-- Available on Win10+ and Win Server 2019+. On Mac, Linux and earlier Windows MSAL will fallback to a browser.
+- B2C and ADFS authorities aren't supported. MSAL will fall back to a browser.
+- Available on Win10+ and Win Server 2019+. On Mac, Linux, and earlier versions of Windows, MSAL will fall back to a browser.
- Not available on Xbox. ## WAM calling pattern
catch (MsalUiRequiredException) // no change in the pattern
} ```
-Call `.WithBroker(true)`. If a broker is not present (e.g. Win8.1, Mac, or Linux), then MSAL will fallback to a browser! Redirect URI rules apply to the browser.
+Call `.WithBroker(true)`. If a broker isn't present (for example, Win8.1, Mac, or Linux), then MSAL will fall back to a browser. Redirect URI rules apply to the browser.
## Redirect URI
-WAM redirect URIs do not need to be configured in MSAL, but they must be configured in the app registration.
+WAM redirect URIs don't need to be configured in MSAL, but they must be configured in the app registration.
### Win32 (.NET framework / .NET 5)
ms-appx-web://microsoft.aad.brokerplugin/{client_id}
## Token cache persistence
-It's important to persist MSAL's token cache because MSAL needs to save internal WAM account IDs there. Without it, restarting the app means that `GetAccounts` API will miss some of the accounts. Note that on UWP, MSAL knows where to save the token cache.
+It's important to persist MSAL's token cache because MSAL needs to save internal WAM account IDs there. Without it, restarting the app means that `GetAccounts` API will miss some of the accounts. On UWP, MSAL knows where to save the token cache.
## GetAccounts `GetAccounts` returns accounts of users who have previously logged in interactively into the app.
-In addition to this, WAM can list the OS-wide Work and School accounts configured in Windows (for Win32 apps but not for UWP apps). To opt-into this feature, set `ListWindowsWorkAndSchoolAccounts` in `WindowsBrokerOptions` to **true**. You can enable it as below.
+In addition, WAM can list the OS-wide Work and School accounts configured in Windows (for Win32 apps but not for UWP apps). To opt in to this feature, set `ListWindowsWorkAndSchoolAccounts` in `WindowsBrokerOptions` to **true**. You can enable it as shown below.
```csharp .WithWindowsBrokerOptions(new WindowsBrokerOptions()
In addition to this, WAM can list the OS-wide Work and School accounts configure
``` >[!NOTE]
-> Microsoft (i.e. outlook.com etc.) accounts will not be listed in Win32 nor UWP for privacy reasons.
+> Microsoft accounts (outlook.com and so on) won't be listed in either Win32 or UWP for privacy reasons.
Applications cannot remove accounts from Windows! ## RemoveAsync -- Removes all account information from MSAL's token cache (this includes MSA - i.e. personal accounts - account info and other account information copied by MSAL into its cache).
+- Removes all account information from MSAL's token cache (this includes MSA accounts, that is, personal account information, copied by MSAL into its cache).
- Removes app-only (not OS-wide) accounts. >[!NOTE]
Applications cannot remove accounts from Windows!
## Other considerations -- WAM's interactive operations require being on the UI thread. MSAL throws a meaningful exception when not on UI thread. This does NOT apply to console apps.
+- WAM's interactive operations require being on the UI thread. MSAL throws a meaningful exception when not on UI thread. This doesn't apply to console apps.
- `WithAccount` provides an accelerated authentication experience if the MSAL account was originally obtained via WAM, or, WAM can find a work and school account in Windows.-- WAM is not able to pre-populate the username field with a login hint, unless a Work and School account with the same username is found in Windows.
+- WAM isn't able to pre-populate the username field with a login hint, unless a Work and School account with the same username is found in Windows.
- If WAM is unable to offer an accelerated authentication experience, it will show an account picker. Users can add new accounts. !["WAM account picker"](media/scenario-desktop-acquire-token-wam/wam-account-picker.png) -- New accounts are automatically remembered by Windows. Work and School have the option of joining the organization's directory or opting out completely, in which case the account will not appear under "Email & Accounts". Microsoft accounts are automatically added to Windows. Apps cannot list these accounts programmatically (but only through the Account Picker).
+- New accounts are automatically remembered by Windows. Work and School have the option of joining the organization's directory or opting out completely, in which case the account won't appear under "Email & Accounts". Microsoft accounts are automatically added to Windows. Apps can't list these accounts programmatically (but only through the Account Picker).
## Troubleshooting
-### "Either the user cancelled the authentication or the WAM Account Picker crashed because the app is running in an elevated process" error message
+### "Either the user canceled the authentication or the WAM Account Picker crashed because the app is running in an elevated process" error message
When an app that uses MSAL is run as an elevated process, some of these calls within WAM may fail due to different process security levels. Internally MSAL.NET uses native Windows methods ([COM](/windows/win32/com/the-component-object-model)) to integrate with WAM. Starting with version 4.32.0, MSAL will display a descriptive error message when it detects that the app process is elevated and WAM returned no accounts.
-One solution is to not run the app as elevated, if possible. Another solution is for the app developer to call `WindowsNativeUtils.InitializeProcessSecurity` method when the app starts up. This will set the security of the processes used by WAM to the same levels. See [this sample app](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/master/tests/devapps/WAM/NetCoreWinFormsWam/Program.cs#L18-L21) for an example. However, note, that this solution is not guaranteed to succeed to due external factors like the underlying CLR behavior. In that case, an `MsalClientException` will be thrown. See issue [#2560](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/2560) for additional information.
+One solution is to not run the app as elevated, if possible. Another solution is for the app developer to call `WindowsNativeUtils.InitializeProcessSecurity` method when the app starts up. This will set the security of the processes used by WAM to the same levels. See [this sample app](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/master/tests/devapps/WAM/NetCoreWinFormsWam/Program.cs#L18-L21) for an example. However, note that this solution isn't guaranteed to succeed due to external factors like the underlying CLR behavior. In that case, an `MsalClientException` will be thrown. For more information, see issue [#2560](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/2560).
### "WAM Account Picker did not return an account" error message
active-directory B2b Quickstart Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-invite-powershell.md
If you don’t have an Azure subscription, create a [free account](https://azure
## Prerequisites

### PowerShell Module
-Install the [Microsoft Graph Identity Sign-ins module](/powershell/module/microsoft.graph.identity.signins/?view=graph-powershell-beta) (Microsoft.Graph.Identity.SignIns) and the [Microsoft Graph Users module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta) (Microsoft.Graph.Users).
+Install the [Microsoft Graph Identity Sign-ins module](/powershell/module/microsoft.graph.identity.signins/?view=graph-powershell-beta&preserve-view=true) (Microsoft.Graph.Identity.SignIns) and the [Microsoft Graph Users module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta&preserve-view=true) (Microsoft.Graph.Users).
### Get a test email account
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
+adobe-target: true
# Leave an organization as a B2B collaboration user
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-deletions.md
# Recover from deletions
-This article addresses recovering from soft and hard deletions in your Azure AD tenant. If you havenΓÇÖt already done so, we recommend first reading the [Recoverability best practices article](recoverability-overview.md) for foundational knowledge.
+This article addresses recovering from soft and hard deletions in your Azure Active Directory (Azure AD) tenant. If you haven't already done so, read [Recoverability best practices](recoverability-overview.md) for foundational knowledge.
## Monitor for deletions
-The [Azure AD Audit Log](../reports-monitoring/concept-audit-logs.md) contains information on all delete operations performed in your tenant. We recommend that you export these logs to a security information and event management (SIEM) tool such as [Microsoft Sentinel](../../sentinel/overview.md). You can also use Microsoft Graph to audit changes and build a custom solution to monitor differences over time. For more information on finding deleted items using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0. ](/graph/api/directory-deleteditems-list?tabs=http)
+The [Azure AD Audit log](../reports-monitoring/concept-audit-logs.md) contains information on all delete operations performed in your tenant. Export these logs to a security information and event management tool such as [Microsoft Sentinel](../../sentinel/overview.md).
-### Audit log
+You can also use Microsoft Graph to audit changes and build a custom solution to monitor differences over time. For more information on how to find deleted items by using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0](/graph/api/directory-deleteditems-list?tabs=http).
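As a sketch of the call that article describes, the following Python builds (but doesn't send) the Microsoft Graph request that lists soft-deleted objects of a given type. The bearer token is a placeholder; in practice you would acquire one with MSAL or the Azure CLI.

```python
# Sketch only: constructs the GET /directory/deletedItems/{type} request
# without sending it. "ACCESS_TOKEN" is a placeholder, not a real token.
from urllib.request import Request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def list_deleted_request(object_type: str, token: str) -> Request:
    # object_type is one of the Graph type casts: "user", "group", "application"
    url = f"{GRAPH_BASE}/directory/deletedItems/microsoft.graph.{object_type}"
    return Request(url, headers={"Authorization": f"Bearer {token}"})

req = list_deleted_request("user", "ACCESS_TOKEN")
print(req.full_url)
```

Sending the prepared request with `urllib.request.urlopen` (or any HTTP client) returns the soft-deleted objects still inside their 30-day restore window.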
-The Audit Log always records a "Delete \<object\>" event when an object in the tenant is removed from an active state by either a soft or hard deletion.
+### Audit log
-[![Screenshot of audit log showing deletions](./media/recoverability/delete-audit-log.png)](./media/recoverability/delete-audit-log.png#lightbox)
+The Audit log always records a "Delete \<object\>" event when an object in the tenant is removed from an active state by either a soft or hard deletion.
+[![Screenshot that shows an Audit log with deletions.](./media/recoverability/delete-audit-log.png)](./media/recoverability/delete-audit-log.png#lightbox)
-
-A delete event for applications, users, and Microsoft 365 Groups is a soft delete. For any other object type, it's a hard delete. Track the occurrence of hard-delete events by comparing "Delete \<object\>" events with the type of object that has been deleted, noting those that do not support soft-delete. In addition, note "Hard Delete \<object\>" events.
-
+A delete event for applications, users, and Microsoft 365 Groups is a soft delete. For any other object type, it's a hard delete. Track the occurrence of hard-delete events by comparing "Delete \<object\>" events with the type of object that was deleted. Note the events that don't support soft delete. Also note "Hard Delete \<object\>" events.
| Object type | Activity in log| Result |
| - | - | - |
A delete event for applications, users, and Microsoft 365 Groups is a soft delet
| User| Hard delete user| Hard deleted |
| Microsoft 365 Group| Delete group| Soft deleted |
| Microsoft 365 Group| Hard delete group| Hard deleted |
-| All other objects| Delete “objectType”| Hard deleted |
-
+| All other objects| Delete "objectType"| Hard deleted |
> [!NOTE]
-> The audit log does not distinguish the group type of a deleted group. Only Microsoft 365 Groups are soft-deleted. If you see a Delete group entry, it may be the soft delete of a M365 group, or the hard delete of another type of group. **It is therefore important that your documentation of your known good state include the group type for each group in your organization**. To learn more about documenting your known good state, see [Recoverability best practices](recoverability-overview.md).
+> The Audit log doesn't distinguish the group type of a deleted group. Only Microsoft 365 Groups are soft deleted. If you see a Delete group entry, it might be the soft delete of a Microsoft 365 Group or the hard delete of another type of group.
+>
+>*It's important that your documentation of your known good state includes the group type for each group in your organization*. To learn more about documenting your known good state, see [Recoverability best practices](recoverability-overview.md).
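The soft-versus-hard logic in the table above can be sketched as a small helper (hypothetical code for illustration, not part of any Microsoft tooling):

```python
# Only users, Microsoft 365 Groups, and application registrations soft delete;
# any other object type, or an explicit "Hard delete" activity, is permanent.
SOFT_DELETE_TYPES = {"user", "microsoft 365 group", "application"}

def deletion_result(object_type: str, activity: str) -> str:
    if activity.lower().startswith("hard delete"):
        return "hard deleted"
    if object_type.lower() in SOFT_DELETE_TYPES:
        return "soft deleted"
    return "hard deleted"

print(deletion_result("User", "Delete user"))              # soft deleted
print(deletion_result("User", "Hard delete user"))         # hard deleted
print(deletion_result("Named location", "Delete policy"))  # hard deleted
```

As the note explains, the log alone can't distinguish group types, so a real version of this check would need your documented known good state to tell a Microsoft 365 Group from a security group.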
+ ### Monitor support tickets
-A sudden increase in support tickets regarding access to a specific object may indicate that there has been a deletion. Because some objects have dependencies, deletion of a group used to access an application, an application itself, or a Conditional Access policy targeting an application can all cause broad sudden impact. If you see a trend like this, check to ensure that none of the objects required for access have been deleted.
+A sudden increase in support tickets about access to a specific object might indicate that a deletion occurred. Because some objects have dependencies, deletion of a group used to access an application, an application itself, or a Conditional Access policy that targets an application can all cause broad sudden impact. If you see a trend like this, check to ensure that none of the objects required for access were deleted.
## Soft deletions
-When objects such as users, Microsoft 365 groups, or application registrations are “soft deleted,” they enter a suspended state in which they aren't available for use by other services. In this state, items retain their properties and can be restored for 30 days. After 30 days, objects in the soft-deleted state are permanently or “hard” deleted.
+When objects such as users, Microsoft 365 Groups, or application registrations are soft deleted, they enter a suspended state in which they aren't available for use by other services. In this state, items retain their properties and can be restored for 30 days. After 30 days, objects in the soft-deleted state are permanently, or hard, deleted.
> [!NOTE]
-> Objects cannot be restored from a hard-deleted state. They must be recreated and reconfigured.
-
+> Objects can't be restored from a hard-deleted state. They must be re-created and reconfigured.
+ ### When soft deletes occur
-It's important to understand why object deletions occur in your environment to prepare for them. This section outlines frequent scenarios for soft deletion by object class. Keep in mind there may be scenarios your organization sees which are unique to your organization so a discovery process is key to preparation.
+It's important to understand why object deletions occur in your environment so that you can prepare for them. This section outlines frequent scenarios for soft deletion by object class. You might see scenarios that are unique to your organization, so a discovery process is key to preparation.
### Users
-Users enter the soft delete state anytime the user object is deleted by using the Azure portal, Microsoft Graph, or PowerShell.
+Users enter the soft-delete state anytime the user object is deleted by using the Azure portal, Microsoft Graph, or PowerShell.
The most frequent scenarios for user deletion are:
-* An administrator intentionally deletes a user in the Azure AD portal in response to a request, or as part of routine user maintenance.
-
-* An automation script in Microsoft Graph or PowerShell triggers the deletion. For example, you may have a script that removes users who haven't signed in for a specified time period.
-
-* A user is moved out of scope for synchronization with Azure Active Directory (Azure AD) connect.
-
-* A user is removed in an HR system and is deprovisioned via an automated workflow.
+* An administrator intentionally deletes a user in the Azure AD portal in response to a request or as part of routine user maintenance.
+* An automation script in Microsoft Graph or PowerShell triggers the deletion. For example, you might have a script that removes users who haven't signed in for a specified time.
+* A user is moved out of scope for synchronization with Azure AD Connect.
+* A user is removed from an HR system and is deprovisioned via an automated workflow.
### Microsoft 365 Groups

The most frequent scenarios for Microsoft 365 Groups being deleted are:
-* An administrator intentionally deletes the group, for example in response to a support request.
-
-* An automation script in Microsoft Graph or PowerShell triggers the deletion. For example, you may have a script that deletes groups that haven't been accessed or attested to by the group owner for a specific period of time.
-
-* Non-admins’ unintentional deletion of a group they own.
--
+* An administrator intentionally deletes the group, for example, in response to a support request.
+* An automation script in Microsoft Graph or PowerShell triggers the deletion. For example, you might have a script that deletes groups that haven't been accessed or attested to by the group owner for a specified time.
+* Unintentional deletion of a group owned by non-admins.
### Application objects and service principals

The most frequent scenarios for application deletion are:
-* An administrator intentionally deletes the application, for example in response to a support request.
-
-* An automation script in Microsoft Graph or PowerShell triggers the deletion. For example, you may want a process for deleting abandoned applications that are no longer used or managed. In general, create an offboarding process for applications rather than scripting to avoid unintentional deletions.
+* An administrator intentionally deletes the application, for example, in response to a support request.
+* An automation script in Microsoft Graph or PowerShell triggers the deletion. For example, you might want a process for deleting abandoned applications that are no longer used or managed. In general, create an offboarding process for applications rather than scripting to avoid unintentional deletions.
### Properties maintained with soft delete
-
| Object type| Important properties maintained |
| - | - |
-| Users (including external users)| **All properties maintained**, including ObjectID, group memberships, roles, licenses, application assignments. |
-| Microsoft 365 Groups| **All properties maintained**, including ObjectID, group memberships, licenses, application assignments |
-| Application Registration| **All properties maintained.** (See additional information following this table.) |
----
-When you delete an application, the application registration by default enters the soft-delete state. To understand the relationship between application registrations and service principals, see [Apps & service principals in Azure AD - Microsoft identity platform](/azure/active-directory/develop/app-objects-and-service-principals).
-
+| Users (including external users)| *All properties are maintained*, including ObjectID, group memberships, roles, licenses, and application assignments. |
+| Microsoft 365 Groups| *All properties are maintained*, including ObjectID, group memberships, licenses, and application assignments. |
+| Application registration| *All properties are maintained.* (See more information after this table.) |
+When you delete an application, the application registration by default enters the soft-delete state. To understand the relationship between application registrations and service principals, see [Apps and service principals in Azure AD - Microsoft identity platform](/azure/active-directory/develop/app-objects-and-service-principals).
## Recover from soft deletion
-You can restore soft deleted items in the Azure portal or with Microsoft Graph.
+You can restore soft-deleted items in the Azure portal or with Microsoft Graph.
### Users
-You can see soft-deleted users in the Azure portal on the Users – Deleted users page.
-
-![screenshot showing restoring users in the Azure portal](media/recoverability/deletion-restore-user.png)
+You can see soft-deleted users in the Azure portal on the **Users | Deleted users** page.
-For details on restoring users, see the following documentation:
+![Screenshot that shows restoring users in the Azure portal.](media/recoverability/deletion-restore-user.png)
-* See [Restore or permanently remove recently deleted user](active-directory-users-restore.md) for restoring in the Azure portal.
+For more information on how to restore users, see the following documentation:
-* See [Restore deleted item – Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http) for restoring with Microsoft Graph.
+* To restore from the Azure portal, see [Restore or permanently remove recently deleted user](active-directory-users-restore.md).
+* To restore by using Microsoft Graph, see [Restore deleted item – Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http).
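The Graph restore call is a POST against the deleted item's id. The following sketch builds (but doesn't send) that request; the object id and token are placeholders.

```python
# Sketch only: POST /directory/deletedItems/{id}/restore with an empty body.
# The id and "ACCESS_TOKEN" below are placeholders for illustration.
from urllib.request import Request

def restore_request(object_id: str, token: str) -> Request:
    url = f"https://graph.microsoft.com/v1.0/directory/deletedItems/{object_id}/restore"
    return Request(url, data=b"", method="POST",
                   headers={"Authorization": f"Bearer {token}"})

req = restore_request("00000000-0000-0000-0000-000000000000", "ACCESS_TOKEN")
print(req.get_method(), req.full_url)
```

A successful call returns the restored directory object, which then disappears from the deleted-items list.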
### Groups
-You can see soft-deleted Microsoft 365 (Microsoft 365) Groups in the Azure portal in the Groups – Deleted groups screen.
-
-![Screenshot showing restoring groups in the Azure portal.](media/recoverability/deletion-restore-groups.png)
-
+You can see soft-deleted Microsoft 365 Groups in the Azure portal on the **Groups | Deleted groups** page.
-For details on restoring soft deleted Microsoft 365 Groups, see the following documentation:
+![Screenshot that shows restoring groups in the Azure portal.](media/recoverability/deletion-restore-groups.png)
-* To restore from the Azure portal, see [Restore a deleted Microsoft 365 group. ](../enterprise-users/groups-restore-deleted.md)
+For more information on how to restore soft-deleted Microsoft 365 Groups, see the following documentation:
-* To restore by using Microsoft Graph, see [Restore deleted item – Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http).
+* To restore from the Azure portal, see [Restore a deleted Microsoft 365 Group](../enterprise-users/groups-restore-deleted.md).
+* To restore by using Microsoft Graph, see [Restore deleted item – Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http).
### Applications
-Applications have two objects, the application registration and the service principle. For more information on the differences between the registration and the service principal, see [Apps & service principals in Azure AD.](/azure/active-directory/develop/app-objects-and-service-principals)
+Applications have two objects: the application registration and the service principal. For more information on the differences between the registration and the service principal, see [Apps and service principals in Azure AD](/azure/active-directory/develop/app-objects-and-service-principals).
-To restore an application from the Azure portal, select App registrations, then deleted applications. Select the application registration to restore, and then select Restore app registration.
+To restore an application from the Azure portal, select **App registrations** > **Deleted applications**. Select the application registration to restore, and then select **Restore app registration**.
+
+[![Screenshot that shows the app registration restore process in the azure portal.](./media/recoverability/deletion-restore-application.png)](./media/recoverability/deletion-restore-application.png#lightbox)
-[![A screenshot showing the app registration restore process in the azure portal.](./media/recoverability/deletion-restore-application.png)](./media/recoverability/deletion-restore-application.png#lightbox)
-
## Hard deletions
-A “hard deletion” is the permanent removal of an object from your Azure Active Directory (Azure AD) tenant. Objects that don't support soft delete are removed in this way. Similarly, soft deleted objects are hard deleted once the deletion time is 30 days ago. The only object types that support a soft delete are:
+A hard deletion is the permanent removal of an object from your Azure AD tenant. Objects that don't support soft delete are removed in this way. Similarly, soft-deleted objects are hard deleted after a deletion time of 30 days. The only object types that support a soft delete are:
* Users-
* Microsoft 365 Groups-
* Application registration

> [!IMPORTANT]
-> All other item types are hard deleted. When an item is hard deleted it cannot be restored: it must be recreated. Neither administrators nor Microsoft can restore hard deleted items. It's important to prepare for this situation by ensuring that you have processes and documentation to minimize potential disruption from a hard delete.
-For information on preparing for and documenting current states, see [Recoverability best practices](recoverability-overview.md).
+> All other item types are hard deleted. When an item is hard deleted, it can't be restored. It must be re-created. Neither administrators nor Microsoft can restore hard-deleted items. Prepare for this situation by ensuring that you have processes and documentation to minimize potential disruption from a hard delete.
+>
+> For information on how to prepare for and document current states, see [Recoverability best practices](recoverability-overview.md).
### When hard deletes usually occur

Hard deletes most often occur in the following circumstances.
-Moving from soft to hard delete
+Moving from soft to hard delete:
* A soft-deleted object wasn't restored within 30 days.
+* An administrator intentionally deletes an object in the soft delete state.
-* An administrator intentionally deletes an object in the soft delete state
-
-Directly hard deleted
+Directly hard deleted:
-* The object type deleted doesn't support soft delete.
-
-* An administrator chooses to permanently delete an item by using the portal, typically in response to a request.
-
-* An automation script triggers the deletion of the object by using Microsoft Graph or PowerShell. Use of an automation script to clean up stale objects isn't uncommon. Microsoft recommends a robust off-boarding process for objects in your tenant to avoid mistakes that may result in mass-deletion of critical objects.
+* The object type that was deleted doesn't support soft delete.
+* An administrator chooses to permanently delete an item by using the portal, which typically occurs in response to a request.
+* An automation script triggers the deletion of the object by using Microsoft Graph or PowerShell. Use of an automation script to clean up stale objects isn't uncommon. A robust off-boarding process for objects in your tenant helps you to avoid mistakes that might result in mass deletion of critical objects.
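One concrete guard rail for such cleanup scripts, offered here as an illustrative sketch rather than an official pattern, is to cap how many objects a single run may delete, so a bad filter can't silently mass-delete critical objects:

```python
# Hypothetical guard rail: refuse to run if the candidate list is
# unexpectedly large, forcing a human review before any deletion happens.
def safe_delete(candidates, delete_fn, max_deletions=10):
    if len(candidates) > max_deletions:
        raise RuntimeError(
            f"Refusing to delete {len(candidates)} objects; cap is "
            f"{max_deletions}. Review the candidate list before raising the cap.")
    for obj in candidates:
        delete_fn(obj)  # e.g., a Graph or PowerShell delete call
    return len(candidates)

deleted = []
print(safe_delete(["stale-app-1", "stale-app-2"], deleted.append))
```

The threshold and the `delete_fn` callback are assumptions for the example; the point is that the script fails closed instead of deleting everything its filter matched.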
## Recover from hard deletion
-Hard deleted items must be recreated and reconfigured. It's best to avoid unwanted hard deletions.
+Hard-deleted items must be re-created and reconfigured. It's best to avoid unwanted hard deletions.
-### Review soft-deleted objects
+### Review soft-deleted objects
-Ensure you have a process to frequently review items in the soft delete state and restore them if appropriate. To do so, you should:
-
-* Frequently [list deleted items](/graph/api/directory-deleteditems-list?tabs=http).
+Ensure you have a process to frequently review items in the soft-delete state and restore them if appropriate. To do so, you should:
+* Frequently [list deleted items](/graph/api/directory-deleteditems-list?tabs=http).
* Ensure that you have specific criteria for what should be restored.
+* Ensure that you have specific roles or users assigned to evaluate and restore items as appropriate.
+* Develop and test a continuity management plan. For more information, see [Considerations for your Enterprise Business Continuity Management Plan](/compliance/assurance/assurance-developing-your-ebcm-plan).
-* Ensure that you have specific roles or users assigned to evaluating and restoring items as appropriate.
-
-* Develop and test a continuity management plan. For more information, see [Considerations for your Enterprise Business Continuity Management Plan. ](/compliance/assurance/assurance-developing-your-ebcm-plan)
--
-For more information on avoiding unwanted deletions, see the following topics in the [Recoverability best practices](recoverability-overview.md) article.
+For more information on how to avoid unwanted deletions, see the following topics in [Recoverability best practices](recoverability-overview.md):
* Business continuity and disaster planning-
* Document known good states-
* Monitoring and data retention
active-directory Recover From Misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-misconfigurations.md
# Recover from misconfiguration
-Configuration settings in Azure Active Directory (Azure AD) can affect any resource in the Azure AD tenant through targeted or tenant-wide management actions.
+Configuration settings in Azure Active Directory (Azure AD) can affect any resource in the Azure AD tenant through targeted or tenant-wide management actions.
## What is configuration?
-Configurations are any changes in Azure AD that alter the behavior or capabilities of an Azure AD service or feature. For example, when you configure a Conditional Access policy you alter who can access the targeted applications and under what circumstances.
+Configurations are any changes in Azure AD that alter the behavior or capabilities of an Azure AD service or feature. For example, when you configure a Conditional Access policy, you alter who can access the targeted applications and under what circumstances.
-It's important to understand the configuration items that are important to your organization. The following configurations have a high impact on your security posture.
+You need to understand the configuration items that are important to your organization. The following configurations have a high impact on your security posture.
-### Tenant wide configurations
+### Tenant-wide configurations
-* **External identities**: Global administrators for the tenant identify and control the external identities that can be provisioned in the tenant.
+* **External identities**: Global administrators for the tenant identify and control the external identities that can be provisioned in the tenant. They determine:
* Whether to allow external identities in the tenant.-
- * From which domain(s) external identities can be added.
-
+ * From which domains external identities can be added.
* Whether users can invite users from other tenants.
-* **Named Locations**: Global administrators can create named locations, which can then be used to
+* **Named locations**: Global administrators can create named locations, which can then be used to:
* Block sign-ins from specific locations.
+ * Trigger Conditional Access policies like multifactor authentication.
- * Trigger conditional access policies such as MFA.
-
-* **Allowed authentication methods**: Global administrators set the authentication methods allowed for the tenant.
-
-* **Self-service options**. Global Administrators set self-service options such as self-service-password reset and create Office 365 groups at the tenant level.
+* **Allowed authentication methods**: Global administrators set the authentication methods allowed for the tenant.
+* **Self-service options**: Global administrators set self-service options like self-service password reset and create Office 365 groups at the tenant level.
The implementation of some tenant-wide configurations can be scoped, provided they aren't overridden by global administration policies. For example:

* If the tenant is configured to allow external identities, a resource administrator can still exclude those identities from accessing a resource.-
* If the tenant is configured to allow personal device registration, a resource administrator can exclude those devices from accessing specific resources.-
-* If named locations are configured, a resource administrator can configure policies either allowing or excluding access from those locations.
+* If named locations are configured, a resource administrator can configure policies that either allow or exclude access from those locations.
### Conditional Access configurations
-Conditional Access policies are access control configurations that bring together signals to make decisions and enforce organizational policies.
-
-![A screenshot showing user, location. Device, application, and risk signals coming together in conditional access policies.](media\recoverability\miscofigurations-conditional-accss-signals.png)
+Conditional Access policies are access control configurations that bring together signals to make decisions and enforce organizational policies.
+![Screenshot that shows user, location, device, application, and risk signals coming together in Conditional Access policies.](media\recoverability\miscofigurations-conditional-accss-signals.png)
-
-To learn more about Conditional Access policies, see [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+To learn more about Conditional Access policies, see [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md).
> [!NOTE]
-> While configuration alters the behavior or capabilities of an object or policy, not all changes to an object are configuration. You can change the data or attributes associated with an item, such as changing a user’s address, without affecting the capabilities of that user object.
-## What is misconfiguration
+> While configuration alters the behavior or capabilities of an object or policy, not all changes to an object are configuration. You can change the data or attributes associated with an item, like changing a user's address, without affecting the capabilities of that user object.
+
+## What is misconfiguration?
-A misconfiguration is a configuration of a resource or policy that diverges from your organizational policies or plans and causes unintended or unwanted consequences.
+Misconfiguration is a configuration of a resource or policy that diverges from your organizational policies or plans and causes unintended or unwanted consequences.
A misconfiguration of tenant-wide settings or Conditional Access policies can seriously affect your security and the public image of your organization by:
-* Changing how administrators, tenant users, and external users interact with resources in your tenant.
+* Changing how administrators, tenant users, and external users interact with resources in your tenant:
* Unnecessarily limiting access to resources.-
* Loosening access controls on sensitive resources.
-* Changing the ability of your users to interact with other tenants, and external users to interact with your tenant.
-
-* Causing denial of service, for example by not allowing customers to access their accounts.
-
+* Changing the ability of your users to interact with other tenants and external users to interact with your tenant.
+* Causing denial of service, for example, by not allowing customers to access their accounts.
* Breaking dependencies among data, systems, and applications resulting in business process failures.

### When does misconfiguration occur?
A misconfiguration of tenant-wide settings or Conditional Access policies can se
Misconfiguration is most likely to occur when:

* A mistake is made during ad-hoc changes.-
* A mistake is made as a result of troubleshooting exercises.-
-* Malicious intent by a bad actor.
+* An action was carried out with malicious intent by a bad actor.
## Prevent misconfiguration

It's critical that alterations to the intended configuration of an Azure AD tenant are subject to robust change management processes, including:

* Documenting the change, including prior state and intended post-change state.-
-* Using Privileged Identity Management (PIM) to ensure that administrators with intent to change must deliberately escalate their privileges to do so. To learn more about PIM, see [What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md)
-
+* Using Privileged Identity Management (PIM) to ensure that administrators with intent to change must deliberately escalate their privileges to do so. To learn more about PIM, see [What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md).
* Using a strong approval workflow for changes, for example, requiring [approval of PIM escalation of privileges](../privileged-identity-management/azure-ad-pim-approval-workflow.md).
--
## Monitor for configuration changes
-While you want to prevent misconfiguration, you can't set the bar for changes so high that it impacts administrators’ ability to perform their work efficiently.
+While you want to prevent misconfiguration, you can't set the bar for changes so high that it affects the ability of administrators to perform their work efficiently.
-Closely monitor for configuration changes by watching for the following operations in your [Azure AD Audit log](../reports-monitoring/concept-audit-logs.md).
+Closely monitor for configuration changes by watching for the following operations in your [Azure AD Audit log](../reports-monitoring/concept-audit-logs.md):
* Add
* Create
+* Update
+* Set
+* Delete
-* Update
-
-* Set
-
-* Delete
-
-The following table includes informative entries in the Audit Log you can look for.
+The following table includes informative entries in the Audit log you can look for.
### Conditional Access and authentication method configuration changes
-Conditional Access policies are created on the Conditional Access page in the Azure portal. Changes to policies are made in the Conditional Access policy details page for the policy.
+Conditional Access policies are created on the **Conditional Access** page in the Azure portal. Changes to policies are made on the **Conditional Access policy details** page for the policy.
| Service filter| Activities| Potential impacts |
| - | - | - |
-| Conditional Access| Add, Update, or Delete Conditional Access policy| User access is granted or blocked when it shouldn’t be. |
-| Conditional Access| Add, Update, or Delete Named location| Network locations consumed by CA Policy aren't configured as intended, creating gaps in CA Policy conditions. |
-| Authentication Method| Update Authentication methods policy| Users can use weaker authentication methods or are blocked from a method they should use |
-
+| Conditional Access| Add, update, or delete Conditional Access policy| User access is granted or blocked when it shouldn’t be. |
+| Conditional Access| Add, update, or delete named location| Network locations consumed by the Conditional Access policy aren't configured as intended, which creates gaps in Conditional Access policy conditions. |
+| Authentication method| Update authentication methods policy| Users can use weaker authentication methods or are blocked from a method they should use. |
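These operations can also be watched programmatically instead of only in the portal. Below is a minimal sketch in Python, not part of the original article: it assumes a valid Microsoft Graph access token (with `AuditLog.Read.All` consent) in a `GRAPH_TOKEN` environment variable, and the `loggedByService` filter should be verified against the Graph `directoryAudits` resource for your scenario.

```python
import json
import os
import urllib.parse
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits"

def build_audit_url(service: str) -> str:
    """Build a Graph query for audit entries logged by one service filter."""
    filt = f"loggedByService eq '{service}'"
    return f"{GRAPH}?$filter={urllib.parse.quote(filt)}"

def fetch_audit_entries(service: str) -> list:
    """Fetch matching audit entries (requires network and a valid token)."""
    req = urllib.request.Request(
        build_audit_url(service),
        headers={"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("value", [])

# Example (requires connectivity and a token):
# for entry in fetch_audit_entries("Conditional Access"):
#     print(entry["activityDisplayName"], entry["activityDateTime"])
```

The same helper works for the other service filters in the tables that follow, such as `Core Directory`.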
### User and password reset configuration changes
-User settings changes are made in the Azure AD portal User settings page. Password Reset changes are made on the Password reset page. Changes made on these pages are captured in the audit log as detailed in the following table.
+User settings changes are made on the Azure AD portal **User settings** page. Password reset changes are made on the **Password reset** page. Changes made on these pages are captured in the Audit log as detailed in the following table.
| Service filter| Activities| Potential impacts |
| - | - | - |
-| Core Directory| Update company settings| Users may or may not be able to register applications, contrary to intent. |
-| Core Directory| Set company information| Users may or may not be able to access the Azure AD administration portal contrary to intent. <br>Sign-in pages don’t represent the company brand with potential damage to reputation |
-| Core Directory| **Activity**: Updated service principal<br>**Target**: 0365 LinkedIn connection| Users may/may not be able to connect their Azure AD account with LinkedIn contrary to intent. |
-| Self-service group Management| Update Myapps feature value| Users may/may not be able to use user features contrary to intent. |
-| Self-service group Management| Update ConvergedUXV2 feature value| Users may/may not be able to use user features contrary to intent. |
-| Self-service group Management| Update MyStaff feature value| Users may/may not be able to use user features contrary to intent. |
-| Core directory| **Activity**: Update service principal<br>**Target**: Microsoft password reset service| Users are able/unable to reset their password contrary to intent. <br>Users are required/not required to register for SSPR contrary to intent.<br> Users can reset their password using methods that are unapproved, for example by using security questions. |
+| Core directory| Update company settings| Users might or might not be able to register applications, contrary to intent. |
+| Core directory| Set company information| Users might or might not be able to access the Azure AD administration portal, contrary to intent. <br>Sign-in pages don't represent the company brand, with potential damage to reputation. |
+| Core directory| **Activity**: Updated service principal<br>**Target**: O365 LinkedIn connection| Users might or might not be able to connect their Azure AD account with LinkedIn, contrary to intent. |
+| Self-service group management| Update MyApps feature value| Users might or might not be able to use user features, contrary to intent. |
+| Self-service group management| Update ConvergedUXV2 feature value| Users might or might not be able to use user features, contrary to intent. |
+| Self-service group management| Update MyStaff feature value| Users might or might not be able to use user features, contrary to intent. |
+| Core directory| **Activity**: Update service principal<br>**Target**: Microsoft password reset service| Users are able or unable to reset their password, contrary to intent. <br>Users are required or not required to register for self-service password reset, contrary to intent.<br> Users can reset their password by using methods that are unapproved, for example, by using security questions. |
### External identities configuration changes
-You can make changes to these settings on the External identities or External collaboration settings pages in the Azure AD portal.
+You can make changes to these settings on the **External identities** or **External collaboration** settings pages in the Azure AD portal.
| Service filter| Activities| Potential impacts |
| - | - | - |
-| Core Directory| Add, update, or delete a partner to cross-tenant access setting| Users have outbound access to tenants that should be blocked.<br>Users from external tenants who should be blocked have inbound access |
+| Core directory| Add, update, or delete a partner to cross-tenant access setting| Users have outbound access to tenants that should be blocked.<br>Users from external tenants who should be blocked have inbound access. |
| B2C| Create or delete identity provider| Identity providers for users who should be able to collaborate are missing, blocking access for those users. |
-| Core directory| Set directory feature on tenant| External users have greater/less visibility of directory objects than intended.<br>External users may/may not invite other external users to your tenant contrary to intent. |
-| Core Directory| Set federation settings on domain| External user invitations may/may not be sent to users in other tenants contrary to intent. |
-| AuthorizationPolicy| Update authorization policy| External user invitations may/may not be sent to users in other tenants contrary to intent. |
-| Core Directory| Update Policy| External user invitations may/may not be sent to users in other tenants contrary to intent. |
+| Core directory| Set directory feature on tenant| External users have greater or less visibility of directory objects than intended.<br>External users might or might not invite other external users to your tenant, contrary to intent. |
+| Core directory| Set federation settings on domain| External user invitations might or might not be sent to users in other tenants, contrary to intent. |
+| AuthorizationPolicy| Update authorization policy| External user invitations might or might not be sent to users in other tenants, contrary to intent. |
+| Core directory| Update policy| External user invitations might or might not be sent to users in other tenants, contrary to intent. |
### Custom role and mobility definition configuration changes
-| Service filter| Activities / portal| Potential impacts |
+| Service filter| Activities/portal| Potential impacts |
| - |- | -|
-| Core Directory| Add role definition| Custom role scope is narrower or broader than intended |
-| PIM| Update role setting| Custom role scope is narrower or broader than intended |
-| Core Directory| Update role definition| Custom role scope is narrower or broader than intended |
-| Core Directory| Delete role definition| Custom role are missing |
-| Core Directory| Add delegated permission grant| Mobile Device Management (MDM) and/or Mobile Application Management (MAM) configuration is missing or misconfigured leading to the failure of device or application management |
+| Core directory| Add role definition| Custom role scope is narrower or broader than intended. |
+| PIM| Update role setting| Custom role scope is narrower or broader than intended. |
+| Core directory| Update role definition| Custom role scope is narrower or broader than intended. |
+| Core directory| Delete role definition| Custom roles are missing. |
+| Core directory| Add delegated permission grant| Mobile device management or mobile application management configuration is missing or misconfigured, which leads to the failure of device or application management. |
### Audit log detail view
-Selecting some audit entries in the Audit Log will provide you with details on the old and new configuration values. For example, for Conditional Access policy configuration changes you can see the information in the following screenshot.
-
-![A screenshot of audit log details for a change to a conditional access policy.](media/recoverability/misconfiguration-audit-log-details.png)
+Selecting some audit entries in the Audit log will provide you with details on the old and new configuration values. For example, for Conditional Access policy configuration changes, you can see the information in the following screenshot.
+![Screenshot that shows Audit log details for a change to a Conditional Access policy.](media/recoverability/misconfiguration-audit-log-details.png)
## Use workbooks to track changes
-There are several Azure Monitor workbooks that can help you to monitor configuration changes.
+Azure Monitor workbooks can help you monitor configuration changes.
-[The Sensitive Operations Report workbook](../reports-monitoring/workbook-sensitive-operations-report.md) can help identify suspicious application and service principal activity that may indicate a compromise, including:
+The [Sensitive operations report workbook](../reports-monitoring/workbook-sensitive-operations-report.md) can help identify suspicious application and service principal activity that might indicate a compromise, including:
-* Modified application or service principal credentials or authentication methods
+* Modified application or service principal credentials or authentication methods.
+* New permissions granted to service principals.
+* Directory role and group membership updates for service principals.
+* Modified federation settings.
-* New permissions granted to service principals
-
-* Directory role and group membership updates for service principals
-
-* Modified federation settings
-
-The [Cross-tenant access activity workbook ](../reports-monitoring/workbook-cross-tenant-access-activity.md)can help you monitor which applications in external tenants your users are accessing, and which applications I your tenant external users are accessing. Use this workbook to look for anomalous changes in either inbound or outbound application access across tenants.
+The [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) can help you monitor which applications in external tenants your users are accessing and which applications your tenant external users are accessing. Use this workbook to look for anomalous changes in either inbound or outbound application access across tenants.
## Next steps
-For foundational information on recoverability, see [Recoverability best practices](recoverability-overview.md)
-
-for information on recovering from deletions, see [Recover from deletions](recover-from-deletions.md)
+- For foundational information on recoverability, see [Recoverability best practices](recoverability-overview.md).
+- For information on recovering from deletions, see [Recover from deletions](recover-from-deletions.md).
active-directory Recoverability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recoverability-overview.md
# Recoverability best practices
+Unintended deletions and misconfigurations will happen to your tenant. To minimize the impact of these unintended events, you must prepare for their occurrence.
-Unintended deletions and misconfigurations will happen to your tenant. To minimize the impact of these unintended events, you must prepare for their occurrence.
+Recoverability is the preparatory processes and functionality that enable you to return your services to a prior functioning state after an unintended change. Unintended changes include the soft or hard deletion or misconfiguration of applications, groups, users, policies, and other objects in your Azure Active Directory (Azure AD) tenant.
-Recoverability is the preparatory processes and functionality that enable you to return your services to a prior functioning state after an unintended change. Unintended changes include the soft- or hard-deletion or misconfiguration of applications, groups, users, policies, and other objects in your Azure Active Directory (Azure AD) tenant.
+Recoverability helps your organization be more resilient. Resilience, while related, is different. Resilience is the ability to endure disruption to system components and recover with minimal impact to your business, users, customers, and operations. For more information about how to make your systems more resilient, see [Building resilience into identity and access management with Azure Active Directory](resilience-overview.md).
-Recoverability helps your organization be more resilient. Resilience while related, is different. Resilience is the ability to endure disruption to system components and recover with minimal impact to your business, users, customers, and operations. For more information about making your systems more resilient, see [Building resilient identity and access management with Azure Active Directory](resilience-overview.md).
-
-This article describes the best practices in preparing for deletions and misconfigurations to minimize the unintended consequences to your organization’s business.
+This article describes the best practices in preparing for deletions and misconfigurations to minimize the unintended consequences to your organization's business.
## Deletions and misconfigurations
Deletions and misconfigurations have different impacts on your tenant.
The impact of deletions depends on the object type.
-Users, Microsoft 365 (Microsoft 365) Groups, and applications can be “soft deleted.” Soft deleted items are sent to the Azure AD recycle bin. While in the recycle bin, items are not available for use. However, they retain all their properties, and can be restored via a Microsoft Graph API call, or in the Azure AD portal. Items in the soft delete state that aren't restored within 30 days, are permanently or “hard deleted.”
+Users, Microsoft 365 Groups, and applications can be soft deleted. Soft-deleted items are sent to the Azure AD recycle bin. While in the recycle bin, items aren't available for use. However, they retain all their properties and can be restored via a Microsoft Graph API call or in the Azure AD portal. Items in the soft-delete state that aren't restored within 30 days are permanently, or hard, deleted.
-![Screenshot showing that users, Microsoft 365 groups, and applications are soft deleted, and then hard deleted after 30 days.](media/recoverability/overview-deletes.png)
+![Diagram that shows that users, Microsoft 365 Groups, and applications are soft deleted and then hard deleted after 30 days.](media/recoverability/overview-deletes.png)
> [!IMPORTANT]
-> All other object types are hard deleted immediately when selected for deletion. When an object is hard deleted, it cannot be recovered. It must be recreated and reconfigured.
-For more information on deletions and how to recover from them, see [Recover from deletions](recover-from-deletions.md).
+> All other object types are hard deleted immediately when they're selected for deletion. When an object is hard deleted, it can't be recovered. It must be re-created and reconfigured.
+>
+>For more information on deletions and how to recover from them, see [Recover from deletions](recover-from-deletions.md).
### Misconfigurations
-Configurations are any changes in Azure AD that alter the behavior or capabilities of an Azure AD service or feature. For example, when you configure a Conditional Access policy you alter who can access the targeted applications and under what circumstances. Tenant-wide configurations affect your entire tenant. Configurations of specific objects or services affect only that object and its dependencies.
+Misconfigurations are configurations of a resource or policy that diverge from your organizational policies or plans and cause unintended or unwanted consequences. Misconfiguration of tenant-wide settings or Conditional Access policies can seriously affect your security and the public image of your organization. Misconfigurations can:
-For more information on misconfigurations and how to recover from them, see [Recover from misconfigurations](recover-from-misconfigurations.md).
+* Change how administrators, tenant users, and external users interact with resources in your tenant.
+* Change the ability of your users to interact with other tenants and external users to interact with your tenant.
+* Cause denial of service.
+* Break dependencies among data, systems, and applications.
-## Shared responsibility
+For more information on misconfigurations and how to recover from them, see [Recover from misconfigurations](recover-from-misconfigurations.md).
-Recoverability is a shared responsibility between Microsoft as your cloud service provider, and your organization.
+## Shared responsibility
-![Screenshot that shows shared responsibilities between Microsoft and customers for planning and recovery.](media/recoverability/overview-shared-responsiblility.png)
+Recoverability is a shared responsibility between Microsoft as your cloud service provider and your organization.
+![Diagram that shows shared responsibilities between Microsoft and customers for planning and recovery.](media/recoverability/overview-shared-responsiblility.png)
You can use the tools and services that Microsoft provides to prepare for deletions and misconfigurations.

## Business continuity and disaster planning
-Restoring a hard deleted or misconfigured item is a resource-intensive process. You can minimize the resources needed by planning ahead. Consider having a specific team of admins in charge of restorations.
+Restoring a hard-deleted or misconfigured item is a resource-intensive process. You can minimize the resources needed by planning ahead. Consider having a specific team of admins in charge of restorations.
### Test your restoration process
-You should rehearse your restoration process for different object types, and the communication that will go out as a result. Be sure to do rehearse with test objects, ideally in a test tenant.
+Rehearse your restoration process for different object types and the communication that will go out as a result. Be sure to rehearse with test objects, ideally in a test tenant.
-Testing your plan can help you to determine the following:
+Testing your plan can help you determine the:
- Validity and completeness of your object state documentation.
- Typical time to resolution.
- Appropriate communications and their audiences.
- Expected successes and potential challenges.

### Create the communication process
-Create a process of pre-defined communications to make others aware of the issue and timelines for restoration. Include the following in your restoration communication plan.
-
-- The types of communications to go out. Consider creating pre-defined templates.
+Create a process of predefined communications to make others aware of the issue and timelines for restoration. Include the following points in your restoration communication plan:
-- Stakeholders to receive communications. Include the following as applicable:
-  - impacted business owners.
-  - operational admins who will perform recovery.
+- The types of communications to go out. Consider creating predefined templates.
+- Stakeholders to receive communications. Include the following groups, as applicable:
+ - Affected business owners.
+ - Operational admins who will perform recovery.
- Business and technical approvers.
+ - Affected users.
- - Impacted users.
-- Define the events that trigger communications, such as
+- Define the events that trigger communications, such as:
+ - Initial deletion.
+ - Impact assessment.
+ - Time to resolution.
+ - Restoration.
## Document known good states
-Document the state of your tenant and its objects regularly so that in the event of a hard delete or misconfiguration you have a road map to recovery. The following tools can help you in documenting your current state.
-
-- The [Microsoft Graph APIs](/graph/overview) can be used to export the current state of many Azure AD configurations.
-
-- You can use the [Azure AD Exporter](https://github.com/microsoft/azureadexporter) to regularly export your configuration settings.
-
-- The [Microsoft 365 desired state configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) module is a module of the PowerShell Desired State Configuration framework. It can be used to export the configurations for reference, and application of the prior state of many settings.
-
-- The [Conditional Access APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) can be used to manage your Conditional Access policies as code.
+Document the state of your tenant and its objects regularly. Then if a hard delete or misconfiguration occurs, you have a roadmap to recovery. The following tools can help you document your current state:
+- [Microsoft Graph APIs](/graph/overview) can be used to export the current state of many Azure AD configurations.
+- [Azure AD Exporter](https://github.com/microsoft/azureadexporter) is a tool you can use to export your configuration settings.
+- [Microsoft 365 Desired State Configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) is a module of the PowerShell Desired State Configuration framework. You can use it to export configurations for reference and application of the prior state of many settings.
+- [Conditional Access APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) can be used to manage your Conditional Access policies as code.
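Whichever tool produces the export, a known good snapshot becomes most useful when you can diff it against a later export to surface drift. The following sketch is illustrative, not part of any of the tools above; the snapshot shape and the `id` field are assumptions, not a Graph schema.

```python
from datetime import datetime, timezone

def snapshot(objects: list) -> dict:
    """Wrap an exported configuration list with a capture timestamp."""
    return {
        "capturedAt": datetime.now(timezone.utc).isoformat(),
        "objects": {o["id"]: o for o in objects},
    }

def drift(known_good: dict, current: dict) -> dict:
    """Report object IDs added, removed, or changed since the known good state."""
    old, new = known_good["objects"], current["objects"]
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(i for i in set(old) & set(new) if old[i] != new[i]),
    }
```

Store the known good snapshot alongside the documentation described above, and run the comparison on each fresh export.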
### Commonly used Microsoft Graph APIs
-The Microsoft Graph APIs can be used to export the current state of many Azure AD configurations. The APIs cover most scenarios where reference material about the prior state, or the ability to apply that state from an exported copy, could become vital to keep your business running.
-
-Graph APIs are highly customizable based on your organizational needs. To implement a solution for backups or reference material requires developers to engineer code to query for, store, and display the data. Many implementations use online code repositories as part of this functionality.
+You can use Microsoft Graph APIs to export the current state of many Azure AD configurations. The APIs cover most scenarios where reference material about the prior state, or the ability to apply that state from an exported copy, could become vital to keeping your business running.
-### Useful APIS for recovery
+Microsoft Graph APIs are highly customizable based on your organizational needs. To implement a solution for backups or reference material requires developers to engineer code to query for, store, and display the data. Many implementations use online code repositories as part of this functionality.
+### Useful APIs for recovery
| Resource types| Reference links |
| - | - |
| Conditional Access policies| [Conditional Access policy API](/graph/api/resources/conditionalaccesspolicy) |
| Devices| [devices API](/graph/api/resources/device) |
| Domains| [domains API](/graph/api/domain-list?tabs=http) |
-| Administrative Units| [administrativeUnit API)](/graph/api/resources/administrativeunit) |
-| Deleted Items*| [deletedItems API](/graph/api/resources/directory) |
-
+| Administrative units| [administrativeUnit API](/graph/api/resources/administrativeunit) |
+| Deleted items*| [deletedItems API](/graph/api/resources/directory) |
-Securely store these configuration exports with access provided to a limited number of admins.
+*Securely store these configuration exports with access provided to a limited number of admins.
-The [Azure AD Exporter](https://github.com/microsoft/azureadexporter) can provide most of the documentation you'll need.
+The [Azure AD Exporter](https://github.com/microsoft/azureadexporter) can provide most of the documentation you need:
- Verify that you've implemented the desired configuration.
- Use the exporter to capture current configurations.
- Review the export, understand the settings for your tenant that aren't exported, and manually document them.
- Store the output in a secure location with limited access.

> [!NOTE]
-> Settings in the legacy MFA portal, for Application Proxy and federation settings may not be exported with the Azure AD Exporter, or with the Graph API.
-The [Microsoft 365 desired state configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) module uses Microsoft Graph and PowerShell to retrieve the state of many of the configurations in Azure AD. This information can be used as reference information or, by using PowerShell Desired State Configuration scripting, to reapply a known-good state.
+> Settings in the legacy multifactor authentication portal for Application Proxy and federation settings might not be exported with the Azure AD Exporter, or with the Microsoft Graph API.
+The [Microsoft 365 Desired State Configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) module uses Microsoft Graph and PowerShell to retrieve the state of many of the configurations in Azure AD. This information can be used as reference information or, by using PowerShell Desired State Configuration scripting, to reapply a known good state.
- Use [Conditional Access Graph APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) to manage policies like code. Automate approvals to promote policies from preproduction environments, backup and restore, monitor change, and plan ahead for emergencies.
+ Use [Conditional Access Graph APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) to manage policies like code. Automate approvals to promote policies from preproduction environments, backup and restore, monitor change, and plan ahead for emergencies.
-### Map the dependencies among objects.
+### Map the dependencies among objects
-The deletion of some objects can cause a ripple effect due to dependencies. For example, deletion of a security group used for application assignment would result in users who were members of that group being unable to access the applications to which the group was assigned.
+The deletion of some objects can cause a ripple effect because of dependencies. For example, deletion of a security group used for application assignment would result in users who were members of that group being unable to access the applications to which the group was assigned.
#### Common dependencies
-| Object Type| Potential Dependencies |
+| Object type| Potential dependencies |
| - | - |
-| Application object| Service Principal (Enterprise Application). <br>Groups assigned to the application. <br>Conditional Access Policies affecting the application. |
-| Service principals| Application object |
-| Conditional Access Policies| Users assigned to the policy.<br>Groups assigned to the policy.<br>Service Principal (Enterprise Application) targeted by the policy. |
-| Groups other than Microsoft 365 Groups| Users assigned to the group.<br>Conditional access policies to which the group is assigned.<br>Applications to which the group is assigned access. |
+| Application object| Service principal (enterprise application). <br>Groups assigned to the application. <br>Conditional Access policies affecting the application. |
+| Service principals| Application object. |
+| Conditional Access policies| Users assigned to the policy.<br>Groups assigned to the policy.<br>Service principal (enterprise application) targeted by the policy. |
+| Groups other than Microsoft 365 Groups| Users assigned to the group.<br>Conditional Access policies to which the group is assigned.<br>Applications to which the group is assigned access. |
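The ripple effect in the table above can be estimated before a deletion is approved by modeling the dependencies as a small graph and walking it. This is a hedged sketch: the object names are hypothetical examples, not real directory objects, and a real implementation would populate the map from Graph API exports.

```python
# Illustrative dependency map: key -> objects that depend on it.
DEPENDENTS = {
    "group:app-users": ["app:payroll", "ca-policy:require-mfa"],
    "app:payroll": ["sp:payroll"],
}

def impacted_by_deletion(obj: str, graph: dict) -> set:
    """Walk the dependency map and collect everything a deletion ripples to."""
    seen, stack = set(), [obj]
    while stack:
        current = stack.pop()
        for dependent in graph.get(current, []):
            if dependent not in seen:
                seen.add(dependent)
                stack.append(dependent)
    return seen
```

Here deleting `group:app-users` would surface the assigned application, its service principal, and the Conditional Access policy that targets the group.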
## Monitoring and data retention
-The [Azure AD Audit Log](../reports-monitoring/concept-audit-logs.md) contains information on all delete and configuration operations performed in your tenant. We recommend that you export these logs to a security information and event management (SIEM) tool such as [Microsoft Sentinel](../../sentinel/overview.md). You can also use Microsoft Graph to audit changes, and build a custom solution to monitor differences over time. For more information on finding deleted items using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0 ](/graph/api/directory-deleteditems-list?tabs=http)
+The [Azure AD Audit log](../reports-monitoring/concept-audit-logs.md) contains information on all delete and configuration operations performed in your tenant. We recommend that you export these logs to a security information and event management tool such as [Microsoft Sentinel](../../sentinel/overview.md). You can also use Microsoft Graph to audit changes and build a custom solution to monitor differences over time. For more information on finding deleted items by using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0 ](/graph/api/directory-deleteditems-list?tabs=http).
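For a custom monitoring solution, the deleted-items query is a paged Microsoft Graph call. A minimal sketch in Python: the `directory/deletedItems` path follows the Graph documentation linked above, while `list_all` is a generic paging helper (the fetcher that adds authentication and performs the HTTP request is left as an assumption).

```python
import urllib.parse

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def deleted_items_url(object_type: str) -> str:
    """URL for listing soft-deleted objects of one fully qualified type,
    for example 'microsoft.graph.group' or 'microsoft.graph.application'."""
    return f"{GRAPH_BASE}/directory/deletedItems/{urllib.parse.quote(object_type)}"

def list_all(fetch, url: str) -> list:
    """Follow @odata.nextLink paging; `fetch` maps a URL to a parsed JSON page."""
    items = []
    while url:
        page = fetch(url)
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")
    return items
```

Running `list_all` daily and diffing the results against the previous run is one way to build the "monitor differences over time" solution described above.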
### Audit logs
-The Audit Log always records a "Delete \<object\>" event when an object in the tenant is removed from an active state (either from active to soft-deleted or active to hard-deleted).
+The Audit log always records a "Delete \<object\>" event when an object in the tenant is removed from an active state, either from active to soft deleted or active to hard deleted.
-A Delete event for applications, users, and Microsoft 365 Groups is a soft delete. For any other object type it's a hard delete.
+A Delete event for applications, users, and Microsoft 365 Groups is a soft delete. For any other object type, it's a hard delete.
-| Object Type | Activity in log| Result |
+| Object type | Activity in log| Result |
| Application| Delete application| Soft deleted |
| Application| Hard delete application| Hard deleted |
| All other objects| Delete “objectType”| Hard deleted |

> [!NOTE]
-> The audit log does not distinguish the group type of a deleted group. Only Microsoft 365 Groups are soft-deleted. If you see a Delete group entry, it may be the soft delete of a M365 group, or the hard delete of another type of group. It is therefore important that your documentation of your known good state include the group type for each group in your organization.
+> The Audit log doesn't distinguish the group type of a deleted group. Only Microsoft 365 Groups are soft deleted. If you see a Delete group entry, it might be the soft delete of a Microsoft 365 Group or the hard delete of another type of group. Your documentation of your known good state should include the group type for each group in your organization.
-For information on monitoring configuration changes, see [Recover from misconfigurations](recover-from-misconfigurations.md).
+For information on monitoring configuration changes, see [Recover from misconfigurations](recover-from-misconfigurations.md).
### Use workbooks to track configuration changes
-There are several Azure Monitor workbooks that can help you to monitor configuration changes.
-
-[The Sensitive Operations Report workbook](../reports-monitoring/workbook-sensitive-operations-report.md) can help identify suspicious application and service principal activity that may indicate a compromise, including:
--- Modified application or service principal credentials or authentication methods-- New permissions granted to service principals-- Directory role and group membership updates for service principals-- Modified federation settings-
-The [Cross-tenant access activity workbook ](../reports-monitoring/workbook-cross-tenant-access-activity.md)can help you monitor which applications in external tenants your users are accessing, and which applications in your tenant external users are accessing. Use this workbook to look for anomalous changes in either inbound or outbound application access across tenants.
-
-## Operational security
+Azure Monitor workbooks can help you monitor configuration changes.
-Preventing unwanted changes is far less difficult than needing to recreate and reconfigure objects. Include the following in your change management processes to minimize accidents:
+The [Sensitive operations report workbook](../reports-monitoring/workbook-sensitive-operations-report.md) can help identify suspicious application and service principal activity that might indicate a compromise, including:
-- Use a least privilege model. Ensure that each member of your team has the least privileges necessary to complete their usual tasks and require a process to escalate privileges for more unusual tasks.
+- Modified application or service principal credentials or authentication methods.
+- New permissions granted to service principals.
+- Directory role and group membership updates for service principals.
+- Modified federation settings.
-- Administrative control of an object enables configuration and deletion. Use Read Only admin roles, for example the Global Reader role, for any tasks that do not require operations to create, update, or delete (CRUD). When CRUD operations are required, use object specific roles when possible. For example, User Administrators can delete only users, and Application Administrators can delete only applications. Use these more limited roles whenever possible, instead of a Global Administrator role, which can delete anything, including the tenant.
+The [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) can help you monitor which applications in external tenants your users are accessing and which applications in your tenant external users are accessing. Use this workbook to look for anomalous changes in either inbound or outbound application access across tenants.
-- [Use Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md). PIM enables just-in-time escalation of privileges to perform tasks like hard deletion. You can configure PIM to have notifications and or approvals for the privilege escalation.
+## Operational security
+Preventing unwanted changes is far less difficult than needing to re-create and reconfigure objects. Include the following tasks in your change management processes to minimize accidents:
-## Next steps
+- Use a least privilege model. Ensure that each member of your team has the least privileges necessary to complete their usual tasks. Require a process to escalate privileges for more unusual tasks.
+- Administrative control of an object enables configuration and deletion. Use read-only admin roles, for example, the Global Reader role, for tasks that don't require operations to create, update, or delete (CRUD). When CRUD operations are required, use object-specific roles when possible. For example, User administrators can delete only users, and Application administrators can delete only applications. Use these more limited roles whenever possible, instead of a Global administrator role, which can delete anything, including the tenant.
+- [Use Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md). PIM enables just-in-time escalation of privileges to perform tasks like hard deletion. You can configure PIM to have notifications or approvals for the privilege escalation.
-[Recover from deletions](recover-from-deletions.md)
+## Next steps
-[Recover from misconfigurations](recover-from-misconfigurations.md)
+- [Recover from deletions](recover-from-deletions.md)
+- [Recover from misconfigurations](recover-from-misconfigurations.md)
active-directory Reference Connect Sync Attributes Synchronized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-attributes-synchronized.md
In this case, start with the list of attributes in this topic and identify those
| st |X |X | | |
| streetAddress |X |X | | |
| telephoneNumber |X |X | | |
-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premise. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
+| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
| title |X |X | | |
| usageLocation |X | | |mechanical property. The user’s country/region. Used for license assignment. |
| userPrincipalName |X | | |UPN is the login ID for the user. Most often the same as [mail] value. |
active-directory App Management Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-videos.md
-+ Last updated 05/31/2022
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
Use the previous example to get the **ObjectID** of the policy, and that of the
## Configuring policy through Graph Explorer
-Set the HRD policy using Microsoft Graph. See [homeRealmDiscoveryPolicy](/graph/api/resources/homeRealmDiscoveryPolicy?view=graph-rest-1.0) resource type for information on how to create the policy.
+Set the HRD policy using Microsoft Graph. See [homeRealmDiscoveryPolicy](/graph/api/resources/homeRealmDiscoveryPolicy?view=graph-rest-1.0&preserve-view=true) resource type for information on how to create the policy.
From the Microsoft Graph explorer window:
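As a sketch, the request body for creating such a policy is JSON whose `definition` is an array of stringified JSON (the display name, domain, and property values below are illustrative assumptions, not real tenant settings):

```python
import json

# Illustrative homeRealmDiscoveryPolicy body; POST it to
# https://graph.microsoft.com/v1.0/policies/homeRealmDiscoveryPolicies
policy = {
    "definition": [json.dumps({
        "HomeRealmDiscoveryPolicy": {
            # Skip home realm discovery and go straight to the federated IdP
            "AccelerateToFederatedDomain": True,
            "PreferredDomain": "federated.example.com",  # hypothetical domain
        }
    })],
    "displayName": "Accelerated HRD for legacy app",  # hypothetical name
    "isOrganizationDefault": False,
}

print(json.dumps(policy, indent=2))
```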
active-directory Bridgelineunbound Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bridgelineunbound-tutorial.md
- Title: 'Tutorial: Azure Active Directory integration with Bridgeline Unbound | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Bridgeline Unbound.
-------- Previously updated : 12/16/2020--
-# Tutorial: Azure Active Directory integration with Bridgeline Unbound
-
-In this tutorial, you'll learn how to integrate Bridgeline Unbound with Azure Active Directory (Azure AD). When you integrate Bridgeline Unbound with Azure AD, you can:
-
-* Control in Azure AD who has access to Bridgeline Unbound.
-* Enable your users to be automatically signed-in to Bridgeline Unbound with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-## Prerequisites
-
-To configure Azure AD integration with Bridgeline Unbound, you need the following items:
-
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Bridgeline Unbound single sign-on enabled subscription
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-
-* Bridgeline supports **SP and IDP** initiated SSO
-* Bridgeline Unbound supports **Just In Time** user provisioning
-
-## Adding Bridgeline Unbound from the gallery
-
-To configure the integration of Bridgeline Unbound into Azure AD, you need to add Bridgeline Unbound from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Bridgeline Unbound** in the search box.
-1. Select **Bridgeline Unbound** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
--
-## Configure and test Azure AD SSO for Bridgeline Unbound
-
-Configure and test Azure AD SSO with Bridgeline Unbound using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Bridgeline Unbound.
-
-To configure and test Azure AD SSO with Bridgeline Unbound, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-2. **[Configure Bridgeline Unbound SSO](#configure-bridgeline-unbound-sso)** - to configure the Single Sign-On settings on application side.
- 1. **[Create Bridgeline Unbound test user](#create-bridgeline-unbound-test-user)** - to have a counterpart of Britta Simon in Bridgeline Unbound that is linked to the Azure AD representation of user.
-6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-### Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **Bridgeline Unbound** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-4. On the **Basic SAML Configuration** section, If you wish to configure the application in **IDP** initiated mode, perform the following steps:
-
- a. In the **Identifier** text box, type a URL using the following pattern:
- `iApps_UPSTT_<ENVIRONMENTNAME>`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.iapps.com/SAMLAssertionService.aspx`
-
-5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.iapps.com/CommonLogin/login?<INSTANCENAME>`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Bridgeline Unbound Client support team](mailto:support@iapps.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/certificatebase64.png)
-
-7. On the **Set up Bridgeline Unbound** section, copy the appropriate URL(s) as per your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
--
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Bridgeline Unbound.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Bridgeline Unbound**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
--
-## Configure Bridgeline Unbound SSO
-
-To configure single sign-on on **Bridgeline Unbound** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Bridgeline Unbound support team](mailto:support@iapps.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create Bridgeline Unbound test user
-
-In this section, a user called Britta Simon is created in Bridgeline Unbound. Bridgeline Unbound supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Bridgeline Unbound, a new one is created after authentication.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-#### SP initiated:
-
-* Click on **Test this application** in Azure portal. This will redirect to Bridgeline Unbound Sign on URL where you can initiate the login flow.
-
-* Go to Bridgeline Unbound Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
-
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Bridgeline Unbound for which you set up the SSO
-
-You can also use Microsoft My Apps to test the application in any mode. When you click the Bridgeline Unbound tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Bridgeline Unbound for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
--
-## Next steps
-
-Once you configure Bridgeline Unbound you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Igrafx Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/igrafx-platform-tutorial.md
Previously updated : 02/18/2022 Last updated : 06/03/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, perform the following steps:
| **Identifier** |
|--|
- | `https://<CustomerName>.igrafxcloud.com/saml/metadata` |
- | `https://<CustomerName>.igrafxdemo.com/saml/metadata` |
- | `https://<CustomerName>.igrafxtraining.com/saml/metadata` |
- | `https://<CustomerName>.igrafx.com/saml/metadata` |
+ | `https://<SUBDOMAIN>.igrafxcloud.com/saml/metadata` |
+ | `https://<SUBDOMAIN>.igrafxdemo.com/saml/metadata` |
+ | `https://<SUBDOMAIN>.igrafxtraining.com/saml/metadata` |
+ | `https://<SUBDOMAIN>.igrafx.com/saml/metadata` |
b. In the **Reply URL** text box, type a URL using one of the following patterns:

| **Reply URL** |
|--|
- | `https://<CustomerName>.igrafxcloud.com/` |
- | `https://<CustomerName>.igrafxdemo.com/` |
- | `https://<CustomerName>.igrafxtraining.com/` |
- | `https://<CustomerName>.igrafx.com/` |
+ | `https://<SUBDOMAIN>.igrafxcloud.com/` |
+ | `https://<SUBDOMAIN>.igrafxdemo.com/` |
+ | `https://<SUBDOMAIN>.igrafxtraining.com/` |
+ | `https://<SUBDOMAIN>.igrafx.com/` |
c. In the **Sign on URL** text box, type a URL using one of the following patterns:

| **Sign on URL** |
|-|
- | `https://<CustomerName>.igrafxcloud.com/` |
- | `https://<CustomerName>.igrafxdemo.com/` |
- | `https://<CustomerName>.igrafxtraining.com/` |
- | `https://<CustomerName>.igrafx.com/` |
+ | `https://<SUBDOMAIN>.igrafxcloud.com/` |
+ | `https://<SUBDOMAIN>.igrafxdemo.com/` |
+ | `https://<SUBDOMAIN>.igrafxtraining.com/` |
+ | `https://<SUBDOMAIN>.igrafx.com/` |
> [!NOTE]
> These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [iGrafx Platform Client support team](mailto:support@igrafx.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.

1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
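As a sketch (the subdomain placeholder and the fixed set of iGrafx domains are taken from the patterns above), you could sanity-check a configured Identifier before saving:

```python
import re

# The four Identifier patterns above share one shape:
# https://<SUBDOMAIN>.igrafx[cloud|demo|training].com/saml/metadata
IDENTIFIER_RE = re.compile(
    r"^https://[a-z0-9-]+\.igrafx(cloud|demo|training)?\.com/saml/metadata$"
)

def is_valid_identifier(url: str) -> bool:
    """Return True if the URL matches one of the allowed Identifier patterns."""
    return bool(IDENTIFIER_RE.match(url))

print(is_valid_identifier("https://contoso.igrafxcloud.com/saml/metadata"))  # True
print(is_valid_identifier("https://contoso.example.com/saml/metadata"))      # False
```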
### Create an Azure AD test user
active-directory Rackspacesso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rackspacesso-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Rackspace SSO | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Rackspace SSO'
description: Learn how to configure single sign-on between Azure Active Directory and Rackspace SSO.
Previously updated : 05/14/2021 Last updated : 06/03/2022
-# Tutorial: Azure Active Directory integration with Rackspace SSO
+# Tutorial: Azure AD SSO integration with Rackspace SSO
In this tutorial, you'll learn how to integrate Rackspace SSO with Azure Active Directory (Azure AD). When you integrate Rackspace SSO with Azure AD, you can:
To configure Azure AD integration with Rackspace SSO, you need the following ite
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Rackspace SSO supports **SP** initiated SSO.
+* Rackspace SSO supports **IDP** initiated SSO.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
To configure and test Azure AD single sign-on with Rackspace SSO, you need to pe
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-2. **[Configure Rackspace SSO Single Sign-On](#configure-rackspace-sso-single-sign-on)** - to configure the Single Sign-On settings on application side.
+2. **[Configure Rackspace SSO](#configure-rackspace-sso)** - to configure the Single Sign-On settings on application side.
 1. **[Set up Attribute Mapping in the Rackspace Control Panel](#set-up-attribute-mapping-in-the-rackspace-control-panel)** - to assign Rackspace roles to Azure AD users.
 1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-4. On the **Basic SAML Configuration** section, Upload the **Service Provider metadata file** which you can download from the [URL](https://login.rackspace.com/federate/sp.xml) and perform the following steps:
+4. On the **Basic SAML Configuration** section, upload the **Service Provider metadata file** which you can download from the [URL](https://login.rackspace.com/federate/sp.xml) and perform the following steps:
a. Click **Upload metadata file**.
- ![Screenshot shows Basic SAML Configuration with the Upload metadata file link.](common/upload-metadata.png)
+ ![Screenshot shows Basic S A M L Configuration with the Upload metadata file link.](common/upload-metadata.png "Metadata")
b. Click on **folder logo** to select the metadata file and click **Upload**.
- ![Screenshot shows a dialog box where you can select and upload a file.](common/browse-upload-metadata.png)
+ ![Screenshot shows a dialog box where you can select and upload a file.](common/browse-upload-metadata.png "Folder")
c. Once the metadata file is successfully uploaded, the necessary URLs are populated automatically.
- d. In the **Sign-on URL** text box, type the URL:
- `https://login.rackspace.com/federate/`
- 5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
This file will be uploaded to Rackspace to populate required Identity Federation configuration settings.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Rackspace SSO Single Sign-On
+## Configure Rackspace SSO
To configure single sign-on on the **Rackspace SSO** side:
1. See the documentation at [Add an Identity Provider to the Control Panel](https://developer.rackspace.com/docs/rackspace-federation/gettingstarted/add-idp-cp/).
1. It will lead you through the steps to:
- 1. Create a new Identity Provider
+ 1. Create a new Identity Provider.
1. Specify an email domain that users will use to identify your company when signing in. 1. Upload the **Federation Metadata XML** previously downloaded from the Azure control panel.
Rackspace uses an **Attribute Mapping Policy** to assign Rackspace roles and gro
* If you want to assign varying levels of Rackspace access using Azure AD groups, you will need to enable the Groups claim in the Azure **Rackspace SSO** Single Sign-on settings. The **Attribute Mapping Policy** will then be used to match those groups to desired Rackspace roles and groups:
- ![The Groups claim settings](common/sso-groups-claim.png)
+ ![Screenshot shows the Groups claim settings.](common/sso-groups-claim.png "Groups")
* By default, Azure AD sends the UID of Azure AD Groups in the SAML claim, versus the name of the Group. However, if you are synchronizing your on-premises Active Directory to Azure AD, you have the option to send the actual names of the groups:
- ![The Groups claim name settings](common/sso-groups-claims-names.png)
+ ![Screenshot shows the Groups claim name settings.](common/sso-groups-claims-names.png "Claims")
The following example **Attribute Mapping Policy** demonstrates:
1. Setting the Rackspace user's name to the `user.name` SAML claim. Any claim can be used, but it is most common to set this to a field containing the user's email address.
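An illustrative sketch of such a policy follows. The exact YAML syntax, claim URIs, and role names here are assumptions; consult the Rackspace Attribute Mapping Basics documentation for the authoritative format:

```yaml
# Hypothetical Attribute Mapping Policy sketch (not verbatim Rackspace syntax).
mapping:
  version: RAX-1
  rules:
    - local:
        user:
          domain: "{D}"
          # Map the Rackspace user name from the SAML name claim
          # (assumption: the IdP sends the user's email address here).
          name: "{At(http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name)}"
          email: "{At(http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress)}"
          # Assign a Rackspace role based on Azure AD group membership
          # (hypothetical group and role names).
          roles:
            - "{0}"
      remote:
        - path: |
            (
              if (mapping:get-attributes('http://schemas.microsoft.com/ws/2008/06/identity/claims/groups')='RackspaceAdmins')
              then 'admin' else 'observer'
            )
          multiValue: false
```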
See the Rackspace [Attribute Mapping Basics documentation](https://developer.rac
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-* Click on **Test this application** in Azure portal. This will redirect to Rackspace SSO Sign-on URL where you can initiate the login flow.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-* Go to Rackspace SSO Sign-on URL directly and initiate the login flow from there.
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Rackspace SSO for which you set up the SSO.
-* You can use Microsoft My Apps. When you click the Rackspace SSO tile in the My Apps, this will redirect to Rackspace SSO Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the Rackspace SSO tile in the My Apps, you should be automatically signed in to the Rackspace SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
You can also use the **Validate** button in the **Rackspace SSO** Single sign-on settings:
- ![SSO Validate Button](common/sso-validate-sign-on.png)
+ ![Screenshot shows the SSO Validate Button.](common/sso-validate-sign-on.png "Validate")
## Next steps
-Once you configure Rackspace SSO you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Rackspace SSO you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Rstudio Connect Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rstudio-connect-tutorial.md
Previously updated : 09/14/2021 Last updated : 06/03/2022

# Tutorial: Azure AD SSO integration with RStudio Connect SAML Authentication
IdPAttributeProfile = azure
SSOInitiated = IdPAndSP
```
+If `IdPAttributeProfile = azure`, the profile sets the NameIDFormat to persistent, among other settings, and overrides any other attributes specified in the configuration [file](https://docs.rstudio.com/connect/admin/authentication/saml/#the-azure-profile).
+
+This becomes an issue if you want to create a user ahead of time using the RStudio Connect API and apply permissions before the user signs in for the first time. The NameIDFormat should be set to emailAddress or some other unique identifier, because when it's set to persistent the value is hashed and you can't know it ahead of time, so creating the user through the API won't work.
+API for creating a user for SAML: https://docs.rstudio.com/connect/api/#post-/v1/users
+
+In this situation, you may want the following in your configuration file:
+
+```
+[SAML]
+NameIDFormat = emailAddress
+UniqueIdAttribute = NameID
+UsernameAttribute = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name
+FirstNameAttribute = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname
+LastNameAttribute = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname
+EmailAttribute = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailAddress
+```
+ Store your **Server Address** in the `Server.Address` value, and the **App Federation Metadata Url** in the `SAML.IdPMetaData` value. Note that this sample configuration uses an unencrypted HTTP connection, while Azure AD requires the use of an encrypted HTTPS connection. You can either use a [reverse proxy](https://docs.rstudio.com/connect/admin/proxy/) in front of RStudio Connect SAML Authentication or configure RStudio Connect SAML Authentication to [use HTTPS directly](https://docs.rstudio.com/connect/admin/appendix/configuration/#HTTPS). If you have trouble with configuration, you can read the [RStudio Connect SAML Authentication Admin Guide](https://docs.rstudio.com/connect/admin/authentication/saml/) or email the [RStudio support team](mailto:support@rstudio.com) for help.
active-directory Tap App Security Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tap-app-security-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure TAP App Security to support provisioning with Azure AD
-Contact [TAP App Security support](mailto:support@tapappsecurity.com) in order to obtain a SCIM Token.
--
+1. Log in to the [TAP App Security back-end control panel](https://app.tapappsecurity.com/).
+1. Navigate to **Single Sign On > Active Directory**.
+1. Click on the **Integrate Active Directory app** button. Then enter the domain of your organization and click the **Save** button.
+ [![Screenshot on how to add domain.](media/tap-app-security-provisioning-tutorial/add-domain.png)](media/tap-app-security-provisioning-tutorial/add-domain.png#lightbox)
+1. After entering the domain, a new line appears in the table showing the domain name and its status as **initialize**. Click the gear icon to reveal technical data about the TAP App Security server and to complete initialization.
+ [![Screenshot showing initialize.](media/tap-app-security-provisioning-tutorial/initialize.png)](media/tap-app-security-provisioning-tutorial/initialize.png#lightbox)
+1. Technical data about TAP App Security servers is revealed. You can now copy the **Tenant Url** and **Authorization Token** from this page to use later when setting up provisioning in Azure AD.
+ [![Screenshot showing domain details.](media/tap-app-security-provisioning-tutorial/domain-details.png)](media/tap-app-security-provisioning-tutorial/domain-details.png#lightbox)
## Step 3. Add TAP App Security from the Azure AD application gallery

Add TAP App Security from the Azure AD application gallery to start managing provisioning to TAP App Security. If you have previously set up TAP App Security for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
However, these external scalers aren't supported as part of the add-on and rely
<!-- LINKS - internal --> [aks-support-policy]: support-policies.md
-[azure-monitor]: ../azure-monitor/overview.md
-[azure-monitor-container-insights]: ../azure-monitor/containers/container-insights-onboard.md
[keda-arm]: keda-deploy-add-on-arm.md <!-- LINKS - external -->
-[keda-scalers]: https://keda.sh/docs/scalers/
-[keda-metrics]: https://keda.sh/docs/latest/operate/prometheus/
-[keda-event-docs]: https://keda.sh/docs/2.7/operate/events/
+[keda-scalers]: https://keda.sh/docs/latest/scalers/
+[keda-event-docs]: https://keda.sh/docs/latest/operate/events/
[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
az aks create \
--generate-ssh-keys \ --windows-admin-username $WINDOWS_USERNAME \ --vm-set-type VirtualMachineScaleSets \
- --kubernetes-version 1.20.7 \
--network-plugin azure ```
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
AppArmor profiles are added using the `apparmor_parser` command.
spec: containers: - name: hello
- image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+ image: mcr.microsoft.com/dotnet/runtime-deps:6.0
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ] ```
To see seccomp in action, create a filter that prevents changing permissions on
spec: containers: - name: chmod
- image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+ image: mcr.microsoft.com/dotnet/runtime-deps:6.0
command: - "chmod" args:
To see seccomp in action, create a filter that prevents changing permissions on
localhostProfile: prevent-chmod containers: - name: chmod
- image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+ image: mcr.microsoft.com/dotnet/runtime-deps:6.0
command: - "chmod" args:
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshooting.md
spec:
```yaml initContainers: - name: volume-mount
- image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+ image: mcr.microsoft.com/dotnet/runtime-deps:6.0
command: ["sh", "-c", "chown -R 100:100 /data"] volumeMounts: - name: <your data volume>
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
az aks nodepool add \
To verify your node pool is FIPS-enabled, use [az aks show][az-aks-show] to check the *enableFIPS* value in *agentPoolProfiles*. ```azurecli-interactive
-az aks show --resource-group myResourceGroup --cluster-name myAKSCluster --query="agentPoolProfiles[].{Name:name enableFips:enableFips}" -o table
+az aks show --resource-group myResourceGroup --name myAKSCluster --query="agentPoolProfiles[].{Name:name, enableFips:enableFips}" -o table
``` The following example output shows the *fipsnp* node pool is FIPS-enabled and *nodepool1* isn't.
aks-nodepool1-12345678-vmss000000 Ready agent 34m v1.19.9
In the above example, the nodes starting with `aks-fipsnp` are part of the FIPS-enabled node pool. Use `kubectl debug` to run a deployment with an interactive session on one of those nodes in the FIPS-enabled node pool. ```azurecli-interactive
-kubectl debug node/aks-fipsnp-12345678-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+kubectl debug node/aks-fipsnp-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
``` From the interactive session, you can verify the FIPS cryptographic libraries are enabled:
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
The Web Application Routing solution may only be triggered on service resources
```yaml annotations: kubernetes.azure.com/ingress-host: myapp.contoso.com
- kubernetes.azure.com/tls-cert-keyvault-uri: myapp-contoso.vault.azure.net
+ kubernetes.azure.com/tls-cert-keyvault-uri: myapp-contoso.vault.azure.net/certificates/keyvault-certificate-name/keyvault-certificate-name-revision
```
-These annotations in the service manifest would direct Web Application Routing to create an ingress servicing `myapp.contoso.com` connected to the keyvault `myapp-contoso`.
+These annotations in the service manifest direct Web Application Routing to create an ingress servicing `myapp.contoso.com`, connected to the key vault `myapp-contoso`, and to retrieve the certificate `keyvault-certificate-name` at revision `keyvault-certificate-name-revision`.
-Create a file named **samples-web-app-routing.yaml** and copy in the following YAML. On line 29-31, update `<MY_HOSTNAME>` with your DNS host name and `<MY_KEYVAULT_URI>` with the vault URI collected in the previous step of this article.
+Create a file named **samples-web-app-routing.yaml** and copy in the following YAML. On lines 29-31, update `<MY_HOSTNAME>` with your DNS host name and `<MY_KEYVAULT_URI>` with the full certificate vault URI.
```yaml apiVersion: apps/v1
apiVersion: v1
kind: Service metadata: name: aks-helloworld
-annotations:
- kubernetes.azure.com/ingress-host: <MY_HOSTNAME>
- kubernetes.azure.com/tls-cert-keyvault-uri: <MY_KEYVAULT_URI>
+ annotations:
+ kubernetes.azure.com/ingress-host: <MY_HOSTNAME>
+ kubernetes.azure.com/tls-cert-keyvault-uri: <MY_KEYVAULT_URI>
spec: type: ClusterIP ports:
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
By combining API Management provisioned in an internal virtual network with the
* Use a single API Management resource and have a subset of APIs defined in API Management available for external consumers. * Provide a turnkey way to switch access to API Management from the public internet on and off.
+For architectural guidance, see:
+* **Basic enterprise integration**: [Reference architecture](/azure/architecture/reference-architectures/enterprise-integration/basic-enterprise-integration?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
+* **API Management landing zone accelerator**: [Reference architecture](/azure/architecture/example-scenario/integration/app-gateway-internal-api-management-function?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) and [design guidance](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/land?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
++ > [!NOTE] > This article has been updated to use the [Application Gateway WAF_v2 SKU](../application-gateway/application-gateway-autoscaling-zone-redundant.md).
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
This article shows you how to use API DevOps with Azure API Management, through
For details, tools, and code samples to implement the DevOps approach described in this article, see the open-source [Azure API Management DevOps Resource Kit](https://github.com/Azure/azure-api-management-devops-resource-kit) in GitHub. Because customers bring a wide range of engineering cultures and existing automation solutions, the approach isn't a one-size-fits-all solution.
+For architectural guidance, see:
+
+* **API Management landing zone accelerator**: [Reference architecture](/azure/architecture/example-scenario/integration/app-gateway-internal-api-management-function?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) and [design guidance](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/land?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
+ ## The problem Organizations today normally have multiple deployment environments (such as development, testing, and production) and use separate API Management instances for each environment. Some instances are shared by multiple development teams, who are responsible for different APIs with different release cadences.
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
Custom template models support key-value pairs, selection marks, tables, signatu
## Tabular fields
-With the release of API version **2022-06-30-preview**, custom template models will support tabular fields (tables):
-
-* Models trained with API version 2022-06-30-preview or later will accept tabular field labels.
-* Documents analyzed with custom neural models using API version 2022-06-30-preview or later will produce tabular fields aggregated across the tables.
-* The results can be found in the ```analyzeResult``` object's ```documents``` array that is returned following an analysis operation.
-
-Tabular fields support **cross page tables** by default:
+With the release of API version **2022-06-30-preview**, custom template models will add support for **cross page** tabular fields (tables):
* To label a table that spans multiple pages, label each row of the table across the different pages in a single table.
-* As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages.
+* As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages if you expect to see those variations in documents.
Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
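The repeating-section idea above can be sketched with plain JSON handling. This is a minimal illustration assuming a simplified `analyzeResult` shape — the field name `WorkExperience` and the exact nesting here are illustrative stand-ins, not the full service schema:

```python
# Sketch: collect tabular (array-of-object) fields from an analyzeResult-style
# payload. The sample payload below is a simplified, hypothetical stand-in.

def collect_tabular_fields(analyze_result):
    """Return {field_name: rows} for every array-valued document field."""
    tables = {}
    for document in analyze_result.get("documents", []):
        for name, field in document.get("fields", {}).items():
            if field.get("type") == "array":
                rows = []
                for row in field.get("valueArray", []):
                    cells = {
                        col: cell.get("content")
                        for col, cell in row.get("valueObject", {}).items()
                    }
                    rows.append(cells)
                tables[name] = rows
    return tables

# A repeating "work experience" section labeled as a tabular field.
sample = {
    "documents": [{
        "fields": {
            "WorkExperience": {
                "type": "array",
                "valueArray": [
                    {"valueObject": {"Company": {"content": "Contoso"},
                                     "Years": {"content": "3"}}},
                    {"valueObject": {"Company": {"content": "Fabrikam"},
                                     "Years": {"content": "2"}}},
                ],
            }
        }
    }]
}

print(collect_tabular_fields(sample)["WorkExperience"][0]["Company"])  # Contoso
```

Each extracted row arrives as a mapping of column name to cell content, regardless of whether the source rows were in a recognized table or a repeating free-form section.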
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
* View the REST API: > [!div class="nextstepaction"]
- > [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)
+ > [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
The **2022-06-30-preview** release is the latest update to the Form Recognizer s
* [🆕 **Layout extends structure extraction**](concept-layout.md). Layout now includes added structure elements including sections, section headers, and paragraphs. This update enables finer grain document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction). * [🆕 **Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields by default are also multi page. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields). * [🆕 **Custom template model tabular fields support for cross page tables**](concept-custom-template.md). Custom form models now support tabular fields across pages. To learn more about tabular fields in custom template models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
-* [🆕 **Invoice model output now includes general document key-value pairs**](concept-custom-template.md). Where invoices contain required fields beyond the fields included in the prebuilt model, the general document model supplements the output with key-value pairs. _See_ [key value pairs](concept-invoice.md#key-value-pairs-preview).
-* [🆕 **Invoice language expansion**](concept-custom-template.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales).
-* [🆕 **Prebuilt business card**](concept-business-card.md). The business card model now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales).
+* [🆕 **Invoice model output now includes general document key-value pairs**](concept-invoice.md). Where invoices contain required fields beyond the fields included in the prebuilt model, the general document model supplements the output with key-value pairs. _See_ [key value pairs](concept-invoice.md#key-value-pairs-preview).
+* [🆕 **Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales).
+* [🆕 **Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales).
+* [🆕 **Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md#id-document-preview-field-extraction).
* [🆕 **Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [page extraction](concept-read.md#pages). ## February 2022
attestation Attestation Token Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/attestation-token-examples.md
+
+ Title: Examples of an Azure Attestation token
+description: Examples of Azure Attestation token
++++ Last updated : 06/07/2022++++
+# Examples of an attestation token
+
+Attestation policy is used to process the attestation evidence and determine whether Azure Attestation will issue an attestation token. Attestation token generation can be controlled with custom policies. Below are some examples of an attestation token.
+
+## Sample JWT generated for SGX attestation
+
+```
+{
+ "alg": "RS256",
+ "jku": "https://tradewinds.us.attest.azure.net/certs",
+ "kid": <self-signed certificate reference to perform signature verification of the attestation token>,
+ "typ": "JWT"
+}.{
+ "aas-ehd": <input enclave held data>,
+ "exp": 1568187398,
+ "iat": 1568158598,
+ "is-debuggable": false,
+ "iss": "https://tradewinds.us.attest.azure.net",
+ "maa-attestationcollateral":
+ {
+ "qeidcertshash": <SHA256 value of QE Identity issuing certs>,
+ "qeidcrlhash": <SHA256 value of QE Identity issuing certs CRL list>,
+ "qeidhash": <SHA256 value of the QE Identity collateral>,
+ "quotehash": <SHA256 value of the evaluated quote>,
+ "tcbinfocertshash": <SHA256 value of the TCB Info issuing certs>,
+ "tcbinfocrlhash": <SHA256 value of the TCB Info issuing certs CRL list>,
+ "tcbinfohash": <SHA256 value of the TCB Info collateral>
+ },
+ "maa-ehd": <input enclave held data>,
+ "nbf": 1568158598,
+ "product-id": 4639,
+ "sgx-mrenclave": <SGX enclave mrenclave value>,
+ "sgx-mrsigner": <SGX enclave mrsigner value>,
+ "svn": 0,
+ "tee": "sgx",
+ "x-ms-attestation-type": "sgx",
+ "x-ms-policy-hash": <>,
+ "x-ms-sgx-collateral":
+ {
+ "qeidcertshash": <SHA256 value of QE Identity issuing certs>,
+ "qeidcrlhash": <SHA256 value of QE Identity issuing certs CRL list>,
+ "qeidhash": <SHA256 value of the QE Identity collateral>,
+ "quotehash": <SHA256 value of the evaluated quote>,
+ "tcbinfocertshash": <SHA256 value of the TCB Info issuing certs>,
+ "tcbinfocrlhash": <SHA256 value of the TCB Info issuing certs CRL list>,
+ "tcbinfohash": <SHA256 value of the TCB Info collateral>
+ },
+ "x-ms-sgx-ehd": <>,
+ "x-ms-sgx-is-debuggable": true,
+ "x-ms-sgx-mrenclave": <SGX enclave mrenclave value>,
+ "x-ms-sgx-mrsigner": <SGX enclave mrsigner value>,
+ "x-ms-sgx-product-id": 1,
+ "x-ms-sgx-svn": 1,
+ "x-ms-ver": "1.0",
+ "x-ms-sgx-config-id": "000102030405060708090a0b0c0d8f99000102030405060708090a0b0c860e9a000102030405060708090a0b7d0d0e9b000102030405060708090a740c0d0e9c",
+ "x-ms-sgx-config-svn": 3451,
+ "x-ms-sgx-isv-extended-product-id": "8765432143211234abcdabcdef123456",
+ "x-ms-sgx-isv-family-id": "1234567812344321abcd1234567890ab"
+}.[Signature]
+```
+
+Some of the claims used above are considered deprecated but are fully supported. It is recommended that all future code and tooling use the non-deprecated claim names. See [claims issued by Azure Attestation](claim-sets.md) for more information.
+
+The below claims will appear only in the attestation token generated for Intel® Xeon® Scalable processor-based server platforms. The claims will not appear if the SGX enclave is not configured with [Key Separation and Sharing Support](https://github.com/openenclave/openenclave/issues/3054).
+
+**x-ms-sgx-config-id**
+
+**x-ms-sgx-config-svn**
+
+**x-ms-sgx-isv-extended-product-id**
+
+**x-ms-sgx-isv-family-id**
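To check whether these claims appear in a token you received, the payload can be decoded without verifying the signature: a JWT is three base64url segments separated by dots. A minimal standard-library sketch — the token built here is a local stand-in, not a real attestation token:

```python
# Sketch: decode a JWT payload (no signature verification) and check for the
# Xeon-only SGX claims. Standard library only; the token is locally built.
import base64
import json

def jwt_payload(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

XEON_ONLY = ("x-ms-sgx-config-id", "x-ms-sgx-config-svn",
             "x-ms-sgx-isv-extended-product-id", "x-ms-sgx-isv-family-id")

def b64url(obj) -> str:
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

# Build a fake header.payload.signature token for illustration only.
fake = ".".join([b64url({"alg": "none"}),
                 b64url({"tee": "sgx", "x-ms-sgx-config-svn": 3451}),
                 ""])

claims = jwt_payload(fake)
present = [c for c in XEON_ONLY if c in claims]
print(present)  # ['x-ms-sgx-config-svn']
```

In production the signature must of course be verified against the signing certificates advertised by the attestation provider before any claim is trusted.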
+
+## Sample JWT generated for SEV-SNP attestation
+
+```
+{
+  "exp": 1649970020,
+  "iat": 1649941220,
+  "iss": "https://maasandbox0001.wus.attest.azure.net",
+  "jti": "b65da1dcfbb4698b0bb2323cac664b745a2ff1cffbba55641fd65784aa9474d5",
+  "nbf": 1649941220,
+  "x-ms-attestation-type": "sevsnpvm",
+  "x-ms-compliance-status": "azure-compliant-cvm",
+  "x-ms-policy-hash": "LTPRQQju-FejAwdYihF8YV_c2XWebG9joKvrHKc3bxs",
+  "x-ms-runtime": {
+    "keys": [
+      {
+        "e": "AQAB",
+        "key_ops": ["encrypt"],
+        "kid": "HCLTransferKey",
+        "kty": "RSA",
+        "n": "ur08DccjGGzRo3OIq445n00Q3OthMIbR3SWIzCcicIM_7nPiVF5NBIknk2zdHZN1iiNhIzJezrXSqVT7Ty1Dl4AB5xiAAqxo7xGjFqlL47NA8WbZRMxQtwlsOjZgFxosDNXIt6dMq7ODh4nj6nV2JMScNfRKyr1XFIUK0XkOWvVlSlNZjaAxj8H4pS0yNfNwr1Q94VdSn3LPRuZBHE7VrofHRGSHJraDllfKT0-8oKW8EjpMwv1ME_OgPqPwLyiRzr99moB7uxzjEVDe55D2i2mPrcmT7kSsHwp5O2xKhM68rda6F-IT21JgdhQ6n4HWCicslBmx4oqkI-x5lVsRkQ"
+      }
+    ],
+    "vm-configuration": {
+      "secure-boot": true,
+      "secure-boot-template-id": "1734c6e8-3154-4dda-ba5f-a874cc483422",
+      "tpm-enabled": true,
+      "vmUniqueId": "AE5CBB2A-DC95-4870-A74A-EE4FB33B1A9C"
+    }
+  },
+  "x-ms-sevsnpvm-authorkeydigest": "000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
+  "x-ms-sevsnpvm-bootloader-svn": 0,
+  "x-ms-sevsnpvm-familyId": "01000000000000000000000000000000",
+  "x-ms-sevsnpvm-guestsvn": 1,
+  "x-ms-sevsnpvm-hostdata": "0000000000000000000000000000000000000000000000000000000000000000",
+  "x-ms-sevsnpvm-idkeydigest": "38ed94f9aab20bc5eb40e89c7cbb03aa1b9efb435892656ade789ccaa0ded82ff18bae0e849c3166351ba1fa7ff620a2",
+  "x-ms-sevsnpvm-imageId": "02000000000000000000000000000000",
+  "x-ms-sevsnpvm-is-debuggable": false,
+  "x-ms-sevsnpvm-launchmeasurement": "04a170f39a3f702472ed0c7ecbda9babfc530e3caac475fdd607ff499177d14c278c5a15ad07ceacd5230ae63d507e9d",
+  "x-ms-sevsnpvm-microcode-svn": 40,
+  "x-ms-sevsnpvm-migration-allowed": false,
+  "x-ms-sevsnpvm-reportdata": "99dd4593a43f4b0f5f10f1856c7326eba309b943251fededc15592e3250ca9e90000000000000000000000000000000000000000000000000000000000000000",
+  "x-ms-sevsnpvm-reportid": "d1d5c2c71596fae601433ecdfb62799de2a785cc08be3b1c8a4e26a381494787",
+  "x-ms-sevsnpvm-smt-allowed": true,
+  "x-ms-sevsnpvm-snpfw-svn": 0,
+  "x-ms-sevsnpvm-tee-svn": 0,
+  "x-ms-sevsnpvm-vmpl": 0,
+  "x-ms-ver": "1.0"
+}
+```
+
+## Next steps
+
+- [View examples of an attestation policy](policy-examples.md)
attestation Basic Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/basic-concepts.md
Attestation policy is used to process the attestation evidence and is configurab
If the default policy in the attestation provider doesn't meet the needs, customers will be able to create custom policies in any of the regions supported by Azure Attestation. Policy management is a key feature provided to customers by Azure Attestation. Policies will be attestation type specific and can be used to identify enclaves or add claims to the output token or modify claims in an output token.
-See [examples of an attestation policy](policy-examples.md) for policy samples.
+See [examples of an attestation policy](policy-examples.md).
## Benefits of policy signing
Azure Attestation response will be a JSON string whose value contains JWT. Azure
The Get OpenID Metadata API returns an OpenID Configuration response as specified by the [OpenID Connect Discovery protocol](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfig). The API retrieves metadata about the signing certificates in use by Azure Attestation.
-Example of JWT generated for an SGX enclave:
-
-```
-{
- "alg": "RS256",
- "jku": "https://tradewinds.us.attest.azure.net/certs",
- "kid": <self signed certificate reference to perform signature verification of attestation token,
- "typ": "JWT"
-}.{
- "aas-ehd": <input enclave held data>,
- "exp": 1568187398,
- "iat": 1568158598,
- "is-debuggable": false,
- "iss": "https://tradewinds.us.attest.azure.net",
- "maa-attestationcollateral":
- {
- "qeidcertshash": <SHA256 value of QE Identity issuing certs>,
- "qeidcrlhash": <SHA256 value of QE Identity issuing certs CRL list>,
- "qeidhash": <SHA256 value of the QE Identity collateral>,
- "quotehash": <SHA256 value of the evaluated quote>,
- "tcbinfocertshash": <SHA256 value of the TCB Info issuing certs>,
- "tcbinfocrlhash": <SHA256 value of the TCB Info issuing certs CRL list>,
- "tcbinfohash": <SHA256 value of the TCB Info collateral>
- },
- "maa-ehd": <input enclave held data>,
- "nbf": 1568158598,
- "product-id": 4639,
- "sgx-mrenclave": <SGX enclave mrenclave value>,
- "sgx-mrsigner": <SGX enclave msrigner value>,
- "svn": 0,
- "tee": "sgx"
- "x-ms-attestation-type": "sgx",
- "x-ms-policy-hash": <>,
- "x-ms-sgx-collateral":
- {
- "qeidcertshash": <SHA256 value of QE Identity issuing certs>,
- "qeidcrlhash": <SHA256 value of QE Identity issuing certs CRL list>,
- "qeidhash": <SHA256 value of the QE Identity collateral>,
- "quotehash": <SHA256 value of the evaluated quote>,
- "tcbinfocertshash": <SHA256 value of the TCB Info issuing certs>,
- "tcbinfocrlhash": <SHA256 value of the TCB Info issuing certs CRL list>,
- "tcbinfohash": <SHA256 value of the TCB Info collateral>
- },
- "x-ms-sgx-ehd": <>,
- "x-ms-sgx-is-debuggable": true,
- "x-ms-sgx-mrenclave": <SGX enclave mrenclave value>,
- "x-ms-sgx-mrsigner": <SGX enclave msrigner value>,
- "x-ms-sgx-product-id": 1,
- "x-ms-sgx-svn": 1,
- "x-ms-ver": "1.0",
- "x-ms-sgx-config-id": "000102030405060708090a0b0c0d8f99000102030405060708090a0b0c860e9a000102030405060708090a0b7d0d0e9b000102030405060708090a740c0d0e9c",
- "x-ms-sgx-config-svn": 3451,
- "x-ms-sgx-isv-extended-product-id": "8765432143211234abcdabcdef123456",
- "x-ms-sgx-isv-family-id": "1234567812344321abcd1234567890ab"
-}.[Signature]
-```
-
-Some of the claims used above are considered deprecated but are fully supported. It is recommended that all future code and tooling use the non-deprecated claim names. See [claims issued by Azure Attestation](claim-sets.md) for more information.
-
-The below claims will appear only in the attestation token generated for Intel® Xeon® Scalable processor-based server platforms. The claims will not appear if the SGX enclave is not configured with [Key Separation and Sharing Support](https://github.com/openenclave/openenclave/issues/3054)
-
-**x-ms-sgx-config-id**
-
-**x-ms-sgx-config-svn**
-
-**x-ms-sgx-isv-extended-product-id**
-
-**x-ms-sgx-isv-family-id**
+See [examples of an attestation token](attestation-token-examples.md).
## Encryption of data at rest
attestation Policy Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-examples.md
issuancerules {
Claims used in default policy are considered deprecated but are fully supported and will continue to be included in the future. It's recommended to use the non-deprecated claim names. For more information on the recommended claim names, see [claim sets](./claim-sets.md).
+## Sample custom policy to support multiple SGX enclaves
+
+```
+version= 1.0;
+authorizationrules
+{
+ [ type=="x-ms-sgx-is-debuggable", value==true ]&&
+ [ type=="x-ms-sgx-mrsigner", value=="mrsigner1"] => permit();
+ [ type=="x-ms-sgx-is-debuggable", value==true ]&&
+ [ type=="x-ms-sgx-mrsigner", value=="mrsigner2"] => permit();
+};
+```
+
+## Unsigned Policy for an SGX enclave with PolicyFormat=JWT
+
+```
+eyJhbGciOiJub25lIn0.eyJBdHRlc3RhdGlvblBvbGljeSI6ICJkbVZ5YzJsdmJqMGdNUzR3TzJGMWRHaHZjbWw2WVhScGIyNXlkV3hsYzN0ak9sdDBlWEJsUFQwaUpHbHpMV1JsWW5WbloyRmliR1VpWFNBOVBpQndaWEp0YVhRb0tUdDlPMmx6YzNWaGJtTmxjblZzWlhON1l6cGJkSGx3WlQwOUlpUnBjeTFrWldKMVoyZGhZbXhsSWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYVhNdFpHVmlkV2RuWVdKc1pTSXNJSFpoYkhWbFBXTXVkbUZzZFdVcE8yTTZXM1I1Y0dVOVBTSWtjMmQ0TFcxeWMybG5ibVZ5SWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYzJkNExXMXljMmxuYm1WeUlpd2dkbUZzZFdVOVl5NTJZV3gxWlNrN1l6cGJkSGx3WlQwOUlpUnpaM2d0YlhKbGJtTnNZWFpsSWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYzJkNExXMXlaVzVqYkdGMlpTSXNJSFpoYkhWbFBXTXVkbUZzZFdVcE8yTTZXM1I1Y0dVOVBTSWtjSEp2WkhWamRDMXBaQ0pkSUQwLUlHbHpjM1ZsS0hSNWNHVTlJbkJ5YjJSMVkzUXRhV1FpTENCMllXeDFaVDFqTG5aaGJIVmxLVHRqT2x0MGVYQmxQVDBpSkhOMmJpSmRJRDAtSUdsemMzVmxLSFI1Y0dVOUluTjJiaUlzSUhaaGJIVmxQV011ZG1Gc2RXVXBPMk02VzNSNWNHVTlQU0lrZEdWbElsMGdQVDRnYVhOemRXVW9kSGx3WlQwaWRHVmxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdmVHMifQ.
+```
+
+## Signed Policy for an SGX enclave with PolicyFormat=JWT
+
+```
+eyJhbGciOiJSU0EyNTYiLCJ4NWMiOlsiTUlJQzFqQ0NBYjZnQXdJQkFnSUlTUUdEOUVGakJcdTAwMkJZd0RRWUpLb1pJaHZjTkFRRUxCUUF3SWpFZ01CNEdBMVVFQXhNWFFYUjBaWE4wWVhScGIyNURaWEowYVdacFkyRjBaVEF3SGhjTk1qQXhNVEl6TVRneU1EVXpXaGNOTWpFeE1USXpNVGd5TURVeldqQWlNU0F3SGdZRFZRUURFeGRCZEhSbGMzUmhkR2x2YmtObGNuUnBabWxqWVhSbE1EQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUpyRWVNSlo3UE01VUJFbThoaUNLRGh6YVA2Y2xYdkhmd0RIUXJ5L3V0L3lHMUFuMGJ3MVU2blNvUEVtY2FyMEc1WmYxTUR4alZOdEF5QjZJWThKLzhaQUd4eFFnVVZsd1dHVmtFelpGWEJVQTdpN1B0NURWQTRWNlx1MDAyQkJnanhTZTBCWVpGYmhOcU5zdHhraUNybjYwVTYwQUU1WFx1MDAyQkE1M1JvZjFUUkNyTXNLbDRQVDRQeXAzUUtNVVlDaW9GU3d6TkFQaU8vTy9cdTAwMkJIcWJIMXprU0taUXh6bm5WUGVyYUFyMXNNWkptRHlyUU8vUFlMTHByMXFxSUY2SmJsbjZEenIzcG5uMXk0Wi9OTzJpdFBxMk5Nalx1MDAyQnE2N1FDblNXOC9xYlpuV3ZTNXh2S1F6QVR5VXFaOG1PSnNtSThUU05rLzBMMlBpeS9NQnlpeDdmMTYxQ2tjRm1LU3kwQ0F3RUFBYU1RTUE0d0RBWURWUjBUQkFVd0F3RUIvekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZ1ZKVWRCaXRud3ZNdDdvci9UMlo4dEtCUUZsejFVcVVSRlRUTTBBcjY2YWx2Y2l4VWJZR3gxVHlTSk5pbm9XSUJROU9QamdMa1dQMkVRRUtvUnhxN1NidGxqNWE1RUQ2VjRyOHRsejRISjY0N3MyM2V0blJFa2o5dE9Gb3ZNZjhOdFNVeDNGTnBhRUdabDJMUlZHd3dcdTAwMkJsVThQd0gzL2IzUmVCZHRhQTdrZmFWNVx1MDAyQml4ZWRjZFN5S1F1VkFUbXZNSTcxM1A4VlBsNk1XbXNNSnRrVjNYVi9ZTUVzUVx1MDAyQkdZcU1yN2tLWGwxM3lldUVmVTJWVkVRc1ovMXRnb29iZVZLaVFcdTAwMkJUcWIwdTJOZHNcdTAwMkJLamRIdmFNYngyUjh6TDNZdTdpR0pRZnd1aU1tdUxSQlJwSUFxTWxRRktLNmRYOXF6Nk9iT01zUjlpczZ6UDZDdmxGcEV6bzVGUT09Il19.eyJBdHRlc3RhdGlvblBvbGljeSI6ImRtVnljMmx2YmoweExqQTdZWFYwYUc5eWFYcGhkR2x2Ym5KMWJHVnpJSHRqT2x0MGVYQmxQVDBpSkdsekxXUmxZblZuWjJGaWJHVWlYU0FtSmlCYmRtRnNkV1U5UFhSeWRXVmRJRDAtSUdSbGJua29LVHM5UGlCd1pYSnRhWFFvS1R0OU8ybHpjM1ZoYm1ObGNuVnNaWE1nZXlBZ0lDQmpPbHQwZVhCbFBUMGlKR2x6TFdSbFluVm5aMkZpYkdVaVhTQTlQaUJwYzNOMVpTaDBlWEJsUFNKT2IzUkVaV0oxWjJkaFlteGxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdJQ0FnSUdNNlczUjVjR1U5UFNJa2FYTXRaR1ZpZFdkbllXSnNaU0pkSUQwLUlHbHpjM1ZsS0hSNWNHVTlJbWx6TFdSbFluVm5aMkZpYkdVaUxDQjJZV3gxWlQxakxuWmhiSFZsS1RzZ0lDQWdZenBiZEhsd1pUMDlJaVJ6WjNndGJYSnphV2R1WlhJaVhTQTlQaUJwYzNOMVpTaDBlWEJsUFNKelozZ3
RiWEp6YVdkdVpYSWlMQ0IyWVd4MVpUMWpMblpoYkhWbEtUc2dJQ0FnWXpwYmRIbHdaVDA5SWlSelozZ3RiWEpsYm1Oc1lYWmxJbDBnUFQ0Z2FYTnpkV1VvZEhsd1pUMGljMmQ0TFcxeVpXNWpiR0YyWlNJc0lIWmhiSFZsUFdNdWRtRnNkV1VwT3lBZ0lDQmpPbHQwZVhCbFBUMGlKSEJ5YjJSMVkzUXRhV1FpWFNBOVBpQnBjM04xWlNoMGVYQmxQU0p3Y205a2RXTjBMV2xrSWl3Z2RtRnNkV1U5WXk1MllXeDFaU2s3SUNBZ0lHTTZXM1I1Y0dVOVBTSWtjM1p1SWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYzNadUlpd2dkbUZzZFdVOVl5NTJZV3gxWlNrN0lDQWdJR002VzNSNWNHVTlQU0lrZEdWbElsMGdQVDRnYVhOemRXVW9kSGx3WlQwaWRHVmxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdmVHMifQ.c0l-xqGDFQ8_kCiQ0_vvmDQYG_u544CYmoiucPNxd9MU8ZXT69UD59UgSuya2yl241NoVXA_0LaMEB2re0JnTbPD_dliJn96HnIOqnxXxRh7rKbu65ECUOMWPXbyKQMZ0I3Wjhgt_XyyhfEiQGfJfGzA95-wm6yWqrmW7dMI7JkczG9ideztnr0bsw5NRsIWBXOjVy7Bg66qooTnODS_OqeQ4iaNsN-xjMElHABUxXhpBt2htbhemDU1X41o8clQgG84aEHCgkE07pR-7IL_Fn2gWuPVC66yxAp00W1ib2L-96q78D9J52HPdeDCSFio2RL7r5lOtz8YkQnjacb6xA
+```
+ ## Sample policy for TPM using Policy version 1.0 ```
c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="elamDriverL
The policy uses the TPM version to restrict attestation calls. The `issuancerules` section looks at various properties measured during boot.
-## Sample custom policy to support multiple SGX enclaves
-
-```
-version= 1.0;
-authorizationrules
-{
- [ type=="x-ms-sgx-is-debuggable", value==true ]&&
- [ type=="x-ms-sgx-mrsigner", value=="mrsigner1"] => permit();
- [ type=="x-ms-sgx-is-debuggable", value==true ]&&
- [ type=="x-ms-sgx-mrsigner", value=="mrsigner2"] => permit();
-};
-```
-
-## Unsigned Policy for an SGX enclave with PolicyFormat=JWT
-
-```
-eyJhbGciOiJub25lIn0.eyJBdHRlc3RhdGlvblBvbGljeSI6ICJkbVZ5YzJsdmJqMGdNUzR3TzJGMWRHaHZjbWw2WVhScGIyNXlkV3hsYzN0ak9sdDBlWEJsUFQwaUpHbHpMV1JsWW5WbloyRmliR1VpWFNBOVBpQndaWEp0YVhRb0tUdDlPMmx6YzNWaGJtTmxjblZzWlhON1l6cGJkSGx3WlQwOUlpUnBjeTFrWldKMVoyZGhZbXhsSWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYVhNdFpHVmlkV2RuWVdKc1pTSXNJSFpoYkhWbFBXTXVkbUZzZFdVcE8yTTZXM1I1Y0dVOVBTSWtjMmQ0TFcxeWMybG5ibVZ5SWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYzJkNExXMXljMmxuYm1WeUlpd2dkbUZzZFdVOVl5NTJZV3gxWlNrN1l6cGJkSGx3WlQwOUlpUnpaM2d0YlhKbGJtTnNZWFpsSWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYzJkNExXMXlaVzVqYkdGMlpTSXNJSFpoYkhWbFBXTXVkbUZzZFdVcE8yTTZXM1I1Y0dVOVBTSWtjSEp2WkhWamRDMXBaQ0pkSUQwLUlHbHpjM1ZsS0hSNWNHVTlJbkJ5YjJSMVkzUXRhV1FpTENCMllXeDFaVDFqTG5aaGJIVmxLVHRqT2x0MGVYQmxQVDBpSkhOMmJpSmRJRDAtSUdsemMzVmxLSFI1Y0dVOUluTjJiaUlzSUhaaGJIVmxQV011ZG1Gc2RXVXBPMk02VzNSNWNHVTlQU0lrZEdWbElsMGdQVDRnYVhOemRXVW9kSGx3WlQwaWRHVmxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdmVHMifQ.
-```
-
-## Signed Policy for an SGX enclave with PolicyFormat=JWT
-
-```
-eyJhbGciOiJSU0EyNTYiLCJ4NWMiOlsiTUlJQzFqQ0NBYjZnQXdJQkFnSUlTUUdEOUVGakJcdTAwMkJZd0RRWUpLb1pJaHZjTkFRRUxCUUF3SWpFZ01CNEdBMVVFQXhNWFFYUjBaWE4wWVhScGIyNURaWEowYVdacFkyRjBaVEF3SGhjTk1qQXhNVEl6TVRneU1EVXpXaGNOTWpFeE1USXpNVGd5TURVeldqQWlNU0F3SGdZRFZRUURFeGRCZEhSbGMzUmhkR2x2YmtObGNuUnBabWxqWVhSbE1EQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUpyRWVNSlo3UE01VUJFbThoaUNLRGh6YVA2Y2xYdkhmd0RIUXJ5L3V0L3lHMUFuMGJ3MVU2blNvUEVtY2FyMEc1WmYxTUR4alZOdEF5QjZJWThKLzhaQUd4eFFnVVZsd1dHVmtFelpGWEJVQTdpN1B0NURWQTRWNlx1MDAyQkJnanhTZTBCWVpGYmhOcU5zdHhraUNybjYwVTYwQUU1WFx1MDAyQkE1M1JvZjFUUkNyTXNLbDRQVDRQeXAzUUtNVVlDaW9GU3d6TkFQaU8vTy9cdTAwMkJIcWJIMXprU0taUXh6bm5WUGVyYUFyMXNNWkptRHlyUU8vUFlMTHByMXFxSUY2SmJsbjZEenIzcG5uMXk0Wi9OTzJpdFBxMk5Nalx1MDAyQnE2N1FDblNXOC9xYlpuV3ZTNXh2S1F6QVR5VXFaOG1PSnNtSThUU05rLzBMMlBpeS9NQnlpeDdmMTYxQ2tjRm1LU3kwQ0F3RUFBYU1RTUE0d0RBWURWUjBUQkFVd0F3RUIvekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZ1ZKVWRCaXRud3ZNdDdvci9UMlo4dEtCUUZsejFVcVVSRlRUTTBBcjY2YWx2Y2l4VWJZR3gxVHlTSk5pbm9XSUJROU9QamdMa1dQMkVRRUtvUnhxN1NidGxqNWE1RUQ2VjRyOHRsejRISjY0N3MyM2V0blJFa2o5dE9Gb3ZNZjhOdFNVeDNGTnBhRUdabDJMUlZHd3dcdTAwMkJsVThQd0gzL2IzUmVCZHRhQTdrZmFWNVx1MDAyQml4ZWRjZFN5S1F1VkFUbXZNSTcxM1A4VlBsNk1XbXNNSnRrVjNYVi9ZTUVzUVx1MDAyQkdZcU1yN2tLWGwxM3lldUVmVTJWVkVRc1ovMXRnb29iZVZLaVFcdTAwMkJUcWIwdTJOZHNcdTAwMkJLamRIdmFNYngyUjh6TDNZdTdpR0pRZnd1aU1tdUxSQlJwSUFxTWxRRktLNmRYOXF6Nk9iT01zUjlpczZ6UDZDdmxGcEV6bzVGUT09Il19.eyJBdHRlc3RhdGlvblBvbGljeSI6ImRtVnljMmx2YmoweExqQTdZWFYwYUc5eWFYcGhkR2x2Ym5KMWJHVnpJSHRqT2x0MGVYQmxQVDBpSkdsekxXUmxZblZuWjJGaWJHVWlYU0FtSmlCYmRtRnNkV1U5UFhSeWRXVmRJRDAtSUdSbGJua29LVHM5UGlCd1pYSnRhWFFvS1R0OU8ybHpjM1ZoYm1ObGNuVnNaWE1nZXlBZ0lDQmpPbHQwZVhCbFBUMGlKR2x6TFdSbFluVm5aMkZpYkdVaVhTQTlQaUJwYzNOMVpTaDBlWEJsUFNKT2IzUkVaV0oxWjJkaFlteGxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdJQ0FnSUdNNlczUjVjR1U5UFNJa2FYTXRaR1ZpZFdkbllXSnNaU0pkSUQwLUlHbHpjM1ZsS0hSNWNHVTlJbWx6TFdSbFluVm5aMkZpYkdVaUxDQjJZV3gxWlQxakxuWmhiSFZsS1RzZ0lDQWdZenBiZEhsd1pUMDlJaVJ6WjNndGJYSnphV2R1WlhJaVhTQTlQaUJwYzNOMVpTaDBlWEJsUFNKelozZ3
RiWEp6YVdkdVpYSWlMQ0IyWVd4MVpUMWpMblpoYkhWbEtUc2dJQ0FnWXpwYmRIbHdaVDA5SWlSelozZ3RiWEpsYm1Oc1lYWmxJbDBnUFQ0Z2FYTnpkV1VvZEhsd1pUMGljMmQ0TFcxeVpXNWpiR0YyWlNJc0lIWmhiSFZsUFdNdWRtRnNkV1VwT3lBZ0lDQmpPbHQwZVhCbFBUMGlKSEJ5YjJSMVkzUXRhV1FpWFNBOVBpQnBjM04xWlNoMGVYQmxQU0p3Y205a2RXTjBMV2xrSWl3Z2RtRnNkV1U5WXk1MllXeDFaU2s3SUNBZ0lHTTZXM1I1Y0dVOVBTSWtjM1p1SWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYzNadUlpd2dkbUZzZFdVOVl5NTJZV3gxWlNrN0lDQWdJR002VzNSNWNHVTlQU0lrZEdWbElsMGdQVDRnYVhOemRXVW9kSGx3WlQwaWRHVmxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdmVHMifQ.c0l-xqGDFQ8_kCiQ0_vvmDQYG_u544CYmoiucPNxd9MU8ZXT69UD59UgSuya2yl241NoVXA_0LaMEB2re0JnTbPD_dliJn96HnIOqnxXxRh7rKbu65ECUOMWPXbyKQMZ0I3Wjhgt_XyyhfEiQGfJfGzA95-wm6yWqrmW7dMI7JkczG9ideztnr0bsw5NRsIWBXOjVy7Bg66qooTnODS_OqeQ4iaNsN-xjMElHABUxXhpBt2htbhemDU1X41o8clQgG84aEHCgkE07pR-7IL_Fn2gWuPVC66yxAp00W1ib2L-96q78D9J52HPdeDCSFio2RL7r5lOtz8YkQnjacb6xA
-```
- ## Next steps - [How to author and sign an attestation policy](author-sign-policy.md)
attestation Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/workflow.md
The following actors are involved in an Azure Attestation work flow:
Here are the general steps in a typical SGX enclave attestation workflow (using Azure Attestation):
-1. Client collects evidence from an enclave. Evidence is information about the enclave environment and the client library running inside the enclave.
-1. The client has an URI which refers to an instance of Azure Attestation. The client sends evidence to Azure Attestation. Exact information submitted to the provider depends on the enclave type.
-1. Azure Attestation validates the submitted information and evaluates it against a configured policy. If the verification succeeds, Azure Attestation issues an attestation token and returns it to the client. If this step fails, Azure Attestation reports an error to the client.
-1. The client sends the attestation token to relying party. The relying party calls public key metadata endpoint of Azure Attestation to retrieve signing certificates. The relying party then verifies the signature of the attestation token and ensures the enclave trustworthiness.
+1. The client collects evidence from an enclave. Evidence is information about the enclave environment and the client library running inside the enclave
+1. The client has a URI which refers to an instance of Azure Attestation. The client sends evidence to Azure Attestation. The exact information submitted to the provider depends on the enclave type
+1. Azure Attestation validates the submitted information and evaluates it against a configured policy. If the verification succeeds, Azure Attestation issues an attestation token and returns it to the client. If this step fails, Azure Attestation reports an error to the client
+1. The client sends the attestation token to the relying party. The relying party calls the public key metadata endpoint of Azure Attestation to retrieve signing certificates. The relying party then verifies the signature of the attestation token and ensures the enclave's trustworthiness
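The attestation token issued in step 3 is a JSON Web Signature (JWS): three base64url segments (header, claims, signature). As a minimal illustration of the parsing half of step 4, the Python sketch below decodes a token's header and claims. The sample token, its claim names, and the `jku` URL are fabricated for this example, and a real relying party must additionally verify the signature against the provider's signing certificates, which this sketch deliberately does not do.

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    """Decode a base64url segment, restoring the stripped '=' padding."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def decode_token(token: str) -> tuple[dict, dict]:
    """Split a JWS attestation token and decode its header and claims.

    NOTE: this only inspects the token. It does NOT verify the signature,
    which a real relying party must do using the signing certificates
    fetched from the attestation provider's metadata endpoint.
    """
    header_b64, claims_b64, _signature_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    claims = json.loads(b64url_decode(claims_b64))
    return header, claims

# A fabricated token for illustration; header fields and claims are made up.
sample = ".".join(
    base64.urlsafe_b64encode(json.dumps(part).encode()).decode().rstrip("=")
    for part in (
        {"alg": "RS256", "jku": "https://example.attest.azure.net/certs"},
        {"x-ms-attestation-type": "sgx"},
    )
) + ".fake-signature"

header, claims = decode_token(sample)
print(header["alg"])                    # RS256
print(claims["x-ms-attestation-type"])  # sgx
```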
![SGX enclave validation flow](./media/sgx-validation-flow.png)
Here are the general steps in a typical SGX enclave attestation workflow (using
Here are the general steps in a typical TPM enclave attestation workflow (using Azure Attestation):
-1. On device/platform boot, various boot loaders and boot services measure events which backed by the TPM and are securely stored (TCG log).
-2. Client collects the TCG logs from the device and TPM quote, which acts the evidence for attestation.
-3. The client has an URI which refers to an instance of Azure Attestation. The client sends evidence to Azure Attestation. Exact information submitted to the provider depends on the platform.
-4. Azure Attestation validates the submitted information and evaluates it against a configured policy. If the verification succeeds, Azure Attestation issues an attestation token and returns it to the client. If this step fails, Azure Attestation reports an error to the client. The communication between the client and attestation service is dictated by the Azure attestation TPM protocol.
-5. The client then sends the attestation token to relying party. The relying party calls public key metadata endpoint of Azure Attestation to retrieve signing certificates. The relying party then verifies the signature of the attestation token and ensures the platforms trustworthiness.
+1. On device/platform boot, various boot loaders and boot services measure events backed by the TPM and securely store them as TCG logs. The client collects the TCG logs from the device and the TPM quote, which act as the evidence for attestation
+2. The client authenticates to Azure AD and obtains an access token
+3. The client has a URI which refers to an instance of Azure Attestation. The client sends the evidence and the Azure Active Directory (Azure AD) access token to Azure Attestation. The exact information submitted to the provider depends on the platform
+4. Azure Attestation validates the submitted information and evaluates it against a configured policy. If the verification succeeds, Azure Attestation issues an attestation token and returns it to the client. If this step fails, Azure Attestation reports an error to the client. The communication between the client and the attestation service is dictated by the Azure Attestation TPM protocol
+5. The client then sends the attestation token to the relying party. The relying party calls the public key metadata endpoint of Azure Attestation to retrieve signing certificates. The relying party then verifies the signature of the attestation token and ensures the platform's trustworthiness
![TPM validation flow](./media/tpm-validation-flow.png)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
For information, see the [Azure pricing page](https://azure.microsoft.com/pricin
* Learn about [Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/). * Learn about [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview). * Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) and [Azure Arc-enabled Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines)
-* Experience Azure Arc-enabled services by exploring the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/).
+* Learn about [Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md)
+* Experience Azure Arc-enabled services by exploring the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/).
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
- If you attempt to run `azcmagent connect` on a server that is already connected to Azure, the resource ID is now printed to the console to help you locate the resource in Azure. - The `azcmagent connect` timeout has been extended to 10 minutes.-- `azcmagent show` no longer prints the private link scope ID. You can check if the server is associated with an Azure Arc private link scope by reviewing the machine details in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/servers), [CLI](/cli/azure/connectedmachine?view=azure-cli-latest#az-connectedmachine-show), [PowerShell](/powershell/module/az.connectedmachine/get-azconnectedmachine), or [REST API](/rest/api/hybridcompute/machines/get).
+- `azcmagent show` no longer prints the private link scope ID. You can check if the server is associated with an Azure Arc private link scope by reviewing the machine details in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/servers), [CLI](/cli/azure/connectedmachine?view=azure-cli-latest#az-connectedmachine-show&preserve-view=true), [PowerShell](/powershell/module/az.connectedmachine/get-azconnectedmachine), or [REST API](/rest/api/hybridcompute/machines/get).
- `azcmagent logs` collects only the 2 most recent logs for each service to reduce ZIP file size. - `azcmagent logs` collects Guest Configuration logs again.
azure-arc Create Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/create-virtual-machine.md
+
+ Title: Create a virtual machine on System Center Virtual Machine Manager using Azure Arc (preview)
+description: This article helps you create a virtual machine using Azure portal (preview).
Last updated : 05/25/2022+
+ms.
++
+keywords: "VMM, Arc, Azure"
+++
+# Create a virtual machine on System Center Virtual Machine Manager using Azure Arc (preview)
+
+Once your administrator has connected an SCVMM management server to Azure, represented VMM resources (such as private clouds and VM templates) in Azure, and provided you the required permissions on those resources, you can create a virtual machine in Azure.
+
+## Prerequisites
+
+- An Azure subscription and resource group where you have *Arc SCVMM VM Contributor* role.
+- A cloud resource on which you have *Arc SCVMM Private Cloud Resource User* role.
+- A virtual machine template resource on which you have *Arc SCVMM Private Cloud Resource User* role.
+- A virtual network resource on which you have *Arc SCVMM Private Cloud Resource User* role.
+
+## How to create a VM in Azure portal
+
+1. Go to the Azure portal.
+2. Select **Azure Arc** as the service and then select **Azure Arc virtual machine** from the left blade.
+3. Select **+ Create**. The **Create an Azure Arc virtual machine** page opens.
+
+3. Under **Basics** > **Project details**, select the **Subscription** and **Resource group** where you want to deploy the VM.
+4. Under **Instance details**, provide the following details:
+ - Virtual machine name - Specify the name of the virtual machine.
+ - Custom location - Select the custom location that your administrator has shared with you.
   - Virtual machine kind - Select **System Center Virtual Machine Manager**.
   - Cloud - Select the target VMM private cloud.
+ - Availability set - (Optional) Use availability sets to identify virtual machines that you want VMM to keep on separate hosts for improved continuity of service.
+5. Under **Template details**, provide the following details:
   - Template - Choose the VM template for deployment.
+ - Override template details - Select the checkbox to override the default CPU cores and memory on the VM templates.
   - Specify a computer name for the VM if the VM template has a computer name associated with it.
+6. Under **Administrator account**, provide the following details and click **Next : Disks >**.
+ - Username
+ - Password
+ - Confirm password
+7. Under **Disks**, you can optionally change the disks configured in the template. You can add more disks or update existing disks.
+8. Under **Networking**, you can optionally change the network interfaces configured in the template. You can add network interface cards (NICs) or update the existing NICs. You can also change the network that this NIC will be attached to, provided you have appropriate permissions to the network resource.
+9. Under **Advanced**, enable processor compatibility mode if required.
+10. Under **Tags**, you can optionally add tags to the VM resource.
+ >[!NOTE]
+ > Custom properties defined for the VM in VMM will be synced as tags in Azure.
+
+11. Under **Review + create**, review all the properties and select **Create**. The VM will be created in a few minutes.
azure-arc Enable Scvmm Inventory Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-scvmm-inventory-resources.md
+
+ Title: Enable SCVMM inventory resources in Azure Arc center (preview)
+description: This article helps you enable SCVMM inventory resources from Azure portal (preview)
+++ Last updated : 05/25/2022+
+keywords: "VMM, Arc, Azure"
++
+# Enable SCVMM inventory resources from Azure portal (preview)
+
+This article describes how you can view SCVMM management servers and enable SCVMM inventory from the Azure portal after connecting to the SCVMM management server.
+
+## View SCVMM management servers
+
+You can view all the connected SCVMM management servers under **SCVMM management servers** in Azure Arc center.
++
+In the inventory view, you can browse the virtual machines (VMs), VMM clouds, VM networks, and VM templates.
+Under each inventory, you can select and enable one or more SCVMM resources in Azure to create an Azure resource representing your SCVMM resource.
+
+You can further use the Azure resource to assign permissions or perform management operations.
+
+## Enable SCVMM cloud, VM templates and VM networks in Azure
+
+To enable the SCVMM inventory resources, follow these steps:
+
+1. From Azure home > **Azure Arc** center, go to the **SCVMM management servers (preview)** blade, and then go to the inventory resources blade.
+
+ :::image type="content" source="media/enable-scvmm-inventory-resources/scvmm-server-blade-inline.png" alt-text="Screenshot of how to go to SCVMM management servers blade." lightbox="media/enable-scvmm-inventory-resources/scvmm-server-blade-expanded.png":::
+
+1. Select the resource(s) you want to enable and select **Enable in Azure**.
+
+ :::image type="content" source="media/enable-scvmm-inventory-resources/scvmm-enable-azure-inline.png" alt-text="Screenshot of how to enable in Azure option." lightbox="media/enable-scvmm-inventory-resources/scvmm-enable-azure-expanded.png":::
+
+1. In **Enable in Azure**, select your **Azure subscription** and **Resource Group** and select **Enable**.
+
+ :::image type="content" source="media/enable-scvmm-inventory-resources/scvmm-select-sub-resource-inline.png" alt-text="Screenshot of how to select subscription and resource group." lightbox="media/enable-scvmm-inventory-resources/scvmm-select-sub-resource-expanded.png":::
+
+    The deployment is initiated and creates a resource in Azure that represents your SCVMM resource. You can then granularly manage access to these resources through Azure role-based access control (RBAC).
+
+ Repeat the above steps for one or more VM networks and VM template resources.
+
+## Enable existing virtual machines in Azure
+
+To enable the existing virtual machines in Azure, follow these steps:
+
+1. From Azure home > **Azure Arc** center, go to the **SCVMM management servers (preview)** blade, and then go to the inventory resources blade.
+
+1. Go to the **SCVMM inventory** resource blade, select **Virtual machines**, select the VMs you want to enable, and then select **Enable in Azure**.
+
+ :::image type="content" source="media/enable-scvmm-inventory-resources/scvmm-enable-existing-vm-inline.png" alt-text="Screenshot of how to enable existing virtual machines in Azure." lightbox="media/enable-scvmm-inventory-resources/scvmm-enable-existing-vm-expanded.png":::
+
+1. Select your **Azure subscription** and **Resource group**.
+
+1. Select **Enable** to start the deployment of the VM represented in Azure.
+
+## Next steps
+
+[Connect virtual machines to Arc](quickstart-connect-system-center-virtual-machine-manager-to-arc.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
+
+ Title: Overview of the Azure Connected System Center Virtual Machine Manager (preview)
+description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager (preview).
Last updated : 05/25/2022+
+ms.
++
+keywords: "VMM, Arc, Azure"
+++
+# Overview of Arc-enabled System Center Virtual Machine Manager (preview)
+
+Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) empowers System Center customers to connect their VMM environment to Azure and perform VM self-service operations from Azure portal. With Azure Arc-enabled SCVMM, you get a consistent management experience across Azure.
+
+Azure Arc-enabled System Center Virtual Machine Manager allows you to manage your hybrid environment and perform self-service VM operations through the Azure portal. For Microsoft Azure Pack customers, this solution is intended as an alternative to perform VM self-service operations.
+
+Arc-enabled System Center VMM allows you to:
+
+- Perform various VM lifecycle operations such as start, stop, pause, delete VMs on VMM managed VMs directly from Azure.
+- Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview).
+- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you a single pane view for your infrastructure across both environments.
+- Discover and onboard existing SCVMM managed VMs to Azure.
+
+## How does it work?
+
+To Arc-enable a System Center VMM management server, deploy [Azure Arc resource bridge](/azure/azure-arc/resource-bridge/overview) (preview) in the VMM environment. Arc resource bridge is a virtual appliance that connects VMM management server to Azure. Azure Arc resource bridge (preview) enables you to represent the SCVMM resources (clouds, VMs, templates etc.) in Azure and do various operations on them.
+
+## Architecture
+
+The following image shows the architecture for the Arc-enabled SCVMM:
++
+### Supported VMM versions
+
+Azure Arc-enabled SCVMM works with VMM 2016, 2019 and 2022 versions.
+
+### Supported scenarios
+
+The following scenarios are supported in Azure Arc-enabled SCVMM (preview):
+
+- SCVMM administrators can connect a VMM instance to Azure and browse the SCVMM virtual machine inventory in Azure.
+- Administrators can use the Azure portal to browse SCVMM inventory and register SCVMM cloud, virtual machines, VM networks, and VM templates into Azure.
+- Administrators can provide app teams/developers fine-grained permissions on those SCVMM resources through Azure RBAC.
+- App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart).
+
+### Supported regions
+
+Azure Arc-enabled SCVMM (preview) is currently supported in the following regions:
+
+- East US
+- West Europe
+
+## Next steps
+
+[See how to create an Azure Arc VM](create-virtual-machine.md)
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
+
+ Title: Quick Start for Azure Arc-enabled System Center Virtual Machine Manager (preview)
+description: In this QuickStart, you will learn how to use the helper script to connect your System Center Virtual Machine Manager management server to Azure Arc (preview).
+++ Last updated : 05/25/2022+++
+# QuickStart: Connect your System Center Virtual Machine Manager management server to Azure Arc (preview)
+
+Before you can start using the Azure Arc-enabled SCVMM features, you need to connect your VMM management server to Azure Arc.
+
+This QuickStart shows you how to connect your SCVMM management server to Azure Arc using a helper script. The script deploys a lightweight Azure Arc appliance (called Azure Arc resource bridge) as a virtual machine running in your VMM environment and installs an SCVMM cluster extension on it, to provide a continuous connection between your VMM management server and Azure Arc.
+
+## Prerequisites
+
+| **Requirement** | **Details** |
+| | |
+| **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |
+| **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud that has at least one cluster with a minimum free capacity of 16 GB of RAM, 4 vCPUs, and 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through a proxy. The appliance VM will be deployed using this VM network.<br/><br/> For dynamic IP allocation to the appliance VM, a DHCP server is required. For static IP allocation, a VMM static IP pool is required. |
+| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> This will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. |
+| **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and the internet, directly or through a proxy.<br/><br/> The helper script can also be run directly from the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you may experience performance issues. |
+
+## Prepare SCVMM management server
+
+- Create an SCVMM private cloud if you don't have one. The private cloud should have a reservation of at least 16 GB of RAM and 4 vCPUs. It should also have at least 100 GB of disk space.
+- Ensure that the SCVMM administrator account has the appropriate permissions.
+
+## Download the onboarding script
+
+1. Go to [Azure portal](https://aka.ms/SCVMM/MgmtServers).
+1. Search and select **Azure Arc**.
+1. In the **Overview** page, select **Add** in **Add your infrastructure for free** or move to the **infrastructure** tab.
+
+ :::image type="content" source="media/quick-start-connect-scvmm-to-azure/overview-add-infrastructure-inline.png" alt-text="Screenshot of how to select Add your infrastructure for free." lightbox="media/quick-start-connect-scvmm-to-azure/overview-add-infrastructure-expanded.png":::
+
+1. In the **Platform** section, in **System Center VMM** select **Add**.
+
+ :::image type="content" source="media/quick-start-connect-scvmm-to-azure/platform-add-system-center-vmm-inline.png" alt-text="Screenshot of how to select System Center V M M platform." lightbox="media/quick-start-connect-scvmm-to-azure/platform-add-system-center-vmm-expanded.png":::
+
+1. Select **Create new resource bridge** and select **Next**.
+1. Provide a name for **Azure Arc resource bridge**. For example: *contoso-nyc-resourcebridge*.
+1. Select a subscription and resource group where you want to create the resource bridge.
+1. Under **Region**, select an Azure location where you want to store the resource metadata. The currently supported regions are **East US** and **West Europe**.
+1. Provide a name for **Custom location**.
+ This is the name that you'll see when you deploy virtual machines. Name it for the datacenter or the physical location of your datacenter. For example: *contoso-nyc-dc.*
+1. Leave the option **Use the same subscription and resource group as your resource bridge** selected.
+1. Provide a name for your **SCVMM management server instance** in Azure. For example: *contoso-nyc-scvmm.*
+1. Select **Next: Download and run script**.
+1. If your subscription isn't registered with all the required resource providers, select **Register** to proceed to the next step.
+1. Based on the operating system of your workstation, download the PowerShell or Bash script and copy it to the workstation.
+1. To see the status of your onboarding after you run the script on your workstation, select **Next:Verification**. The onboarding isn't affected when you close this page.
+
+## Run the script
+
+Use the following instructions to run the script, depending on the operating system of your workstation.
+
+>[!NOTE]
+>Before running the script, install the latest version of Azure CLI (2.36.0 or later).
++
+### Windows
+
+Follow these instructions to run the script on a Windows machine.
+
+1. Open a new PowerShell window. To verify that the Azure CLI is installed on the workstation, run the following command:
+ ```azurepowershell-interactive
+ az
+ ```
+1. Navigate to the folder where you've downloaded the PowerShell script:
+ *cd C:\Users\ContosoUser\Downloads*
+
+1. Run the following command to allow the script to run since it's an unsigned script (if you close the session before you complete all the steps, run this command again for the new session):
+ ```azurepowershell-interactive
+ Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
+ ```
+1. Run the script:
+ ```azurepowershell-interactive
+ ./resource-bridge-onboarding-script.ps1
+ ```
+### Linux
+
+Follow these instructions to run the script on a Linux machine:
+
+1. Open the terminal and navigate to the folder where you've downloaded the Bash script.
+2. Execute the script using the following command:
+
+ ```sh
+ bash resource-bridge-onboarding-script.sh
+ ```
+
+## Script runtime
+The script execution will take up to half an hour and you'll be prompted for various details. See the following table for related information:
+
+| **Parameter** | **Details** |
+| | |
+| **Azure login** | You'll be asked to log in to Azure by visiting [this site](https://www.microsoft.com/devicelogin) and pasting the prompted code. |
+| **SCVMM management server FQDN/Address** | FQDN for the VMM server (or an IP address). </br> Provide the role name if it's a highly available VMM deployment. </br> For example: nyc-scvmm.contoso.com or 10.160.0.1 |
+| **SCVMM Username**</br> (domain\username) | Username for the SCVMM administrator account. The required permissions for the account are listed in the prerequisites above.</br> Example: contoso\contosouser |
+| **SCVMM password** | Password for the SCVMM admin account |
+| **Private cloud selection** | Select the name of the private cloud where the Arc resource bridge VM should be deployed. |
+| **Virtual Network selection** | Select the name of the virtual network to which the *Arc resource bridge VM* needs to be connected. This network should allow the appliance to talk to the VMM management server and the Azure endpoints (or internet). |
+| **Static IP pool** | Select the VMM static IP pool that will be used to allot an IP address. |
+| **Control Plane IP** | Provide a reserved IP address (a reserved IP address in your DHCP range, or a static IP outside of the DHCP range but still available on the network). This IP address must not be assigned to any other machine on the network. |
+| **Appliance proxy settings** | Type 'Y' if there's a proxy in your appliance network; otherwise, type 'N'.|
+| **http** | Address of the HTTP proxy server. |
+| **https** | Address of the HTTPS proxy server.|
+| **NoProxy** | Addresses to be excluded from proxy.|
+|**CertificateFilePath** | For SSL based proxies, provide the path to the certificate. |
+
+Once the command execution is completed, your setup is complete, and you can try out the capabilities of Azure Arc-enabled SCVMM.
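The Control Plane IP requirement in the table above (an address on the appliance network that will never be handed out to another machine) can be sanity-checked before running the script. A minimal Python sketch using the standard `ipaddress` module; the subnet and DHCP-range values are made up for illustration:

```python
import ipaddress

def is_valid_control_plane_ip(candidate: str, subnet: str,
                              dhcp_start: str, dhcp_end: str) -> bool:
    """Check that the candidate address sits on the subnet but outside
    the DHCP allocation range, so the DHCP server won't assign it."""
    ip = ipaddress.ip_address(candidate)
    net = ipaddress.ip_network(subnet)
    start = ipaddress.ip_address(dhcp_start)
    end = ipaddress.ip_address(dhcp_end)
    return ip in net and not (start <= ip <= end)

# Illustrative values only -- substitute your own network details.
print(is_valid_control_plane_ip("10.160.0.200", "10.160.0.0/24",
                                "10.160.0.10", "10.160.0.99"))  # True
print(is_valid_control_plane_ip("10.160.0.50", "10.160.0.0/24",
                                "10.160.0.10", "10.160.0.99"))  # False
```

A candidate that passes this check still needs to be confirmed as unused on the actual network (for example, with a ping or an ARP lookup), or reserved in your DHCP server.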
+
+### Retry command - Windows
+
+If the appliance creation fails for any reason, you need to retry it. Run the command with `-Force` to clean up and onboard again.
+
+```powershell-interactive
 ./resource-bridge-onboarding-script.ps1 -Force -Subscription <Subscription> -ResourceGroup <ResourceGroup> -AzLocation <AzLocation> -ApplianceName <ApplianceName> -CustomLocationName <CustomLocationName> -VMMservername <VMMservername>
+```
+
+### Retry command - Linux
+
+If the appliance creation fails for any reason, you need to retry it. Run the command with `--force` to clean up and onboard again.
+
+ ```sh
+ bash resource-bridge-onboarding-script.sh --force
+ ```
+>[!NOTE]
+> - After successful deployment, we recommend maintaining the **Arc Resource Bridge VM** in an *online* state.
+> - The appliance might intermittently become unreachable when you shut down and restart the VM.
++
+## Next steps
+
+[Create a VM](create-virtual-machine.md)
azure-australia Australia Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/australia-overview.md
- Title: What is Azure Australia? | Microsoft Docs
-description: Guidance on configuring Azure within the Australian regions to meet the specific requirements of Australian Government policy, regulations, and legislation.
--- Previously updated : 07/22/2019----
-# What is Azure Australia?
-
-In 2014, Azure was launched in Australia, with two regions; Australia East (Sydney) and Australia Southeast (Melbourne). In April 2018, two new Azure Regions located in Canberra – Australia Central and Australia Central 2, were launched. The Australia Central and Australia Central 2 regions are purposely designed to meet the needs of government and critical national infrastructure, and offer specialised connectivity and flexibility so you can locate your systems beside the cloud, with levels of security and resilience only expected of Secret-classified networks. Azure Australia is a platform for the digital transformation of government and critical national infrastructure – and the only mission-critical cloud available in Australia designed specifically for those needs.
-
-There are specific Australian Government requirements for connecting to, consuming, and operating within [Microsoft Azure Australia](https://azure.microsoft.com/global-infrastructure/australia/) for Australian Government data and systems. The resources on this page also provide general guidance applicable to all customers with a specific focus on secure configuration and operation.
-
-Refer to the Australia page of the [Microsoft Service Trust Portal](https://aka.ms/au-irap) for current information on the Azure Australia Information Security Registered Assessor (IRAP) Assessments, certification and inclusion on the Certified Cloud Services List (CCSL). On the Australia page, you will also find other Microsoft advice specific to Government and Critical Infrastructure providers.
-
-## Principles for securing customer data in Azure Australia
-
-Azure Australia provides a range of features and services that you can use to build cloud solutions to meet your regulated/controlled data needs. A compliant customer solution is nothing more than the effective implementation of out-of-the-box Azure Australia capabilities, coupled with a solid data security practice.
-
-When you host a solution in Azure Australia, Microsoft handles many of these requirements at the cloud infrastructure level.
-
-The following diagram shows the Azure defence-in-depth model. For example, Microsoft provides basic cloud infrastructure DDoS, along with customer capabilities such as security appliances or premium DDoS services for customer-specific application needs.
-
-![alt text](media/defenceindepth.png)
-
-These articles outline the foundational principles for securing your services and applications, with guidance and best practices on how to apply these principles. In other words, how customers should make smart use of Azure Australia to meet the obligations and responsibilities that are required for a solution that handles Government sensitive and classified information.
-
-There are two categories of documentation provided for Australian Government agencies migrating to Azure.
-
-## Security in Azure Australia
-
-Identity, Azure role-based access control, data protection through encryption and rights management, and effective monitoring and configuration control are key elements that you need to implement. In this section, there are a series of articles explaining the built-in capabilities of Azure and how they relate to the ISM and ASD Essential 8.
-
-These articles can be accessed through the menu under *Concepts -> Security in Azure Australia*.
-
-## Gateways in Azure Australia
-
-Another key step for Government agencies is the establishment of perimeter security capabilities. These capabilities are called Secure Internet Gateways (SIG) and when using Azure it is your responsibility to ensure these protections are in place. Microsoft does not operate a SIG; however, by combining our edge networking services that protect all customers, and specific services deployed within your Azure environment you can operate an equivalent capability.
-
-These articles can be accessed through the menu under *Concepts -> Gateways in Azure Australia*.
-
-## Next steps
-
-* If your key focus is securing your data in Azure, start with [Data Security](secure-your-data.md)
-* If your key focus is building a Gateway in Azure, start with [Gateway auditing, logging, and visibility](gateway-log-audit-visibility.md).
azure-australia Azure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/azure-key-vault.md
- Title: Azure Key Vault in Azure Australia
-description: Guidance on configuring and using Azure Key Vault for key management within the Australian regions to meet the specific requirements of Australian Government policy, regulations, and legislation.
--- Previously updated : 07/22/2019---
-# Azure Key Vault in Azure Australia
-
-The secure storage of cryptographic keys and management of the cryptographic key lifecycle are critical elements within cryptographic systems. The service that provides this capability in Azure is Azure Key Vault. Key Vault has been IRAP security assessed and ACSC certified for PROTECTED. This article outlines the key considerations when using Key Vault to comply with the Australian Signals Directorate's (ASD) [Information Security Manual Controls](https://acsc.gov.au/infosec/ism/) (ISM).
-
-Azure Key Vault is a cloud service that safeguards encryption keys and secrets. Because this data is sensitive and business critical, Key Vault enables secure access to key vaults, allowing only authorized users and applications. There are three main artifacts managed and controlled by Key Vault:
-
-- keys
-- secrets
-- certificates
-
-This article will focus on management of keys using Key Vault.
-
-![Azure Key Vault](media/azure-key-vault-overview.png)
-
-*Diagram 1 – Azure Key Vault*
-
-## Key design considerations
-
-### Deployment options
-
-There are two options for creating Azure Key Vaults. Both options use the nCipher nShield family of Hardware Security Modules (HSM), are Federal Information Processing Standards (FIPS) validated, and are approved to store keys in PROTECTED environments. The options are:
-
-- **Software-protected vaults:** FIPS 140-2 level 1 validated. Keys stored on an HSM. Encryption and decryption operations are performed in compute resources on Azure.
-- **HSM-protected vaults:** FIPS 140-2 level 2 validated. Keys stored on an HSM. Encryption and decryption operations are performed on the HSM.
-
-Key Vault supports Rivest-Shamir-Adleman (RSA) and Elliptic Curve Cryptography (ECC) keys. The default is RSA 2048-bit keys but there is an advanced option for RSA 3072-bit, RSA 4096-bit, and ECC keys. All keys meet the ISM controls, but Elliptic Curve keys are preferred.
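For illustration, the choice between vault protection levels and key types comes down to the `kty` value sent in Key Vault's "create key" REST request body. The helper below is a hypothetical sketch (not an SDK call), assuming the documented key type strings `RSA`, `RSA-HSM`, `EC`, and `EC-HSM`:

```python
# Hypothetical helper: build the JSON body for Key Vault's "create key"
# REST operation. "RSA"/"EC" select a software-protected key; the "-HSM"
# variants keep cryptographic operations on the HSM.
def create_key_body(protection: str, algorithm: str = "EC", size_or_curve="P-256"):
    if algorithm == "RSA":
        kty = "RSA-HSM" if protection == "hsm" else "RSA"
        return {"kty": kty, "key_size": size_or_curve}
    kty = "EC-HSM" if protection == "hsm" else "EC"
    return {"kty": kty, "crv": size_or_curve}

# An HSM-protected RSA 3072-bit key, per the advanced options above
body = create_key_body("hsm", "RSA", 3072)
```

A software-protected Elliptic Curve key would be `create_key_body("software")`, selecting `kty` of `EC` with the default `P-256` curve.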
-
-### Resource operations
-
-There are several personas involved in Azure Key Vault:
-
-- **Key Vault administrator:** Manages the lifecycle of the vault
-- **Key administrator:** Manages the lifecycle of keys in the vault
-- **Developer/operator:** Integrates keys from the vault into applications and services
-- **Auditor:** Monitors key usage and access
-- **Applications:** Use keys to secure information
-
-Azure Key Vault is secured with two separate interfaces:
-
-- **Management Plane:** This plane deals with managing the vault itself and is secured by Azure RBAC.
-- **Data Plane:** This plane deals with managing and accessing the artifacts in the vault. It is secured using Key Vault access policy.
-
-As required by the ISM, a caller (a user or an application) must be properly authenticated and authorised before it can access the key vault through either plane.
-
-Azure RBAC has one built-in role for Key Vault, [Key Vault Contributor](../role-based-access-control/built-in-roles.md#key-vault-contributor), to control management of the Key Vaults. The creation of custom roles aligned to more granular roles for managing your Key Vaults is recommended.
-
->[!WARNING]
->When access to keys is enabled via Key Vault access policy then the user or application has that access to all keys in the key vault (for example, if a user has 'delete' access then they can delete all keys). Therefore, multiple key vaults should be deployed to align with security domains/boundaries.
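To make the scope of a data-plane grant concrete, here is a sketch of an access policy entry as it might appear in an ARM template's `accessPolicies` array. The permission names reflect common Key Vault key permissions, and the tenant and object IDs are placeholders:

```python
# Sketch: a data-plane access policy entry. Note that "delete" here would
# permit deleting every key in the vault, which is why vaults should be
# scoped to security domains.
def access_policy(tenant_id: str, object_id: str, key_perms: list) -> dict:
    allowed = {"get", "list", "create", "import", "update", "delete",
               "backup", "restore", "sign", "verify", "wrapKey",
               "unwrapKey", "encrypt", "decrypt", "purge"}
    assert set(key_perms) <= allowed, "unknown key permission"
    return {"tenantId": tenant_id, "objectId": object_id,
            "permissions": {"keys": key_perms, "secrets": [], "certificates": []}}

# A least-privilege entry for the auditor persona: read-only on key metadata
auditor = access_policy("00000000-0000-0000-0000-000000000000",
                        "11111111-1111-1111-1111-111111111111",
                        ["get", "list"])
```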
-
-### Networking
-
-You can configure Key Vault firewalls and virtual networks to control access to the data plane. You can allow access to users or applications on specified networks while denying access to users or applications on all other networks. [Trusted services](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services) are an exception to this control if "Allow trusted services" is enabled. The virtual networking control does not apply to the management plane.
-
-Access to Key Vaults should be explicitly restricted to the minimum set of networks that have users or applications requiring access to keys.
-
-### Bring Your Own Key (BYOK)
-
-Key Vault supports BYOK. BYOK enables users to import keys from their existing key infrastructures. The BYOK toolset supports the secure transfer and import of keys from an external HSM (for example, keys generated with an offline workstation) into Key Vault.
-
-### Key Vault auditing and logging
-
-The ACSC requires Commonwealth entities to use the appropriate Azure services to undertake real-time monitoring and reporting on their Azure workloads.
-
-Logging is enabled by enabling the **_"AuditEvent"_** diagnostic setting on Key Vaults. Audit events will be logged to the specified storage account. The **_"RetentionInDays"_** period should be set according to the data retention policy. [Operations](../key-vault/general/logging.md#interpret-your-key-vault-logs) on both the management plane and data plane are audited and logged. The [Azure Key Vault solution in Azure Monitor](../azure-monitor/insights/key-vault-insights-overview.md) can be used to review Key Vault AuditEvent logs. A number of other Azure services can be used to process and distribute Key Vault AuditEvents.
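As a sketch, the portion of a diagnostic-settings payload that turns on this logging might look like the following. The field names follow the Azure Monitor diagnostic settings schema; the 365-day retention is an illustrative value to be set per your data retention policy:

```python
# Sketch: the "logs" section of a diagnostic-settings payload enabling the
# AuditEvent category with a storage-account retention policy.
def audit_event_setting(retention_days: int) -> dict:
    return {
        "logs": [{
            "category": "AuditEvent",
            "enabled": True,
            "retentionPolicy": {"enabled": True, "days": retention_days},
        }]
    }

setting = audit_event_setting(365)
```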
-
-### Key rotation
-
-Storing keys in Key Vault provides a single point to maintain keys outside applications, enabling keys to be updated without affecting the behaviour of the applications. Storing keys in Azure Key Vault enables various strategies for supporting key rotation:
-
-- Manually
-- Programmatically via APIs
-- Automation scripts (for example, using PowerShell and Azure Automation)
-
-These options enable keys to be rotated on a periodic basis to satisfy compliance requirements or on an ad-hoc basis if there are concerns that keys may have been compromised.
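A minimal sketch of the periodic case: an automation script only needs the last rotation date and the interval mandated by policy to decide whether a key is due for rotation (function and parameter names are illustrative):

```python
from datetime import date, timedelta

# Sketch: decide whether a key is due for its periodic rotation, given the
# last rotation date and the rotation interval mandated by policy.
def rotation_due(last_rotated: date, interval_days: int, today: date) -> bool:
    return today >= last_rotated + timedelta(days=interval_days)

# A key last rotated on 1 January, on a 90-day cycle, checked on 2 April
due = rotation_due(date(2019, 1, 1), 90, date(2019, 4, 2))
```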
-
-#### Key rotation strategies
-
-It is important to develop an appropriate key rotation strategy for keys which are stored in KeyVault. Using the wrong key will lead to information being incorrectly decrypted, and losing keys can lead to the complete loss of access to information. Examples of key rotation strategies for different scenarios include:
-
-- **In-flight data:** volatile information is transmitted between two parties. When a key is rotated, both parties must have a mechanism to synchronously retrieve the updated keys from the key vault.
-- **Data at rest:** a party stores encrypted data and decrypts it in the future to use. When a key is to be rotated, the data must be decrypted with the old key and then encrypted with the new, rotated key. There are approaches to minimise the impact of the decrypt/encrypt process using key encrypting keys (see example). Microsoft manages the majority of the process related to key rotation for Azure Storage (see…)
-- **Access keys:** a number of Azure services have access keys that can be stored in Key Vault (for example, Cosmos DB). The Azure services have primary and secondary access keys. It is important that both keys are not rotated at the same time: rotate one key, then, after a period and once the key operation has been verified, rotate the second key.
-
-### High availability
-
-The ISM has several controls that relate to Business Continuity.
-Azure Key Vault has multiple layers of redundancy with contents replicated within the region and to the secondary, [paired region](../availability-zones/cross-region-replication-azure.md).
-
-When the key vault is in a fail-over state, it is in read-only mode, and will return to read-write mode when the primary service is restored.
-
-The ISM has several controls related to backup. It is important to develop and execute appropriate backup/restore plans for vaults and their keys.
-
-## Key lifecycle
-
-### Key operations
-
-Key Vault supports the following operations on a key:
-
-- **create:** Allows a client to create a key in Key Vault. The value of the key is generated by Key Vault and stored, and isn't released to the client. Asymmetric keys may be created in Key Vault.
-- **import:** Allows a client to import an existing key to Key Vault. Asymmetric keys may be imported to Key Vault using a number of different packaging methods within a JWK construct.
-- **update:** Allows a client with sufficient permissions to modify the metadata (key attributes) associated with a key previously stored within Key Vault.
-- **delete:** Allows a client with sufficient permissions to delete a key from Key Vault.
-- **list:** Allows a client to list all keys in a given Key Vault.
-- **list versions:** Allows a client to list all versions of a given key in a given Key Vault.
-- **get:** Allows a client to retrieve the public parts of a given key in a Key Vault.
-- **backup:** Exports a key in a protected form.
-- **restore:** Imports a previously backed up key.
-
-There is a corresponding set of permissions that can be granted to users, service principals, or applications using Key Vault access control entries to enable them to execute key operations.
-
-Key Vault has a soft delete feature to allow the recovery of deleted vaults and keys. By default, **_"soft delete"_** is not enabled, but once enabled, objects are held for 90 days (the retention period) while appearing to be deleted. An additional permission, **_"purge"_**, allows the permanent deletion of keys if the **_"Purge Protection"_** option is disabled.
-
-Creating or importing an existing key creates a new version of the key.
-
-### Cryptographic operations
-
-Key Vault also supports cryptographic operations using keys:
-
-- **sign and verify:** this operation is a "sign hash" or "verify hash". Key Vault does not support hashing of content as part of signature creation.
-- **key encryption/wrapping:** this operation is used to protect another key.
-- **encrypt and decrypt:** the stored key is used to encrypt or decrypt a single block of data.
-
-There is a corresponding set of permissions that can be granted to users, service principals, or applications using Key Vault access control entries to enable them to execute cryptographic operations.
-
-There are three key attributes that can be set to control whether a key is enabled and usable for cryptographic operations:
-
-- **enabled:** whether the key may be used at all
-- **nbf:** not-before date; the key cannot be used before this date
-- **exp:** expiration date; the key cannot be used after this date
-
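A sketch of how these three attributes combine when deciding whether a cryptographic operation may proceed (the evaluation order here is an assumption for illustration):

```python
from datetime import datetime, timezone
from typing import Optional

# Sketch: evaluate the enabled/nbf/exp key attributes to decide whether
# a key is usable for a cryptographic operation at time "now".
def key_usable(enabled: bool, nbf: Optional[datetime],
               exp: Optional[datetime], now: datetime) -> bool:
    if not enabled:
        return False
    if nbf is not None and now < nbf:
        return False  # not yet valid: before the not-before date
    if exp is not None and now >= exp:
        return False  # expired
    return True
```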
-## Storage and keys
-
-Customer-managed keys provide more flexibility and enable access to, and management of, the keys to be controlled. They also enable auditing of the encryption keys used to protect data.
-There are three aspects to storage and keys stored in Key Vault:
-
-- Key Vault managed storage account keys
-- Azure Storage Service Encryption (SSE) for data at rest
-- Managed disks and Azure Disk Encryption
-
-Key Vault's Azure Storage account key management is an extension to Key Vault's key service that supports synchronization and regeneration (rotation) of storage account keys. [Azure Storage integration with Azure Active Directory](../storage/blobs/authorize-access-azure-active-directory.md) (preview) is recommended when released as it provides superior security and ease of use.
-SSE uses two keys to manage encryption of data at rest:
-
-- Key Encryption Keys (KEK)
-- Data Encryption Keys (DEK)
-
-While Microsoft manages the DEKs, SSE has an option to use customer-managed KEKs that can be stored in Key Vault. This enables the rotation of keys in Azure Key Vault as per the appropriate compliance policies. When keys are rotated, Azure Storage re-encrypts the Account Encryption Key for that storage account. This does not result in re-encryption of all data and there is no other action required.
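The KEK/DEK pattern above can be sketched as follows. The XOR "wrap" is a toy stand-in for a real key-wrap algorithm and must not be used in practice; the point it illustrates is that rotating the KEK re-wraps only the wrapped key, leaving the encrypted data untouched:

```python
import secrets

# Toy illustration of the KEK/DEK pattern. XOR is a stand-in for a real
# key-wrap algorithm - do not use this for actual encryption.
def wrap(kek: bytes, dek: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(kek, dek))

unwrap = wrap  # XOR is its own inverse

dek = secrets.token_bytes(32)                       # protects the data itself
old_kek, new_kek = secrets.token_bytes(32), secrets.token_bytes(32)

stored = wrap(old_kek, dek)                         # what the service persists
# KEK rotation: unwrap with the old KEK, re-wrap with the new one.
# No bulk data is re-encrypted at any point.
rewrapped = wrap(new_kek, unwrap(old_kek, stored))
```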
-
-SSE is used for managed disks, but customer-managed keys are not supported. Encryption of managed disks can be done using Azure Disk Encryption with customer-managed KEKs in Key Vault.
-
-## Next Steps
-
-Review the article on [Identity Federation](identity-federation.md)
-
-Review additional Azure Key Vault documentation and tutorials in the [Reference Library](reference-library.md)
azure-australia Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/azure-policy.md
- Title: Security compliance with Azure Policy and Azure Blueprints
-description: Ensuring compliance and enforcing security with Azure Policy and Azure Blueprints for Australian Government agencies as it relates to the ASD ISM and Essential 8
--- Previously updated : 07/22/2019---
-# Security compliance with Azure Policy and Azure Blueprints
-
-The challenge of enforcing governance within your IT environment, whether it be an on-premises, cloud native or a hybrid environment, exists for all organisations. A robust technical governance framework needs to be in place to ensure your Microsoft Azure environment conforms with design, regulatory, and security requirements.
-
-For Australian Government agencies, the key controls to consider when assessing risk are in the [Australian Cyber Security Centre (ACSC) Information Security Manual](https://acsc.gov.au/infosec/ism/index.htm) (ISM). The majority of controls detailed within the ISM require the application of technical governance to be effectively managed and enforced. It is important you have the appropriate tools to evaluate and enforce configuration in your environments.
-
-Microsoft Azure provides two complementary services to assist with these challenges: Azure Policy and Azure Blueprints.
-
-## Azure Policy
-
-Azure Policy enables the application of the technical elements of an organisation's IT governance. Azure Policy contains a constantly growing library of built-in policies. Each Policy enforces rules and effects on the targeted Azure Resources.
-
-Once a policy is assigned to resources, the overall compliance against that policy can be evaluated, and be remediated if necessary.
-
-This library of built-in Azure Policies enables an organisation to quickly enforce the types of controls found in the ACSC ISM. Examples of controls include:
-
-* Monitoring virtual machines for missing system updates
-* Auditing accounts with elevated permissions for multi-factor authentication
-* Identifying unencrypted SQL Databases
-* Monitoring the use of custom Azure role-based access control (Azure RBAC)
-* Restricting the Azure regions that resources can be created in
-
-If governance or regulatory controls are not met by a built-in Azure Policy definition, a custom definition can be created and assigned. All Azure Policy definitions are defined in JSON and follow a standard [definition structure](../governance/policy/concepts/definition-structure.md). Existing Azure Policy definitions can also be duplicated and used to form the basis of a custom Policy definition.
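For example, the `policyRule` portion of a custom definition restricting deployment locations could be built as the following JSON, following the standard if/then definition structure (the region names are illustrative):

```python
import json

# Sketch: a custom Azure Policy rule denying resources created outside
# the listed regions, per the standard "if"/"then" definition structure.
policy_rule = {
    "if": {
        "not": {
            "field": "location",
            "in": ["australiaeast", "australiasoutheast"],
        }
    },
    "then": {"effect": "deny"},
}

print(json.dumps(policy_rule, indent=2))
```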
-
-Assigning individual Azure Policies to resources, especially in complex environments or in environments with strict regulatory requirements, can create large overhead for your administrators. To assist with these challenges, a set of Azure Policies can be grouped together to form an Azure Policy Initiative. Policy Initiatives are used to combine related Azure policies that, when applied together as a group, form the basis of a specific security or compliance objective. Microsoft is adding built-in Azure Policy Initiative definitions, including definitions designed to meet specific regulatory requirements:
-
-![Regulatory Compliance Policy Initiatives](media/regulatory-initiatives.png)
-
-All Azure Policies and Initiatives are assigned to an assignment scope. This scope is defined at either the Azure Subscription, Azure Management Group, or Azure Resource Group levels. Once the required Azure Policies or Policy Initiatives have been assigned, an organisation will be able to enforce the configuration requirements on all newly created Azure resources.
-
-Assigning a new Azure Policy or Initiative will not affect existing Azure resources. Azure Policy can, however, enable an organisation to view the compliance of existing Azure resources. Any resources that have been identified as non-compliant can be remediated at the organisation's discretion.
-
-### Azure Policy and initiatives in action
-
-The available built-in Azure Policy and Initiative definitions can be found under the Definition node in the Policy section of the Azure portal:
-
-![Built-In Azure Policy Definitions](media/policy-definitions.png)
-
-Using the library of built-in definitions, you can quickly search for Policies that meet an organisational requirement, review the policy definition, and assign the Policy to the appropriate resources. For example, the ISM requires multi-factor authentication (MFA) for all privileged users, and for all users with access to important data repositories. In Azure Policy you can search for "MFA" amongst the Azure Policy definitions:
-
-![Azure AD MFA Policies](media/mfa-policies.png)
-
-Once a suitable policy is identified, you assign the policy to the desired scope. If there is no built-in policy that meets your requirements, you can duplicate the existing policy and make the desired changes:
-
-![Duplicate existing Azure Policy](media/duplicate-policy.png)
-
-Microsoft also provides a collection of Azure Policy samples on [GitHub](https://github.com/Azure/azure-policy) as a 'quickstart' for you to build custom Azure Policies. These Policy samples can be copied directly into the Azure Policy editor within the Azure portal.
-
-When creating Azure Policy Initiatives, you can sort through the list of available policy definitions, both built-in and custom, and add the required definitions.
-
-For instance, you could search through the list of available Azure Policy definitions for all of the policies related to Windows virtual machines, and then add those definitions to an Initiative designed to enforce recommended virtual machine hardening practices:
-
-![List of Azure Policies](media/initiative-definitions.png)
-
-While assigning an Azure Policy or Policy Initiative to an assignment scope, it is possible for you to exclude Azure resources from the effects of the Policies by excluding either Azure Management Groups or Azure Resource Groups.
-
-### Real-time enforcement and compliance assessment
-
-Azure Policy compliance scans of in-scope Azure resources are undertaken when the following conditions are met:
-
-* When an Azure Policy or Azure Policy Initiative is assigned
-* When the scope of an existing Azure Policy or Initiative is changed
-* On demand via the API up to a maximum of 10 scans per hour
-* Once every 24 hours - the default behaviour
-
-A policy compliance scan for a single Azure resource is undertaken 15 minutes after a change has been made to the resource.
-
-An overview of the Azure Policy compliance of resources can be reviewed within the Azure portal via the Policy Compliance dashboard:
-
-![Azure Policy compliance score](media/simple-compliance.png)
-
-The overall resource compliance percentage figure is an aggregate of the compliance of all in-scope deployed resources against all of your assigned Azure Policies. This allows you to identify the resources within an environment that are non-compliant and devise the plan to best remediate these resources.
-
-The Policy Compliance dashboard also includes the change history for each resource. If a resource is identified as no longer being compliant with assigned policy, and automatic remediation is not enabled, you can view who made the change, what was changed, and when the changes were made to that resource.
-
-## Azure Blueprints
-
-Azure Blueprints extend the capability of Azure Policy by combining them with:
-
-* Azure RBAC
-* Azure Resource Groups
-* [Azure Resource Manager Templates](../azure-resource-manager/templates/syntax.md)
-
-Blueprints allow for the creation of environment designs that deploy Azure resources from Resource Manager templates, configure Azure RBAC, and enforce and audit configuration by assigning Azure Policy. Blueprints form an editable and redeployable environment template. Once the blueprint has been created, it can then be assigned to an Azure Subscription. Once assigned, all of the Azure resources defined within the blueprint will be created and the Azure Policies applied. The deployment and configuration of resources defined in an Azure blueprint can be monitored from the Azure Blueprints console in the Azure portal.
-
-Azure Blueprints that have been edited must be republished in the Azure portal. Each time a Blueprint is republished, the version number of the Blueprint is incremented. The version number allows you to determine which specific version of a Blueprint has been deployed to an organisation's Azure Subscriptions. If desired, the currently assigned version of the Blueprint can be updated to the latest version.
-
-Resources deployed using an Azure blueprint can be configured with [Azure Resource Locks](../azure-resource-manager/management/lock-resources.md) at the time of deployment. Resource locks prevent resources from being accidentally modified or deleted.
-
-Microsoft is developing Azure Blueprints templates for a range of industries and regulatory requirements. The current library of available Azure Blueprints definitions can be viewed in the Azure portal or the [Azure Security and Compliance Blueprint](https://servicetrust.microsoft.com/ViewPage/BlueprintOverview/) page in the Service Trust Portal.
-
-### Azure Blueprints artifacts
-
-To create an Azure Blueprint, you can start with a blank Blueprint template, or use one of the existing sample Blueprints as a starting point. You can add artifacts to the Blueprint that will be configured as part of deployment:
-
-![Azure Blueprints Artifacts](media/blueprint-artifacts.png)
-
-These artifacts could include the Azure Resource Group and Resources and associated Azure Policy and Policy Initiatives to enforce the configuration required for your environment to be compliant with your regulatory requirements, for example, the ISM controls for system hardening.
-
-Each of these artifacts can also be configured with parameters. These values are provided when the Blueprint has been assigned to an Azure subscription and deployed. Parameters allow for a single Blueprint to be created and used to deploy resources into different environments without having to edit the underlying Blueprint.
-
-Microsoft is developing Azure PowerShell and CLI cmdlets to create and manage Azure Blueprints with the intention that a Blueprint could be maintained and deployed by an organisation via a CI/CD pipeline.
-
-## Next steps
-
-This article explained how governance and security can be enforced with Azure Policy and Azure Blueprints. Now that you've been exposed at a high level, learn how to use each service in more detail:
-
-* [Azure Policy Overview](../governance/policy/overview.md)
-* [Azure Blueprints Overview](https://azure.microsoft.com/services/blueprints/)
-* [Azure Policy Samples](../governance/policy/samples/index.md)
-* [Azure Policy Samples Repository](https://github.com/Azure/azure-policy)
-* [Azure Policy Definition Structure](../governance/policy/concepts/definition-structure.md)
-* [Azure Policy Effects](../governance/policy/concepts/effects.md)
azure-australia Gateway Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/gateway-egress-traffic.md
- Title: Controlling egress traffic in Azure Australia
-description: Key elements of controlling egress traffic in Azure to meet Australian Government requirements for Secure Internet Gateways
--- Previously updated : 07/29/2019---
-# Controlling egress traffic in Azure Australia
-
-A fundamental component of securing ICT systems is controlling network traffic. Restricting communication to only the traffic necessary for a system to function reduces the potential for compromise. Visibility and control over the external systems that your applications and services communicate with helps detect compromised systems, and attempted or successful data exfiltration. This article provides information on how outbound (egress) network traffic works within Azure and provides recommendations for implementing network security controls for an internet connected system that aligns with the Australian Cyber Security Centre (ACSC) Consumer Guidance and the intent of the ACSC's Information Security Manual (ISM).
-
-## Requirements
-
-The overall security requirements for Commonwealth systems are defined in the ISM. To assist Commonwealth entities in implementing network security, the ACSC has published _ACSC Protect: Implementing Network Segmentation and Segregation_, and to assist with securing systems in Cloud environments the ACSC has published _Cloud Computing Security for Tenants_.
-
-The ACSC documents outline the context for implementing network security and controlling traffic, and provide practical recommendations for network design and configuration.
-
-The following key requirements for controlling egress traffic in Azure have been identified in the ACSC documents.
-
-Description|Source
- |
-**Implement Network Segmentation and Segregation**, for example, use an n-tier architecture, using host-based firewalls and network access controls to limit inbound and outbound network connectivity to only required ports and protocols.| [Cloud Computing for Tenants](https://www.cyber.gov.au/acsc/view-all-content/publications/cloud-computing-security-tenants)
-**Implement adequately high bandwidth, low latency, reliable network connectivity** between the tenant (including the tenant's remote users) and the cloud service to meet the tenant's availability requirements | [ACSC Protect: Implementing Network Segmentation and Segregation](https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation)
-**Apply technologies at more than just the network layer**. Each host and network should be segmented and segregated, where possible, at the lowest level that can be practically managed. In most cases, this applies from the data link layer up to and including the application layer; however, in sensitive environments, physical isolation may be appropriate. Host-based and network-wide measures should be deployed in a complementary manner and be centrally monitored. Just implementing a firewall or security appliance as the only security measure is not sufficient. |[ACSC Protect: Implementing Network Segmentation and Segregation](https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation)
-**Use the principles of least privilege and need-to-know**. If a host, service, or network doesn't need to communicate with another host, service, or network, it should not be allowed to. If a host, service, or network only needs to talk to another host, service, or network on a specific port or protocol, it should be restricted to only those ports and protocols. Adopting these principles across a network will complement the minimisation of user privileges and significantly increase the overall security posture of the environment. |[ACSC Protect: Implementing Network Segmentation and Segregation](https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation)
-**Separate hosts and networks based on their sensitivity or criticality to business operations**. This may include using different hardware or platforms depending on different security classifications, security domains, or availability/integrity requirements for certain hosts or networks. In particular, separate management networks and consider physically isolating out-of-band management networks for sensitive environments. |[ACSC Protect: Implementing Network Segmentation and Segregation](https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation)
-**Identify, authenticate, and authorise access by all entities to all other entities**. All users, hosts, and services should have their access to all other users, hosts, and services restricted to only those required to perform their designated duties or functions. All legacy or local services which bypass or downgrade the strength of identification, authentication, and authorisation services should be disabled wherever possible and have their use closely monitored. |[ACSC Protect: Implementing Network Segmentation and Segregation](https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation)
-**Implement allowlisting of network traffic instead of deny listing**. Only permit access for known good network traffic (traffic that is identified, authenticated, and authorised), rather than denying access to known bad network traffic (for example, blocking a specific address or service). Allowlists result in a superior security policy to deny lists, and significantly improve your capacity to detect and assess potential network intrusions. |[ACSC Protect: Implementing Network Segmentation and Segregation](https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation)
-**Defining an allowlist of permitted websites and blocking all unlisted websites** effectively removes one of the most common data delivery and exfiltration techniques used by an adversary. If users have a legitimate requirement to access numerous websites, or a rapidly changing list of websites; you should consider the costs of such an implementation. Even a relatively permissive allowlist offers better security than relying on deny lists, or no restrictions at all, while still reducing implementation costs. An example of a permissive allowlist could be permitting the entire Australian subdomain, that is '*.au', or allowing the top 1,000 sites from the Alexa site ranking (after filtering Dynamic Domain Name System (DDNS) domains and other inappropriate domains).| [Australian Government Information Security Manual (ISM)](https://www.cyber.gov.au/ism)
-|
-
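The permissive-allowlist idea in the table above can be sketched as a simple matcher. The entry formats (wildcard suffixes such as `*.au`, plus exact hostnames) follow the ISM example; the function and list below are illustrative:

```python
# Sketch: evaluate an egress domain allowlist. Entries are exact hostnames
# or wildcard suffixes such as "*.au"; everything unlisted is denied.
def egress_allowed(host: str, allowlist: list) -> bool:
    for entry in allowlist:
        if entry.startswith("*."):
            if host == entry[2:] or host.endswith(entry[1:]):
                return True
        elif host == entry:
            return True
    return False

# Permit the Australian subdomain plus one explicitly required endpoint
rules = ["*.au", "login.microsoftonline.com"]
```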
-This article provides information and recommendations on how network traffic leaving your Azure environment is controlled. It covers systems deployed in Azure using both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS).
-
-The [Gateway Ingress Traffic](gateway-ingress-traffic.md) article addresses network traffic entering your Azure environment and is the companion to this article for full network control coverage.
-
-## Architecture
-
-To appropriately control egress traffic, when you design and implement network security, you must first understand how egress network traffic works within Azure across both IaaS and PaaS. This section provides an overview of how outbound traffic generated by a resource hosted in Azure is processed, and the security controls available to restrict, and control that traffic.
-
-### Architecture components
-
-The architectural diagram shown here depicts the possible paths that network traffic can take when exiting a system that is deployed into a subnet in a virtual network. Traffic in a virtual network is managed and governed at a subnet level, with routing and security rules applying to the resources contained within. The components related to egress traffic are divided into Systems, Effective Routes, Next Hop types, Security Controls, and PaaS egress.
-
-![Architecture](media/egress-traffic.png)
-
-### Systems
-
-Systems are the Azure resources and related components that generate outbound traffic within an IP subnet that is part of a virtual network.
-
-| Component | Description |
-| | |
-|Virtual Network (VNet) | A VNet is a foundational resource within Azure that provides a platform and boundary for deploying resources and enabling communication. The VNet exists within an Azure Region and defines the IP Address Space and Routing boundaries for VNet integrated resources such as Virtual Machines.|
-|Subnet | A subnet is an IP address range that is created within a VNet. Multiple subnets can be created within a VNet for network segmentation.|
-|Network Interface| A network interface is a resource that exists in Azure. It is attached to a Virtual Machine and assigned a private, non-Internet routable IP address from the subnet that it is associated with. This IP address is dynamically or statically assigned through Azure Resource Manager.|
-|Public IPs| A Public IP is a resource that reserves one of the Microsoft owned Public, Internet-Routable IP Addresses from the specified region for use within the virtual network. It can be associated with a specific Network Interface or PaaS resource, which enables the resource to communicate with the Internet, ExpressRoute, and other PaaS systems.|
-|
-
-### Routes
-
-The path that egress traffic takes is dependent on the effective routes for that resource, which is the resultant set of routes determined by the combination of routes learned from all possible sources and the application of Azure routing logic.
-
-| Component | Description |
-| | |
-|System Routes| Azure automatically creates system routes and assigns the routes to each subnet in a virtual network. System routes cannot be created or removed, but some can be overridden with custom routes. Azure creates default system routes for each subnet, and adds additional optional default routes to specific subnets, or every subnet, when specific Azure capabilities are utilised.|
-|Service Endpoints| Service endpoints provide a direct, private egress connection from a subnet to a specific PaaS capability. Service endpoints, which are only available for a subset of PaaS capabilities, provide increased performance and security for resources in a VNet accessing PaaS.|
-|Route Tables| A route table is a resource in Azure that can be created to specify User-Defined Routes (UDRs) that can complement or override system routes or routes learned via Border Gateway Protocol. Each UDR specifies a network, a network mask, and a next hop. A route table can be associated to a subnet and the same route table can be associated to multiple subnets, but each subnet can be associated with at most one route table.|
-|Border Gateway Protocol (BGP)| BGP is an inter-autonomous system routing protocol. An autonomous system is a network or group of networks under a common administration and with common routing policies. BGP is used to exchange routing information between autonomous systems. BGP can be integrated into virtual networks through virtual network gateways.|
-|
-
-### Next hop types defined
-
-Each route within Azure includes the network range and associated subnet mask and the next hop, which determines how the traffic is processed.
-
-Next Hop Type | Description
-- | -
-**Virtual Network** | Routes traffic between address ranges within the address space of a virtual network. Azure creates a route with an address prefix that corresponds to each address range defined within the address space of a virtual network. If the virtual network address space has multiple address ranges defined, Azure creates an individual route for each address range. Azure automatically routes traffic between subnets within a VNet using the routes created for each address range.
-**VNet peering** | When a virtual network peering is created between two virtual networks, a route is added for each address range of each virtual network to the virtual network it is peered to. Traffic is routed between the peered virtual networks in the same way as subnets within a virtual network.
-**Virtual network gateway** | One or more routes with virtual network gateway listed as the next hop type are added when a virtual network gateway is added to a virtual network. The routes included are those that are configured within the local network gateway resource and any routes learned via BGP.
-**Virtual appliance** | A virtual appliance typically runs a network application, such as a firewall. The virtual appliance allows additional processing of the traffic to occur, such as filtering, inspection, or address translation. Each route with the virtual appliance hop type must also specify a next hop IP address.
-**VirtualNetworkServiceEndpoint** | The public IP addresses for a specific service are added as routes to a subnet with a next hop of VirtualNetworkServiceEndpoint when a service endpoint is enabled. Service endpoints are enabled for individual services on individual subnets within a virtual network. The public IP addresses of Azure services change periodically. Azure manages the addresses in the route table automatically when the addresses change.
-**Internet** | Traffic with a next hop of Internet will exit the virtual network and automatically be translated to a Public IP address either from a dynamic pool available in the associated Azure region, or by using a Public IP address configured for that resource. If the destination address is for one of Azure's services, traffic is routed directly to the service over Azure's backbone network, rather than routing the traffic to the Internet. Traffic between Azure services does not traverse the Internet, regardless of which Azure region the virtual network exists in, or which Azure region an instance of the Azure service is deployed in.
-**None** | Traffic with a next hop of none is dropped. Azure creates system default routes for reserved address prefixes with none as the next hop type. Routes with a next hop of none can also be added using route tables to prevent traffic from being routed to specific networks.
-|
-
-### Security controls
-
-Control | Description
-- | -
-**Network Security Groups (NSGs)** | NSGs control traffic into and out of virtual network resources in Azure. NSGs apply rules for the traffic flows that are permitted or denied, which includes traffic within Azure and between Azure and external networks such as on-premises or the Internet. NSGs are applied to subnets within a virtual network or to individual network interfaces.
-**Azure Firewall** | Azure Firewall is a managed, cloud-based network security service that protects Azure virtual network resources. It is a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. Azure Firewall can be configured with traditional network filtering rules based on IP addresses, protocols, and ports, but also supports filtering based on Fully Qualified Domain Names (FQDN), Service Tags and inbuilt Threat Intelligence.
-**Network Virtual Appliance (NVA)** | Network Virtual Appliances are virtual machine images that provide networking, security, and other functions within Azure. NVAs support network functionality and services in the form of VMs in virtual networks and deployments. NVAs can be used to address specific requirements, integrate with management and operational tools, or to provide consistency with existing products. Azure supports a broad list of third-party network virtual appliances including web application firewalls (WAF), firewalls, gateways/routers, application delivery controllers (ADC), and WAN optimizers.
-**Service endpoint policies (Preview)** | Virtual network service endpoint policies allow you to filter virtual network traffic to Azure services, allowing only specific Azure service resources, over service endpoints. Endpoint policies provide granular access control for virtual network traffic to Azure services.
-**Azure Policy** | Azure Policy is a service in Azure for creating, assigning, and managing policies. These policies use rules to control the types of resources that can be deployed and the configuration of those resources. Policies can be used to enforce compliance by preventing resources from being deployed if they do not meet requirements or can be used for monitoring to report on compliance status.
-|
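The NSG behaviour described in the table above (rules evaluated in ascending priority order, first match wins, with a catch-all default rule at the end) can be sketched as a small model. This is an illustrative sketch only, not the Azure SDK; the rule names, priorities, and simplified string matching of ports and prefixes are assumptions made for the example.

```python
# Illustrative model of NSG rule evaluation: ascending priority, first match
# decides, and a final deny rule catches anything unmatched (as the built-in
# DenyAllOutBound rule does at priority 65500).

from dataclasses import dataclass

@dataclass
class NsgRule:
    name: str
    priority: int   # 100-4096 for custom rules; 65000+ for the defaults
    direction: str  # "Outbound" or "Inbound"
    access: str     # "Allow" or "Deny"
    dest_port: str  # "*" or a port number as a string (simplified)
    dest_prefix: str  # "*", a service tag such as "Internet", or a CIDR

def evaluate(rules, direction, dest_port, dest_prefix):
    """Return (rule name, access) of the first matching rule."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.direction != direction:
            continue
        if rule.dest_port not in ("*", dest_port):
            continue
        if rule.dest_prefix not in ("*", dest_prefix):
            continue
        return rule.name, rule.access
    return None, "Deny"  # unreachable when a catch-all rule is present

rules = [
    NsgRule("allow-https-to-internet", 200, "Outbound", "Allow", "443", "Internet"),
    NsgRule("deny-all-outbound", 4096, "Outbound", "Deny", "*", "*"),
]

print(evaluate(rules, "Outbound", "443", "Internet"))  # allowed by priority 200
print(evaluate(rules, "Outbound", "80", "Internet"))   # caught by the deny rule
```

The deny rule at priority 4096 mirrors the recommended practice of adding an explicit low-priority deny so that egress is restricted to the flows deliberately allowed above it.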
-
-### PaaS egress
-
-The majority of PaaS resources do not generate egress traffic; they either respond to inbound requests (such as an Application Gateway, Storage, SQL Database, etc.) or relay data from other resources (such as Service Bus and Azure Relay). Network communication between PaaS resources, such as App Services connecting to Storage or SQL Database, is controlled and contained by Azure and secured through identity and other resource configuration controls rather than network segmentation or segregation.
-
-PaaS resources deployed into a virtual network receive dedicated IP addresses and are subject to any routing controls and NSGs in the same way as other resources in the virtual network. PaaS resources that do not exist within a virtual network will utilise a pool of IP addresses that are shared across all instances of the resource, which are either published through Microsoft documentation or can be determined through Azure Resource Manager.
-
-## General guidance
-
-To design and build secure solutions within Azure, it is critical to understand and control the network traffic so that only identified and authorised communication can occur. The intent of this guidance and the specific component guidance in later sections is to describe the tools and services that can be utilised to apply the principles outlined in the [ACSC Protect: Implementing Network Segmentation and Segregation](https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation) across Azure workloads. This includes detailing how to create a virtual architecture for securing resources when it is not possible to apply the same traditional physical and network controls that are possible in an on-premises environment.
-
-### Guidance
-
-* Limit the number of egress points for virtual networks
-* Override the system default route for all subnets that do not need direct outbound communication to the Internet
-* Design and implement a complete network architecture to identify and control all ingress and egress points to Azure resources
-* Consider utilising a Hub and Spoke Network Design for virtual networks as discussed in the Microsoft Virtual Data Centre (VDC) documentation
-* Utilise products with inbuilt security capabilities for outbound connections to the Internet (for example, Azure Firewall, Network Virtual Appliances or Web Proxies)
-* Use identity controls such as Azure role-based access control, Conditional Access, and Multi-Factor Authentication (MFA) to limit network configuration privileges
-* Implement Locks to prevent modification or deletion of key elements of the network configuration
-* Deploy PaaS in a VNet integrated configuration for increased segregation and control
-* Implement ExpressRoute for connectivity with on-premises networks
-* Implement VPNs for integration with external networks
-* Utilise Azure Policy to restrict the regions and resources to only those that are necessary for system functionality
-* Utilise Azure Policy to enforce baseline security configuration for resources
-* Leverage Network Watcher and Azure Monitor for logging, auditing, and visibility of network traffic within Azure
-
-### Resources
-
-Item | Link
-- | -
-_Australian Regulatory and Policy Compliance Documents including Consumer Guidance_ | [https://aka.ms/au-irap](https://aka.ms/au-irap)
-_Azure Virtual Data Centre_ | [https://docs.microsoft.com/azure/architecture/vdc/networking-virtual-datacenter](/azure/architecture/vdc/networking-virtual-datacenter)
-_ACSC Network Segmentation_ | [https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation](https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation)
-_ACSC Cloud Security for Tenants_ | [https://www.cyber.gov.au/acsc/view-all-content/publications/cloud-computing-security-tenants](https://www.cyber.gov.au/acsc/view-all-content/publications/cloud-computing-security-tenants)
-_ACSC Information Security Manual_ | [https://acsc.gov.au/infosec/ism/index.htm](https://acsc.gov.au/infosec/ism/index.htm)
-|
-
-## Component guidance
-
-This section provides further guidance on the individual components that are relevant to egress traffic for systems deployed in Azure. Each section describes the intent of the specific component with links to documentation and configuration guides that can be used to assist with design and build activities.
-
-### Systems security
-
-All communication to resources within Azure passes through the Microsoft maintained network infrastructure, which provides connectivity and security functionality. A range of protections are automatically put in place by Microsoft to protect the Azure platform and network infrastructure, and additional capabilities are available as services within Azure to control network traffic and establish network segmentation and segregation.
-
-### Virtual Network (VNet)
-
-Virtual networks are one of the fundamental building blocks for networking in Azure. Virtual networks define an IP address space and routing boundary to be used across a variety of systems. Virtual networks are divided into subnets and all subnets within a Virtual Network have a direct network route to each other. By using virtual network gateways (ExpressRoute or VPN), systems within a virtual network can integrate with on-premises and external environments. Through Azure provided Network Address Translation (NAT) and Public IP address allocation, systems can connect to the Internet or other Azure Regions and Services. Understanding virtual networks and the associated configuration parameters and routing is crucial in understanding and controlling egress network traffic.
-
-As virtual networks form the base address space and routing boundary within Azure, they can be used as a primary building block of network segmentation and segregation.
-
-| Resource | Link |
-| | |
-| *Virtual Networks Overview* | [https://docs.microsoft.com/azure/virtual-network/virtual-networks-overview](../virtual-network/virtual-networks-overview.md) |
-| *Plan Virtual Networks How-to Guide* | [https://docs.microsoft.com/azure/virtual-network/virtual-network-vnet-plan-design-arm](../virtual-network/virtual-network-vnet-plan-design-arm.md) |
-| *Create a Virtual Network Quickstart* | [https://docs.microsoft.com/azure/virtual-network/quick-create-portal](../virtual-network/quick-create-portal.md)
-|
-
-### Subnet
-
-Subnets are a crucial component for network segmentation and segregation within Azure. Subnets can be used to provide separation between systems. A subnet is the resource within a virtual network where NSGs, Route Tables, and service endpoints are applied. Subnets can be used as both source and destination addresses for firewall rules and access-control lists.
-
-The subnets within a virtual network should be planned to meet the requirements of workloads and systems. Individuals involved in the design or implementation of subnets should refer to the ACSC guidelines for network segmentation to determine how systems should be grouped together within a subnet.
-
-|Resource|Link|
-|||
-|*Add, change, or delete a virtual network subnet* | [https://docs.microsoft.com/azure/virtual-network/virtual-network-manage-subnet](../virtual-network/virtual-network-manage-subnet.md)
-|
-
-### Network interface
-
-Network interfaces are the source for all egress traffic from a virtual machine. Network Interfaces enable the configuration of IP Addressing and can be used to apply NSGs or to route traffic through an NVA. The Network Interfaces for virtual machines should be planned and configured appropriately to align with overall network segmentation and segregation objectives.
-
-|Resource|Link|
-|||
-|*Create, Change, or Delete a Network Interface* | [https://docs.microsoft.com/azure/virtual-network/virtual-network-network-interface](../virtual-network/virtual-network-network-interface.md) |
-|*Network Interface IP Addressing* | [https://docs.microsoft.com/azure/virtual-network/private-ip-addresses](../virtual-network/ip-services/private-ip-addresses.md)
-|
-
-### VNet integrated PaaS
-
-PaaS can provide enhanced functionality and availability and reduce management overhead but must be secured appropriately. To increase control, enforce network segmentation, or to provide a secure egress point for applications and services, many PaaS capabilities can be integrated with a virtual network.
-
-To use PaaS as an integrated part of a system or application architecture, Microsoft provides multiple mechanisms for deploying PaaS into a virtual network. The deployment methodology can help restrict access while providing connectivity and integration with internal systems and applications. Examples include App Service Environments, SQL Managed Instance, and more.
-
-When deploying PaaS into a virtual network where routing and NSG controls have been implemented, it is crucial to understand the specific communication requirements of the resource, including management access from Microsoft services and the path that communications traffic will take when replying to incoming requests from these services.
-
-| Resource | Link |
-| | |
-| *Virtual network integration for Azure services* | [https://docs.microsoft.com/azure/virtual-network/virtual-network-for-azure-services](../virtual-network/virtual-network-for-azure-services.md) |
-| *Integrate your app with an Azure Virtual Network How-to guide* | [https://docs.microsoft.com/azure/app-service/web-sites-integrate-with-vnet](../app-service/overview-vnet-integration.md)
-|
-
-### Public IP
-
-Public IP addresses are used when communicating outside a virtual network. This includes PaaS resources and any routes with a next hop of Internet. Commonwealth entities should plan the allocation of Public IP addresses carefully and only assign them to resources where there is a genuine requirement. As a general design practice, Public IP addresses should be allocated to controlled egress points for the virtual network such as Azure Firewall, VPN Gateway, or Network Virtual Appliances.
-
-|Resource|Link|
-|||
-|*Public IP Addresses Overview* | [https://docs.microsoft.com/azure/virtual-network/virtual-network-ip-addresses-overview-arm#public-ip-addresses](../virtual-network/ip-services/public-ip-addresses.md#public-ip-addresses) |
-|*Create, change, or delete a public IP address* | [https://docs.microsoft.com/azure/virtual-network/virtual-network-public-ip-address](../virtual-network/ip-services/virtual-network-public-ip-address.md)
-|
-
-## Effective routes
-
-Effective routes are the resultant set of routes determined by the combination of system routes, service endpoints, Route Tables, and BGP and the application of Azure routing logic. When outbound traffic is sent from a subnet, Azure selects a route based on the destination IP address, using the longest prefix match algorithm. If multiple routes contain the same address prefix, Azure selects the route type, based on the following priority:
-
-1. User-defined route
-2. BGP route
-3. System route
-
-It is important to note that system routes for traffic related to virtual network, virtual network peerings, or virtual network service endpoints, are preferred routes, even if BGP routes are more specific.
-
-Individuals involved in the design or implementation of routing topologies in Azure should understand how Azure routes traffic and develop an architecture that balances the necessary functionality of systems with the required security and visibility. Care should be taken to plan the environment appropriately to avoid excessive interventions and exceptions to routing behaviours, as these increase complexity and may make troubleshooting and fault finding more difficult and time-consuming.
-
-| Resource |
-| |
-| [View effective routes](../virtual-network/manage-route-table.md#view-effective-routes)
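The selection logic described above can be sketched as a small model: longest prefix match first, with route origin (user-defined, then BGP, then system) as the tie-breaker between routes of equal prefix length. This is an illustrative sketch only, not an Azure API; the function and route list are hypothetical, and the model deliberately omits the preference Azure gives to virtual network, peering, and service endpoint system routes over more-specific BGP routes.

```python
# Illustrative model of effective route selection: longest prefix match,
# then route origin priority (User-defined > BGP > System) as tie-breaker.

import ipaddress

ORIGIN_PRIORITY = {"User": 0, "BGP": 1, "System": 2}

def select_route(routes, destination):
    """routes: list of (prefix, origin, next_hop) tuples; returns the winner."""
    dest = ipaddress.ip_address(destination)
    candidates = [
        (prefix, origin, next_hop)
        for prefix, origin, next_hop in routes
        if dest in ipaddress.ip_network(prefix)
    ]
    # Longest prefix wins; among equal prefixes, the origin priority decides.
    return max(
        candidates,
        key=lambda r: (ipaddress.ip_network(r[0]).prefixlen, -ORIGIN_PRIORITY[r[1]]),
    )

routes = [
    ("0.0.0.0/0", "System", "Internet"),          # default system route
    ("0.0.0.0/0", "User", "VirtualAppliance"),    # UDR forcing traffic to an NVA
    ("10.0.0.0/16", "System", "VirtualNetwork"),  # intra-VNet system route
]

print(select_route(routes, "10.0.1.4"))    # intra-VNet: longest prefix wins
print(select_route(routes, "203.0.113.9")) # equal /0 prefixes: the UDR wins
```

The second lookup shows why a single 0.0.0.0/0 UDR is enough to force all Internet-bound traffic through an inspection appliance: it displaces the system default route without touching the more specific intra-VNet routes.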
-
-### System routes
-
-For [System Routes](../virtual-network/virtual-networks-udr-overview.md#system-routes), individuals involved in the design or implementation of virtual networks should understand the default system routes and the options available to complement or override those routes.
-
-### Service endpoints
-
-Enabling [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) on a subnet provides a direct communication path to the associated PaaS resource. This can provide increased performance and security by restricting the available communication path to just that service. The use of service endpoints does introduce a potential data exfiltration path as the default configuration allows access to all instances of the PaaS service rather than the specific instances required for an application or system.
-
-Commonwealth entities should evaluate the risk associated with providing direct access to the PaaS resource including the likelihood and consequence of the path being misused.
-
-To reduce potential risks associated with service endpoints, implement service endpoint policies where possible or consider enabling service endpoints on an Azure Firewall or NVA subnet and routing traffic from specific subnets through it where additional filtering, monitoring, or inspection can be applied.
-
-|Resource|Link|
-|||
-|*Tutorial: Restrict network access to PaaS resources with virtual network service endpoints using the Azure portal* |[https://docs.microsoft.com/azure/virtual-network/tutorial-restrict-network-access-to-resources](../virtual-network/tutorial-restrict-network-access-to-resources.md)|
-|
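The difference between a bare service endpoint and one constrained by an endpoint policy can be modelled as a simple filter. This is a conceptual sketch only; the resource IDs are hypothetical, and real endpoint policies are evaluated by the Azure platform rather than by application code.

```python
# Conceptual model: a bare service endpoint reaches every instance of the
# service, while an endpoint policy restricts traffic to listed resources.

def endpoint_allows(resource_id, policy_resource_ids=None):
    """Return whether traffic to a service instance would be permitted."""
    if policy_resource_ids is None:
        return True  # no policy: any instance of the service is reachable
    return resource_id in policy_resource_ids

# Hypothetical resource IDs for illustration only.
approved = {"/subscriptions/aaa/resourceGroups/rg1/providers/Microsoft.Storage/storageAccounts/entityapproved"}
untrusted = "/subscriptions/bbb/resourceGroups/rg2/providers/Microsoft.Storage/storageAccounts/exfil"

print(endpoint_allows(untrusted))            # True: the exfiltration path exists
print(endpoint_allows(untrusted, approved))  # False: the policy closes it
```

The first call illustrates the exfiltration risk described above; the second shows how an endpoint policy narrows the endpoint to approved resources only.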
-
-### Route tables
-
-Route tables provide an administrator configured mechanism for controlling network traffic within Azure. Route tables can be utilised to forward traffic to an Azure Firewall or NVA, to connect directly to external resources, or to override Azure system routes. Route tables can also be used to prevent networks learned through a virtual network gateway from being made available to resources in a subnet by disabling virtual network gateway route propagation.
-
-|Resource|Link|
-|||
-|*Routing Concepts - custom routes* |[https://docs.microsoft.com/azure/virtual-network/virtual-networks-udr-overview#custom-routes](../virtual-network/virtual-networks-udr-overview.md#custom-routes)|
-|*Tutorial: Route network traffic* |[https://docs.microsoft.com/azure/virtual-network/tutorial-create-route-table-portal](../virtual-network/tutorial-create-route-table-portal.md)|
-|
-
-### Border Gateway Protocol (BGP)
-
-BGP can be utilised by virtual network gateways to dynamically exchange routing information with on-premises or other external networks. BGP applies to a virtual network when configured through an ExpressRoute virtual network gateway over ExpressRoute private peering and when enabled on an Azure VPN Gateway.
-
-Individuals involved in the design or implementation of virtual networks and virtual network gateways in Azure should take time to understand the behaviour and configuration options available for BGP in Azure.
-
-|Resource|Link|
-|||
-|*Routing Concepts: BGP* | [https://docs.microsoft.com/azure/virtual-network/virtual-networks-udr-overview#next-hop-types-across-azure-tools](../virtual-network/virtual-networks-udr-overview.md#next-hop-types-across-azure-tools)|
-|*ExpressRoute routing requirements* | [https://docs.microsoft.com/azure/expressroute/expressroute-routing](../expressroute/expressroute-routing.md)|
-|*About BGP with Azure VPN Gateway* |[https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-bgp-overview](../vpn-gateway/vpn-gateway-bgp-overview.md)|
-|*Tutorial: Configure a site-to-site VPN over ExpressRoute Microsoft peering* |[https://docs.microsoft.com/azure/expressroute/site-to-site-vpn-over-microsoft-peering](../expressroute/site-to-site-vpn-over-microsoft-peering.md)|
-|
-
-## Next hop types
-
-### Virtual Network
-
-Routes with a Next Hop of Virtual Network are added automatically as system routes, but can also be added to user-defined routes to direct traffic back to the virtual network in instances where the system route has been overridden.
-
-### VNet peering
-
-VNet peering enables communication between two disparate virtual networks. VNet peering must be configured on each virtual network, but the virtual networks do not need to be in the same region or subscription, or be associated with the same Azure Active Directory (Azure AD) tenant.
-
-When configuring VNet peering, it is critical that individuals involved in the design or implementation of VNet peering understand the four associated configuration parameters and how they apply to each side of the peer:
-
-1. **Allow virtual network access:** Select **Enabled** (default) to enable communication between the two virtual networks. Enabling communication between virtual networks allows resources connected to either virtual network to communicate with each other with the same bandwidth and latency as if they were connected to the same virtual network.
-2. **Allow forwarded traffic:** Check this box to allow traffic *forwarded* by a network - traffic that didn't originate from the virtual network - to flow to this virtual network through a peering. This setting is fundamental to implementing a hub and spoke network topology.
-3. **Allow gateway transit:** Check this box to allow the peered virtual network to utilise the virtual network gateway attached to this virtual network. *Allow gateway transit* is enabled on the virtual network with the virtual network gateway resource, but only applies if *Use remote gateways* is enabled on the other virtual network.
-4. **Use remote gateways:** Check this box to allow traffic from this virtual network to flow through a virtual network gateway attached to the virtual network being peered with. *Use remote gateways* is enabled on the virtual network without a virtual network gateway and only applies if the *Allow gateway transit* option is enabled on the other virtual network.
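The pairing between *Allow gateway transit* and *Use remote gateways* described in the list above can be modelled as a small validation check. This is an illustrative sketch; the class and function names are hypothetical and do not correspond to an Azure SDK type.

```python
# Illustrative model of the peering parameters above, checking that
# 'Use remote gateways' on one side is matched by 'Allow gateway transit'
# on the other (in either direction).

from dataclasses import dataclass

@dataclass
class PeeringSide:
    allow_vnet_access: bool = True
    allow_forwarded_traffic: bool = False
    allow_gateway_transit: bool = False
    use_remote_gateways: bool = False

def validate_peering(side_a, side_b):
    """Return a list of configuration errors for the peering pair."""
    errors = []
    if side_a.use_remote_gateways and not side_b.allow_gateway_transit:
        errors.append("side A uses remote gateways but side B does not allow gateway transit")
    if side_b.use_remote_gateways and not side_a.allow_gateway_transit:
        errors.append("side B uses remote gateways but side A does not allow gateway transit")
    return errors

# A typical hub-and-spoke pairing: the hub offers its gateway, the spoke uses it.
hub = PeeringSide(allow_forwarded_traffic=True, allow_gateway_transit=True)
spoke = PeeringSide(use_remote_gateways=True)

print(validate_peering(spoke, hub))            # []: a valid pairing
print(validate_peering(spoke, PeeringSide()))  # error: no gateway transit offered
```

The valid case mirrors the hub and spoke topology referenced in the guidance above, where spokes share the hub's virtual network gateway.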
-
-|Resource|Link|
-|||
-| Concepts: Virtual network peering | [https://docs.microsoft.com/azure/virtual-network/virtual-network-peering-overview](../virtual-network/virtual-network-peering-overview.md) |
-| Create, change, or delete a virtual network peering | [https://docs.microsoft.com/azure/virtual-network/virtual-network-manage-peering](../virtual-network/virtual-network-manage-peering.md)|
-|
-
-### Virtual network gateway
-
-Virtual network gateways provide a mechanism for integrating virtual networks with external networks, such as on-premises environments, partner environments, and other cloud deployments. The two types of virtual network gateway are ExpressRoute and VPN.
-
-#### ExpressRoute Gateway
-
-ExpressRoute Gateways provide an egress point from the virtual network to an on-premises environment and should be deployed to meet security, availability, financial, and performance requirements. ExpressRoute Gateways provide a defined network bandwidth and incur usage costs after deployment. Virtual networks can have only one ExpressRoute Gateway, but this can be connected to multiple ExpressRoute circuits and can be leveraged by multiple virtual networks through VNet Peering, allowing bandwidth and connectivity to be shared. Care should be taken when configuring routing between on-premises environments and virtual networks using ExpressRoute Gateways to ensure end-to-end connectivity using known, controlled network egress points. Commonwealth entities using ExpressRoute Gateway over ExpressRoute private peering must also deploy Network Virtual Appliances (NVA) to establish VPN connectivity to the on-premises environment for compliance with the ACSC consumer guidance.
-
-It is important to note that ExpressRoute Gateways have restrictions on the address ranges, communities, and other specific configuration items exchanged through BGP.
-
-| Resource | Link |
-|||
-| ExpressRoute Gateway Overview | [https://docs.microsoft.com/azure/expressroute/expressroute-about-virtual-network-gateways](../expressroute/expressroute-about-virtual-network-gateways.md) |
-| Configure a virtual network gateway for ExpressRoute | [https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md)
-|
-
-#### VPN Gateway
-
-Azure VPN Gateway provides an egress network point from the virtual network to an external network for secure site-to-site connectivity. VPN Gateways provide a defined network bandwidth and incur usage costs after deployment. Commonwealth entities utilising VPN Gateway should ensure that it is configured in accordance with the ACSC consumer guidance. Virtual Networks can have only one VPN Gateway, but this can be configured with multiple tunnels and can be leveraged by multiple virtual networks through VNet Peering, allowing multiple virtual networks to share bandwidth and connectivity. VPN Gateways can be established over the Internet or over ExpressRoute through Microsoft Peering.
-
-| Resource | Link |
-| | |
-| VPN Gateway Overview| [https://docs.microsoft.com/azure/vpn-gateway](../vpn-gateway/index.yml)|
-| Planning and design for VPN Gateway | [https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-plan-design](../vpn-gateway/vpn-gateway-about-vpngateways.md)|
-| Azure VPN Gateway in Azure Australia | [Azure VPN Gateway in Azure Australia](vpn-gateway.md)
-|
-
-### Next hop of virtual appliance
-
-The next hop of virtual appliance provides the ability to process network traffic outside the Azure networking and routing topology applied to virtual networks. Virtual appliances can apply security rules, translate addresses, establish VPNs, proxy traffic, or a range of other capabilities. The next hop of virtual appliance is applied through UDRs in a route table and can be used to direct traffic to an Azure Firewall, individual NVA, or Azure Load Balancer providing availability across multiple NVAs. To use a virtual appliance for routing, the associated network interfaces must be enabled for IP forwarding.
-
-| Resource | Link |
-| | |
-| Routing concepts: Custom Routes | [https://docs.microsoft.com/azure/virtual-network/virtual-networks-udr-overview#custom-routes](../virtual-network/virtual-networks-udr-overview.md#custom-routes) |
-| Enable or Disable IP forwarding | [https://docs.microsoft.com/azure/virtual-network/virtual-network-network-interface#enable-or-disable-ip-forwarding](../virtual-network/virtual-network-network-interface.md#enable-or-disable-ip-forwarding)
-|
-
-### Next hop of VirtualNetworkServiceEndpoint
-
-Routes with a next hop type of VirtualNetworkServiceEndpoint are only added when a service endpoint is configured on a subnet and cannot be manually configured through route tables.
-
-### Next hop of Internet
-
-The next hop Internet is used to reach any resources that use a Public IP address, which includes the Internet as well as PaaS and Azure Services in Azure Regions. A next hop of Internet does not require a default route (0.0.0.0/0) covering all networks; it can instead be used to enable routing paths to specific public services. The next hop of Internet should be used for adding routes to authorised services and capabilities required for system functionality, such as Microsoft management addresses.
-
-Examples of services that can be added using the next hop of Internet are:
-
-1. Key Management Services for Windows activation
-2. App Service Environment management
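As a sketch, the routes for services such as those listed above can be represented as user-defined route entries with a next hop of Internet, carved out of a force-tunnelled subnet whose default route points at an inspection appliance. The structure and the NVA address are illustrative assumptions; in particular, confirm the current KMS activation address for kms.core.windows.net against Microsoft documentation before relying on it.

```python
# Illustrative sketch: UDR entries that carve specific management endpoints
# out of a subnet whose default route forces traffic through an NVA.
# Addresses are examples only; verify them against current Microsoft docs.

def internet_route(name, prefix):
    """Build a route entry that sends a specific prefix directly out."""
    return {"name": name, "addressPrefix": prefix, "nextHopType": "Internet"}

routes = [
    # Windows activation traffic to the documented KMS endpoint:
    internet_route("kms-activation", "23.102.135.246/32"),
    # Everything else still goes to the inspection appliance (hypothetical IP):
    {"name": "default-via-nva", "addressPrefix": "0.0.0.0/0",
     "nextHopType": "VirtualAppliance", "nextHopIpAddress": "10.0.2.4"},
]

print([(r["name"], r["nextHopType"]) for r in routes])
```

Because route selection uses the longest prefix match, the /32 KMS route takes precedence over the 0.0.0.0/0 default, so only that single authorised flow bypasses the appliance.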
-
-|Resource|Link|
-|||
-| Outbound connections in Azure | [https://docs.microsoft.com/azure/load-balancer/load-balancer-outbound-connections](../load-balancer/load-balancer-outbound-connections.md) |
-| Use Azure custom routes to enable KMS activation | [https://docs.microsoft.com/azure/virtual-machines/troubleshooting/custom-routes-enable-kms-activation](/troubleshoot/azure/virtual-machines/custom-routes-enable-kms-activation) |
-| Locking down an App Service Environment | [https://docs.microsoft.com/azure/app-service/environment/firewall-integration](../app-service/environment/firewall-integration.md) |
-|
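To illustrate why a next hop of Internet does not require a 0.0.0.0/0 default route, the sketch below (a conceptual model, not an Azure API; the /32 address stands in for a management service such as KMS and should be taken from the linked articles in practice) routes only the authorised service prefix to the Internet and lets every other destination fall back to the system routes:

```python
import ipaddress

# A single UDR entry authorising one management-service address for
# Internet egress; no catch-all default route is defined.
udr = [{"prefix": "23.102.135.246/32", "next_hop": "Internet"}]  # illustrative

def udr_next_hop(destination):
    dest = ipaddress.ip_address(destination)
    for route in udr:
        if dest in ipaddress.ip_network(route["prefix"]):
            return route["next_hop"]
    return None  # no UDR match: Azure system routes apply

print(udr_next_hop("23.102.135.246"))  # Internet
print(udr_next_hop("52.0.0.1"))        # None
```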
-
-### Next hop of none
-
-The next hop of none can be used to prevent communication to a specific network. In contrast with an NSG, which controls whether the traffic is permitted or denied from traversing an available network path, using a next hop of none removes the network path completely. Care should be taken when creating routes with a next hop of none, especially when applying it to a default route of 0.0.0.0/0 as this can have unintended consequences and may make troubleshooting system issues complex and time consuming.
-
-## Security
-
-Implementing network segmentation and segregation controls on IaaS and PaaS capabilities is achieved through securing the capabilities themselves and by implementing controlled communication paths from the systems that will be communicating with the capability.
-
-Designing and building solutions in Azure is a process of creating a logical architecture to understand, control, and monitor network resources across the entire Azure presence. This logical architecture is software defined within the Azure platform and takes the place of a physical network topology that is implemented in traditional network environments.
-
-The logical architecture that is created must provide the functionality necessary for usability, but must also provide the visibility and control needed for security and integrity.
-
-Achieving this outcome is based on implementing the necessary network segmentation and segregation tools, but also in protecting and enforcing the network topology and the implementation of these tools.
-
-### Network Security Groups (NSGs)
-
-NSGs are used to specify the inbound and outbound traffic permitted for a subnet or a specific network interface. When configuring NSGs, commonwealth entities should use an approval list approach where rules are configured to permit the necessary traffic with a default rule configured to deny all traffic that does not match a specific permit statement. When planning and configuring NSGs, care must be taken to ensure that all necessary inbound and outbound traffic is captured appropriately. This includes identifying and understanding all private IP address ranges utilised within virtual networks and the on-premises environment, and specific Microsoft services such as Azure Load Balancer and PaaS management requirements. Individuals involved in the design and implementation of NSGs should also understand the use of Service Tags and Application Security Groups for creating fine-grained, service, and application-specific security rules.
-
-It is important to note that the default configuration for an NSG permits outbound traffic to all addresses within the virtual network and all public IP addresses.
-
-|Resource|Link|
-|||
-|Network Security Overview | [https://docs.microsoft.com/azure/virtual-network/security-overview](../virtual-network/network-security-groups-overview.md)|
-|Create, change, or delete a network security group | [https://docs.microsoft.com/azure/virtual-network/manage-network-security-group](../virtual-network/manage-network-security-group.md)|
-|
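The approval-list posture described above can be sketched as follows. This is a simplified Python model of NSG-style evaluation (not a real NSG export; rule values are hypothetical): rules are processed in priority order, lowest number first, the first match wins, and a final catch-all deny models the default-deny stance:

```python
# Illustrative NSG-style rules: specific allows, then a catch-all deny.
rules = [
    {"priority": 100, "port": 443, "source": "AzureLoadBalancer", "action": "Allow"},
    {"priority": 200, "port": 443, "source": "10.0.0.0/16", "action": "Allow"},
    {"priority": 4096, "port": "*", "source": "*", "action": "Deny"},
]

def evaluate(port, source):
    """First matching rule in priority order decides the outcome."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] in (port, "*") and rule["source"] in (source, "*"):
            return rule["action"]
    return "Deny"

print(evaluate(443, "10.0.0.0/16"))  # Allow
print(evaluate(22, "10.0.0.0/16"))   # Deny
```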
-
-### Azure Firewall
-
-Azure Firewall can be utilised to build a hub and spoke network topology and enforce centralised network security controls. Azure Firewall can be used to meet the necessary requirements of the ISM for egress traffic by implementing an allowlisting approach where only the IP addresses, protocols, ports, and FQDNs required for system functionality are authorised. Commonwealth entities should take a risk-based approach to determine whether the security capabilities provided by Azure Firewall are sufficient for their requirements. For scenarios where additional security capabilities beyond those provided by Azure Firewall are required, commonwealth entities should consider implementing NVAs.
-
-|Resource|Link|
-|||
-|*Azure Firewall Documentation* | [https://docs.microsoft.com/azure/firewall](../firewall/index.yml)|
-|*Tutorial: Deploy and configure Azure Firewall in a hybrid network using Azure PowerShell* | [https://docs.microsoft.com/azure/firewall/tutorial-hybrid-ps](../firewall/tutorial-hybrid-ps.md)|
-|
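The FQDN allowlisting approach above can be sketched conceptually in Python (illustrative only; the FQDN patterns are hypothetical examples, and real firewall rules also constrain IP addresses, protocols, and ports):

```python
import fnmatch

# Illustrative allow-list: only listed FQDNs (wildcards permitted) are
# authorised for egress; anything unmatched is denied.
allowed_fqdns = ["*.microsoft.com", "packages.ubuntu.com"]

def egress_permitted(fqdn):
    return any(fnmatch.fnmatch(fqdn, pattern) for pattern in allowed_fqdns)

print(egress_permitted("update.microsoft.com"))  # True
print(egress_permitted("example.org"))           # False
```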
-
-### Network Virtual Appliances (NVAs)
-
-NVAs can be used to build a hub and spoke network topology, provide enhanced or complementary network capabilities, or serve as an alternative to Azure network mechanisms for familiarity and consistent support and management with on-premises network services. NVAs can be deployed to meet specific security requirements, such as scenarios where there is a requirement for identity awareness associated with network traffic, HTTPS decryption, content inspection, filtering, or other security capabilities. NVAs should be deployed in a highly available configuration and individuals involved in the design or implementation of NVAs should consult the appropriate vendor documentation for guidelines on deployment in Azure.
-
-|Resource|Link|
-|||
-|*Deploy highly available network virtual appliances* | [https://docs.microsoft.com/azure/architecture/reference-architectures/dmz/nva-ha](/azure/architecture/reference-architectures/dmz/nva-ha)|
-|
-
-### Service endpoint policies (Preview)
-
-Configure service endpoint policies based on availability of the service and a security risk assessment of the likelihood and impact of data exfiltration. Service endpoint policies should be considered for Azure Storage and managed on a case by case basis for other services based on the associated risk profile.
-
-| Resource | Link |
-| | |
-| *Service endpoint policies overview* | [https://docs.microsoft.com/azure/virtual-network/virtual-network-service-endpoint-policies-overview](../virtual-network/virtual-network-service-endpoint-policies-overview.md) |
-| *Create, change, or delete service endpoint policy using the Azure portal* | [https://docs.microsoft.com/azure/virtual-network/virtual-network-service-endpoint-policies-portal](../virtual-network/virtual-network-service-endpoint-policies-portal.md)
-|
-
-### Azure Policy
-
-Azure Policy is a key component for enforcing and maintaining the integrity of the logical architecture of the Azure environment. There are a variety of services and egress network traffic paths available through Azure services. It is crucial that Commonwealth entities are aware of the resources that exist within their environment and the available network egress points. To ensure that unauthorised network egress points are not created in the Azure environment, Commonwealth entities should use Azure Policy to control the types of resources that can be deployed and the configuration of those resources. Practical examples include restricting resources to only those authorised and approved for use and requiring NSGs to be added to subnets.
-
-|Resource | Link|
-|||
-|*Azure Policy Overview* | [https://docs.microsoft.com/azure/governance/policy/overview](../governance/policy/overview.md)|
-|*Allowed Resource Types sample policy* | [https://docs.microsoft.com/azure/governance/policy/samples/allowed-resource-types](../governance/policy/samples/index.md)|
-|*Force NSG on a subnet sample policy*| [https://docs.microsoft.com/azure/governance/policy/samples/nsg-on-subnet](../governance/policy/samples/index.md)|
-|
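The restrict-resource-types example above behaves conceptually like the following sketch (a simplified model of the allowed-resource-types policy effect, not the Azure Policy engine; the approved types are illustrative): a deployment is denied unless its resource type appears on the approved list.

```python
# Illustrative approved list for an "allowed resource types" policy.
allowed_types = {
    "Microsoft.Compute/virtualMachines",
    "Microsoft.Network/networkSecurityGroups",
}

def policy_effect(resource_type):
    return "Allow" if resource_type in allowed_types else "Deny"

print(policy_effect("Microsoft.Compute/virtualMachines"))    # Allow
print(policy_effect("Microsoft.Network/publicIPAddresses"))  # Deny
```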
-
-## PaaS egress capabilities
-
-PaaS capabilities provide opportunities for increased functionality and simplified management, but introduce complexities in addressing requirements for network segmentation and segregation. PaaS capabilities are typically configured with Public IP addresses and are accessible from the Internet. If you are using PaaS capabilities within your systems and solutions, care should be taken to identify the communication flows between components and create network security rules to only allow that communication. As part of a defence-in-depth approach to security, PaaS capabilities should be configured with encryption, authentication, and appropriate access controls and permissions.
-
-### Public IP for PaaS
-
-Public IP addresses for PaaS capabilities are allocated based on the region where the service is hosted or deployed. An understanding of Public IP address allocation and regions is required if you are going to build appropriate network security rules and routing topology for network segmentation and segregation covering Azure virtual networks, PaaS, ExpressRoute, and Internet connectivity. Azure allocates IP addresses from a pool allocated to each Azure region. Microsoft makes the addresses used in each region available for download, and updates them in a regular and controlled manner. The services that are available in each region also frequently change as new services are released or services are deployed more widely. Commonwealth entities should review these materials regularly and can use automation to maintain systems as required. Specific IP addresses for some services hosted in each region can be obtained by contacting Microsoft support.
-
-| Resource | Link |
-| | |
-| *Microsoft Azure Datacenter IP Ranges* | [https://www.microsoft.com/download/details.aspx?id=41653](https://www.microsoft.com/download/details.aspx?id=41653) |
-| *Azure Services per region* | [https://azure.microsoft.com/global-infrastructure/services/?regions=non-regional,australia-central,australia-central-2,australia-east,australia-southeast&products=all](https://azure.microsoft.com/global-infrastructure/services/?regions=non-regional,australia-central,australia-central-2,australia-east,australia-southeast&products=all) |
-| *Inbound and outbound IP addresses in Azure App Service* | [https://docs.microsoft.com/azure/app-service/overview-inbound-outbound-ips](../app-service/overview-inbound-outbound-ips.md)
-|
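When automating against the downloadable datacenter IP ranges, the core check is simple prefix membership. The Python sketch below illustrates the idea (the region name and prefixes are placeholders, not current published values; real automation would parse the downloaded file rather than hard-code ranges):

```python
import ipaddress

# Placeholder ranges standing in for the downloadable datacenter IP list.
region_ranges = {
    "australiaeast": ["13.70.64.0/18", "20.37.192.0/19"],
}

def in_region(address, region):
    """True if the address falls within any published prefix for the region."""
    addr = ipaddress.ip_address(address)
    return any(addr in ipaddress.ip_network(p) for p in region_ranges[region])

print(in_region("13.70.70.1", "australiaeast"))  # True
print(in_region("8.8.8.8", "australiaeast"))     # False
```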
-
-## Next steps
-
-Compare your overall architecture and design to the published [PROTECTED Blueprints for IaaS and PaaS Web Applications](https://aka.ms/au-protected).
azure-australia Gateway Ingress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/gateway-ingress-traffic.md
- Title: Controlling ingress traffic in Azure Australia
-description: A guide for controlling ingress traffic in Azure Australia to meet Australian Government requirements for Secure Internet Gateways
-Previously updated: 07/22/2019
-# Controlling ingress traffic in Azure Australia
-
-A core element of securing ICT systems is controlling network traffic. Traffic should be restricted to only that necessary for a system to function, reducing the potential for compromise.
-
-This guide gives details about how inbound (ingress) network traffic works within Azure, and recommendations for implementing network security controls for an internet connected system.
-
-The network controls align with the Australian Cyber Security Centre (ACSC) Consumer Guidance and the intent of the ACSC's Information Security Manual (ISM).
-
-## Requirements
-
-The overall security requirements for Commonwealth systems are defined in the ISM. To assist Commonwealth entities in implementing network security, the ACSC has published [ACSC Protect: Implementing Network Segmentation and Segregation](https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation), and to assist with securing systems in Cloud environments the ACSC has published [Cloud Computing Security for Tenants](https://www.cyber.gov.au/publications/cloud-computing-security-for-tenants).
-
-These guides outline the context for implementing network security and controlling traffic and include practical recommendations for network design and configuration.
-
-The Microsoft [Cloud Computing Security for Tenants of Microsoft Azure](https://aka.ms/au-irap) guide in the Australian page of the Service Trust Portal highlights specific Microsoft technologies that enable you to meet the advice in the ACSC publications.
-
-The following key requirements, identified in the publications from the ACSC, are important for controlling ingress traffic in Azure:
-
-|Description|Source|
-|||
-|**Implement Network Segmentation and Segregation, for example, n-tier architecture, using host-based firewalls and CSP's network access controls to limit inbound and outbound VM network connectivity to only required ports/protocols.**| _Cloud Computing for Tenants_|
-|**Implement adequately high bandwidth, low latency, reliable network connectivity** between the tenant (including the tenant's remote users) and the cloud service to meet the tenant's availability requirements | _Cloud Computing for Tenants_|
-|**Apply technologies at more than just the network layer**. Each host and network should be segmented and segregated, where possible, at the lowest level that can be practically managed. In most cases, segmentation and segregation apply from the data link layer up to and including the application layer; however, in sensitive environments, physical isolation may be appropriate. Host-based and network-wide measures should be deployed in a complementary manner and be centrally monitored. Using a firewall or security appliance as the only security measure is not sufficient. |_ACSC Protect: Implementing Network Segmentation and Segregation_|
-|**Use the principles of least privilege and need-to-know**. If a host, service or network doesn't need to communicate with another host, service, or network, it shouldn't be allowed to. If a host, service, or network only needs to talk to another host, service, or network using specific ports or protocols, then any other ports or protocols should be disabled. Adopting these principles across a network will complement the minimization of user privileges and significantly increase the overall security posture of the environment. |_ACSC Protect: Implementing Network Segmentation and Segregation_|
-|**Separate hosts and networks based on their sensitivity or criticality to business operations**. Separation can be achieved by using different hardware or platforms depending on different security classifications, security domains, or availability/integrity requirements for certain hosts or networks. In particular, separate management networks and consider physically isolating out-of-band management networks for sensitive environments. |_ACSC Protect: Implementing Network Segmentation and Segregation_|
-|**Identify, authenticate, and authorize access by all entities to all other entities**. All users, hosts, and services should have their access restricted to only the other users, hosts, and services required to do their designated duties or functions. All legacy or local services which bypass or downgrade the strength of identification, authentication, and authorization services should be disabled and their use should be closely monitored. |_ACSC Protect: Implementing Network Segmentation and Segregation_|
-|**Implement allow listing of network traffic instead of deny listing**. Only permit access for known good network traffic (that is, that which is identified, authenticated, and authorized), rather than denying access to known bad network traffic (for example, blocking a specific address or service). Using an accepted senders list results in a superior security policy to a block list, and significantly improves an organization's capacity to detect and assess potential network intrusions. |_ACSC Protect: Implementing Network Segmentation and Segregation_|
-|
-
-This article provides information and recommendations on how these requirements can be met for systems deployed in Azure using both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). You should also read the article on [Controlling egress traffic in Azure Australia](gateway-egress-traffic.md) to fully understand controlling network traffic within Azure.
-
-## Architecture
-
-If you are involved in the design or implementation of network security and ingress traffic controls, you must first understand how ingress network traffic works within Azure across both IaaS and PaaS. This section provides an overview of the possible entry points where network traffic could reach a resource hosted in Azure, and the security controls available to restrict and control that traffic. Each of these components is discussed in detail in the remaining sections of this guide.
-
-### Architecture components
-
-The architectural diagram shown here depicts the possible paths that network traffic can take to connect into a service hosted in Azure. These components are divided into Azure, IaaS Ingress, PaaS Ingress, and Security Control, depending on the function that they provide for ingress traffic.
-
-![Architecture](media/ingress-traffic.png)
-
-### Azure components
-
-|Component | Description|
-|||
-|**DDoS Protection** | Distributed denial of service (DDoS) attacks attempt to exhaust an application's resources, making the application unavailable to legitimate users. DDoS attacks can be targeted at any endpoint that is publicly reachable through the internet. Azure includes DDoS Protection automatically through the Azure platform and provides additional mitigation capabilities that can be enabled for specific applications for more granular control.|
-| **Traffic Manager** | Azure Traffic Manager is a Domain Name System (DNS) based traffic load balancer that can distribute traffic optimally to services across Azure regions, while providing high availability and responsiveness. Traffic Manager uses DNS to direct client requests to the most appropriate endpoint based on a traffic-routing method and the health of the endpoints.|
-| **ExpressRoute** | ExpressRoute is a dedicated network connection for consuming Microsoft cloud services. It is provisioned through a connectivity provider and offers more reliability, faster speeds, lower latencies, and higher security than typical connections over the Internet. An ExpressRoute circuit represents the logical connection between the on-premises infrastructure and Microsoft cloud services through a connectivity provider.|
-| **ExpressRoute Private Peering** | ExpressRoute Private Peering is a connection between the on-premises environment and private Azure virtual networks. Private Peering enables access to Azure services such as Virtual Machines, that are deployed within a virtual network. The resources and virtual networks accessed via private peering are considered an extension of an organization's core network. Private Peering provides bi-directional connectivity between the on-premises network and Azure virtual networks using private IP addresses.|
-| **ExpressRoute Microsoft Peering** | ExpressRoute Microsoft Peering is a connection between the on-premises environment and Microsoft and Azure public services. This includes connectivity to Microsoft 365, Dynamics 365, and Azure PaaS services. Peering is established over public IP addresses that are owned by the organization or connectivity provider. No services are accessible via ExpressRoute Microsoft Peering by default and an organization must opt in to the services that are required. This process then provides connectivity to the same endpoints that are available on the Internet.|
-|
-
-### IaaS ingress components
-
-|Component | Description|
-|||
-|**Network Interface** | A network interface is a resource that exists in Azure. It is attached to a Virtual Machine and assigned a private, non-Internet routable IP address from the subnet that it is associated with. This IP address is dynamically or statically assigned through Azure Resource Manager.|
-|**Subnet** | A subnet is an IP address range that is created within a VNet. Multiple subnets can be created within a VNet for network segmentation.|
-| **Virtual Network (VNet)** | A VNet is a foundational resource within Azure that provides a platform and boundary for deploying resources and enabling communication. The VNet exists within an Azure Region and defines the IP Address Space and Routing boundaries for VNet integrated resources such as Virtual Machines.|
-| **VNet Peering** | VNet Peering is an Azure configuration option that enables direct communication between two VNets without the need for a Virtual Network Gateway. Once peered, the two VNets can communicate directly and additional configuration can control the use of Virtual Network Gateways and other transit options.|
-| **Public IP** | A Public IP is a resource that reserves one of the Microsoft owned Public, Internet-Routable IP Addresses from the specified region for use within the virtual network. It can be associated with a specific Network Interface, which enables the resource to be accessible from the Internet, ExpressRoute and PaaS systems.|
-| **ExpressRoute Gateway** | An ExpressRoute Gateway is an object in a Virtual Network that provides connectivity and routing from the Virtual Network to on-premises networks over Private Peering on an ExpressRoute Circuit.|
-| **VPN Gateway** | A VPN Gateway is an object in a Virtual Network that provides an encrypted tunnel from a Virtual Network to an external network. The encrypted tunnel can be Site-to-Site for bi-directional communication with an on-premises environment, other virtual network, or cloud environment or Point-to-Site for communication with a single end point.|
-| **PaaS VNet Integration** | Many PaaS capabilities can be deployed into, or integrated with, a Virtual Network. Some PaaS capabilities can be fully integrated with a VNet and be accessible via only private IP addresses. Others, such as Azure Load Balancer and Azure Application Gateway, can have an external interface with a public IP address and an internal interface with a private IP address inside the virtual network. In this instance, traffic can ingress into the Virtual Network via the PaaS capability.|
-|
-
-### PaaS ingress components
-
-|Component | Description|
-|||
-|**Hostname** | Azure PaaS capabilities are identified by a unique public hostname that is assigned when the resource is created. This hostname is then registered into a public DNS domain, where it can be resolved to a Public IP address.|
-|**Public IP** | Unless deployed in a VNet integrated configuration, Azure PaaS capabilities are accessed via a Public, Internet-routable IP address. This address can be dedicated to a specific resource, such as a Public Load Balancer, or could be associated with a specific capability that has a shared entry point for multiple instances, such as Storage or SQL. This Public IP address can be accessed from the Internet, ExpressRoute or from IaaS public IP addresses through the Azure backbone network.|
-|**Service endpoints** | Service endpoints provide a direct, private connection from a Virtual Network to a specific PaaS capability. Service endpoints, which are only available for a subset of PaaS capabilities, provide increased performance and security for resources in a VNet accessing PaaS.|
-|
-
-### Security controls
-
-|Component | Description|
-|||
-|**Network Security Groups (NSGs)** | NSGs control traffic into and out of virtual network resources in Azure. NSGs apply rules for the traffic flows that are permitted or denied, which includes traffic within Azure and between Azure and external networks such as on-premises or the Internet. NSGs are applied to subnets within a virtual network or to individual network interfaces.|
-|**PaaS Firewall** | Many PaaS capabilities, such as Storage and SQL have an inbuilt Firewall for controlling ingress network traffic to the specific resource. Rules can be created to allow or deny connections from specific IP Addresses and/or Virtual Networks.|
-|**PaaS Authentication and Access Control** | As part of a layered approach to security, PaaS capabilities provide multiple mechanisms for authenticating users and controlling privileges and access.|
-|**Azure Policy** | Azure Policy is a service in Azure for creating, assigning, and managing policies. These policies use rules to control the types of resources that can be deployed and the configuration of those resources. Policies can be used to enforce compliance by preventing resources from being deployed if they do not meet requirements or can be used for monitoring to report on compliance status.|
-|
-
-## General guidance
-
-To design and build secure solutions within Azure, it is critical to understand and control the network traffic so that only identified and authorized communication can occur. The intent of this guidance, and the specific component guidance in later sections, is to describe the tools and services that can be utilized to apply the principles outlined in the _ACSC Protect: Implementing Network Segmentation and Segregation_ across Azure workloads. This includes detailing how to create a virtual architecture for securing resources when it is not possible to apply the same traditional physical and network controls that are possible in an on-premises environment.
-
-### Specific focus areas
-
-* Limit the number of entry points to virtual networks
-* Limit the number of Public IP addresses
-* Consider utilizing a Hub and Spoke Network Design for Virtual Networks as discussed in the Microsoft Virtual Data Center (VDC) documentation
-* Utilize products with inbuilt security capabilities for inbound connections from the Internet (for example, Application Gateway, API Gateway, Network Virtual Appliances)
-* Restrict communication flows to PaaS capabilities to only those necessary for system functionality
-* Deploy PaaS in a VNet integrated configuration for increased segregation and control
-* Configure systems to use encryption mechanisms in line with the ACSC Consumer Guidance and ISM
-* Use identity-based protections such as authentication and Azure role-based access control in addition to traditional network controls
-* Implement ExpressRoute for connectivity with on-premises networks
-* Implement VPNs for administrative traffic and integration with external networks
-* Utilize Azure Policy to restrict the regions and resources to only those that are necessary for system functionality
-* Utilize Azure Policy to enforce baseline security configuration for internet-accessible resources
-
-### Additional resources
-
-|Resource | Link|
-|||
-|Australian Regulatory and Policy Compliance Documents including Consumer Guidance|[https://aka.ms/au-irap](https://aka.ms/au-irap)|
-|Azure Virtual Data Center|[https://docs.microsoft.com/azure/architecture/vdc/networking-virtual-datacenter](/azure/architecture/vdc/networking-virtual-datacenter)|
-|ACSC Network Segmentation|[https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation](https://www.cyber.gov.au/acsc/view-all-content/publications/implementing-network-segmentation-and-segregation)|
-|ACSC Cloud Security for Tenants| [https://www.cyber.gov.au/acsc/view-all-content/publications/cloud-computing-security-tenants](https://www.cyber.gov.au/acsc/view-all-content/publications/cloud-computing-security-tenants)|
-|ACSC Information Security Manual|[https://acsc.gov.au/infosec/ism/index.htm](https://acsc.gov.au/infosec/ism/index.htm)|
-
-## Component guidance
-
-This section provides further guidance on the individual components that are relevant to ingress traffic to systems deployed in Azure. Each section describes the intent of the specific component with links to documentation and configuration guides that can be used to assist with design and build activities.
-
-## Azure
-
-All communication to resources within Azure passes through the Microsoft maintained network infrastructure, which provides connectivity and security functionality. A range of protections are automatically put in place by Microsoft to protect the Azure platform and network infrastructure and additional capabilities are available as services within Azure to control network traffic and establish network segmentation and segregation.
-
-### DDoS Protection
-
-Internet accessible resources are susceptible to DDoS attacks. To protect against these attacks, Azure provides DDoS protections at a Basic and a Standard level.
-
-Basic is automatically enabled as part of the Azure platform, including always-on traffic monitoring and real-time mitigation of common network-level attacks, providing the same defenses utilized by Microsoft's online services. The entire scale of Azure's global network can be used to distribute and mitigate attack traffic across regions. Protection is provided for IPv4 and IPv6 Azure public IP addresses.
-
-Standard provides additional mitigation capabilities over the Basic service tier that are tuned specifically to Azure Virtual Network resources. Protection policies are tuned through dedicated traffic monitoring and machine learning algorithms. Protection is provided for IPv4 Azure public IP addresses.
-
-|Resource|Link|
-|||
-|Azure DDoS Protection Overview|[https://docs.microsoft.com/azure/virtual-network/ddos-protection-overview](../ddos-protection/ddos-protection-overview.md)|
-|Azure DDoS Best Practices|[https://docs.microsoft.com/azure/ddos-protection/fundamental-best-practices](../ddos-protection/fundamental-best-practices.md)|
-|Managing DDoS Protection|[https://docs.microsoft.com/azure/virtual-network/manage-ddos-protection](../ddos-protection/manage-ddos-protection.md)|
-|
-
-### Traffic Manager
-
-Traffic Manager is used to manage ingress traffic by controlling which endpoints of an application receive connections. To protect against a loss of availability of systems or applications due to cyber security attack, or to recover services after a system compromise, Traffic Manager can be used to redirect traffic to functioning, available application instances.
-
-|Resource|Link|
-|||
-|Traffic Manager Overview | [https://docs.microsoft.com/azure/traffic-manager/traffic-manager-overview](../traffic-manager/traffic-manager-overview.md)|
-|Disaster recovery using Azure DNS and Traffic Manager Guide | [https://docs.microsoft.com/azure/networking/disaster-recovery-dns-traffic-manager](../networking/disaster-recovery-dns-traffic-manager.md)|
-|
-
-### ExpressRoute
-
-ExpressRoute can be used to establish a private path from an on-premises environment to systems hosted in Azure. This connection can provide greater reliability and guaranteed performance with enhanced privacy for network communications. ExpressRoute allows commonwealth entities to control inbound traffic from the on-premises environment and define dedicated addresses specific to the organization to use for inbound firewall rules and access control lists.
-
-|Resource | Link|
-|||
-|ExpressRoute Overview | [https://docs.microsoft.com/azure/expressroute/](../expressroute/index.yml)|
-|ExpressRoute Connectivity Models | [https://docs.microsoft.com/azure/expressroute/expressroute-connectivity-models](../expressroute/expressroute-connectivity-models.md)|
-|
-
-### ExpressRoute Private Peering
-
-Private peering provides a mechanism for extending an on-premises environment into Azure using only private IP addresses. This enables commonwealth entities to integrate Azure Virtual Networks and address ranges with existing on-premises systems and services. Private Peering provides assurance that communication across ExpressRoute is only to Virtual Networks authorized by the organization. If Private Peering is used, Commonwealth entities must implement Network Virtual Appliances (NVAs) instead of Azure VPN Gateway to establish secure VPN communication to on-premises networks, as required by the ACSC consumer guidance.
-
-|Resource | Link|
-|||
-|ExpressRoute Private Peering Overview | [https://docs.microsoft.com/azure/expressroute/expressroute-circuit-peerings#routingdomains](../expressroute/expressroute-circuit-peerings.md#routingdomains)|
-|ExpressRoute Private Peering How-to Guide | [https://docs.microsoft.com/azure/expressroute/expressroute-howto-routing-portal-resource-manager#private](../expressroute/expressroute-howto-routing-portal-resource-manager.md#private)|
-|
-
-### ExpressRoute Microsoft Peering
-
-Microsoft Peering provides a high-speed, low latency connection to Microsoft Public Services without needing to traverse the Internet. This provides greater reliability, performance, and privacy for connections. By using Route Filters, commonwealth entities can restrict communications to only the Azure regions that they require. However, these regions also host services belonging to other organizations, which may necessitate additional filtering or inspection capabilities between the on-premises environment and Microsoft.
-
-Commonwealth entities can use the dedicated Public IP addresses established through the peering relationship to uniquely identify the on-premises environment for use in firewalls and access control lists within PaaS capabilities.
-
-As an alternative, commonwealth entities can use ExpressRoute Microsoft peering as an underlay network for establishing VPN connectivity through Azure VPN Gateway. In this model, there is no active communication from the internal on-premises network to Azure public services over ExpressRoute, but secure connectivity through to private Virtual Networks is achieved in compliance with the ACSC consumer guidance.
-
-|Resource | Link|
-|||
-|ExpressRoute Microsoft Peering Overview | [https://docs.microsoft.com/azure/expressroute/expressroute-circuit-peerings#routingdomains](../expressroute/expressroute-circuit-peerings.md#routingdomains)|
-|ExpressRoute Microsoft Peering How-to Guide | [https://docs.microsoft.com/azure/expressroute/expressroute-howto-routing-portal-resource-manager#msft](../expressroute/expressroute-howto-routing-portal-resource-manager.md#msft)|
-|
-
-## IaaS ingress
-
-This section provides the component guidance for controlling Ingress traffic to IaaS components. IaaS includes Virtual Machines and other compute resources that can be deployed and managed within a Virtual Network in Azure. For traffic to arrive at systems deployed using IaaS it must have an entry point to the Virtual Network, which can be established through a Public IP address, Virtual Network Gateway or Virtual Network peering relationship.
-
-### Network interface
-
-Network interfaces are the ingress points for all traffic to a Virtual Machine. Network Interfaces enable the configuration of IP Addressing, and can be used to apply NSGs or for routing traffic through a Network Virtual Appliance. The Network Interfaces for Virtual Machines should be planned and configured appropriately to align with overall network segmentation and segregation objectives.
-
-|Resource | Link|
-|||
-|Create, Change, or Delete a Network Interface | [https://docs.microsoft.com/azure/virtual-network/virtual-network-network-interface](../virtual-network/virtual-network-network-interface.md)|
-|Network Interface IP Addressing | [https://docs.microsoft.com/azure/virtual-network/private-ip-addresses](../virtual-network/ip-services/private-ip-addresses.md)|
-|
-
-### Subnet
-
-Subnets are a crucial component for network segmentation and segregation within Azure. Subnets can be used to provide separation between systems. NSGs can be applied to subnets to restrict ingress communication flows to only those necessary for system functionality. Subnets can be used as both source and destination addresses for firewall rules and access-control lists and can be configured for service endpoints to provide connectivity to PaaS capabilities.
-
-|Resource | Link|
-|||
-|Add, change, or delete a virtual network subnet | [https://docs.microsoft.com/azure/virtual-network/virtual-network-manage-subnet](../virtual-network/virtual-network-manage-subnet.md)|
-|
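
The subnet planning described above can be sketched with Python's standard `ipaddress` module. This is a minimal illustration, assuming a hypothetical `10.10.0.0/16` address space and invented tier names; only the reserved `GatewaySubnet` name is an Azure convention:

```python
import ipaddress

# Hypothetical VNet address space; substitute your planned range.
vnet = ipaddress.ip_network("10.10.0.0/16")

# Carve the VNet into /24 subnets and allocate them to tiers.
subnets = list(vnet.subnets(new_prefix=24))
plan = {
    "GatewaySubnet": subnets[0],  # reserved name used by Azure virtual network gateways
    "web": subnets[1],            # illustrative tier names
    "app": subnets[2],
    "data": subnets[3],
}

# Verify every subnet sits inside the VNet space and none overlap.
allocated = list(plan.values())
for i, a in enumerate(allocated):
    assert a.subnet_of(vnet)
    for b in allocated[i + 1:]:
        assert not a.overlaps(b)

for name, net in plan.items():
    print(name, net)
```

Checks like these can be run before deployment to confirm that a planned segmentation has no overlapping address ranges.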
-
-### Virtual Network (VNet)
-
-VNets are one of the fundamental building blocks for networking in Azure. Virtual Networks define an IP address space and routing boundary to be used across a variety of systems. Virtual Networks are divided into subnets and all subnets within a Virtual Network have a direct network route to each other. By using Virtual Network Gateways (ExpressRoute or VPN), systems within a Virtual Network can be made accessible to on-premises and external environments. Understanding Virtual Networks and the associated configuration parameters and routing is crucial in understanding and controlling ingress network traffic.
-
-|Resource | Link|
-|||
-|Virtual Networks Overview | [https://docs.microsoft.com/azure/virtual-network/virtual-networks-overview](../virtual-network/virtual-networks-overview.md)|
-|Plan Virtual Networks How-to Guide | [https://docs.microsoft.com/azure/virtual-network/virtual-network-vnet-plan-design-arm](../virtual-network/virtual-network-vnet-plan-design-arm.md)|
-|Create a Virtual Network Quickstart | [https://docs.microsoft.com/azure/virtual-network/quick-create-portal](../virtual-network/quick-create-portal.md)|
-|
-
-### VNet Peering
-
-VNet Peering is used to provide a direct communication path between two Virtual Networks. Once peering is established, hosts in one Virtual Network have a high-speed routing path directly to hosts in another Virtual Network. NSGs still apply to the traffic as normal and advanced configuration parameters can be used to define whether communication through Virtual Network Gateways or from other external systems is permitted.
-
-|Resource | Link|
-|||
-|Virtual Network Peering Overview | [https://docs.microsoft.com/azure/virtual-network/virtual-network-peering-overview](../virtual-network/virtual-network-peering-overview.md)|
-|Create, change, or delete a virtual network peering | [https://docs.microsoft.com/azure/virtual-network/virtual-network-manage-peering](../virtual-network/virtual-network-manage-peering.md)|
-|
-
-### Public IP on VNET
-
-Public IP addresses are used to provide an ingress communication path to services deployed in a Virtual Network. Commonwealth entities should plan the allocation of Public IP addresses carefully and only assign them to resources where there is a genuine requirement. As a general design practice, Public IP addresses should be allocated to resources with inbuilt security capabilities such as Application Gateway or Network Virtual Appliances to provide a secure, controlled public entry point to a Virtual Network.
-
-|Resource | Link|
-|||
-|Public IP Addresses Overview | [https://docs.microsoft.com/azure/virtual-network/virtual-network-ip-addresses-overview-arm#public-ip-addresses](../virtual-network/ip-services/public-ip-addresses.md#public-ip-addresses)|
-|Create, change, or delete a public IP address | [https://docs.microsoft.com/azure/virtual-network/virtual-network-public-ip-address](../virtual-network/ip-services/virtual-network-public-ip-address.md)|
-|
-
-### ExpressRoute Gateway
-
-ExpressRoute Gateways provide an ingress point from the on-premises environment and should be deployed to meet security, availability, financial, and performance requirements. ExpressRoute Gateways provide a defined network bandwidth and incur usage costs after deployment. Virtual Networks can have only one ExpressRoute Gateway, but this can be connected to multiple ExpressRoute circuits and can be leveraged by multiple Virtual Networks through VNet Peering, allowing multiple Virtual Networks to share bandwidth and connectivity. Care should be taken when configuring routing between on-premises environments and Virtual Networks using ExpressRoute Gateways to ensure end-to-end connectivity using known, controlled network ingress points. Commonwealth entities using ExpressRoute Gateway must also deploy Network Virtual Appliances to establish VPN connectivity to the on-premises environment for compliance with the ACSC consumer guidance.
-
-|Resource | Link|
-|||
-|ExpressRoute Gateway Overview | [https://docs.microsoft.com/azure/expressroute/expressroute-about-virtual-network-gateways](../expressroute/expressroute-about-virtual-network-gateways.md)|
-|Configure a virtual network gateway for ExpressRoute | [https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md)|
-|
-
-### VPN Gateway
-
-Azure VPN Gateway provides an ingress network point from an external network for secure site-to-site or point-to-site connections. VPN Gateways provide a defined network bandwidth and incur usage costs after deployment. Commonwealth entities utilizing VPN Gateway should ensure that it is configured in accordance with the ACSC consumer guidance. Virtual Networks can have only one VPN Gateway, but this can be configured with multiple tunnels and can be leveraged by multiple Virtual Networks through VNet Peering, allowing multiple Virtual Networks to share bandwidth and connectivity. VPN Gateways can be established over the Internet or over ExpressRoute through Microsoft Peering.
-
-|Resource | Link|
-|||
-|VPN Gateway Overview | [https://docs.microsoft.com/azure/vpn-gateway/](../vpn-gateway/index.yml)|
-|Planning and design for VPN Gateway | [https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-plan-design](../vpn-gateway/vpn-gateway-about-vpngateways.md)|
-|VPN Gateway configuration for Australian Government agencies|[IPSEC configuration required for Australian Government agencies](vpn-gateway.md)|
-|
-
-### PaaS VNet integration
-
-Leveraging PaaS can provide enhanced functionality and availability and reduce management overhead but must be secured appropriately. To increase control, enforce network segmentation, or to provide a secure ingress entry point for applications and services, many PaaS capabilities can be integrated with a Virtual Network.
-
-To provide a secure entry point, PaaS capabilities such as Application Gateway can be configured with an external, public facing interface and an internal, private interface for communicating with application services. This prevents the need to configure application servers with Public IP addresses and expose them to external networks.
-
-To use PaaS as an integrated part of system or application architecture, Microsoft provides multiple mechanisms to deploy PaaS into a Virtual Network. The deployment methodology restricts the inbound access from external networks such as the Internet while providing connectivity and integration with internal systems and applications. Examples include App Service Environments, SQL Managed Instance, and more.
-
-|Resource | Link|
-|||
-|Virtual network integration for Azure services | [https://docs.microsoft.com/azure/virtual-network/virtual-network-for-azure-services](../virtual-network/virtual-network-for-azure-services.md)|
-|Integrate your app with an Azure Virtual Network How-to guide | [https://docs.microsoft.com/azure/app-service/web-sites-integrate-with-vnet](../app-service/overview-vnet-integration.md)|
-|
-
-## PaaS ingress
-
-PaaS capabilities provide opportunities for increased capability and simplified management, but introduce complexities in addressing requirements for network segmentation and segregation. PaaS capabilities are typically configured with Public IP addresses and are accessible from the Internet. When building systems using PaaS capabilities, care should be taken to identify all the necessary communication flows between components within the system and network security rules created to allow only this communication. As part of a defence-in-depth approach to security, PaaS capabilities should be configured with encryption, authentication, and appropriate access controls and permissions.
-
-### Hostname
-
-PaaS capabilities are uniquely identified by hostnames to allow multiple instances of the same service to be hosted on the same Public IP address. Unique hostnames are specified when resources are created and exist within Microsoft owned DNS domains. The specific hostnames for authorized services can be used within security tools with application level filtering capabilities. Certain services can also be configured with custom domains as required.
-
-|Resource | Link|
-|||
-|Many public namespaces used by Azure services can be obtained through PowerShell by running the Get-AzureRMEnvironment command | [https://docs.microsoft.com/powershell/module/azurerm.profile/get-azurermenvironment](/powershell/module/azurerm.profile/get-azurermenvironment)|
-|Configuring a custom domain name for an Azure cloud service | App Services and others can have custom domains [https://docs.microsoft.com/azure/cloud-services/cloud-services-custom-domain-name-portal](../cloud-services/cloud-services-custom-domain-name-portal.md)|
-|
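
As a sketch of application-level filtering by hostname, the snippet below checks candidate hostnames against an allow list of patterns. The account names and wildcard patterns are hypothetical examples, not an authoritative list of Azure namespaces:

```python
from fnmatch import fnmatch

# Hypothetical allow list of authorised PaaS hostnames; the specific
# account and the wildcard domain are illustrative assumptions.
allowed_patterns = [
    "mystorageacct.blob.core.windows.net",  # one specific storage account
    "*.database.windows.net",               # any Azure SQL logical server
]

def host_allowed(hostname: str) -> bool:
    """Return True when the hostname matches an authorised pattern."""
    return any(fnmatch(hostname, pattern) for pattern in allowed_patterns)

print(host_allowed("mystorageacct.blob.core.windows.net"))  # True
print(host_allowed("otheracct.blob.core.windows.net"))      # False
print(host_allowed("myserver.database.windows.net"))        # True
```

Real filtering of this kind would normally be performed by a proxy or next-generation firewall rather than application code; the sketch only shows the matching logic.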
-
-### Public IP for PaaS
-
-Public IP addresses for PaaS capabilities are allocated based on the region where the service is hosted or deployed. An understanding of Public IP address allocation and regions is required to build appropriate network security rules and routing topology for network segmentation and segregation covering Azure Virtual Networks, PaaS, ExpressRoute, and Internet connectivity. Azure allocates IP addresses from a pool assigned to each Azure region. Microsoft makes the addresses used in each region available for download, and the file is updated in a regular and controlled manner. The services available in each region also change frequently as new services are released or deployed more widely. Commonwealth entities should review these materials regularly and can leverage automation to maintain systems as required. Specific IP addresses for some services hosted in each region can be obtained by contacting Microsoft support.
-
-|Resource | Link|
-|||
-|Microsoft Azure Datacenter IP Ranges | [https://www.microsoft.com/download/details.aspx?id=41653](https://www.microsoft.com/download/details.aspx?id=41653)|
-|Azure Services per region | [https://azure.microsoft.com/global-infrastructure/services/?regions=non-regional,australia-central,australia-central-2,australia-east,australia-southeast&products=all](https://azure.microsoft.com/global-infrastructure/services/?regions=non-regional,australia-central,australia-central-2,australia-east,australia-southeast&products=all)|
-|
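
As a sketch of how the published region ranges might be consumed, the snippet below checks an address against a small sample of prefixes. The prefixes shown are illustrative placeholders only; the authoritative, regularly updated list comes from the downloadable ranges file linked above:

```python
import ipaddress

# Illustrative sample only: real per-region prefixes come from the
# downloadable Azure datacenter IP ranges file and change regularly.
region_prefixes = {
    "australiaeast": ["13.70.64.0/18", "13.72.224.0/19"],
    "australiasoutheast": ["13.70.128.0/18"],
}

def region_for_ip(ip: str):
    """Return the region whose published prefixes contain the address, if any."""
    addr = ipaddress.ip_address(ip)
    for region, prefixes in region_prefixes.items():
        if any(addr in ipaddress.ip_network(p) for p in prefixes):
            return region
    return None

print(region_for_ip("13.70.70.1"))   # matches the first sample region
print(region_for_ip("192.0.2.10"))   # None: not in any sample prefix
```

Automating checks like this against the current ranges file helps keep firewall rules and route filters aligned with the regions actually in use.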
-
-### Service endpoints
-
-Virtual Network Service endpoints provide a high-speed, private ingress network connection for subnets within a Virtual Network to consume specific PaaS capabilities. For complete network segmentation and segregation of the PaaS capability, the PaaS capability must be configured to accept connections only from the necessary virtual networks. Not all PaaS Capabilities support a combination of Firewall rules that includes service endpoints and traditional IP address-based rules, so care should be taken to understand the flow of communications required for application functionality and administration so that the implementation of these security controls does not impact service availability.
-
-|Resource | Link|
-|||
-|Service endpoints overview | [https://docs.microsoft.com/azure/virtual-network/virtual-network-service-endpoints-overview](../virtual-network/virtual-network-service-endpoints-overview.md)|
-|Restrict network access to resources tutorial |[https://docs.microsoft.com/azure/virtual-network/tutorial-restrict-network-access-to-resources](../virtual-network/tutorial-restrict-network-access-to-resources.md)|
-|
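
The combined rule evaluation discussed above can be sketched as a default-deny check that permits a connection only when it arrives from an authorised subnet (via a service endpoint) or from an allowed public address. The subnets, addresses, and rule shapes are illustrative assumptions that simplify real service semantics:

```python
import ipaddress

# Hypothetical network rules for a PaaS resource: subnets authorised via
# service endpoints, plus traditional public IP address rules.
allowed_subnets = [ipaddress.ip_network("10.10.1.0/24")]       # service endpoint rules
allowed_public_ips = [ipaddress.ip_network("203.0.113.0/28")]  # IP-based rules

def connection_permitted(source_ip: str, via_service_endpoint: bool) -> bool:
    """Default deny: permit only sources matching an explicit rule."""
    addr = ipaddress.ip_address(source_ip)
    if via_service_endpoint:
        return any(addr in net for net in allowed_subnets)
    return any(addr in net for net in allowed_public_ips)

print(connection_permitted("10.10.1.25", via_service_endpoint=True))    # True
print(connection_permitted("203.0.113.5", via_service_endpoint=False))  # True
print(connection_permitted("198.51.100.9", via_service_endpoint=False)) # False
```

Mapping required application and administration flows onto rules like these before enabling the controls helps avoid the availability impacts the guidance warns about.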
-
-## Security
-
-Implementing network segmentation and segregation controls on IaaS and PaaS capabilities is achieved through securing the capabilities themselves and by implementing controlled communication paths from the systems that will be communicating with the capability.
-
-Designing and building solutions in Azure is a process of creating a logical architecture to understand, control, and monitor network resources across the entire Azure presence. This logical architecture is software defined within the Azure platform and takes the place of a physical network topology that is implemented in traditional network environments.
-
-The logical architecture that is created must provide the functionality necessary for usability, but must also provide the visibility and control needed for security and integrity.
-
-Achieving this outcome is based on implementing the necessary network segmentation and segregation tools, but also in protecting and enforcing the network topology and the implementation of these tools.
-
-The information provided in this guide can be used to help identify the sources of ingress traffic that need to be permitted and the ways that the traffic can be further controlled or constrained.
-
-### Network Security Groups (NSGs)
-
-NSGs are used to specify the inbound and outbound traffic permitted for a subnet or a specific network interface. When configuring NSGs, commonwealth entities should use an approval-list approach, where rules are configured to permit the necessary traffic and a default rule denies all traffic that does not match a specific permit statement. Care must be taken when planning and configuring NSGs to ensure that all necessary inbound and outbound traffic is captured appropriately. This includes identifying and understanding all private IP address ranges used within Azure Virtual Networks and the on-premises environment, as well as specific Microsoft services such as Azure Load Balancer and PaaS management requirements. Individuals involved in the design and implementation of NSGs should also understand the use of Service Tags and Application Security Groups for creating fine-grained, service- and application-specific security rules.
-
-|Resource | Link|
-|||
-|Network Security Overview | [https://docs.microsoft.com/azure/virtual-network/security-overview](../virtual-network/network-security-groups-overview.md)|
-|Create, change, or delete a network security group | [https://docs.microsoft.com/azure/virtual-network/manage-network-security-group](../virtual-network/manage-network-security-group.md)|
-|
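
The approval-list evaluation described above can be sketched as a priority-ordered rule check with an implicit default deny. The rules, addresses, and ports below are invented for illustration and deliberately simplify real NSG semantics (no service tags, direction, or protocol matching):

```python
import ipaddress

# Simplified NSG model: rules are evaluated in priority order (lowest
# number first); the first match decides, and an implicit deny sits last.
rules = [
    {"priority": 100, "source": "10.0.0.0/24", "port": 443, "access": "Allow"},
    {"priority": 200, "source": "0.0.0.0/0", "port": 3389, "access": "Deny"},
]

def evaluate(source_ip: str, port: int) -> str:
    addr = ipaddress.ip_address(source_ip)
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if addr in ipaddress.ip_network(rule["source"]) and port == rule["port"]:
            return rule["access"]
    return "Deny"  # default rule: traffic with no explicit permit is denied

print(evaluate("10.0.0.5", 443))       # Allow
print(evaluate("10.0.0.5", 22))        # Deny: no matching permit rule
print(evaluate("198.51.100.1", 3389))  # Deny: explicit deny rule
```

The key property to preserve when writing real NSG rules is the same as in the sketch: anything not explicitly permitted falls through to deny.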
-
-## PaaS firewall
-
-A PaaS firewall is a network access control capability that can be applied to certain PaaS services. It allows IP address filtering or filtering from specific virtual networks to be configured to restrict ingress traffic to the specific PaaS instance. For PaaS capabilities that include a Firewall, network access control policies should be configured to permit only the necessary ingress traffic based on application requirements.
-
-|Resource | Link|
-|||
-|Azure SQL Database and Azure Synapse Analytics IP firewall rules | [https://docs.microsoft.com/azure/sql-database/sql-database-firewall-configure](/azure/azure-sql/database/firewall-configure)|
-|Storage Network Security | [https://docs.microsoft.com/azure/storage/common/storage-network-security](../storage/common/storage-network-security.md)|
-|
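
A start/end address rule of the kind used by Azure SQL Database firewalls can be sketched as a simple range-containment check; the rule names and ranges below are invented for illustration:

```python
import ipaddress

# Hypothetical firewall rules in the start/end form used by services such
# as Azure SQL Database (names and ranges are illustrative).
firewall_rules = [
    ("office", "203.0.113.0", "203.0.113.15"),
    ("branch", "198.51.100.40", "198.51.100.40"),  # single-address rule
]

def client_allowed(ip: str) -> bool:
    """Permit the client only when it falls inside a configured range."""
    addr = int(ipaddress.ip_address(ip))
    return any(
        int(ipaddress.ip_address(start)) <= addr <= int(ipaddress.ip_address(end))
        for _name, start, end in firewall_rules
    )

print(client_allowed("203.0.113.7"))    # True: inside the first range
print(client_allowed("198.51.100.41"))  # False: one past the single-address rule
```

Keeping such rules as narrow as the application requires is the practical expression of the permit-only-necessary-ingress guidance above.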
-
-## PaaS authentication and access control
-
-Depending on the PaaS capability and its purpose, using network controls to restrict access may not be possible or practical. As part of the layered security model for PaaS, Azure provides a variety of authentication and access control mechanisms to restrict access to a service, even if network traffic is allowed. Typical authentication mechanisms for PaaS capabilities include Azure Active Directory, application-level authentication, and shared keys or access signatures. Once a user is securely identified, roles can be used to control the actions that the user can perform. These tools can be used as an alternative or as a complementary measure to restrict access to services.
-
-|Resource | Link|
-|||
-|Controlling and granting database access to SQL Database and Azure Synapse Analytics | [https://docs.microsoft.com/azure/sql-database/sql-database-manage-logins](/azure/azure-sql/database/logins-create-manage)|
-|Authorization for the Azure Storage Services | [https://docs.microsoft.com/rest/api/storageservices/authorization-for-the-Azure-Storage-Services](/rest/api/storageservices/authorization-for-the-Azure-Storage-Services)|
-|
-
-## Azure Policy
-
-Azure Policy is a key component for enforcing and maintaining the integrity of the logical architecture of the Azure environment. Given the variety of services and ingress network traffic paths available through Azure services, it is crucial that Commonwealth entities are aware of the resources that exist within their environment and the available network ingress points. To ensure that unauthorized network ingress points are not created in the Azure environment, Commonwealth entities should leverage Azure Policy to control the types of resources that can be deployed and the configuration of those resources. Practical examples include restricting resources to only those authorized and approved for use, enforcing HTTPS encryption on Storage and requiring NSGs to be added to subnets.
-
-|Resource | Link|
-|||
-|Azure Policy Overview | [https://docs.microsoft.com/azure/governance/policy/overview](../governance/policy/overview.md)|
-|Allowed Resource Types sample policy | [https://docs.microsoft.com/azure/governance/policy/samples/allowed-resource-types](../governance/policy/samples/index.md)|
-|Ensure HTTPS Storage Account sample policy|[https://docs.microsoft.com/azure/governance/policy/samples/ensure-https-storage-account](../governance/policy/samples/index.md)|
-|Force NSG on a subnet sample policy| [https://docs.microsoft.com/azure/governance/policy/samples/nsg-on-subnet](../governance/policy/samples/index.md)|
-|
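
As an illustration of the allowed-resource-types example, the policy rule below shows the general shape of such a definition. It is modelled on the built-in sample policy and should be treated as a sketch; the parameter name is an assumption and may differ in your environment:

```json
{
  "if": {
    "not": {
      "field": "type",
      "in": "[parameters('listOfResourceTypesAllowed')]"
    }
  },
  "then": {
    "effect": "deny"
  }
}
```

Any deployment whose resource type is not in the approved list is denied, which prevents unauthorized ingress points (for example, unapproved Public IP resources) from being created.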
-
-## Next steps
-
-Review the article on [Gateway Egress Traffic Management and Control](gateway-egress-traffic.md) for details on managing traffic flows from your Azure environment to other networks using your Gateway components in Azure.
azure-australia Gateway Log Audit Visibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/gateway-log-audit-visibility.md
- Title: Gateway logging, auditing, and visibility in Azure Australia
-description: How to configure Logging, Auditing, and Visibility within the Australian regions to meet the specific requirements of Australian Government policy, regulations, and legislation.
--- Previously updated : 07/22/2019---
-# Gateway logging, auditing, and visibility in Azure Australia
-
-Detecting and responding to cyber security threats relies on generating, collecting and analyzing data related to the operation of a system.
-
-Microsoft has built-in tools in Azure to help you implement logging, auditing, and visibility to manage the security of your systems deployed in Azure. There is also a reference architecture that aligns with the Australian Cyber Security Centre (ACSC) Consumer Guidance and the intent of the Information Security Manual (ISM).
-
-Gateways act as information flow control mechanisms at the network layer and may also control information at the higher layers of the Open Systems Interconnection (OSI) model. Gateways are necessary to control data flows between security domains and prevent unauthorised access from external networks. Given the criticality of gateways in controlling the flow of information between security domains, any failure, particularly at higher classifications, may have serious consequences. As such, robust mechanisms for alerting personnel to situations that may cause cyber security incidents are especially important for gateways.
-
-Implementing logging and alerting capabilities for gateways can assist in detecting cyber security incidents, attempted intrusions, and unusual usage patterns. In addition, storing event logs on a separate secure log server increases the difficulty for an adversary to delete logging information in order to destroy evidence of a targeted cyber intrusion.
-
-## Australian Cyber Security Centre (ACSC) requirements
-
-The overall security requirements for Commonwealth systems are defined in the ACSC Information Security Manual (ISM). To assist Commonwealth entities to meet these requirements within Azure, the *ACSC CONSUMER GUIDE – Microsoft Azure at PROTECTED* and *ACSC CERTIFICATION REPORT – Microsoft Azure* publications detail the following specific requirements related to Logging, Auditing, and Visibility:
-
-1. To mitigate the risks arising from using shared underlying cloud resources, Commonwealth entities must opt in to Microsoft Azure provided capabilities including Azure Security Centre, Azure Monitor, Azure Policy, and Azure Advisor to assist entities to perform real-time monitoring of their Azure workloads
-
-2. The ACSC also recommends that Commonwealth entities forward all mandated security logs to the ACSC for whole of Australian Government monitoring
-
-3. To assist in risk mitigation, Commonwealth entities should configure within their Azure subscriptions:
-
- * Enable Azure Security Centre
- * Upgrade to the Standard Tier
- * Enable Automatic Provisioning of the Microsoft Monitoring Agent to supported Azure VMs
- * Regularly review, prioritise, and mitigate the security recommendations and alerts on the Security Centre dashboard
-
-4. Government entities must enable log and event forwarding from their Azure subscription to the ACSC to provide the ACSC with visibility of non-compliance with this guidance. Azure Event Hubs provides the capability to perform external log streaming to the ACSC or on-premises systems owned by the Commonwealth entity
-
-5. Commonwealth entities should align the logging they enable within Azure to the requirements specified in the ISM
-
-6. Microsoft keeps logs within Azure for 90 days. Customer entities must implement a log archival regime to ensure logs can be kept for the seven years required under the NAA AFDA
-
-7. Commonwealth entities that have on-premises or Azure-based Security Information and Event Management (SIEM) capabilities can also forward logs to those systems
-
-8. Commonwealth entities should implement Network Watcher flow logs for Network Security Groups (NSGs) and Virtual Machines. These logs should be stored in a dedicated storage account containing only security logs, and access to the storage account should be secured with Azure role-based access control (Azure RBAC)
-
-9. Commonwealth entities must implement ACSC Consumer Guidance to ensure Azure workloads meet the intent of the ISM for logging and monitoring. Commonwealth entities must also opt in to Azure capabilities that assist the ACSC to receive real-time monitoring, alerting, and logs associated with Australian Government usage of Azure
-
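As a simple worked example of requirement 6 above, the sketch below computes when a log generated on a given date leaves the 90-day platform window and the date until which an archived copy must be kept under the seven-year requirement (the sample date is arbitrary):

```python
from datetime import date, timedelta

PLATFORM_RETENTION_DAYS = 90   # logs kept within Azure
ARCHIVE_RETENTION_YEARS = 7    # archival requirement under the NAA AFDA

def retention_dates(generated: date):
    """Return (platform_expiry, archive_until) for a log generated on `generated`."""
    platform_expiry = generated + timedelta(days=PLATFORM_RETENTION_DAYS)
    # Naive year arithmetic: a 29 February generation date would need special handling.
    archive_until = generated.replace(year=generated.year + ARCHIVE_RETENTION_YEARS)
    return platform_expiry, archive_until

expiry, archive = retention_dates(date(2022, 1, 15))
print(expiry)   # 2022-04-15
print(archive)  # 2029-01-15
```

The gap between the two dates is the window in which an archival process must copy logs out of the platform, for example via Event Hubs or Storage exports.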
-## Architecture
-
-To confidently understand the network traffic entering and leaving your Azure environment, the necessary logging must be enabled on the right set of components. Doing this ensures complete visibility of the environment and provides the necessary data to do analysis.
-
-![Azure Monitoring Architecture](media/visibility.png)
-
-## Components
-
-The architecture shown above is made up of discrete components that provide the function of either Log Sources, Log Collection, Log Retention, Log Analysis or Incident Response. This architecture includes individual components that are typically involved in internet accessible Azure deployments.
-
-|Functions|Components|
-|||
-|Log Sources|<ul><li>Application Gateway</li><li>VPN Gateway</li><li>Azure Firewall</li><li>Network Virtual Appliances</li><li>Azure Load Balancer</li><li>Virtual Machines</li><li>Domain Naming System (DNS) Servers</li><li>Syslog and/or Log Collection Servers</li><li>NSGs</li><li>Azure Activity Log</li><li>Azure Diagnostic Log</li><li>Azure Policy</li></ul>|
-|Log Collection|<ul><li>Event Hubs</li><li>Network Watcher</li><li>Log Analytics</li></ul>|
-|Log Retention|<ul><li>Azure Storage</li></ul>|
-|Log Analysis|<ul><li>Microsoft Defender for Cloud</li><li>Azure Advisor</li><li>Log Analytics Solutions<ul><li>Traffic Analytics</li><li>DNS Analytics (Preview)</li><li>Activity Log Analytics</li></ul></li><li>SIEM</li><li>ACSC</li></ul>|
-|Incident Response|<ul><li>Azure Alerts</li><li>Azure Automation</li></ul>|
-|
-
-The architecture works by first generating logs from the necessary sources and then collecting them into centralised repositories. Once you've collected the logs, they can be:
-
-* used by Azure analysis services to gain insight,
-* forwarded to external systems, or
-* archived to storage for long-term retention.
-
-To respond to key events or incidents identified by analysis tools, alerts can be configured, and automation developed to take necessary actions for proactive management and response.
-
-## General guidance
-
-When implementing the components listed in this article, the following general guidance applies:
-
-* Validate the region availability of services, ensuring that all data remains within authorised locations and deploy to AU Central or AU Central 2 as the first preference for PROTECTED workloads
-
-* Refer to the *Azure - ACSC Certification Report – Protected 2018* publication for the certification status of individual services and perform self-assessments on any relevant components not included in the report as per the *ACSC CONSUMER GUIDE – Microsoft Azure at PROTECTED*
-
-* For components not referenced in this article, Commonwealth entities should follow the principles included about generating, capturing, analysing, and keeping logs
-
-* Identify and prioritise the logging, auditing, and visibility on high value systems as well as all network ingress and egress points to systems hosted in Azure
-
-* Consolidate logs and minimise the number of instances of logging tools such as storage accounts, Log Analytics workspaces and Event Hubs
-
-* Restrict administrative privileges through Azure role-based access control (Azure RBAC)
-
-* Use Multi-Factor Authentication (MFA) for accounts administering or configuring resources in Azure
-
-* When centralising log collection across multiple subscriptions, ensure that administrators have the necessary privileges in each subscription
-
-* Ensure network connectivity and any necessary proxy configuration for Virtual Machines, including Network Virtual Appliances (NVAs), Log Collection Servers and DNS Servers, to connect to necessary Azure services such as the Log Analytics workspaces, Event Hubs, and Storage
-
-* Configure the Microsoft Monitoring Agent (MMA) to utilise TLS version 1.2
-
-* Use Azure Policy to monitor and enforce compliance with requirements
-
-* Enforce encryption on all data repositories such as Storage and Databases
-
-* Use Locally redundant storage (LRS) and snapshots for availability of Storage Accounts and associated data
-
-* Consider Geo-redundant storage (GRS) or off-site storage to align with Disaster Recovery strategies
-
-|Resource|URL|
-|||
-|Australian Regulatory and Policy Compliance Documents|[https://aka.ms/au-irap](https://aka.ms/au-irap)|
-|Azure products - Australian regions and non-regional|[https://azure.microsoft.com/global-infrastructure/services/?regions=non-regional,australia-central,australia-central-2,australia-east,australia-southeast](https://azure.microsoft.com/global-infrastructure/services/?regions=non-regional,australia-central,australia-central-2,australia-east,australia-southeast)|
-|Microsoft Azure Security and Audit Log Management Whitepaper|[https://download.microsoft.com/download/B/6/C/B6C0A98B-D34A-417C-826E-3EA28CDFC9DD/AzureSecurityandAuditLogManagement_11132014.pdf](https://download.microsoft.com/download/B/6/C/B6C0A98B-D34A-417C-826E-3EA28CDFC9DD/AzureSecurityandAuditLogManagement_11132014.pdf)|
-|Microsoft Monitoring Agent Configuration|[https://docs.microsoft.com/azure/azure-monitor/platform/log-analytics-agent](../azure-monitor/agents/log-analytics-agent.md)|
-|
-
-## Component guidance
-
-This section provides information on the purpose of each component and its role in the overall logging, auditing, and visibility architecture. Additional links are provided to access useful resources such as reference documentation, guides, and tutorials.
-
-## Log sources
-
-Before any analysis, alerting or reporting can be completed, the necessary logs must be generated. Azure logs are categorized into control/management logs, data plane logs, and processed events.
-
-|Type|Description|
-|||
-|Control/management logs|Provide information about Azure Resource Manager operations|
-|Data plane logs|Provide information about events raised as part of Azure resource usage, such as logs in a Virtual Machine and the diagnostics logs available through Azure Monitor|
-|Processed events|Provide information about analysed events/alerts that have been processed by Azure, such as where Microsoft Defender for Cloud has processed and analysed subscriptions to provide security alerts|
-|
-
-### Application Gateway
-
-Azure Application Gateway is one of the possible entry points into an Azure environment, so you need to capture information related to incoming connections communicating with web applications. Application Gateway can provide crucial information relating to web application usage as well as assist in detecting cyber security incidents. Application Gateway sends metadata to the Activity Log and Diagnostic Logs in Azure Monitor, where it can be utilised in Log Analytics or distributed to an Event Hub or Storage Account.
-
-|Resources|Link|
-|||
-|Application Gateway Documentation|[https://docs.microsoft.com/azure/application-gateway/](../application-gateway/index.yml)|
-|Application Gateway quickstart Guide|[https://docs.microsoft.com/azure/application-gateway/quick-create-portal](../application-gateway/quick-create-portal.md)|
-|
-
-### VPN Gateway
-
-The VPN Gateway is a potential entry point for a wide range of communications into the Azure environment, such as the connection to an on-premises environment and administrative traffic. Logging on VPN Gateways provides insight and traceability for the connections made to the Azure environment. Logging can provide auditing and analysis as well as assist in the detection or investigation of malicious or anomalous connections. VPN Gateway logs are sent to the Azure Monitor Activity Log where they can be utilised in Log Analytics or distributed to an Event Hub or Storage Account.
-
-|Resources|Link|
-|||
-|VPN Gateway Documentation|[https://docs.microsoft.com/azure/vpn-gateway/](../vpn-gateway/index.yml)|
-|Australian Government specific VPN Gateway guidance|[Azure VPN Gateway configuration](vpn-gateway.md)|
-|
-
-### Azure Firewall
-
-Azure Firewall provides a controlled exit point from an Azure environment and the logs generated, which include information on attempted and successful outbound connections, are an important element in your logging strategy. These logs can validate that systems are operating as designed, as well as assist in detecting malicious code or actors attempting to connect to unauthorised external systems. Azure Firewall writes logs to the Activity Log and Diagnostic Logs in Azure Monitor where it can be used in Log Analytics, or distributed to an Event Hub or Storage Account.
-
-|Resources|Link|
-|||
-|Azure Firewall Documentation|[https://docs.microsoft.com/azure/firewall/](../firewall/index.yml)|
-|Tutorial: Monitor Azure Firewall logs and metrics|[https://docs.microsoft.com/azure/firewall/tutorial-diagnostics](../firewall/firewall-diagnostics.md)|
-|
-
-### Network Virtual Appliances (NVA)
-
-NVAs can be used to complement the security capabilities available natively in Azure. The logs generated on NVAs can be valuable resources in detecting cyber security incidents and are a key part of an overall logging, auditing, and visibility strategy. To capture logs from NVAs, utilise the Microsoft Monitoring Agent (MMA). For NVAs that don't support the installation of the MMA, consider using a Syslog or other log collection server to relay logs.
-
-|Resources|Link|
-|||
-|Overview of Network Virtual Appliances|[https://azure.microsoft.com/solutions/network-appliances](https://azure.microsoft.com/solutions/network-appliances)|
-|NVA Documentation|Refer to the vendor documentation on the implementation of the relevant NVA in Azure|
-|
-
-### Azure Load Balancer
-
-Azure Load Balancer logs are used to obtain useful information about the connections and usage related to systems deployed in Azure. This can be used for health and availability monitoring, but also forms another key component in gaining the necessary insight into communications traffic and detecting malicious or anomalous traffic patterns. Azure Load Balancer logs to the Activity Log and Diagnostic Logs in Azure Monitor where it can be utilised in Log Analytics or distributed to an Event Hub or Storage Account.
-
-|Resources|Link|
-|||
-|Azure Load Balancer Documentation|[https://docs.microsoft.com/azure/load-balancer](../load-balancer/index.yml)|
-|Metrics and health diagnostics for Standard Load Balancer|[https://docs.microsoft.com/azure/load-balancer/load-balancer-standard-diagnostics](../load-balancer/load-balancer-standard-diagnostics.md)|
-|
-
-### Virtual machines
-
-Virtual machines are endpoints that send and receive network communications, process data, and provide services. As Virtual machines can host data or crucial system services, ensuring that they're operating correctly and detecting cyber security incidents can be critical. Virtual machines collect various event and audit logs that can track the operation of the system and the actions done on that system. Logs collected on Virtual Machines can be forwarded to a Log Analytics workspace using the Log Analytics agent where they can be analyzed by Microsoft Defender for Cloud. Virtual machines can also integrate directly with Azure Event Hubs or with a SIEM solution, either directly or through a log collection server.
-
-|Resources|Link|
-|||
-|Virtual Machines|[https://docs.microsoft.com/azure/virtual-machines](../virtual-machines/index.yml)|
-|Collect Data from Virtual Machines|[https://docs.microsoft.com/azure/log-analytics/log-analytics-quick-collect-azurevm](../azure-monitor/vm/monitor-virtual-machine.md)|
-|Stream Virtual Machine Logs to Event Hubs|[https://docs.microsoft.com/azure/monitoring-and-diagnostics/azure-diagnostics-streaming-event-hubs](../azure-monitor/agents/diagnostics-extension-stream-event-hubs.md)|
-|
-
-### Domain Name Services (DNS) servers
-
-DNS Server logs provide key information related to the services that systems are trying to access, either internally or externally. Capturing DNS logs can help identify a cyber security incident and provide insight into the type of incident and the systems that may be affected. The Microsoft Monitoring Agent (MMA) can be used on DNS Servers to forward the logs through to Log Analytics for use in DNS Analytics (Preview).
-
-|Resources|Link|
-|||
-|Azure Name Resolution for Virtual Networks|[https://docs.microsoft.com/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md)|
-|
-
-### Syslog and log collection servers
-
-To receive logs from Network Virtual Appliances, or custom security logs from other systems for use within a SIEM, dedicated servers can be deployed within Azure VNets. Syslog logs can be collected on a Syslog server and relayed to Log Analytics for analysis. A Log Collection Server is a generic term for any log aggregation and distribution capability used by centralised monitoring systems or SIEMs. These can be used to simplify network architecture and security and to filter and aggregate logs before being distributed to the centralised capability.
-
-|Resources|Link|
-|||
-|Syslog data sources in Log Analytics|[https://docs.microsoft.com/azure/azure-monitor/platform/data-sources-syslog](../azure-monitor/agents/data-sources-syslog.md)|
-|Log Collection Server|Refer to vendor documentation for details on monitoring and SIEM architecture|
-|
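Each relayed syslog message begins with a priority value that combines facility and severity, and decoding it is useful when filtering appliance logs on a collection server before forwarding them to Log Analytics. The following Python sketch is illustrative only (it is not part of any Azure agent) and splits a PRI value using the RFC 5424 convention (PRI = facility × 8 + severity):

```python
# Decode a syslog PRI value into its facility number and severity name
# (RFC 5424 convention: PRI = facility * 8 + severity).
SEVERITIES = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]

def decode_pri(pri: int) -> tuple:
    """Return (facility, severity name) for a syslog priority value."""
    facility, severity = divmod(pri, 8)
    return facility, SEVERITIES[severity]

# <134> is local0.info, a common default for network appliance logs
print(decode_pri(134))  # (16, 'info')
```

A collection server could use a check like this to drop `debug`-level noise and relay only `notice` and above to the central workspace.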
-
-### Network Security Groups (NSGs)
-
-NSGs control traffic into and out of virtual networks in Azure. NSGs apply rules for the traffic flows that are permitted or denied, which includes traffic within Azure and between Azure and external networks such as on-premises or the Internet. NSGs are applied to subnets within a virtual network or to individual network interfaces. To capture information on the traffic entering and leaving systems in Azure, NSG logs can be enabled through the Network Watcher NSG Flow Logs feature. These logs are used to form a baseline for the standard operation of a system and are the data source for Traffic Analytics, which provides detailed insights into the traffic patterns of systems hosted in Azure.
-
-|Resources|Link|
-|||
-|Network Security Group Documentation|[https://docs.microsoft.com/azure/virtual-network/security-overview](../virtual-network/network-security-groups-overview.md)|
-|Introduction to flow logging for network security groups|[https://docs.microsoft.com/azure/network-watcher/network-watcher-nsg-flow-logging-overview](../network-watcher/network-watcher-nsg-flow-logging-overview.md)|
-|Tutorial: Log network traffic to and from a Virtual Machine using the Azure portal|[https://docs.microsoft.com/azure/network-watcher/network-watcher-nsg-flow-logging-portal](../network-watcher/network-watcher-nsg-flow-logging-portal.md)|
-|
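To illustrate what NSG Flow Logs capture, the sketch below parses a version 1 flow tuple into a readable record. The field order (timestamp, source and destination IP and port, protocol T/U, direction I/O, decision A/D) is assumed from the flow log schema; treat this as a simplified example rather than a complete parser:

```python
# Parse an NSG flow log version 1 flow tuple into a dict. Field order assumed
# from the NSG flow log schema: timestamp, source IP, destination IP, source
# port, destination port, protocol (T/U), direction (I/O), decision (A/D).
def parse_flow_tuple(raw: str) -> dict:
    ts, src_ip, dst_ip, src_port, dst_port, proto, direction, decision = raw.split(",")
    return {
        "timestamp": int(ts),
        "source": (src_ip, int(src_port)),
        "destination": (dst_ip, int(dst_port)),
        "protocol": {"T": "TCP", "U": "UDP"}[proto],
        "inbound": direction == "I",
        "allowed": decision == "A",
    }

# An allowed outbound TCP flow to port 443 (sample values, not real traffic)
flow = parse_flow_tuple("1542110377,10.0.0.4,13.67.143.118,44931,443,T,O,A")
print(flow["protocol"], flow["allowed"])  # TCP True
```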
-
-### Azure Activity Log
-
-Azure Activity Log, which is part of Azure Monitor, is a subscription log that provides insight into subscription-level events that have occurred in Azure. The Activity Log can help determine the 'what, who, and when' for any write operations (PUT, POST, DELETE) taken ***on*** the resources in a subscription. The Activity Log is crucial for tracking the configuration changes made within the Azure environment. Azure Activity Logs are automatically available for use in Log Analytics solutions and can be sent to Event Hubs or Azure Storage for processing or retention.
-
-|Resources|Link|
-|||
-|Azure Activity Log Documentation|[https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-overview-activity-logs](../azure-monitor/essentials/platform-logs-overview.md)|
-|Stream the Azure Activity Log to Event Hubs|[https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-stream-activity-logs-event-hubs](../azure-monitor/essentials/activity-log.md#legacy-collection-methods)|
-|
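Once Activity Log records have been exported (for example, to a Storage Account), the 'what, who, and when' of write operations can be summarised with a few lines of code. The Python sketch below uses a simplified subset of the event schema (`eventTimestamp`, `caller`, `operationName`, `httpRequest`), so field names should be verified against the actual export format:

```python
# Filter exported Activity Log records down to write operations (PUT, POST,
# DELETE) and report the 'what, who, and when' of each change.
WRITE_METHODS = {"PUT", "POST", "DELETE"}

def write_operations(events):
    """Return (when, who, what) tuples for write operations only."""
    return [
        (e["eventTimestamp"], e["caller"], e["operationName"])
        for e in events
        if e.get("httpRequest", {}).get("method") in WRITE_METHODS
    ]

# Sample records with hypothetical callers, for illustration only
events = [
    {"eventTimestamp": "2019-07-22T01:00:00Z", "caller": "admin@agency.gov.au",
     "operationName": "Microsoft.Storage/storageAccounts/write",
     "httpRequest": {"method": "PUT"}},
    {"eventTimestamp": "2019-07-22T01:05:00Z", "caller": "reader@agency.gov.au",
     "operationName": "Microsoft.Storage/storageAccounts/read",
     "httpRequest": {"method": "GET"}},
]
print(write_operations(events))  # only the PUT by admin@agency.gov.au remains
```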
-
-### Azure Diagnostic Log
-
-Azure Monitor diagnostic logs are logs emitted by an Azure service that provide rich, frequent data about the operation of that service. Diagnostic logs provide insight into the operation of a resource at a detailed level and can be used for a range of requirements such as auditing or troubleshooting. Azure Diagnostic Logs are automatically available for use in Log Analytics solutions and can be sent to Event Hubs or Azure Storage for processing or retention.
-
-|Resources|Link|
-|||
-|Azure Diagnostic Log Documentation|[https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-overview-of-diagnostic-logs](../azure-monitor/essentials/platform-logs-overview.md)|
-|Support services for Diagnostic Logs|[https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-diagnostic-logs-schema](../azure-monitor/essentials/resource-logs-schema.md)|
-|
-
-### Azure Policy
-
-Azure Policy enforces rules on how resources can be deployed, such as the type, location, and configuration. Azure Policy can be configured to ensure resources can only be deployed if they're compliant with requirements. Azure Policy is a core component to maintaining the integrity of an Azure environment. Events related to Azure Policy are logged to the Azure Activity Log and are automatically available for use in Log Analytics solutions or can be sent to Event Hubs or Azure Storage for processing or retention.
-
-|Resources|Link|
-|||
-|Azure Policy Documentation|[https://docs.microsoft.com/azure/governance/policy](../governance/policy/index.yml)|
-|Leveraging Azure Policy and Resource Manager templates using Azure Blueprints|[https://docs.microsoft.com/azure/governance/blueprints/overview](../governance/blueprints/overview.md)|
-|
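The kind of rule Azure Policy enforces can be illustrated with a small sketch. The real service evaluates JSON policy definitions server-side at deployment time; the Python below only mimics an 'allowed locations' check, with the region list chosen to match the Australian regions covered by this guidance:

```python
# Mimic an 'allowed locations' policy: deny any resource whose location is
# not in the approved Australian regions. Illustrative only -- Azure Policy
# itself evaluates JSON policy rules at deployment time, not code like this.
ALLOWED_LOCATIONS = {"australiacentral", "australiacentral2",
                     "australiaeast", "australiasoutheast"}

def policy_effect(resource: dict) -> str:
    """Return 'allow' or 'deny' for a proposed resource deployment."""
    return "allow" if resource.get("location") in ALLOWED_LOCATIONS else "deny"

print(policy_effect({"type": "Microsoft.Storage/storageAccounts",
                     "location": "australiaeast"}))   # allow
print(policy_effect({"type": "Microsoft.Compute/virtualMachines",
                     "location": "westus"}))          # deny
```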
-
-## Log collection
-
-Once generated by the various log sources, logs need to be stored in a centralised location for ongoing access and analysis. Azure provides multiple methods and options for log collection that can be utilised depending on the log type and requirements.
-
-### Event Hubs
-
-The purpose of an Event Hub is to aggregate the log data from the various sources for distribution. From the Event Hub, the log data can be sent on to a SIEM, to the ACSC for compliance, and to Storage for long-term retention.
-
-|Resources|Link|
-|||
-|Event Hubs Documentation|[https://docs.microsoft.com/azure/event-hubs](../event-hubs/index.yml)|
-|Guidance on Event Hubs and External Tools|[https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitor-stream-monitoring-data-event-hubs](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md)|
-|
-
-### Log Analytics
-
-Log Analytics is part of Azure Monitor and is used for log analysis. Log Analytics uses a workspace as the storage mechanism where log data can be made available for a variety of analysis tools and solutions available within Azure. Log Analytics integrates with a wide range of Azure components directly, as well as Virtual Machines through the Microsoft Monitoring Agent.
-
-|Resources|Link|
-|||
-|Log Analytics Documentation|[https://docs.microsoft.com/azure/azure-monitor](../azure-monitor/index.yml)|
-|Tutorial: Analyze Data in Log Analytics|[https://docs.microsoft.com/azure/azure-monitor/learn/tutorial-viewdata](../azure-monitor/logs/log-analytics-tutorial.md)|
-|
-
-### Network Watcher
-
-The use of Network Watcher is recommended by the ACSC to assist in understanding and capturing network traffic in an Azure subscription. NSG Flow logs provide the input to the Traffic Analytics solution in Log Analytics, which provides increased visibility, analysis and reporting natively through Azure. Network Watcher also provides a packet capture capability directly from the Azure portal without the need to sign in to the Virtual Machine. Packet capture allows you to create packet capture sessions to track traffic to and from a virtual machine.
-
-|Resources|Link|
-|||
-|Network Watcher|[https://docs.microsoft.com/azure/network-watcher](../network-watcher/index.yml)|
-|Packet Capture Overview|[https://docs.microsoft.com/azure/network-watcher/network-watcher-packet-capture-overview](../network-watcher/network-watcher-packet-capture-overview.md)|
-|
-
-## Log retention
-
-For Australian Government organisations, the logs captured within Azure must be retained in accordance with the National Archives of Australia [Administrative Functions Disposal Authority (AFDA)](https://www.naa.gov.au/information-management/records-authorities/types-records-authorities/afda-express-version-2-functions), which specifies retaining logs for up to seven years.
-
-|Log Location|Retention Period|
-|||
-|Azure Activity Log|Up to 90 days|
-|Log Analytics workspace|Up to two years|
-|Event Hub|Up to seven days|
-|
-
-It is your responsibility to ensure that logs are archived appropriately to adhere to AFDA and other legislative requirements.
-
-### Azure Storage
-
-Azure Storage is the repository for logs for long-term retention in Azure. Azure Storage can be used to archive logs from Azure, including Event Hubs, Azure Activity Log, and Azure Diagnostic Logs. The retention period for data in Storage can be set to zero or to a specified number of days. A retention of zero days means logs are kept forever; otherwise, the value can be any number of days from 1 to 2147483647.
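The documented retention range can be captured in a small validation helper. This Python sketch is illustrative only; the setting itself is configured on the diagnostic settings or Storage account, not through code like this:

```python
# Validate a log retention value against the documented range:
# 0 keeps logs forever, otherwise any number of days from 1 to 2147483647.
MAX_RETENTION_DAYS = 2147483647  # 2**31 - 1

def validate_retention(days: int) -> str:
    """Describe a retention setting, raising ValueError if out of range."""
    if days == 0:
        return "retain forever"
    if 1 <= days <= MAX_RETENTION_DAYS:
        return f"retain for {days} days"
    raise ValueError(f"retention must be 0 or 1..{MAX_RETENTION_DAYS} days")

print(validate_retention(0))     # retain forever
print(validate_retention(2555))  # retain for 2555 days (roughly seven years)
```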
-
-|Resources|Link|
-|||
-|Azure Storage Documentation|[https://docs.microsoft.com/azure/storage](../storage/index.yml)|
-|Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage|[https://docs.microsoft.com/azure/event-hubs/event-hubs-capture-overview](../event-hubs/event-hubs-capture-overview.md)|
-|Tutorial: Archive Azure metric and log data using Azure Storage|[https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitor-tutorial-archive-monitoring-data](../azure-monitor/essentials/platform-logs-overview.md)|
-|Azure Storage Replication|[https://docs.microsoft.com/azure/storage/common/storage-redundancy](../storage/common/storage-redundancy.md)|
-|Creating a Snapshot of a Blob|[https://docs.microsoft.com/rest/api/storageservices/creating-a-snapshot-of-a-blob](/rest/api/storageservices/creating-a-snapshot-of-a-blob)|
-|
-
-## Log analysis
-
-Once generated and stored in a centralised location, the logs must be analysed to assist with detecting attempted or successful security incidents. When security incidents are detected, an agency needs the ability to respond to those incidents and to track, contain, and remediate any threats.
-
-### Microsoft Defender for Cloud
-
-Microsoft Defender for Cloud provides unified security management and advanced threat protection. Microsoft Defender for Cloud can apply security policies across workloads, limit exposure to threats, and detect and respond to attacks. Microsoft Defender for Cloud provides dashboards and analysis across a wide range of Azure components. The use of Microsoft Defender for Cloud is specified as a requirement in the ACSC consumer guidance.
-
-|Resources|Link|
-|||
-|Microsoft Defender for Cloud documentation|[https://docs.microsoft.com/azure/security-center](../security-center/index.yml)|
-|Quickstart: Enable Microsoft Defender for Cloud's enhanced security features|[https://docs.microsoft.com/azure/security-center/security-center-get-started](../security-center/enable-enhanced-security.md)|
-|||
-
-### Traffic Analytics
-
-Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in Azure. Traffic Analytics analyses Network Watcher NSG flow logs to provide insights into traffic flow in Azure. Traffic Analytics is used to provide dashboards, reports, analysis, and event response capabilities related to the network traffic seen across virtual networks. Traffic Analytics gives significant insight and helps in identifying and resolving cyber security incidents.
-
-|Resources|Link|
-|||
-|Traffic Analytics Documentation|[https://docs.microsoft.com/azure/network-watcher/traffic-analytics](../network-watcher/traffic-analytics.md)|
-|
-
-### Azure Advisor
-
-Azure Advisor analyses resource configuration and other data to recommend solutions to help improve the performance, security, and high availability of resources while looking for opportunities to reduce overall Azure spend. Azure Advisor is recommended by the ACSC and provides easily accessible and detailed advice on the configuration of the Azure environment.
-
-|Resources|Link|
-|||
-|Azure Advisor Documentation|[https://docs.microsoft.com/azure/advisor](../advisor/index.yml)|
-|Get started with Azure Advisor|[https://docs.microsoft.com/azure/advisor/advisor-get-started](../advisor/advisor-get-started.md)|
-|
-
-### DNS Analytics (Preview)
-
-DNS Analytics is a Log Analytics Solution that collects, analyses, and correlates Windows DNS analytic and audit logs and other related data. DNS Analytics identifies clients that try to resolve malicious domain names, stale resource records, frequently queried domain names, and talkative DNS clients. DNS Analytics also provides insight into request load on DNS servers and dynamic DNS registration failures. DNS Analytics is used to provide dashboards, reports, analysis, and event response capabilities related to the DNS queries made within an Azure environment. DNS Analytics gives significant insight and helps in identifying and resolving cyber security incidents.
-
-|Resources|Link|
-|||
-|DNS Analytics Documentation|[https://docs.microsoft.com/azure/azure-monitor/insights/dns-analytics](../azure-monitor/insights/dns-analytics.md)|
-|
-
-### Activity Log Analytics
-
-Activity Log Analytics is a Log Analytics Solution that helps analyse and search the Azure Activity Log across multiple Azure subscriptions. Activity Log Analytics is used to provide centralised dashboards, reports, analysis, and event response capabilities related to the actions performed on resources across the whole Azure environment. Activity Log Analytics can assist with auditing and investigation.
-
-|Resources|Link|
-|||
-|Collect and analyze Azure activity logs in Log Analytics|[https://docs.microsoft.com/azure/azure-monitor/platform/collect-activity-logs](../azure-monitor/essentials/activity-log.md)|
-|
-
-### Security Information and Event Management (SIEM)
-
-A SIEM is a system that provides centralised storage, auditing and analysis of security logs, with defined mechanisms for ingesting a wide range of log data and intelligent tools for analysis, reporting and incident detection and response. You can use SIEM capabilities that include Azure logging information to supplement the security capabilities provided natively in Azure. Commonwealth entities can utilise a SIEM hosted on Virtual Machines in Azure, on-premises or as a Software as a Service (SaaS) capability depending on specific requirements.
-
-|Resources|Link|
-|||
-|Microsoft Sentinel (Preview)|[https://azure.microsoft.com/services/azure-Sentinel](https://azure.microsoft.com/services/azure-sentinel)|
-|SIEM Documentation|Refer to vendor documentation for SIEM architecture and guidance|
-|Use Azure Monitor to integrate with SIEM tools|[https://azure.microsoft.com/blog/use-azure-monitor-to-integrate-with-siem-tools](https://azure.microsoft.com/blog/use-azure-monitor-to-integrate-with-siem-tools)|
-|
-
-### Australian Cyber Security Centre
-
-The Australian Cyber Security Centre (ACSC) is the Australian Government's lead on national cyber security. It brings together cyber security capabilities from across the Australian Government to improve the cyber resilience of the Australian community and support the economic and social prosperity of Australia in the digital age. The ACSC recommends that Commonwealth entities forward all mandated system-generated log files, events, and logs to the ACSC for whole of Australian Government monitoring.
-
-|Resources|Link|
-|||
-|Australian Cyber Security Centre website|[https://www.acsc.gov.au](https://www.acsc.gov.au)|
-|
-
-## Incident response
-
-Generating the appropriate logs, collecting them into centralised repositories and performing analysis increases understanding of systems and provides mechanisms to detect cyber security incidents. After incidents or events have been detected, the next step is to react to those events and perform actions to maintain system health and protect services and data from compromise. Azure provides a combination of services to respond effectively to any events that occur.
-
-### Azure Alerts
-
-Azure Alerts can be used to notify support and security personnel in response to particular events. This allows a Commonwealth entity to proactively respond to the detection of relevant events raised by the analysis services listed in this article.
-
-|Resources|Link|
-|||
-|Overview of Alerts in Microsoft Azure|[https://docs.microsoft.com/azure/monitoring-and-diagnostics/monitoring-overview-alerts](../azure-monitor/alerts/alerts-overview.md)|
-|Managing and responding to security alerts in Microsoft Defender for Cloud|[https://docs.microsoft.com/azure/security-center/security-center-managing-and-responding-alerts](../security-center/security-center-managing-and-responding-alerts.md)|
-|Azure Monitor Log Alerts|[https://docs.microsoft.com/azure/azure-monitor/learn/tutorial-response](../azure-monitor/alerts/alerts-log.md)|
-|
-
-### Azure Automation
-
-Azure Automation enables Commonwealth entities to trigger actions in response to events. This could be to start a packet capture on Virtual Machines, run a workflow, stop or start Virtual Machines or services, or perform a range of other tasks. Automation enables rapid response to alerts without manual intervention, reducing the response time and severity of an incident or event.
-
-|Resources|Link|
-|||
-|Azure Automation Documentation|[https://docs.microsoft.com/azure/automation](../automation/index.yml)|
-|How-to guide: Use an alert to trigger an Azure Automation runbook|[https://docs.microsoft.com/azure/automation/automation-create-alert-triggered-runbook](../automation/automation-create-alert-triggered-runbook.md)|
-|
-
-## Next steps
-
-Review the article on [Gateway Secure Remote Administration](gateway-secure-remote-administration.md) for details on securely managing your Gateway environment in Azure.
azure-australia Gateway Secure Remote Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/gateway-secure-remote-administration.md
- Title: Secure remote administration of gateway in Azure Australia
-description: Guidance on configuring secure remote administration within the Australian regions to meet the specific requirements of Australian Government policy, regulations, and legislation.
--- Previously updated : 07/22/2019---
-# Secure remote administration of your Gateway in Azure Australia
-
-It's critical to the availability and integrity of any system that administrative activities are conducted securely and are controlled. Administrative activities should be done from a secure device, over a secure connection, and be backed by strong authentication and authorisation processes. Secure Remote Administration ensures that only authorised actions are performed and only by authorised administrators.
-
-This article provides information on implementing a secure remote administration capability for an internet accessible system hosted in Azure that aligns with the Australian Cyber Security Centre (ACSC) Consumer Guidance and the intent of the ACSC's Information Security Manual (ISM).
-
-## Australian Cyber Security Centre (ACSC) requirements
-
-The overall security requirements for Commonwealth systems are defined in the ISM. To assist Commonwealth entities in providing secure administration, the ACSC has published [ACSC Protect: Secure Administration](https://www.cyber.gov.au/acsc/view-all-content/publications/secure-administration)
-
-This document discusses the importance of secure administration and suggests one method of implementing a secure administration environment. The document describes the elements of a secure administration solution as follows:
-
-|Element |Description |
-|||
-|Privileged access control |Controlling access to privileged accounts is a fundamental security control that will protect privileged accounts from misuse. The access control methodology will encompass the concepts of 'least privilege' and 'need to have' as well as processes and procedures for managing service accounts and staff movements. |
-|Multi-factor authentication |Implementing additional factors of authentication beyond usernames and passphrases, such as physical tokens or smartcards, can help protect critical assets. If an adversary compromises credentials for privileged accounts, as all administrative actions would first need to go through some form of multi-factor authentication, the consequences can be greatly reduced.|
-|Privileged workstations|The use of a known secure environment for administrative tasks can result in a lesser risk of the network being compromised due to the implementation of additional security controls.|
-|Logging and auditing |Automated generation, collection, and analysis of security and administrative related events from workstations, servers, network devices, and jump boxes will enable detection of compromises and attempted compromises. Automation enables organisations to respond more quickly, reducing the implications of a compromise.|
-|Network segmentation and segregation|Segmenting a network into logical zones such as differing security domains, and further segregating these logical networks by restricting the types of data that flow from one zone to another, restricts lateral movement. Segmentation prevents an adversary from gaining access to additional resources.|
-|Jump boxes|A jump box is a hardened remote access server, commonly utilising Microsoft's Remote Desktop Services or Secure Shell (SSH) software. Jump boxes act as a stepping point for administrators accessing critical systems with all administrative actions performed from the dedicated host.|
-
-This article provides a reference architecture for how the elements above can be used for secure administration of systems deployed in Azure.
-
-## Architecture
-
-Providing a secure administration capability requires multiple components that all work together to form a cohesive solution. In the reference architecture provided, the components are mapped to the elements described in [ACSC Protect: Secure Administration](https://www.cyber.gov.au/acsc/view-all-content/publications/secure-administration)
-
-![Azure Secure Remote Administration Architecture](media/remote-admin.png)
-
-## Components
-
-The architecture is designed to ensure that a privileged account is granted only the necessary permissions, is securely identified, and then provided access to administrative interfaces only from an authorised device and through secure communications mechanisms that are controlled and audited.
-
-|Solution| Components|Elements|
-||||
-|Secure Devices |<ul><li>Privileged Workstation</li><li>Mobile Device</li><li>Microsoft Intune</li><li>Group Policy</li><li>Jump Server / Bastion Host</li><li>Just in Time (JIT) Administration</li></ul> |<ul><li>Privileged workstations</li><li>Jump boxes</li></ul>|
-|Secure Communication |<ul><li>Azure portal</li><li>Azure VPN Gateway</li><li>Remote Desktop (RD) Gateway</li><li>Network Security Groups (NSGs)</li></ul> |<ul><li>Network segmentation and segregation</li></ul>|
-|Strong Authentication |<ul><li>Domain Controller (DC)</li><li>Azure Active Directory (Azure AD)</li><li>Network Policy Server (NPS)</li><li>Azure AD MFA</li></ul> |<ul><li>Multi-factor authentication</li></ul> |
-|Strong Authorisation |<ul><li>Identity and Access Management (IAM)</li><li>Privileged Identity Management (PIM)</li><li>Conditional Access</li></ul>|<ul><li>Privileged access control</li></ul>|
-|||
-
->[!NOTE]
->For more information on the Logging and auditing element, see the article on [Gateway logging, auditing, and visibility](gateway-log-audit-visibility.md)
-
-## Administration workflow
-
-Administering systems deployed in Azure is divided into two distinct categories: administering the Azure configuration and administering the workloads deployed in Azure. Azure configuration is conducted through the Azure portal, while workload administration is completed through administrative mechanisms such as Remote Desktop Protocol (RDP), Secure Shell (SSH) or, for PaaS capabilities, tools such as SQL Server Management Studio.
-
-Gaining access for administration is a multi-step process involving the components listed in the architecture and requires access to the Azure portal and Azure configuration before access can be made to Azure workloads.
-
->[!NOTE]
-> The steps described here are the general process using the Graphical User Interface (GUI) components of Azure. These steps can also be completed using other interfaces such as PowerShell.
-
-### Azure configuration and Azure portal access
-
-|Step |Description |
-|||
-|Privileged Workstation sign in |The administrator signs in to the privileged workstation using administrative credentials. Group Policy controls prevent non-administrative accounts from authenticating to the privileged workstation and prevent administrative accounts from authenticating to non-privileged workstations. Microsoft Intune manages the compliance of the privileged workstation to ensure that it is up-to-date with software patches, antimalware, and other compliance requirements. |
-|Azure portal sign in |The administrator opens a web browser to the Azure portal, which is encrypted using Transport Layer Security (TLS), and signs in using administrative credentials. The authentication request is processed through Azure Active Directory directly or through authentication mechanisms such as Active Directory Federation Services (AD FS) or Pass-through authentication. |
-|Azure AD MFA |Azure AD MFA sends an authentication request to the registered mobile device of the privileged account. The mobile device is managed by Intune to ensure compliance with security requirements. The administrator must authenticate first to the mobile device and then to the Microsoft Authenticator App using a PIN or Biometric system before the authentication attempt is authorised to Azure AD MFA. |
-|Conditional Access |Conditional Access policies check the authentication attempt to ensure that it meets the necessary requirements such as the IP address the connection is coming from, group membership for the privileged account, and the management and compliance status of the privileged workstation as reported by Intune. |
-|Privileged Identity Management (PIM) |Through the Azure portal the administrator can now activate or request activation for the privileged roles for which they have authorisation through PIM. PIM ensures that privileged accounts do not have any standing administrative privileges and that all requests for privileged access are only for the time required to perform administrative tasks. PIM also provides logging of all requests and activations for auditing purposes. |
-|Identity and Access Management|Once the privileged account has been securely identified and roles activated, the administrator is provided access to the Azure subscriptions and resources that they have been assigned permissions to through Identity and Access Management.|
-
-Once the privileged account has completed the steps to gain administrative access to the Azure portal, access to the workloads can be configured and administrative connections can be made.
-
-### Azure workload administration
-
-|Step |Description|
-|||
-|Just in Time (JIT) Access|To obtain access to virtual machines, the Administrator uses JIT to request access to RDP to the Jump Server from the RD Gateway IP address and RDP or SSH from the Jump Server to the relevant workload virtual machines.|
-|Azure VPN Gateway|The administrator now establishes a Point-to-Site IPSec VPN connection from their privileged workstation to the Azure VPN Gateway, which performs certificate authentication to establish the connection.|
-|RD Gateway|The administrator now attempts an RDP connection to the Jump Server with the RD Gateway specified in the Remote Desktop Connection configuration. The RD Gateway has a private IP address that is reachable through the Azure VPN Gateway connection. Policies on the RD Gateway control whether the privileged account is authorised to access the requested Jump Server. The RD Gateway prompts the administrator for credentials and forwards the authentication request to the Network Policy Server (NPS).|
-|Network Policy Server (NPS)|The NPS receives the authentication request from the RD Gateway and validates the username and password against Active Directory before sending a request to Azure Active Directory to trigger an Azure AD MFA authentication request.|
-|Azure AD MFA|Azure AD MFA sends an authentication request to the registered mobile device of the privileged account. The mobile device is managed by Intune to ensure compliance with security requirements. The administrator must authenticate first to the mobile device and then to the Microsoft Authenticator App using a PIN or Biometric system before the authentication attempt is authorised to Azure AD MFA.|
-|Jump Server|Once successfully authenticated, the RDP connection is encrypted using Transport Layer Security (TLS) and then sent through the encrypted IPSec tunnel to the Azure VPN Gateway, through the RD Gateway and on to the Jump Server. From the Jump Server, the administrator can now RDP or SSH to workload virtual machines as specified in the JIT request.|
-
-## General guidance
-
-When implementing the components listed in this article, the following general guidance applies:
-
-* Validate the region availability of services, ensuring that all data remains within authorised locations and deploy to AU Central or AU Central 2 as the first preference for PROTECTED workloads
-
-* Refer to the *Azure - ACSC Certification Report – Protected 2018* publication for the certification status of individual services and perform self-assessments on any relevant components not included in the report as per the *ACSC CONSUMER GUIDE – Microsoft Azure at PROTECTED*
-
-* Ensure network connectivity and any necessary proxy configuration for access to necessary authentication components such as Azure AD, ADFS, and PTA
-
-* Use Azure Policy to monitor and enforce compliance with requirements
-
-* Ensure virtual machines, especially Active Directory Domain Controllers, are stored in encrypted storage accounts and utilise Azure Disk Encryption
-
-* Create and maintain robust identity and administrative privilege management processes and governance to underpin the technical controls listed in this article
-
-|Resource|URL|
-|||
-|Australian Regulatory and Policy Compliance Documents|[Australian Regulatory and Policy Compliance Documents](https://aka.ms/au-irap)|
-|Azure products - Australian regions and non-regional|[Azure products - Australian regions and non-regional](https://azure.microsoft.com/global-infrastructure/services/?regions=non-regional,australia-central,australia-central-2,australia-east,australia-southeast)|
-|Strategies to Mitigate Cyber Security Incidents|[Strategies to Mitigate Cyber Security Incidents](https://acsc.gov.au/infosec/mitigationstrategies.htm)|
-|ACSC Protect: Secure Administration|[ACSC Protect: Secure Administration](https://www.cyber.gov.au/acsc/view-all-content/publications//secure-administration)|
-|How To: Integrate your Remote Desktop Gateway infrastructure using the Network Policy Server (NPS) extension and Azure AD|[Integrate RD Gateway with NPS and Azure AD](../active-directory/authentication/howto-mfa-nps-extension-rdg.md)|
-
-## Component guidance
-
-This section provides information on the purpose of each component and its role in the overall Secure Remote Administration architecture. Additional links are provided to access useful resources such as reference documentation, guides, and tutorials.
-
-## Secure devices
-
-The physical devices used by privileged users to perform administrative functions are valuable targets for malicious actors. Maintaining the security and integrity of the physical devices, ensuring that they are free from malicious software, and protecting them from compromise are key parts of providing a secure remote administration capability. This involves high priority security configuration as specified in the ACSC's Essential Eight Strategies to Mitigate Cyber Security Incidents such as application filtering, patching applications, application hardening, and patching operating systems. These capabilities must be installed, configured, audited, validated, and reported on to ensure the state of a device is compliant with organisation requirements.
-
-### Privileged workstation
-
-The privileged workstation is a hardened machine that can be used to perform administrative duties and is only accessible to administrative accounts. The privileged workstation should have policies and configuration in place to limit the software that can be run and its access to network resources and the internet, and credentials should be protected in the event that the device is stolen or compromised.
-
-|Resources|Link|
-|||
-|Privileged Access Workstations Architecture Overview|[https://docs.microsoft.com/security/compass/privileged-access-deployment](/security/compass/privileged-access-deployment)|
-|Securing Privileged Access Reference Material|[https://docs.microsoft.com/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material)|
-
-### Mobile device
-
-A mobile device is at greater risk of accidental loss or theft due to its portability and size and needs to be secured appropriately. The mobile device provides a strong additional factor for authentication given its ability to enforce authentication for device access, traceability through location services, encryption functions, and the ability to be remotely wiped. When using a mobile device as an additional authentication factor for Azure, the device should be configured to use the Microsoft Authenticator App with PIN or Biometric authentication and not through phone calls or text messages.
-
-|Resources|Link|
-|||
-|Azure AD Authentication Methods|[https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-methods](../active-directory/authentication/concept-authentication-methods.md)|
-|How to use the Microsoft Authenticator App|[https://support.microsoft.com/help/4026727/microsoft-account-how-to-use-the-microsoft-authenticator-app](https://support.microsoft.com/help/4026727/microsoft-account-how-to-use-the-microsoft-authenticator-app)|
-
-### Microsoft Intune
-
-Intune is the component of Enterprise Mobility + Security that manages mobile devices and apps. It integrates closely with other components like Azure Active Directory for identity and access control and Azure Information Protection for data protection. Intune provides policies for workstations and mobile devices to set compliance requirements for accessing resources and provides reporting and auditing capabilities for gaining insight into the status of administrative devices.
-
-|Resources|Link|
-|||
-|Microsoft Intune Documentation|[https://docs.microsoft.com/intune/](/intune/)|
-|Get started with Device Compliance in Intune|[https://docs.microsoft.com/intune/device-compliance-get-started](/intune/device-compliance-get-started)|
-
-### Group Policy
-
-Group Policy is used to control the configuration of operating systems and applications. Security policies control the authentication, authorisation, and auditing settings of a system. Group Policy is used to harden the privileged workstation, protect administrative credentials and restrict non-privileged accounts from accessing privileged devices.
-
-|Resources|Link|
-|||
-|Allow sign in locally Group Policy setting|[https://docs.microsoft.com/windows/security/threat-protection/security-policy-settings/allow-log-on-locally](/windows/security/threat-protection/security-policy-settings/allow-log-on-locally)|
-
-### Jump Server / Bastion Host
-
-The Jump Server / Bastion Host is a centralised point for administration. It has the tools required to perform administrative duties, but also has the network access necessary to connect to resources on administrative ports. The Jump Server is the central point for administering Virtual Machine workloads in this article, but it can also be configured as the authorised point for administering Platform as a Service (PaaS) capabilities such as SQL. Access to PaaS capabilities can be restricted on a per service basis using identity and network controls.
-
-|Resources|Link|
-|||
-|Implementing Secure Administrative Hosts|[https://docs.microsoft.com/windows-server/identity/ad-ds/plan/security-best-practices/implementing-secure-administrative-hosts](/windows-server/identity/ad-ds/plan/security-best-practices/implementing-secure-administrative-hosts)|
-
-### Just in Time (JIT) access
-
-JIT is a Microsoft Defender for Cloud capability that utilises Network Security Groups (NSGs) to block access to administrative protocols such as RDP and SSH on Virtual Machines. Applications hosted on Virtual Machines continue to function as normal, but administrative access must be requested and can only be granted for a set period of time. All requests are logged for auditing purposes.
-
-|Resources |Link |
-|||
-|Manage Just in Time (JIT) access|[https://docs.microsoft.com/azure/security-center/security-center-just-in-time](../security-center/security-center-just-in-time.md)|
-|Automating Azure Just In Time VM Access|[https://blogs.technet.microsoft.com/motiba/2018/06/24/automating-azure-just-in-time-vm-access](/archive/blogs/motiba/automating-azure-just-in-time-vm-access)|
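
The shape of a JIT policy can be sketched as follows. This is an illustrative fragment only, assuming the Microsoft Defender for Cloud JIT network access policy schema; the resource ID, source prefix, and duration are placeholders, not values from this architecture:

```json
{
  "properties": {
    "virtualMachines": [
      {
        "id": "/subscriptions/{subscription-id}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachines/{jump-server}",
        "ports": [
          {
            "number": 3389,
            "protocol": "TCP",
            "allowedSourceAddressPrefix": "{rd-gateway-private-ip}",
            "maxRequestAccessDuration": "PT3H"
          }
        ]
      }
    ]
  }
}
```

When an administrator's request is approved, Defender for Cloud temporarily adjusts the NSG to permit the requested port for at most the configured duration, then re-applies the deny rule.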
-
-## Secure communication
-
-Communications traffic for administration activities can contain highly sensitive information, such as administrative credentials, and must be managed and protected accordingly. Providing secure communication involves reliable encryption capabilities to prevent eavesdropping, and network segmentation and restrictions that limit administrative traffic to authorised endpoints and control lateral movement if a system is compromised.
-
-### Azure portal
-
-Communications to the Azure portal are encrypted using Transport Layer Security (TLS) and the use of the Azure portal has been certified by the ACSC. Commonwealth entities should follow the recommendations in the *ACSC Consumer Guide* and configure their web browsers to ensure that they are using the latest version of TLS and with supported cryptographic algorithms.
-
-|Resources |Link |
-|||
-|Azure Encryption Overview – Encryption in transit|[https://docs.microsoft.com/azure/security/security-azure-encryption-overview#encryption-of-data-in-transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit)|
-
-### Azure VPN Gateway
-
-The Azure VPN Gateway provides the secure encrypted connection from the privileged workstation to Azure. The Azure VPN Gateway has been certified by the ACSC for providing secure IPSec communication. Commonwealth entities should configure the Azure VPN Gateway in accordance with the ACSC Consumer Guide, ACSC Certification Report, and other specific guidance.
-
-|Resources |Link |
-|||
-|About Point-to-Site Connections|[https://docs.microsoft.com/azure/vpn-gateway/point-to-site-about](../vpn-gateway/point-to-site-about.md)|
-|Azure VPN Gateway Cryptographic Details|[https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-about-compliance-crypto](../vpn-gateway/vpn-gateway-about-compliance-crypto.md)|
-|Azure VPN Gateway Configuration|[Azure VPN Gateway configuration](vpn-gateway.md)|
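
The certificate-authenticated Point-to-Site connection described above is configured on the virtual network gateway. A hedged sketch of the relevant ARM configuration fragment, assuming the standard `vpnClientConfiguration` schema; the address pool, certificate name, and certificate data are placeholders:

```json
{
  "vpnClientConfiguration": {
    "vpnClientAddressPool": {
      "addressPrefixes": [ "172.16.201.0/24" ]
    },
    "vpnClientRootCertificates": [
      {
        "name": "PrivilegedWorkstationRootCert",
        "properties": {
          "publicCertData": "{base64-encoded-root-certificate}"
        }
      }
    ]
  }
}
```

Client certificates issued from the nominated root certificate are installed on the privileged workstations, and the gateway validates them when establishing the IPSec tunnel.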
-
-### Remote Desktop (RD) Gateway
-
-RD Gateway is a secure mechanism for controlling and authorising RDP connections to systems. It works by encapsulating RDP traffic in HyperText Transfer Protocol Secure (HTTPS), which is encrypted using TLS. TLS provides an additional layer of security for administrative traffic.
-
-|Resources |Link |
-|||
-|Remote Desktop Services Architecture|[https://docs.microsoft.com/windows-server/remote/remote-desktop-services/desktop-hosting-logical-architecture](/windows-server/remote/remote-desktop-services/desktop-hosting-logical-architecture)|
-
-### Network Security Groups (NSGs)
-
-NSGs function as Access Control Lists (ACLs) for network traffic entering or leaving subnets or virtual machines. NSGs provide network segmentation and a mechanism for controlling and limiting the communications flows permitted between systems. NSGs are a core component of Just in Time (JIT) access for allowing or denying access to administrative protocols.
-
-|Resources |Link |
-|||
-|Azure Security Groups Overview|[https://docs.microsoft.com/azure/virtual-network/security-overview](../virtual-network/network-security-groups-overview.md)|
-|How to: Plan Virtual Networks|[https://docs.microsoft.com/azure/virtual-network/virtual-network-vnet-plan-design-arm](../virtual-network/virtual-network-vnet-plan-design-arm.md)|
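
To illustrate how NSGs express these controls, here is a hedged ARM template fragment for a security rule permitting RDP to the Jump Server subnet only from the RD Gateway; the names and address prefixes are placeholders, and a lower-priority rule denying all other inbound RDP would accompany it:

```json
{
  "name": "Allow-RDP-From-RDGateway",
  "properties": {
    "priority": 100,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "{rd-gateway-subnet-prefix}",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "{jump-server-subnet-prefix}",
    "destinationPortRange": "3389"
  }
}
```

Because rules are evaluated by priority, the JIT capability can insert and remove time-bound allow rules above the standing deny rule without altering the baseline segmentation.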
-
-## Strong authentication
-
-Securely identifying privileged users before granting access to systems is a core component of secure administration. Mechanisms must be in place to protect the credentials associated with a privileged account and to prevent malicious actors from gaining access to systems through impersonation or credential theft.
-
-### Domain Controller (DC)
-
-At a high level, a DC hosts a copy of the Active Directory Database, which contains all the users, computers and groups within a Domain. DCs perform authentication for users and computers. The DCs in this architecture are hosted as virtual machines within Azure and provide authentication services for privileged accounts connecting to Jump Servers and workload virtual machines.
-
-|Resources |Link |
-|||
-|Active Directory Domain Services Overview|[https://docs.microsoft.com/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview)|
-
-### Azure Active Directory (Azure AD)
-
-Azure AD is the authentication service for Azure. It contains the cloud identities and provides authentication and authorisation for an Azure environment. Azure AD can be synchronised with Active Directory through Azure AD Connect and can provide federated authentication through Active Directory Federation Services (AD FS) and Azure AD Connect. Azure AD is a core component of secure administration.
-
-|Resources |Link |
-|||
-|Azure Active Directory Documentation|[https://docs.microsoft.com/azure/active-directory](../active-directory/index.yml)|
-|Hybrid Identity Documentation|[https://docs.microsoft.com/azure/active-directory/hybrid](../active-directory/hybrid/index.yml)|
-
-### Network Policy Server (NPS)
-
-An NPS is an authentication and policy server that provides advanced authentication and authorisation processes. The NPS server in this architecture is provided to integrate Azure AD MFA authentication with RD Gateway authentication requests. The NPS has a specific plug-in to support integration with Azure AD MFA in Azure AD.
-
-|Resources |Link |
-|||
-|Network Policy Server Documentation|[https://docs.microsoft.com/windows-server/networking/technologies/nps/nps-top](/windows-server/networking/technologies/nps/nps-top)|
-
-### Azure AD MFA
-
-Azure AD MFA is an authentication service provided within Azure Active Directory to enable authentication requests beyond a username and password for accessing cloud resources such as the Azure portal. Azure AD MFA supports a range of authentication methods and this architecture utilises the Microsoft Authenticator App for enhanced security and integration with the NPS.
-
-|Resources |Link |
-|||
-|How it works: Azure AD Multi-Factor Authentication|[https://docs.microsoft.com/azure/active-directory/authentication/concept-mfa-howitworks](../active-directory/authentication/concept-mfa-howitworks.md)|
-|How to: Deploy cloud-based Azure AD Multi-Factor Authentication|[https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-getstarted](../active-directory/authentication/howto-mfa-getstarted.md)|
-
-## Strong authorisation
-
-Once a privileged account has been securely identified, it can be granted access to resources. Authorisation controls and manages the privileges that are assigned to a specific account. Strong authorisation processes align with the ACSC's Essential Eight mitigation strategy of restricting administrative privileges.
-
-### Identity and access management
-
-Access to perform privileged actions within Azure is based on roles that are assigned to that account. Azure includes an extensive and granular range of roles with specific permissions to undertake specific tasks. These roles can be granted at multiple levels such as a subscription or resource group. Role assignment and permission management are based on accounts and groups in Azure Active Directory and are managed through Access Control (IAM) within Azure.
-
-|Resources |Link |
-|||
-|Azure role-based access control (Azure RBAC)|[https://docs.microsoft.com/azure/role-based-access-control](../role-based-access-control/index.yml)|
-|Understand Role Definitions|[https://docs.microsoft.com/azure/role-based-access-control/role-definitions](../role-based-access-control/role-definitions.md)|
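
Where the built-in roles are broader than an organisation requires, a custom role definition can scope permissions precisely. An illustrative custom role definition (the role name, actions, and subscription scope are examples, not part of this architecture):

```json
{
  "Name": "Virtual Machine Operator (Example)",
  "IsCustom": true,
  "Description": "Can start, restart, and view virtual machines but not create or delete them.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/{subscription-id}"
  ]
}
```

Assigning such a role to a group at the resource group level, rather than to individual accounts at the subscription level, keeps privilege grants both minimal and auditable.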
-
-### Privileged Identity Management (PIM)
-
-PIM is an Azure Active Directory component that controls access to privileged roles. Privileged accounts do not require permanent or standing privileged access, but can instead be granted the ability to request privileged access for a period of time in order to complete privileged activities. PIM provides additional controls around maintaining and restricting privileged access as well as logging and auditing to track instances of privilege use.
-
-|Resources |Link |
-|||
-|Privileged Identity Management (PIM) Documentation|[https://docs.microsoft.com/azure/active-directory/privileged-identity-management](../active-directory/privileged-identity-management/index.yml)|
-|Start using PIM|[https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-getting-started](../active-directory/privileged-identity-management/pim-getting-started.md)|
-
-### Conditional access
-
-Conditional access is a component of Azure Active Directory that allows or denies access to resources based on conditions. These conditions can be network location based, device type, compliance status, group membership and more. Conditional Access is used to enforce MFA, device management, and compliance through Intune and group membership of administrative accounts.
-
-|Resources |Link |
-|||
-|Conditional Access Documentation|[https://docs.microsoft.com/azure/active-directory/conditional-access](../active-directory/conditional-access/index.yml)|
-|How to: Require Managed Devices for cloud app access with conditional access|[https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices](../active-directory/conditional-access/require-managed-devices.md)|
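
Conditional Access policies can also be expressed programmatically through the Microsoft Graph API. A hedged sketch of a policy requiring MFA and a compliant device for members of an administrative group, assuming the Graph `conditionalAccessPolicy` schema; the group object ID is a placeholder:

```json
{
  "displayName": "Require MFA and compliant device for administrators",
  "state": "enabled",
  "conditions": {
    "users": {
      "includeGroups": [ "{admin-group-object-id}" ]
    },
    "applications": {
      "includeApplications": [ "All" ]
    }
  },
  "grantControls": {
    "operator": "AND",
    "builtInControls": [ "mfa", "compliantDevice" ]
  }
}
```

The `AND` operator ensures both controls must be satisfied, combining the strong authentication and device compliance requirements described in this architecture.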
-
-## Next steps
-
-Review the article on [Gateway Ingress Traffic Management and Control](gateway-ingress-traffic.md) for details on controlling traffic flows through your Gateway components in Azure.
azure-australia Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/identity-federation.md
- Title: Identity federation in Azure Australia
-description: Guidance on configuring identity federation within the Australian regions to meet the specific requirements of Australian Government policy, regulations, and legislation.
--- Previously updated : 07/22/2019----
-# Identity federation in Azure Australia
-
-Identity Management and Federation with Public Cloud offerings is one of the most crucial first-steps for using the cloud. Microsoft's Azure Active Directory service stores user information to enable access to cloud services and is a pre-requisite for consuming other Azure services.
-
-This article covers the key design points for implementing Azure Active Directory, synchronizing users from an Active Directory Domain Services domain, and implementing secure authentication. Specific focus is placed on the recommendations in the Australian Cyber Security Center's Information Security Manual (ISM) and Azure Certification Reports.
-
-The classification of information stored within Azure Active Directory should inform decisions about how it is designed. The following excerpt is provided from the [ACSC Certification Report – Microsoft Azure](https://aka.ms/au-irap):
-
->**ACSC Certification Report – Microsoft Azure**
->Azure Active Directory (Azure AD) must be configured with Active Directory Federation services when Commonwealth entities classify the use and data content of their Active Directory at PROTECTED. While Active Directory data at the UNCLASSIFIED Dissemination Limiting Markings (UDLM) classification does not require federation, Commonwealth entities can still implement federation to mitigate risks associated with the service being provided from outside of Australia.
-
-As such, what information is synchronised, and the mechanism by which users are authenticated, are the two key concerns covered here.
-
-## Key design considerations
-
-### User synchronisation
-
-When deploying Azure AD Connect, there are several decisions that must be made about the data that will be synchronised. Azure AD Connect is based upon Microsoft Identity Manager and provides a robust feature-set for [transforming](../active-directory/hybrid/how-to-connect-sync-best-practices-changing-default-configuration.md) data between directories.
-
-Microsoft Consulting Services can be engaged to do an ADRAP evaluation of your existing Windows Server Active Directory. The ADRAP assists in determining any issues that may need to be corrected before synchronising with Azure Active Directory. Microsoft Premier Support Agreements will generally include this service.
-
-The [IDFix tool](/office365/enterprise/install-and-run-idfix) scans your on-premises Active Directory domain for issues before synchronising with Azure AD. IDFix is a key first step before implementing Azure AD Connect. Although an IDFix scan can identify a large number of issues, many of these issues can either be resolved quickly with scripts, or worked-around using data transforms in Azure AD Connect.
-
-Azure AD requires that users have an externally routable top-level domain to enable authentication. If your domain has a UPN suffix that is not externally routable, you need to set the [alternative sign in ID](../active-directory/hybrid/plan-connect-userprincipalname.md) in Azure AD Connect to the user's mail attribute. Users then sign in to Azure services with their email address rather than their domain sign in.
-
-The UPN suffix on user accounts can also be altered using tools such as PowerShell; however, this can have unforeseen consequences for other connected systems and is no longer considered best practice.
-
-In deciding which attributes to synchronise to Azure Active Directory, it's safest to assume that all attributes are required. It is rare for a directory to contain actual PROTECTED data, however conducting an audit is recommended. If PROTECTED data is found within the directory, assess the impact of omitting or transforming the attribute. As a helpful guide, there is a list of attributes which Microsoft Cloud Services [require](../active-directory/hybrid/reference-connect-sync-attributes-synchronized.md).
-
-### Authentication
-
-It's important to understand the options that are available, and how they can be used to keep end-users secure.
-Microsoft offers [three native solutions](../active-directory/hybrid/plan-connect-user-signin.md) to authenticate users against Azure Active Directory:
-
-* Password hash synchronization - The hashed passwords from Active Directory Domain Services are synchronised by Azure AD Connect into Azure Active Directory.
-* [Pass-through authentication](../active-directory/hybrid/how-to-connect-pta.md) - Passwords remain within Active Directory Domain Services. Users are authenticated against Active Directory Domain Services via an agent. No passwords are stored within Azure AD.
-* [Federated SSO](../active-directory/hybrid/how-to-connect-fed-whatis.md) - Azure Active Directory is federated with Active Directory Federation Services; during sign in, Azure directs users to Active Directory Federation Services to authenticate. No passwords are stored within Azure AD.
-
-Password hash synchronisation can be used in scenarios where OFFICIAL:Sensitive and below data is being stored within the directory. Scenarios where PROTECTED data is being stored will require one of the two remaining options.
-
-All three of these options support [Password Write-Back](../active-directory/authentication/concept-sspr-writeback.md), which the [ACSC Consumer Guide](https://aka.ms/au-irap) recommends disabling. However, organisations should evaluate the risk of disabling Password Write-Back against the productivity gains and reduced support effort of using self-service password resets.
-
-#### Pass-Through Authentication (PTA)
-
-Pass-Through Authentication was released after the IRAP assessment was completed and should therefore be individually evaluated to determine how the solution fits your organisation's risk profile. Microsoft prefers Pass-Through Authentication over federation due to its improved security posture.
-
-![Pass-Through Authentication](media/pta1.png)
-
-Pass-Through Authentication presents several design factors to be considered:
-
-* Pass-Through Authentication Agent must be able to establish outgoing connections to Microsoft Cloud Services.
-* Install more than one agent to ensure that the service is highly available. It is best practice to deploy at least three agents, up to a maximum of 12 agents.
-* It is best practice to avoid installing the agent directly onto an Active Directory Domain Controller. By default, deploying Azure AD Connect with Pass-Through Authentication installs the agent on the Azure AD Connect server.
-* Pass-Through Authentication is a lower maintenance option than Active Directory Federation Services because it does not require dedicated server infrastructure, certificate management, or inbound firewall rules.
-
-#### Active Directory Federation Services (ADFS)
-
-Active Directory Federation Services was included within the IRAP assessment and is approved for use in PROTECTED environments.
-
-![Federation](media/federated-identity.png)
-
-Active Directory Federation Services presents several design factors to be considered:
-
-* Federation Services requires network ingress for HTTPS traffic from the internet or, at minimum, from Microsoft's service endpoints.
-* Federation Services uses PKI and certificates, which require ongoing management and renewal.
-* Federation Services should be deployed on dedicated servers, and will require the relevant network infrastructure to make it securely accessible externally.
-
-### Multi-Factor Authentication (MFA)
-
-The ISM section on multi-factor authentication recommends implementing it in the following scenarios based on your risk profile:
-
-* Authenticating standard users
-* Authenticating privileged accounts
-* Authenticating users for remote access
-* Users performing privileged actions
-
-Azure Active Directory provides Multi-Factor Authentication that can be enabled for either all, or a subset of users (for example, only Privileged Accounts). Microsoft also provides a solution called Conditional Access, which allows more granular control over how Multi-Factor Authentication is applied (for example, only when users sign in from remote IP address ranges).
-
-Azure AD Multi-Factor Authentication supports the following ISM acceptable forms of validation:
-
-* Phone call
-* SMS message
-* Microsoft Authenticator Application
-* Supported hardware tokens
-
-Privileged Identity Management, a component of Azure Active Directory, can be used to enforce the use of Multi-Factor authentication when users elevate their permissions to meet the fourth recommendation.
-
-## Next steps
-
-Review the article on [Azure role-based access control (Azure RBAC) and Privileged Identity Management](role-privileged.md).
azure-australia Recovery Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/recovery-backup.md
- Title: Backup and disaster recovery in Azure Australia
-description: Backup and disaster recovery in Microsoft Azure for Australian Government agencies as it relates to the ASD Essential 8
--- Previously updated : 07/22/2019---
-# Backup and disaster recovery in Azure Australia
-
-Having backup and disaster recovery plans with the supporting infrastructure in place is critical for all organisations. The importance of having a backup solution is highlighted by its inclusion in the [Australian Cyber Security Center's Essential 8](https://acsc.gov.au/publications/protect/essential-eight-explained.htm).
-
-Microsoft Azure provides two services that enable resilience: Azure Backup and Azure Site Recovery. These services enable you to protect your data, both on-premises and in the cloud, for a variety of design scenarios. Azure Backup and Azure Site Recovery both use a common storage and management resource: the Azure Recovery Services Vault. This vault is used to manage, monitor, and segregate Azure Backup and Azure Site Recovery Data.
-
-This article details the key design elements for implementing Azure Backup and Azure Site Recovery in line with the [Australian Signals Directorate's (ASD) Information Security Manual (ISM) Controls](https://acsc.gov.au/infosec/ism/index.htm).
-
-## Azure Backup
-
-![Azure Backup](media/backup-overview.png)
-
-Azure Backup resembles a traditional on-premises backup solution and provides the ability to back up both on-premises and Azure-hosted data. Azure Backup can be used to back up the following data types to Azure:
-
-* Files and folders
-* Supported Windows and Linux operating systems hosted on:
- * Hyper-V and VMware hypervisors
- * Physical hardware
-* Supported Microsoft applications
-
-## Azure Site Recovery
-
-![Azure Site Recovery](media/asr-overview.png)
-
-Azure Site Recovery replicates workloads consisting of either a single virtual machine or multi-tier applications. Replication is supported from on-premises into Azure, between Azure regions, or between on-premises locations orchestrated by Azure Site Recovery. On-premises virtual machines can be replicated to Azure or to a supported on-premises hypervisor. Once configured, Azure Site Recovery orchestrates replication, fail-over, and fail-back.
-
-## Key design considerations
-
-When implementing a backup or disaster recovery solution, the proposed solution needs to consider:
-
-* The scope and volume of data to be captured
-* How long the data will be retained
-* How this data is securely stored and managed
-* The geographical locations where the data is stored
-* Routine system testing procedures
-
-The ISM provides guidance on the security considerations that should be made when designing a solution. Microsoft Azure provides means to address these security considerations.
-
-### Data sovereignty
-
-Organisations need to ensure that data sovereignty is maintained when utilising cloud-based storage locations. Azure Policy provides the means to restrict the permitted locations where an Azure resource can be created. The built-in Azure Policy "Allowed Locations" is used to ensure that any Azure resources created under the scope of an assigned Azure Policy can only be created in the nominated geographical locations.
-
-The Azure Policy items for geographic restriction for Azure resources are:
-
-* allowedLocations
-* allowedSingleLocation
-
-These policies allow Azure administrators to restrict resource creation to a list of nominated locations or even to a single geographic location.
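-As an illustrative sketch, the built-in "Allowed Locations" policy can be assigned using Azure PowerShell; the assignment name, scope placeholder, and chosen regions below are examples only:
-
-```PowerShell
-# Locate the built-in "Allowed locations" policy definition
-$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq "Allowed locations" }
-
-# Assign it at subscription scope, restricting new resources to the Australian regions
-New-AzPolicyAssignment -Name "restrict-to-australia" `
-    -Scope "/subscriptions/<subscription-id>" `
-    -PolicyDefinition $definition `
-    -PolicyParameterObject @{ listOfAllowedLocations = @("australiaeast", "australiasoutheast") }
-```
-
-Once assigned, any attempt to create a resource outside the nominated regions within that scope is denied by the policy engine.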
-
-### Redundant and geographically dispersed storage
-
-Data stored in the Azure Recovery Service Vault is always stored on redundant storage. By default the Recovery Service Vault uses Azure Geographically Redundant Storage (GRS). Data stored using GRS is replicated to other Azure data centres in the Recovery Service Vault's [secondary paired region](../availability-zones/cross-region-replication-azure.md). This replicated data is stored as read-only and is only made writeable if there is an Azure failover event. Within the Azure data centre, the data is replicated between separate fault domains and upgrade domains to minimise the chance of hardware or maintenance-based outage. GRS provides at least 99.99999999999999% durability of objects over a given year.
-
-The Azure Recovery Services Vault can be configured to utilise Locally Redundant Storage (LRS). LRS is a lower-cost storage option with the trade-off of reduced resilience. This redundancy model employs the same replication between separate fault domains and upgrade domains but is not replicated between geographic regions. Data located on LRS storage, while not as resilient as GRS, still provides at least 99.999999999% durability of objects over a given year.
-
-Unlike traditional offsite storage technologies like tape media, the additional copies of the data are created automatically and do not require any additional administrative overhead.
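-As a sketch, the vault's storage redundancy can be set with the Az PowerShell module; the resource group and vault names below are illustrative, and the redundancy setting can only be changed before any items are protected in the vault:
-
-```PowerShell
-# Retrieve the Recovery Services vault (names are illustrative)
-$vault = Get-AzRecoveryServicesVault -ResourceGroupName "backup-rg" -Name "agency-vault"
-
-# Switch the vault from the default GRS to lower-cost LRS
-Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy LocallyRedundant
-```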
-
-### Restricted access and activity monitoring
-
-Backup data must be protected from corruption, modification, and unapproved deletion. Both Azure Backup and Azure Site Recovery make use of the common Azure management fabric. This fabric provides detailed auditing, logging, and Azure role-based access control (Azure RBAC) to resources located within Azure. Access to backup data can be restricted to select administrative staff and all actions involving backup data can be logged and audited.
-
-Both Azure Backup and Azure Site Recovery have built-in logging and reporting features. Any issues that occur during backup or replication are reported to administrators using the Azure management fabric.
-
-Azure Recovery Services Vault also has the following additional data security measures in place:
-
-* Backup data is retained for 14 days after a delete operation has occurred
-* Alerts and Notifications for critical operations such as "Stop Backup with delete data"
-* Security PIN requirements for critical operations
-* Minimum retention range checks are in place
-
-These minimum retention range checks include:
-
-* For daily retention, a minimum of seven days of retention
-* For weekly retention, a minimum of four weeks of retention
-* For monthly retention, a minimum of three months of retention
-* For yearly retention, a minimum of one year of retention
-
-All backup data stored within Azure is encrypted at rest using Azure's Storage Service Encryption (SSE). This is enabled for all new and existing storage accounts by default and cannot be disabled. The encrypted data is automatically decrypted during retrieval. By default, data encrypted using SSE is encrypted using a key provided and managed by Microsoft. Organisations can choose to provide and manage their own encryption key for use with SSE. This provides an optional additional layer of security for the encrypted data. This key can be stored by the customer on-premises or securely within Azure Key Vault.
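-Configuring a customer-managed key for Storage Service Encryption can be sketched with Azure PowerShell; the account, vault, and key names below are illustrative, and the storage account's identity must first be granted access to the key vault:
-
-```PowerShell
-# Point the storage account at a customer-managed key held in Azure Key Vault
-Set-AzStorageAccount -ResourceGroupName "storage-rg" -Name "agencybackupdata" `
-    -KeyvaultEncryption -KeyName "sse-key" -KeyVersion "<key-version>" `
-    -KeyVaultUri "https://agency-keyvault.vault.azure.net"
-```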
-
-### Secure data transport
-
-Azure Backup data is encrypted in transit using AES 256. This data is secured via the use of a passphrase created by administrative staff when the backup is first configured. Microsoft does not have access to this passphrase, meaning the customer must ensure this passphrase is stored securely. The data transfer then takes place between the on-premises environment and the Azure Recovery Services Vault via a secure HTTPS connection. The data within the Recovery Services Vault is then encrypted at rest using Azure SSE.
-
-Azure Site Recovery data is also always encrypted in transit. All replicated data is securely transported to Azure using HTTPS and TLS. When an Azure customer connects to Azure using an ExpressRoute connection, Azure Site Recovery data is sent via this private connection. When an Azure customer is connecting to Azure using a VPN connection, the data is replicated between on-premises and the Recovery Services vault securely over the internet.
-
-This secure network data transfer removes the security risk and mitigation requirements for managing traditional offsite backup storage solutions such as tape media.
-
-### Data retention periods
-
-A minimum backup retention period of three months is recommended, however, longer retention periods are commonly required. Azure Backup can provide up to 9999 copies of a backup. If a single Azure Backup of a protected instance were taken daily, this would allow for the retention of 27 years of daily backups. Individual monthly backups of a protected instance allow for 833 years of retention. As backup data is aged out and less granular backups are retained over time, the total retention window for backup data grows. Azure doesn't limit the length of time data can remain in an Azure Recovery Services Vault, only the total number of backups per instance. There is also no performance difference between restoring from old or new backups; each restore takes the same amount of time to occur.
-
-The Azure Recovery Services Vault has a number of default backup and retention policies in place. Administrative staff can also create custom backup and retention policies.
-
-![Azure Backup Policy](media/create-policy.png)
-
-A balance between backup frequency and long-term retention requirements needs to be found when configuring Azure Backup and retention policies.
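-As a hedged example, a custom retention policy can be created with Azure PowerShell by starting from the default policy objects; the vault, policy name, and retention value below are illustrative:
-
-```PowerShell
-# Select the vault the policy will belong to (names are illustrative)
-$vault = Get-AzRecoveryServicesVault -ResourceGroupName "backup-rg" -Name "agency-vault"
-
-# Start from the default schedule and retention objects for Azure VM workloads
-$schedule = Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType AzureVM
-$retention = Get-AzRecoveryServicesBackupRetentionPolicyObject -WorkloadType AzureVM
-
-# Extend daily retention well beyond the recommended three-month minimum
-$retention.DailySchedule.DurationCountInDays = 180
-
-# Create the custom policy in the vault
-New-AzRecoveryServicesBackupProtectionPolicy -Name "DailyLongRetention" `
-    -WorkloadType AzureVM -RetentionPolicy $retention -SchedulePolicy $schedule `
-    -VaultId $vault.ID
-```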
-
-### Backup and restore testing
-
-The ISM recommends testing of backup data to ensure that the protected data is valid when a restore or failover is required. Azure Backup and Azure Site Recovery also provide the capability to test protected data once it has been backed up or replicated. Data managed by Azure Backup can be restored to a nominated location and the consistency of the data can then be validated.
-
-Azure Site Recovery has inbuilt capability to perform failover testing. Workloads replicated to the Recovery Services Vault can be restored to a nominated Azure environment. The target restore environment can be fully isolated from any production environment to ensure there is no impact on production systems while performing a test. Once the test is complete, the test environment and all resources can be immediately deleted to reduce operational costs.
-
-Failover testing and validation can be automated using the Azure Automation service built into the Azure platform. This allows for failover testing to be scheduled to occur automatically to ensure that data is being successfully replicated to Azure.
-
-## Next steps
-
-Review the article on [Ensuring Security with Azure Policy](azure-policy.md).
azure-australia Reference Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/reference-library.md
- Title: Additional documentation and resources
-description: Additional documentation, tutorials or references relevant to Australian Government agencies operating securely in Azure.
--- Previously updated : 07/29/2019---
-# Additional documentation and resources by focus area
-
-This resource library contains additional links and references that are relevant to the secure implementation of Australian Government workloads in Azure Australia.
-
-## General references for all security and governance in Azure Australia
-
-* [Microsoft Service Trust Portal Australia Page](https://aka.ms/au-irap)
-* [Microsoft Trust Center CCSL Page](https://www.microsoft.com/trustcenter/compliance/ccsl)
-* [Azure Security and Compliance Blueprints for PROTECTED](https://aka.ms/au-protected)
-* [Tenant Isolation in Microsoft Azure](../security/fundamentals/isolation-choices.md)
-* [Australian Information Security Manual](https://www.cyber.gov.au/ism)
-* [Australian Cyber Security Centre (ACSC) Certified Cloud List](https://www.cyber.gov.au/irap/cloud-services)
-
-## Azure Key Vault
-
-* [Azure Key Vault Overview](../key-vault/general/overview.md)
-* [About keys, secrets, and certificates](../key-vault/general/about-keys-secrets-certificates.md)
-* [Configure Azure Key Vault firewalls and virtual networks](../key-vault/general/network-security.md)
-* [Secure access to a key vault](../key-vault/general/security-features.md)
-* [Azure Data Encryption-at-Rest](../security/fundamentals/encryption-atrest.md)
-* [How to use Azure Key Vault with Azure Windows Virtual Machines in .NET](../key-vault/general/tutorial-net-virtual-machine.md)
-* [Azure Key Vault managed storage account - PowerShell](../key-vault/general/tutorial-net-virtual-machine.md)
-* [Setup key rotation and auditing](../key-vault/secrets/tutorial-rotation-dual.md)
-
-## Identity federation
-
-* [Azure AD Connect - Installation Guide](../active-directory/hybrid/how-to-connect-install-roadmap.md)
-* [Password Write-Back](../active-directory/authentication/concept-sspr-writeback.md)
-* [Install and Run the IDFix Tool](/office365/enterprise/install-and-run-idfix)
-* [Azure AD UPN Population](../active-directory/hybrid/plan-connect-userprincipalname.md)
-* [Azure AD Connect - Synchronised Attributes](../active-directory/hybrid/reference-connect-sync-attributes-synchronized.md)
-* [Azure AD Connect - Best-Practice Configuration Guide](../active-directory/hybrid/how-to-connect-sync-best-practices-changing-default-configuration.md)
-* [Azure AD Connect - User Sign-In Options](../active-directory/hybrid/plan-connect-user-signin.md)
-* [Azure AD Connect and Federation](../active-directory/hybrid/how-to-connect-fed-whatis.md)
-* [Pass-Through Authentication Documentation](../active-directory/hybrid/how-to-connect-pta.md)
-* [Deploying Azure AD Multi-Factor Authentication](../active-directory/authentication/howto-mfa-getstarted.md)
-* [Azure Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md)
-
-## Azure Backup and Azure Site Recovery
-
-* [Introduction to Azure Backup](../backup/backup-overview.md)
-* [Azure Site Recovery Overview](../site-recovery/site-recovery-overview.md)
-* [Azure Governance](../governance/index.yml)
-* [Azure Paired Regions](../availability-zones/cross-region-replication-azure.md)
-* [Azure Policy](../governance/policy/overview.md)
-* [Azure Storage Service Encryption](../storage/common/storage-service-encryption.md)
-* [Azure Backup Tutorials](../backup/index.yml)
-* [Azure Site Recovery Tutorials](../site-recovery/index.yml)
-
-## Azure role-based access control (Azure RBAC) and Privileged Identity Management (PIM)
-
-* [Azure RBAC Overview](../role-based-access-control/overview.md)
-* [Azure Privileged Identity Management Overview](../active-directory/privileged-identity-management/pim-configure.md)
-* [Azure Management Groups Overview](../governance/management-groups/index.yml)
-* [Azure Identity and Access Control Best Practices](../security/fundamentals/identity-management-best-practices.md)
-* [Managing Azure AD Groups](../active-directory/fundamentals/active-directory-manage-groups.md)
-* [Hybrid Identity](../active-directory/hybrid/whatis-hybrid-identity.md)
-* [Azure Custom Roles](../role-based-access-control/custom-roles.md)
-* [Azure Built-in Roles](../role-based-access-control/built-in-roles.md)
-* [Securing Privileged Access in Hybrid Cloud Environments](../active-directory/roles/security-planning.md)
-* [Azure Enterprise Scaffold](/azure/architecture/cloud-adoption/appendix/azure-scaffold)
-
-## System monitoring for security
-
-* [Azure Governance](../governance/index.yml)
-* [Azure Security Best Practices](../security/fundamentals/best-practices-and-patterns.md)
-* [Platforms and features supported by Microsoft Defender for Cloud](../security-center/security-center-os-coverage.md)
-* [Azure Activity Log](../azure-monitor/essentials/platform-logs-overview.md)
-* [Azure Diagnostic Logs](../azure-monitor/essentials/platform-logs-overview.md)
-* [Microsoft Defender for Cloud Alerts](../security-center/security-center-managing-and-responding-alerts.md)
-* [Azure Log Integration](/previous-versions/azure/security/fundamentals/azure-log-integration-overview)
-* [Analyze Log Data in Azure Monitor](../azure-monitor/logs/log-query-overview.md)
-* [Stream Azure Monitor Logs to an Event Hub](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md)
-* [Event Hub Security and Authentication](../event-hubs/authenticate-shared-access-signature.md)
-
-## Azure Policy and Azure Blueprints
-
-* [Azure Policy Overview](../governance/policy/overview.md)
-* [Azure Blueprints Overview](https://azure.microsoft.com/services/blueprints/)
-* [Azure Policy Samples](../governance/policy/samples/index.md)
-* [Azure Policy Samples Repository](https://github.com/Azure/azure-policy)
-* [Azure Policy Definition Structure](../governance/policy/concepts/definition-structure.md)
-* [Azure Policy Effects](../governance/policy/concepts/effects.md)
-* [Azure Governance](../governance/index.yml)
-* [Azure Management Groups](../governance/management-groups/index.yml)
-* [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)
-
-## Next steps
-
-Login to the [Azure portal](https://portal.azure.com) and start configuring your resources securely in Azure Australia.
azure-australia Role Privileged https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/role-privileged.md
- Title: Azure role-based access control (Azure RBAC) and Privileged Identity Management
-description: Guidance on Implementing Azure role-based access control (Azure RBAC) and Privileged Identity Management within the Australian regions to meet the specific requirements of Australian Government policy, regulations, and legislation.
--- Previously updated : 07/22/2019----
-# Azure role-based access control (Azure RBAC) and Privileged Identity Management (PIM)
-
-Managing administrative privilege is a critical step in ensuring security within any IT environment. Restricting administrative privilege via the use of Least Privilege Security is a requirement of the [ACSC ISM](https://acsc.gov.au/infosec/ism/index.htm) and forms part of the [ACSC Essential 8](https://www.acsc.gov.au/infosec/mitigationstrategies.htm) list of security recommendations.
-
-Microsoft provides a suite of controls to implement Just-in-Time and Just-Enough-Access within Microsoft Azure. Understanding these controls is essential for an effective security posture in the Cloud. This guide will provide an overview of the controls themselves and the key design considerations when implementing them.
-
-## Azure RBAC
-
-Azure role-based access control (Azure RBAC) is central to the management of access to all resources within Microsoft Azure and the management of Azure Active Directory (Azure AD). Azure RBAC can be implemented alongside a number of complementary features available in Azure. This article focuses on implementing effective RBAC using Azure Management Groups, Azure Active Directory Groups, and Azure Privileged Identity Management (PIM).
-
-At a high level, implementing Azure RBAC requires three components:
-
-![Diagram shows the three components necessary for implementing R B A C, which are security principal, role definition, and scope, which all feed into role assignment.](media/rbac-overview.png)
-
-* **Security Principals**: A security principal can be any one of the following: a user, a group, a [Service Principal](../active-directory/develop/app-objects-and-service-principals.md), or a [Managed Identity](../active-directory/managed-identities-azure-resources/overview.md). Security Principals should be assigned privileges using Azure Active Directory Groups.
-
-* **Role Definitions**: A Role Definition, also referred to as a Role, is a collection of permissions. These permissions define the operations that can be performed by the Security Principals assigned to the Role Definition. This functionality is provided by Azure Resource Roles and Azure Active Directory Administrator Roles. Azure comes with a set of built-in roles, which can be augmented with custom roles.
-
-* **Scope**: The scope is the set of Azure resources that a Role Definition applies to. Azure Roles can be assigned to Azure Resources using Azure Management Groups.
-
-These three components combine to grant Security Principals the access defined in the Role Definitions to all of the resources that fall under the Azure Management Groups' Scope; this combination is called a Role Assignment. Multiple Role Definitions can be assigned to a Security Principal, and multiple Security Principals can be assigned to a single Scope.
-
-### Azure Active Directory Groups
-
-When assigning privileges to individuals or teams, whenever possible the assignment should be linked to an Azure Active Directory Group and not assigned directly to the user in question. This is the same recommended practice inherited from on-premises Active Directory implementations. Where possible Azure Active Directory Groups should be created per team, complementary to the logical structure of the Azure Management Groups you have created.
-
-In a hybrid cloud scenario, on-premises Windows Server Active Directory Security Groups can be synchronized to your Azure Active Directory instance. If you have already implemented Azure RBAC on-premises using these Windows Server Active Directory Security Groups, these groups, once synchronized, can then be used to implement Azure RBAC for your Azure Resources. Otherwise, your cloud environment can be seen as a clean slate to design and implement a robust privilege management plan built around your Azure Active Directory implementation.
-
-### Azure resource roles versus Azure Active Directory Administrator roles
-
-Microsoft Azure offers a wide variety of built-in roles for [Azure Resources](../role-based-access-control/built-in-roles.md) and [Azure Active Directory Administration](../active-directory/roles/permissions-reference.md). Both types of Role provide specific, granular access to either Azure resources or Azure AD administration. It is important to note that Azure Resource roles cannot be used to provide administrative access to Azure AD, and Azure AD roles do not provide specific access to Azure resources.
-
-Some examples of the types of access that can be assigned to an Azure resource using a built-in role are:
-
-* Allow one user to manage virtual machines in a subscription and another user to manage virtual networks
-* Allow a DBA group to manage SQL databases in a subscription
-* Allow a user to manage all resources in a resource group, such as virtual machines, websites, and subnets
-* Allow an application to access all resources in a resource group
-
-Examples of the types of access that can be assigned for Azure AD administration are:
-
-* Allow helpdesk staff to reset user passwords
-* Allow staff to invite external users to an Azure AD instance for B2B collaboration
-* Allow administrative staff read access to sign in and audit reports
-* Allow staff to manage all users and groups, including resetting passwords.
-
-It is important to take the time to understand the full list of allowed actions a built-in role provides to ensure that undue access isn't granted. The list of built-in roles and the access they provide is constantly evolving; the full list of the Roles and their definitions can be viewed by reviewing the documentation linked above or by using the Azure PowerShell cmdlet:
-
-```PowerShell
-Get-AzRoleDefinition
-```
-
-```output
-Name : AcrDelete
-Id : <<RoleID>>
-IsCustom : False
-Description : acr delete
-Actions : {Microsoft.ContainerRegistry/registries/artifacts/delete}
-NotActions : {}
-DataActions : {}
-NotDataActions : {}
-AssignableScopes : {/}
-...
-```
-
-or the Azure CLI command:
-
-```azurecli-interactive
-az role definition list
-```
-
-```output
-[
- {
- "assignableScopes": [
- "/"
- ],
- "description": "acr delete",
- "id": "/subscriptions/49b12d1b-4030-431c-8448-39056021c4ab/providers/Microsoft.Authorization/roleDefinitions/c2f4ef07-c644-48eb-af81-4b1b4947fb11",
- "name": "c2f4ef07-c644-48eb-af81-4b1b4947fb11",
- "permissions": [
- {
- "actions": [
- "Microsoft.ContainerRegistry/registries/artifacts/delete"
- ],
- "dataActions": [],
- "notActions": [],
- "notDataActions": []
- }
- ],
- "roleName": "AcrDelete",
- "roleType": "BuiltInRole",
- "type": "Microsoft.Authorization/roleDefinitions"
- },
-...
-```
-
-It is also possible to create custom Azure Resource Roles as required. These custom roles can be created in the Azure portal, via PowerShell, or the Azure CLI. When creating custom Roles, it is vital to ensure the purpose of the Role is unique and that its function is not already provided by an existing Azure Resource Role. This reduces ongoing management complexity and reduces the risk of Security Principals receiving unnecessary privileges. An example would be creating a custom Azure Resource Role that sits between the built-in Azure Resource Roles, "Virtual Machine Contributor" and "Virtual Machine Administrator Login".
-
-The custom Role could be based on the existing Virtual Machine Contributor Role, which grants the following access:
-
-| Azure Resource | Access Level |
-| --- | --- |
-| Virtual Machines | Can manage, but cannot access |
-| Virtual Network attached to VM | Cannot access |
-| Storage attached to VM | Cannot access |
-
-The custom role could preserve this basic access, but allow the designated users some basic additional privileges to modify the network configuration of the virtual machines.
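-A minimal sketch of such a custom role definition follows; the role name, the exact action strings, and the assignable scope are illustrative and should be tailored to your environment:
-
-```json
-{
-  "Name": "Virtual Machine Network Operator",
-  "IsCustom": true,
-  "Description": "Manage virtual machines and modify their network interface configuration.",
-  "Actions": [
-    "Microsoft.Compute/virtualMachines/*",
-    "Microsoft.Network/networkInterfaces/*",
-    "Microsoft.Network/virtualNetworks/subnets/join/action"
-  ],
-  "NotActions": [],
-  "AssignableScopes": [
-    "/subscriptions/<subscription-id>"
-  ]
-}
-```
-
-A definition like this can be created with `New-AzRoleDefinition -InputFile role.json` and then assigned to Security Principals at the appropriate Scope.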
-
-Azure Resource Roles also have the benefit of being able to be assigned to resources via Azure Management Groups.
-
-### Azure Management Groups
-
-Azure Management Groups can be used by an organisation to manage the assignment of Roles to all of the subscriptions and their resources within an Azure Tenancy. Azure Management Groups are designed to allow you to create management hierarchies, including the ability to map your organisational structure hierarchically, within Azure. Creating organisational business units as separate logical entities allows permissions to be applied within an organisation based on each team's specific requirements. Azure Management Groups can be used to define a management hierarchy up to six levels deep.
-
-![Management Groups](media/management-groups.png)
-
-Azure Management Groups are mapped to Azure Subscriptions within an Azure Tenancy. This allows an organisation to segregate Azure Resources belonging to specific business units and provide a level of granular control over both cost management and privilege assignment.
-
-## Privileged Identity Management (PIM)
-
-Microsoft has implemented Just-In-Time (JIT) and Just-Enough-Access (JEA) through Azure Privileged Identity Management. This service enables administrative staff to control, manage, and monitor privileged access to Azure Resources. PIM allows Security Principals to be made "eligible" for a Role by administrative staff, allowing users to request the activation of the Role through the Azure portal or via PowerShell cmdlets. By default, Role assignment can be activated for a period of between 1 and 72 hours. If necessary, the user can request an extension to their Role assignment and the option to make Role assignment permanent does exist. Optionally, the requirement for Multi-factor Authentication can be enforced when users request the activation of their eligible roles. Once the allocated period of the Role activation expires, the Security Principal no longer has the privileged access granted by the Role.
-
-The use of PIM prevents the common privilege assignment issues that can occur in environments that don't use Just-In-Time access or don't conduct routine audits of privilege assignment. One common issue is the assignment of elevated privileges being forgotten and remaining in place long after the task requiring elevated privileges has been completed. Another issue is the proliferation of elevated privileges within an environment through the cloning of the access assigned to a Security Principal when configuring other similar Security Principals.
-
-## Key design considerations
-
-When designing an Azure RBAC strategy with the intention of enforcing Least Privilege Security, the following security requirements should be considered:
-
-* Requests for privileged access are validated
-* Administrative privileges are restricted to the minimum access required to perform the specific duties
-* Administrative privileges are restricted to the minimum period of time required to perform the specific duties
-* Regular reviews of granted administrative privileges are undertaken
-
-The process of designing an Azure RBAC strategy will necessitate a detailed review of business functions to understand the difference in access between distinct business roles, and the type and frequency of work that requires elevated privileges. The difference in function between a Backup Operator, a Security Administrator, and an Auditor will require different levels of access at different times with varying levels of ongoing review.
-
-## Validate requests for access
-
-Elevated privileges must be explicitly approved. To support this, an approval process must be developed, and appropriate staff made responsible for validating that all requests for additional privileges are legitimate. Privileged Identity Management provides multiple options for approving Role assignment. A role activation request can be configured to allow for self-approval, or be gated and require nominated approvers to manually review and approve all Role activation requests. Activation requests can also be configured to require that additional supporting information, such as ticket numbers, be included with the activation request.
-
-### Restrict privilege based on duties
-
-Restricting the level of privilege granted to Security Principals is critical, as the over-assignment of privileges is a common IT Security attack vector. The types of resources being managed, and the teams responsible, must be assessed so the minimum level of privileges required for daily duties can be assigned. Additional privileges that go beyond those required for daily duties should only ever be granted for the period of time required to perform a specific task. An example of this would be providing "Contributor" access to a customer's administrator, but allowing them to request "Owner" permissions for an Azure Resource for a specific task requiring temporary high-level access.
-
-This ensures that each individual administrator only has elevated access for the shortest period of time. Adherence to these practices reduces the overall attack surface for any organisation's IT infrastructure.
-
-### Regular evaluation of administrative privilege
-
-It is vital that Security Principals within an environment are routinely audited to ensure that the correct level of privilege is currently assigned. Microsoft Azure provides a number of means to audit and evaluate the privileges assigned to Azure Security Principals. Privileged Identity Management allows administrative staff to periodically perform "Access Reviews" of the Roles granted to Security Principals. An Access Review can be undertaken to audit both Azure Resource Role assignment and Azure Active Directory Administrative Role assignment. An Access Review can be configured with the following properties:
-
-* **Review name and review start and end dates**: Reviews should be configured to be long enough for the nominated users to complete them.
-
-* **Role to be reviewed**: Each Access Review focuses on a single Azure Role.
-
-* **Nominated reviewers**: There are three options for performing a review. You can assign the review to someone else to complete, you can do it yourself, or you can have each user review their own access.
-
-* **Require users to provide a reason for access**: Users can be required to enter a reason for maintaining their level of privilege when completing the access review.
-
-The progress of pending Access Reviews can be monitored at any time via a dashboard in the Azure portal. Access to the role being reviewed will remain unchanged until the Access Review has been completed. It is also possible to [audit](../active-directory/privileged-identity-management/pim-how-to-use-audit-log.md) all PIM user assignments and activations within a nominated time period.
-
-## Next steps
-
-Review the article on [System Monitoring in Azure Australia](system-monitor.md).
azure-australia Secure Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/secure-your-data.md
- Title: Data security in Azure Australia
-description: Configuring Azure within the Australian regions to meet the specific requirements of Australian Government policy, regulations, and legislation.
--- Previously updated : 07/22/2019---
-# Data security in Azure Australia
-
-The overarching principles for securing customer data are:
-
-* Protecting data using encryption
-* Managing secrets
-* Restricting data access
-
-## Encrypting your data
-
-The encryption of data can be applied at the disk level (at-rest), in databases (at-rest and in-transit), in applications (in-transit), and while on the network (in-transit). There are several ways of achieving encryption in Azure:
-
-|Service/Feature|Description|
-|||
-|Storage Service Encryption|Azure Storage Service Encryption is enabled at the storage account level, resulting in block blobs and page blobs being automatically encrypted when written to Azure Storage. When you read the data from Azure Storage, it will be decrypted by the storage service before being returned. Use SSE to secure your data without having to modify or add code to any applications.|
-|Azure Disk Encryption|Use Azure Disk Encryption to encrypt the OS disks and data disks used by an Azure Virtual Machine. Integration with Azure Key Vault gives you control and helps you manage disk encryption keys.|
-|Client-Side Application Level Encryption|Client-Side Encryption is built into the Java and the .NET storage client libraries, which can utilize Azure Key Vault APIs, making it straightforward to implement. Use Azure Key Vault to gain access to the secrets in Azure Key Vault for specific individuals using Azure Active Directory.|
-|Encryption in transit|The basic encryption available for connectivity to Azure Australia supports Transport Level Security (TLS) 1.2 protocol, and X.509 certificates. Federal Information Processing Standard (FIPS) 140-2 Level 1 cryptographic algorithms are also used for infrastructure network connections between Azure Australia data centers. Windows Server 2016, Windows 10, Windows Server 2012 R2, Windows 8.1, and Azure File shares can use SMB 3.0 for encryption between the VM and the file share. Use Client-Side Encryption to encrypt the data before it's transferred into storage in a client application, and to decrypt the data after it's transferred out of storage.|
-|IaaS VMs|Use Azure Disk Encryption. Turn on Storage Service Encryption to encrypt the VHD files that are used to back up those disks in Azure Storage, but this only encrypts newly written data. This means that, if you create a VM and then enable Storage Service Encryption on the storage account that holds the VHD file, only the changes will be encrypted, not the original VHD file.|
-|Client-Side Encryption|This is the most secure method for encrypting your data, because it encrypts it before transit, and encrypts the data at rest. However, it does require that you add code to your applications using storage, which you might not want to do. In those cases, you can use HTTPS for your data in transit, and Storage Service Encryption to encrypt the data at rest. Client-Side Encryption also involves more load on the client, so you have to account for this in your scalability plans, especially if you're encrypting and transferring large amounts of data.|
-
-For more information on the encryption options in Azure, see the [Storage Security Guide](../storage/blobs/security-recommendations.md).
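The at-rest encryption options above can be combined from the command line. The following Azure CLI sketch shows one possible way to enable Azure Disk Encryption on an existing VM with keys held in Key Vault; the resource names (`myRG`, `myVM`, `myKV`) are hypothetical placeholders, not values from this article.

```shell
# Hypothetical names (myRG, myVM, myKV) - substitute your own.
# Create a Key Vault enabled for disk encryption, then encrypt the VM's disks.
az keyvault create --name myKV --resource-group myRG \
    --location australiacentral --enabled-for-disk-encryption true

az vm encryption enable --resource-group myRG --name myVM \
    --disk-encryption-keyvault myKV

# Confirm the encryption status of the OS and data disks.
az vm encryption show --resource-group myRG --name myVM
```

Storage Service Encryption needs no equivalent step, because it's enabled by default on storage accounts.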
-
-## Protecting data by managing secrets
-
-Secure key management is essential for protecting data in the cloud. Customers should strive to simplify key management and maintain control of keys used by cloud applications and services to encrypt data.
-
-### Managing secrets
-
-* Use Key Vault to minimize the risks of secrets being exposed through hard-coded configuration files, scripts, or in source code. Azure Key Vault encrypts keys (such as the encryption keys for Azure Disk Encryption) and secrets (such as passwords), by storing them in FIPS 140-2 Level 2 validated hardware security modules (HSMs). For added assurance, you can import or generate keys in these HSMs.
-* Application code and templates should only contain URI references to the secrets (which means the actual secrets are not in code, configuration, or source code repositories). This prevents key phishing attacks by harvest-bots scanning internal or external repos, such as GitHub.
-* Utilize strong Azure RBAC controls within Key Vault. If a trusted operator leaves the company or transfers to a new group within the company, they should be prevented from being able to access the secrets.
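As a minimal sketch of the practices above, the Azure CLI can store a secret in Key Vault and return its URI, so that application configuration holds only a reference rather than the secret value. The vault and secret names here are hypothetical.

```shell
# Hypothetical names - substitute your own.
# Store a secret in Key Vault, then retrieve its URI for use in
# application configuration (the secret value itself never appears in code).
az keyvault secret set --vault-name myKV --name DbPassword --value '<secret-value>'
az keyvault secret show --vault-name myKV --name DbPassword --query id --output tsv
```

Applications then fetch the secret at runtime via that URI, authenticating with Azure Active Directory (for example, through a managed identity), and Azure RBAC on the vault governs who can read or manage it.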
-
-For more information, see [Azure Key Vault](azure-key-vault.md)
-
-## Isolation to restrict data access
-
-Isolation is all about using boundaries, segmentation, and containers to limit data access to only authorized users, services, and applications. For example, the separation between tenants is an essential security mechanism for multi-tenant cloud platforms such as Microsoft Azure. Logical isolation helps prevent one tenant from interfering with the operations of any other tenant.
-
-### Per-customer isolation
-
-Azure implements network access control and segregation through layer 2 VLAN isolation, access control lists, load balancers, and IP filters.
-
-Customers can further isolate their resources across subscriptions, resource groups, virtual networks, and subnets.
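One way to sketch that customer-side isolation with the Azure CLI is a dedicated resource group, virtual network, and subnet guarded by a network security group; the names and address ranges below are hypothetical.

```shell
# Hypothetical names and address ranges - substitute your own.
# Segment a workload into its own resource group, virtual network, and subnet.
az group create --name myWorkloadRG --location australiacentral
az network vnet create --resource-group myWorkloadRG --name myVnet \
    --address-prefix 10.0.0.0/16 \
    --subnet-name myAppSubnet --subnet-prefix 10.0.1.0/24

# Attach a network security group to restrict traffic into the subnet.
az network nsg create --resource-group myWorkloadRG --name myAppNsg
az network vnet subnet update --resource-group myWorkloadRG --vnet-name myVnet \
    --name myAppSubnet --network-security-group myAppNsg
```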
-
-For more information on isolation in Microsoft Azure, see [Isolation in the Azure Public Cloud](../security/fundamentals/isolation-choices.md).
-
-## Next steps
-
-Review the article on [Azure VPN Gateway](vpn-gateway.md)
azure-australia Security Explained https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/security-explained.md
- Title: Azure Australia security explained
-description: Information most asked about the Australian regions and meeting the specific requirements of Australian Government policy, regulations, and legislation.
- Previously updated : 07/22/2019
-# Azure Australia security explained
-
-This article addresses some of the common questions and areas of interest for Australian Government agencies that investigate with, design for, and deploy to Microsoft Azure Australia.
-
-## IRAP and Certified Cloud Services List documents
-
-The Australian Cyber Security Centre (ACSC) provides a Letter of Certification, a Certification Report, and a Consumer Guide for the service when it's added to the Certified Cloud Services List (CCSL).
-
-Microsoft is currently listed on the CCSL for Azure, Office 365, and Dynamics 365 CRM.
-
-Microsoft makes our audit, assessment, and ACSC certification documents available to customers and partners on an Australia-specific page of the [Microsoft Service Trust Portal](https://aka.ms/au-irap).
-
-## Dissemination Limiting Markers and PROTECTED certification
-
-The process of having systems, including cloud services, approved for use by government organisations is defined in the [Information Security Manual (ISM)](https://www.cyber.gov.au/acsc/view-all-content/ism) that's produced and published by the ACSC. The ACSC is the entity within the Australian Signals Directorate (ASD) that's responsible for cyber security and cloud certification.
-
-There are two steps to the approval process:
-
-1. **Security Assessment (IRAP)**: A process in which registered professionals assess systems, services, and solutions against the technical controls in the ISM and evaluate whether the controls were implemented effectively. The assessment also identifies any specific risks for the approval authority to consider prior to issuing an Approval to Operate (ATO).
-
-1. **Approval to Operate**: The process in which a senior officer of a government agency formally recognises and accepts the residual risk of a system to the information it processes, stores, and communicates. An input to this process is the Security Assessment.
-
-The assessment of Azure services at the PROTECTED level identifies that the implementation of the security controls required for the storage and processing of PROTECTED and below data were confirmed to be in place and are operating effectively.
-
-## Australian data classification changes
-
-On October 1, 2018, the Attorney General's Department publicly announced changes to the Protective Security Policy Framework (PSPF), specifically a new [sensitive and classified information system](https://www.protectivesecurity.gov.au/information/sensitive-classified-information/Pages/default.aspx).
-
-![Revised PSPF classifications](media/pspf-classifications.png)
-
-All Australian agencies and organisations that operate under the PSPF are affected by these changes. The primary change that affects our current IRAP/CCSL certifications is that the current Dissemination Limiting Markings (DLMs) were retired. The OFFICIAL: Sensitive marking replaces the various DLMs used for the protection of sensitive information. The change also introduced three information management markers that can be applied to all official information at all levels of sensitivity and classification. The PROTECTED classification remains unchanged.
-
-The term "Unclassified" is removed from the new system and the term "Unofficial" is applied to non-Government information, although it doesn't require a formal marking.
-
-## Choose an Azure region for OFFICIAL: Sensitive and PROTECTED workloads
-
-The Azure OFFICIAL: Sensitive and PROTECTED certified services are deployed to all four Australian Data Centre regions: Australia East, Australia South East, Australia Central, and Australia Central 2. The certification report issued by the ACSC recommends that PROTECTED data be deployed to the Azure Government regions in Canberra if a service is available there. For more information about the PROTECTED certified Azure services, see the [Australia page on the Service Trust Portal](https://aka.ms/au-irap).
-
->[!NOTE]
->Microsoft recommends that government data of all sensitivities and classifications should be deployed to the Australia Central and Australia Central 2 regions because they're designed and operated specifically for the needs of government.
-
-For more information on the special nature of the Azure Australian regions, see [Azure Australia Central regions](https://azure.microsoft.com/global-infrastructure/australia/).
-
-## How Microsoft separates classified and official data
-
-Microsoft operates Azure and Office 365 in Australia as if all data is sensitive or classified, which raises our security controls to that high bar.
-
-The infrastructure that supports Azure potentially serves data of multiple classifications. Because we assume that the customer data is classified, the appropriate controls are in place. Microsoft has adopted a baseline security posture for our services that complies with the PSPF requirements to store and process PROTECTED classified information.
-
-To assure our customers that one tenant in Azure isn't at risk from other tenants, Microsoft implements comprehensive defence-in-depth controls.
-
-Beyond the capabilities implemented within the Microsoft Azure platform, additional customer configurable controls, such as encryption with customer-managed keys, nested virtualisation, and just-in-time administrative access, can further reduce the risk. Within the Azure Government Australia regions in Canberra, a formal process is in place for approving only Australian and New Zealand government and national critical infrastructure organisations. This community cloud provides additional assurance to organisations that are sensitive to cotenant risks.
-
-The Microsoft Azure PROTECTED Certification Report confirms that these controls are effective for the storage and processing of PROTECTED classified data and their isolation.
-
-## Relevance of IRAP/CCSL to state government and critical infrastructure providers
-
-Many state government and critical infrastructure providers incorporate federal government requirements into their security policy and assurance framework. These organisations also handle OFFICIAL, OFFICIAL: Sensitive, and some amount of PROTECTED classified data, either from their interaction with the federal government or in their own right.
-
-The Australian Government is increasingly focusing policy and legislation on the protection of non-Government data that fundamentally affect the security and economic prosperity of Australia. As such, the Azure Australia regions and the CCSL certification are relevant to all of those industries.
-
-![Critical infrastructure sectors](media/nci-sectors.png)
-
-The Microsoft certifications demonstrate that Azure services were subjected to a thorough, rigorous, and formal assessment of the security protections in place and they were approved for handling such highly sensitive data.
-
-## Location and control of Microsoft data centres
-
-It's a mandatory requirement of government and critical infrastructure to explicitly know the data centre location and ownership for cloud services processing their data. Microsoft is unique as a hyperscale cloud provider in providing extensive information about these locations and ownership.
-
-Microsoft's Azure Australia regions (Australia Central and Australia Central 2) operate within the facilities of CDC Datacentres. The ownership of CDC Datacentres is Australian controlled with 48% ownership from the Commonwealth Superannuation Corporation, 48% ownership from Infratil (a New Zealand-based, dual Australian and New Zealand Stock Exchange listed long-term infrastructure asset fund), and 4% Australian management.
-
-The management of CDC Datacentres has contractual assurances in place with the Australian Government that restrict future transfer of ownership and control. This transparency of supply chain and ownership via Microsoft's partnership with CDC Datacentres is in line with the principles of the [Whole-of-Government Hosting Strategy](https://www.dta.gov.au/our-projects/whole-government-hosting-strategy) and the definition of a Certified Sovereign Datacentre.
-
-## Azure services that are included in the current CCSL certification
-
-In June 2017, the ACSC certified 41 Azure services for the storage and processing of data at the Unclassified: DLM level. In April 2018, 24 of those services were certified for PROTECTED classified data.
-
-The availability of ACSC-certified Azure services across our Azure regions in Australia are as follows (services shown in bold are certified at the PROTECTED level).
-
-|Azure Australia Central regions|Non-regional services and other regions|
-|||
-|API Management, App Gateway, Application Services, **Automation**, **Azure portal**, **Backup**, **Batch**, **Cloud Services**, Cosmos DB, Event Hubs, **ExpressRoute**, HDInsight, **Key Vault**, Load Balancer, Log Analytics, **Multi-factor Authentication**, Redis Cache, **Resource Manager**, **Service Bus**, **Service Fabric**, **Site Recovery**, **SQL Database**, **Storage**, Traffic Manager, **Virtual Machines**, **Virtual Network**, **VPN Gateway**|**Azure Active Directory**, CDN, Data Catalog, **Import Export**, **Information Protection**, **IOT Hub**, Machine Learning, Media Services, **Notification Hubs**, Power BI, **Scheduler**, **Security Centre**, Search, Stream Analytics|
-
-Microsoft publishes the [Overview of Microsoft Azure Compliance](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942/file/178110/44/Microsoft%20Azure%20Compliance%20Offerings.pdf) that lists all in-scope services for all of the Global, Government, Industry, and Regional compliance and assessment processes that Azure goes through, which includes IRAP/CCSL.
-
-## Azure service not listed or assessed at a lower level than needed
-
-Services that aren't certified, or that have been certified at the OFFICIAL: Sensitive but not the PROTECTED level, can be used alongside or as part of a solution hosting PROTECTED data provided the services are either:
-
-* Not storing or processing PROTECTED data unencrypted, or
-* You've completed a risk assessment and approved the service to store PROTECTED data yourself.
-
-You can use a service that isn't included on the CCSL to store and process OFFICIAL data, but the ISM requires you to notify the ACSC in writing that you're doing so before you enter into or renew a contract with a cloud service provider.
-
-Any service that's used by an agency for PROTECTED workloads must be security assessed and approved in line with the processes outlined in the ISM and the Agency-managed IRAP Assessments process in the [DTA Secure Cloud Strategy](https://www.dta.gov.au/files/cloud-strategy/secure-cloud-strategy.pdf).
-
-![DTA Secure Cloud Strategy Certification Process](media/certification.png)
-
-Microsoft continually assesses our services to ensure the platform is secure and fit-for-purpose for Australian Government use. Contact Microsoft if you require assistance with a service that isn't currently on the CCSL at the PROTECTED level.
-
-Because Microsoft has a range of services certified on the CCSL at both the Unclassified DLM and PROTECTED classifications, the ISM requires that we undertake an IRAP assessment of our services at least every two years. Microsoft undertakes an annual assessment, which is also an opportunity to include additional services for consideration.
-
-## Certified PROTECTED gateway in Azure
-
-Microsoft doesn't operate a government-certified Secure Internet Gateway (SIG) because of restrictions on the number of SIGs permissible under the Gateway Consolidation Program. But the expected and necessary capabilities of a SIG can be configured within Microsoft Azure.
-
-Through the PROTECTED certification of Azure services, the ACSC has specific recommendations to agencies for connecting to Azure and when implementing network segmentation between security domains, for example, between PROTECTED and the Internet. These recommendations include the use of network security groups, firewalls, and virtual private networks. The ACSC recommends the use of a virtual gateway appliance. There are several virtual appliances available in Azure that have a physical equivalent on the ASD Evaluated Products List or have been evaluated against the Common Criteria Protection Profiles and are listed on the Common Criteria portal. These products are mutually recognised by ASD as a signatory to the Common Criteria Recognition Arrangement.
-
-Microsoft has produced guidance on implementing Azure-based capabilities that provide the security functions required to protect the boundary between different security domains, which, when combined, form the equivalent to a certified SIG. A number of partners can assist with design and implementation of these capabilities, and a number of partner solutions are available that do the same.
-
-## Security clearances and citizenship of Microsoft support personnel
-
-Microsoft operates our services globally with screened and trained security personnel. Personnel that have unescorted physical access to facilities in Sydney and Melbourne have Australian Government Baseline security clearances. Personnel within the Australia Central and Australia Central 2 regions have minimum Negative Vetting 1 (NV1) clearances (as appropriate for SECRET data). These clearance requirements provide additional assurance to customers that personnel within data centres operating Azure are highly trustworthy.
-
-Microsoft has a zero standing access policy with access granted through a system of just in time and just enough administration based on Azure role-based access control (Azure RBAC). In the vast majority of cases, our administrators don't require access or privileges to customer data in order to troubleshoot and maintain the service. High degrees of automation and scripting of tasks for remote execution negate the need for direct access to customer data.
-
-The Attorney General's Department has confirmed that Microsoft's personnel security policies and procedures within Azure are consistent with the intent of the PSPF Access to Information provisions in INFOSEC-9.
-
-## Store International Traffic of Arms Regulations (ITAR) or Export Administration Regulations (EAR) data
-
-The Azure technical controls that assist customers with meeting their obligations for export-controlled data are the same globally in Azure. Importantly, there's no formal assessment and certification framework for export-controlled data.
-
-For Azure Government and Office 365 US Government for Defense, we've put additional contractual and process measures in place to support customers subject to export controls. Those additional contractual clauses and the guaranteed U.S. national support and administration of the Azure regions aren't in place for Australia.
-
-That doesn't mean that Azure in Australia can't be used for ITAR/EAR, but you need to clearly understand the restrictions imposed on you through your export license. You also must implement additional protections to meet those obligations before you use Azure to store that data. For example, you might need to:
-
-* Build nationality as an attribute into Azure Active Directory.
-* Use Azure Information Protection to enforce encryption rules over the data and limit it to only U.S. and whatever other nationalities are included on the export license.
-* Encrypt all data on-premises before you store it in Azure by using a customer key or Hold Your Own Key for ITAR data.
-
-Because ITAR isn't a formal certification, you need to understand what the restrictions and limitations associated with the export license are. Then you can work through whether there are sufficient controls in Azure to meet those requirements. In this case, one of the issues to closely consider is the access by our engineers who might not be a nationality approved on the export license.
-
-## Next steps
-
- For ISM-compliant configuration and implementation of VPN connectivity to Azure Australia, see [Azure VPN Gateway](vpn-gateway.md).
azure-australia System Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/system-monitor.md
- Title: System monitoring for security in Azure Australia
-description: Guidance on configuring System Monitoring within the Australian regions to meet the specific requirements of Australian Government policy, regulations, and legislation.
- Previously updated : 07/22/2019
-# System monitoring for security in Azure Australia
-
-A robust security strategy that includes real-time monitoring and routine security assessments is critical to enhancing the day-to-day operational security of your IT environments, including cloud.
-
-Cloud security is a joint effort between the customer and the cloud provider. Microsoft Azure provides four services that facilitate these requirements in line with the recommendations contained within the [Australian Cyber Security Centre's (ACSC) Information Security Manual Controls](https://acsc.gov.au/infosec/ism/index.htm) (ISM): specifically, the implementation of centralised event logging, event log auditing, and security vulnerability assessment and management. The Microsoft Azure services are:
-
-* Microsoft Defender for Cloud
-* Azure Monitor
-* Azure Advisor
-* Azure Policy
-
-The ACSC recommends that you use these services for **PROTECTED** data. By using these services, you can proactively monitor and analyse your IT environments, and make informed decisions on where to best allocate resources to enhance your security. Each of these services is part of a combined solution to provide you with the best insight, recommendations, and protection possible.
-
-## Microsoft Defender for Cloud
-
-[Microsoft Defender for Cloud](../security-center/security-center-introduction.md) provides a unified security management console that you use to monitor and enhance the security of Azure resources and your hosted data. Microsoft Defender for Cloud includes Secure Score, a score based on an analysis of the state of best practice configuration from Azure Advisor and the overall compliance of Azure Policy.
-
-Microsoft Defender for Cloud provides Azure customers with the following features:
-
-* Security policy, assessment, and recommendations
-* Security event collection and search
-* Access and application controls
-* Advanced Threat Detection
-* Just-in-time Virtual Machines access control
-* Hybrid Security
-
-The scope of resources monitored by Microsoft Defender for Cloud can be expanded to include supported on-premises resources in a hybrid-cloud environment. This includes on-premises resources currently being monitored by a supported version of System Center Operations Manager.
-
-Defender for Cloud's enhanced security features provide cloud-based security controls required by the [ASD Essential 8](https://acsc.gov.au/publications/protect/essential-eight-explained.htm). These include application filtering and restriction of administrative privilege via just-in-time access.
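As a hedged illustration, the enhanced Defender for Cloud features described above can be enabled and inspected from the Azure CLI; this is one possible sequence, run at subscription scope.

```shell
# Enable the enhanced (paid) Defender plan for virtual machines
# for the current subscription.
az security pricing create --name VirtualMachines --tier Standard

# Review the current Secure Score and outstanding security tasks.
az security secure-scores list
az security task list
```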
-
-### Azure Monitor
-
-[Azure Monitor](../azure-monitor/overview.md) is the centralized logging solution for all Azure Resources, and includes Log Analytics and Application Insights. Two key data types are collected from Azure resources: logs and metrics. Once collected in Azure Monitor, logging information can be used by a wide range of tools and for a variety of purposes.
-
-![Azure Monitor Overview](media/overview.png)
-
-Azure Monitor also includes the "Azure Activity Log". The Activity Log stores all subscription-level events that have occurred within Azure. It allows Azure customers to see the "who, what and when" behind operations undertaken on their Azure resources. Both resource-based logging sent to Azure Monitor and Azure Activity Log events can be analysed using the in-built Kusto query language. These logs can then be exported, used to create custom dashboards and views, and configured to trigger alerts and notifications.
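For example, the "who, what and when" of recent subscription operations can be pulled straight from the Activity Log with the Azure CLI; the JMESPath projection below is just one possible shaping of the output.

```shell
# List Activity Log events from the last seven days, showing
# who performed which operation and when.
az monitor activity-log list --offset 7d \
    --query "[].{who:caller, what:operationName.value, when:eventTimestamp}" \
    --output table
```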
-
-### Azure Advisor
-
-[Azure Advisor](../advisor/advisor-overview.md) analyses supported Azure resources, system-generated log files, and current resource configurations within your Azure subscription. The analysis provided in Azure Advisor is generated in real time and based upon Microsoft's recommended best practices. Any supported Azure resources added to your environment will be analysed and appropriate recommendations will be provided. Azure Advisor recommendations are categorised into four best practice categories:
-
-* Security
-* High Availability
-* Performance
-* Cost
-
-Security recommendations generated by Azure Advisor form part of the overall security analysis provided by Microsoft Defender for Cloud.
-
-The information gathered by Azure Advisor provides administrators with:
-
-* Insight into resource configuration that does not meet recommended best practice
-* Guidance on specific remediation actions to undertake
-* Rankings indicating which remediation actions should be undertaken as a high priority
-
-### Azure Policy
-
-[Azure Policy](../governance/policy/overview.md) provides the ability to apply rules that govern the types of Azure resources and their allowed configuration. Policy can be used to control resource creation and configuration, or it can be used to audit configuration settings across an environment. These audit results can be used to form the basis of remediation activities. Azure Policy differs from Azure role-based access control (Azure RBAC); Azure Policy is used to restrict resources and their configuration, Azure RBAC is used to restrict privileged access to Azure users.
-
-Whether the specific policy is being enforced or the effect of the policy is being audited, policy compliance is continually monitored, and overall and resource-specific compliance information is provided to administrators. Azure Policy compliance data is provided to Microsoft Defender for Cloud and forms part of the Secure Score.
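A common example of such a policy is the built-in "Allowed locations" definition, which restricts where resources can be created. The sketch below assigns it via the Azure CLI; the assignment name is hypothetical, and the definition ID shown is the well-known built-in one (verify it in your tenant before use).

```shell
# Restrict resource creation to the Australian regions using the
# built-in "Allowed locations" policy definition.
az policy assignment create --name allowed-locations-au \
    --policy e56962a6-4747-49cd-b67b-bf8b01975c4c \
    --params '{"listOfAllowedLocations": {"value": ["australiaeast", "australiasoutheast", "australiacentral", "australiacentral2"]}}'

# Summarise compliance results for the assignment.
az policy state summarize --policy-assignment allowed-locations-au
```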
-
-## Key design considerations
-
-When implementing an event log strategy, the ACSC ISM highlights the following considerations:
-
-* Centralised logging facilities
-* Specific events to be logged
-* Event log protection
-* Event log retention
-* Event log auditing
-
-In addition to collecting and managing logs, the ISM also recommends routine vulnerability assessment of an organisation's IT environment.
-
-### Centralised logging
-
-Any logging solution should, wherever possible, consolidate captured logs into a single data repository. This not only reduces operational complexity and prevents the creation of multiple data silos, but also enables data collected from multiple sources to be analysed together, allowing any correlated events to be identified. This is critical for detecting and managing the scope of any cyber security incidents.
-
-This requirement is met for all Azure customers with Azure Monitor. This offering not only provides a centralised logging repository in Azure for all Azure resources, it also enables you to stream your data to an Azure Event Hub. Azure Event Hubs provides a fully managed, real-time data ingestion service. Once Azure Monitor data is streamed to an Azure Event Hub, the data can also be easily connected to existing supported Security information and event management (SIEM) repositories and additional third party monitoring tools.
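Streaming a resource's logs to an Event Hub for onward delivery to an existing SIEM can be configured with a diagnostic setting. The following CLI sketch uses hypothetical names and placeholder resource IDs; the log category group shown assumes a recent CLI version.

```shell
# Hypothetical names and placeholder IDs - substitute your own.
# Stream a resource's platform logs and metrics to an Event Hub
# for consumption by an external SIEM.
az monitor diagnostic-settings create --name toSiem \
    --resource <resource-id> \
    --event-hub myHub --event-hub-rule <event-hub-auth-rule-id> \
    --logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'
```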
-
-Microsoft also offers its own Azure native SIEM solution, Microsoft Sentinel. Microsoft Sentinel supports a wide variety of data connectors and can be used to monitor security events across an entire enterprise. By combining the data from supported [data connectors](../sentinel/connect-data-sources.md), Microsoft Sentinel's built-in machine learning, and the Kusto query language, security administrators are provided with a single solution for alert detection, threat visibility, proactive hunting, and threat response. Microsoft Sentinel also provides a hunting and notebook feature that allows security administrators to record all the steps undertaken as part of a security investigation in a reusable playbook that can be shared within an organisation. Security Administrators can even use the built-in [User Analytics](../sentinel/overview.md) to investigate the actions of a single nominated user.
-
-### Logged events and log detail
-
-The ISM provides a detailed list of event log types that should be included in any logging strategy. Any captured logs must contain sufficient detail to be of any practical use in conducting analysis and investigations.
-
-The logs collected in Azure fall under one of following three categories:
-
-* **Control and Management Logs**: These logs provide information about Azure Resource Manager CREATE, UPDATE, and DELETE operations.
-
-* **Data Plane Logs**: These contain events raised as part of Azure resource usage, and include sources such as the Windows System, Security, and Application event logs.
-
-* **Processed Events**: These events contain information about events and alerts that have been automatically processed on the customer's behalf by Azure. An example of a Processed Event is a Microsoft Defender for Cloud Alert.
-
-Azure virtual machine monitoring is enhanced by the deployment of the virtual machine agent for both Windows and Linux. This markedly increases the breadth of logging information gathered. Deployment of this agent can be configured to occur automatically via Microsoft Defender for Cloud.
-
-Microsoft provides detailed information about Azure resource-specific logs and their [schemas](../security/fundamentals/log-audit.md).
-
-### Log retention and protection
-
- Event logs must be stored securely for the required retention period. The ISM advises that logs are retained for a minimum of seven years. Azure provides a number of means to ensure the long life of your collected logs. By default, the Azure Log events are stored for 90 days. Log data captured by Azure Monitor can be moved and stored on an Azure Storage account as required for long-term retention. Activity logs stored on an Azure Storage Account can be retained for a set number of days, or indefinitely if necessary.
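The workspace-level retention described above can be extended from the Azure CLI; this is a minimal sketch with hypothetical names, setting retention to two years (longer-term retention would then rely on export to an Azure Storage account).

```shell
# Hypothetical names - substitute your own.
# Extend interactive retention of a Log Analytics workspace to 730 days.
az monitor log-analytics workspace update --resource-group myRG \
    --workspace-name myWorkspace --retention-time 730
```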
-
-Azure Storage Accounts used to store Azure Log events can be made geo-redundant and can be backed up using Azure Backup. Once captured by Azure Backup, any deletion of backups containing logs requires administrative approval and backups marked for deletion are still held for 14 days allowing for recovery. Azure Backup allows for 9999 copies of a protected instance, providing over 27 years of daily backups.
-
-Azure role-based access control (Azure RBAC) should be used to control access to resources used for Azure logging. Azure Monitor, Azure Storage accounts, and Azure Backups should be configured with Azure RBAC to ensure the security of the data contained within the logs.
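One possible way to apply that RBAC guidance is to grant analysts a read-only role scoped to the logging workspace; the assignee and scope below are hypothetical placeholders.

```shell
# Hypothetical assignee and scope - substitute your own.
# Grant read-only access to log data: the reader can query logs
# but cannot alter or delete them.
az role assignment create --assignee analyst@contoso.com \
    --role "Log Analytics Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"
```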
-
-### Log auditing
-
-The true value of logs is realised once they are analysed. Using both automated and manual analysis, and being familiar with the available tools, will assist you to detect and manage breaches of organisational security policy, and cyber security incidents. Azure Monitor provides a rich set of tools to analyse collected logs. The result of this analysis can then be shared between systems, visualised, or disseminated in multiple formats.
-
-Log data stored in Azure Monitor is kept in a Log Analytics Workspace. All analysis begins with a query. Azure Monitor queries are written in the Kusto query language. Queries form the basis of all outputs from Azure Monitor, from Azure Dashboards to Alert Rules.
-
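As a minimal sketch, a query against the standard `Heartbeat` table (the query itself is illustrative) summarises agent heartbeats per computer over the last day, surfacing machines that may have stopped reporting:

```kusto
// Count heartbeats per computer over the last day and show the most
// recent heartbeat, so stale agents sort to the top.
Heartbeat
| where TimeGenerated > ago(1d)
| summarize LastHeartbeat = max(TimeGenerated), Beats = count() by Computer
| order by LastHeartbeat asc
```

Queries like this can be pinned to Azure Dashboards or used as the basis of Alert Rules.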
-![Azure Log Queries Overview](media/queries-overview.png)
-
-Auditing of logs can be enhanced through the use of Monitoring Solutions. These are pre-packaged solutions that contain collection logic, queries, and data visualisation views. Microsoft [provides](../azure-monitor/monitor-reference.md) a number of Monitoring Solutions, and additional solutions from product vendors can be found in the Azure Marketplace.
-
-### Vulnerability assessment and management
-
-The ISM notes that routine vulnerability assessment and management are essential. Your IT environment is constantly evolving, and the external security threat is endlessly changing. With Microsoft Defender for Cloud you can do automated vulnerability assessments and get guidance on how to plan and perform remediation activities.
-
-Secure Score in Microsoft Defender for Cloud gives you a list of recommendations that, when applied, will improve the security of your environment. The list is sorted by the impact on the overall Secure Score from highest to lowest. Ordering the list by impact allows you to focus on the highest priority recommendations that present the most value in enhancing your security.
-
-Azure Policy also plays a key part in the ongoing vulnerability assessment. The types of policy available in Azure Policy range from enforcing resource tags and values, to restricting the Azure regions in which resources can be created, to blocking the creation of particular resource types altogether. A set of Azure policies can be grouped into Initiatives. Initiatives are used to apply related Azure policies that, when applied together as a group, form the basis of a specific security or compliance objective.
-
-Azure Policy has a library of built-in policy definitions which is constantly growing. Azure portal also gives you the option to author your own custom Azure Policy definitions. Once you find a policy in the existing library or create a new one, you can then assign the policy to Azure resources. These assignments can be [scoped](../governance/policy/tutorials/create-and-manage.md) at various levels in the resource management hierarchy. Policy assignment is inherited, meaning all child resources within a scope receive the same policy assignment. Resources can also be excluded from scoped policy assignment as required.
-
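As an illustrative sketch, a built-in policy definition can be assigned at resource group scope with Azure PowerShell. The resource group name here is a hypothetical placeholder; the display name refers to a built-in definition used in the scoping tutorial linked above:

```powershell
# Look up a built-in policy definition by its display name.
$definition = Get-AzPolicyDefinition | Where-Object {
    $_.Properties.DisplayName -eq 'Audit VMs that do not use managed disks'
}

# Assign the definition at resource group scope; all child resources
# in the group inherit the assignment.
$rg = Get-AzResourceGroup -Name 'example-rg'
New-AzPolicyAssignment -Name 'audit-vm-managed-disks' `
    -PolicyDefinition $definition `
    -Scope $rg.ResourceId
```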
-All deployed Azure policies contribute to an organisation's Secure Score. In a highly bespoke environment, custom Azure Policy definitions can be created and deployed to provide audit information tailored to specific workloads.
-
-## Getting started
-
-To start with Microsoft Defender for Cloud and make full use of Azure Monitor, Advisor and Policy, Microsoft recommends the following initial steps:
-
-* Enable Microsoft Defender for Cloud
-* Enable Microsoft Defender for Cloud's enhanced security features
-* Enable automatic provisioning of the Log Analytics agent to supported machines
-* Review, prioritize, and mitigate the security recommendations and alerts shown on the Defender for Cloud dashboards
-
-## Next steps
-
-Read [Azure Policy and Azure Blueprints](azure-policy.md) for details on implementing governance and control over your Azure Australia resources to ensure policy and regulatory compliance.
azure-australia Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-australia/vpn-gateway.md
- Title: Azure VPN Gateway in Azure Australia
-description: Implementing VPN Gateway in Azure Australia to be compliant with the ISM and effectively protect Australian Government agencies
--- Previously updated : 07/22/2019----
-# Azure VPN Gateway in Azure Australia
-
-A critical service with any public cloud is the secure connection of cloud resources and services to existing on-premises systems. The service that provides this capability in Azure is Azure VPN Gateway. This article outlines the key points to consider when you configure a VPN gateway to comply with the Australian Signals Directorate's (ASD) [Information Security Manual (ISM) controls](https://acsc.gov.au/infosec/ism/).
-
-A VPN gateway is used to send encrypted traffic between a virtual network in Azure and another network. Three scenarios are addressed by VPN gateways:
-
-- Site-to-site (S2S)
-- Point-to-site (P2S)
-- Network-to-network
-
-This article focuses on S2S VPN gateways. Diagram 1 shows an example S2S VPN gateway configuration.
-
-![VPN gateway with multi-site connections](media/vpngateway-multisite-connection-diagram.png)
-
-*Diagram 1 – Azure S2S VPN Gateway*
-
-## Key design considerations
-
-There are three networking options to connect Azure to Australian Government customers:
-
-- ICON
-- Azure ExpressRoute
-- Public internet
-
-The Australian Cyber Security Centre's [Consumer Guide for Azure](https://servicetrust.microsoft.com/viewpage/Australia) recommends that VPN Gateway (or an equivalent PROTECTED certified third-party service) is used in conjunction with the three networking options. This recommendation is to ensure that the connections comply with the ISM controls for encryption and integrity.
-
-### Encryption and integrity
-
-By default, the VPN negotiates the encryption and integrity algorithms and parameters during the connection establishment as part of the IKE handshakes. During the IKE handshake, the configuration and order of preference depends on whether the VPN gateway is the initiator or the responder. This designation is controlled via the VPN device. The final configuration of the connection is controlled by the configuration of the VPN device. For more information on validated VPN devices and their configuration, see [About VPN services](../vpn-gateway/vpn-gateway-about-vpn-devices.md).
-
-VPN gateways can control encryption and integrity by configuring a custom IPsec/IKE policy on the connection.
-
-### Resource operations
-
-VPN gateways create a connection between Azure and non-Azure environments over the public internet. The ISM has controls that relate to the explicit authorization of connections. By default, it's possible to use VPN gateways to create unauthorized tunnels into secure environments. It's critical that organizations use Azure role-based access control (Azure RBAC) to control who can create and modify VPN gateways and their connections. Azure has no built-in role to manage VPN gateways, so a custom role is required.
-
-Access to Owner, Contributor, and Network Contributor roles is tightly controlled. We also recommend that you use Azure Active Directory Privileged Identity Management for more granular access control.
-
-### High availability
-
-Azure VPN gateways can have multiple connections and support multiple on-premises VPN devices to the same on-premises environment. See Diagram 1.
-
-Virtual networks in Azure can have multiple VPN gateways that can be deployed in independent, active-passive, or active-active configurations.
-
-We recommend that you deploy all VPN gateways in a [highly available configuration](../vpn-gateway/vpn-gateway-highlyavailable.md). An example is two on-premises VPN devices connected to two VPN gateways in either active-passive or active-active mode. See Diagram 2.
-
-![VPN gateway redundant connections](media/dual-redundancy.png)
-
-*Diagram 2 – Active-active VPN gateways and two VPN devices*
-
-### Forced tunneling
-
-Forced tunneling redirects, or forces, all Internet-bound traffic back to the on-premises environment via the VPN gateway for inspection and auditing. Without forced tunneling, Internet-bound traffic from VMs in Azure traverses the Azure network infrastructure directly out to the public internet, without the option to inspect or audit the traffic. Forced tunneling is critical when an organization is required to use a Secure Internet Gateway (SIG) for an environment.
-
-## Detailed configuration
-
-### Service attributes
-
-VPN gateways for S2S connections configured for the Australian Government must have the following attributes:
-
-|Attribute | Must|
-| | |
-|gatewayType | "VPN"|
-|
-
-Attribute settings required to comply with the ISM controls for PROTECTED are:
-
-|Attribute | Must|
-| ||
-|vpnType |"RouteBased"|
-|vpnClientConfiguration/vpnClientProtocols | "IkeV2"|
-|
-
-Azure VPN gateways support a range of cryptographic algorithms from the IPsec and IKE protocol standards. The default policy sets maximum interoperability with a wide range of third-party VPN devices. As a result, it's possible that during the IKE handshake a noncompliant configuration might be negotiated. We highly recommend that you apply [custom IPsec/IKE policy](../vpn-gateway/vpn-gateway-ipsecikepolicy-rm-powershell.md) parameters to vpnClientConfiguration in VPN gateways to ensure the connections meet the ISM controls for on-premises environment connections to Azure. The key attributes are shown in the following table.
-
-|Attribute|Should|Must|
-||||
-|saLifeTimeSeconds|<14,400 secs|>300 secs|
-|saDataSizeKilobytes| |>1,024 KB|
-|ipsecEncryption| |AES256-GCMAES256|
-|ipsecIntegrity| |SHA256-GCMAES256|
-|ikeEncryption| |AES256-GCMAES256|
-|ikeIntegrity| |SHA256-GCMAES256|
-|dhGroup|DHGroup14, DHGroup24, ECP256, ECP384|DHGroup2|
-|pfsGroup|PFS2048, PFS24, ECP256, ECP384||
-|
-
-For dhGroup and pfsGroup in the previous table, ECP256 and ECP384 are preferred even though other settings can be used.
-
-### Related services
-
-When you design and configure an Azure VPN gateway, a number of related services must also exist and be configured.
-
-|Service | Action required|
-| | |
-|Virtual network | VPN gateways are attached to a virtual network. Create a virtual network before you create a new VPN gateway.|
-|Public IP address | S2S VPN gateways need a public IP address to establish connectivity between the on-premises VPN device and the VPN gateway. Create a public IP address before you create a S2S VPN gateway.|
-|Subnet | Create a subnet of the virtual network for the VPN gateway.|
-|
-
-## Implementation steps using PowerShell
-
-### Azure role-based access control
-
-1. Create a custom role. An example is virtualNetworkGateway Contributor. Create a role to be assigned to users who will be allowed to create and modify VPN gateways. The custom role should allow the following operations:
-
- Microsoft.Network/virtualNetworkGateways/*
- Microsoft.Network/connections/*
- Microsoft.Network/localnetworkgateways/*
- Microsoft.Network/virtualNetworks/subnets/*
- Microsoft.Network/publicIPAddresses/*
- Microsoft.Network/publicIPPrefixes/*
- Microsoft.Network/routeTables/*
-
-2. Add the custom role to users who are allowed to create and manage VPN gateways and connections to on-premises environments.
-
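A hedged sketch of step 1 with Azure PowerShell, cloning an existing built-in role as a starting point and scoping it down to the operations listed above (the role name is the example from the article; the subscription placeholder follows the article's convention):

```powershell
# Clone an existing built-in role as a template for the custom role.
$role = Get-AzRoleDefinition -Name 'Network Contributor'
$role.Id = $null
$role.Name = 'virtualNetworkGateway Contributor'
$role.Description = 'Can create and manage VPN gateways and their connections.'

# Replace the inherited actions with the operations required for VPN gateways.
$role.Actions.Clear()
$role.Actions.Add('Microsoft.Network/virtualNetworkGateways/*')
$role.Actions.Add('Microsoft.Network/connections/*')
$role.Actions.Add('Microsoft.Network/localnetworkgateways/*')
$role.Actions.Add('Microsoft.Network/virtualNetworks/subnets/*')
$role.Actions.Add('Microsoft.Network/publicIPAddresses/*')
$role.Actions.Add('Microsoft.Network/publicIPPrefixes/*')
$role.Actions.Add('Microsoft.Network/routeTables/*')

# Limit where the custom role can be assigned.
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add('/subscriptions/<yourSubscriptionId>')
New-AzRoleDefinition -Role $role
```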
-### Create a VPN gateway
-
-These steps assume that you already created a virtual network.
-
-1. Create a new public IP address.
-2. Create a VPN gateway subnet.
-3. Create a VPN gateway IP config file.
-4. Create a VPN gateway.
-5. Create a local network gateway for the on-premises VPN device.
-6. Create an IPsec policy. This step assumes that you're using custom IPsec/IKE policies.
-7. Create a connection between the VPN gateway and a local network gateway by using the IPsec policy.
-
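Steps 1 through 4 might be sketched in PowerShell as follows. All resource names, the address prefix, and the gateway SKU are illustrative assumptions; the virtual network is assumed to already exist:

```powershell
# 1. Create a new public IP address for the gateway.
$pip = New-AzPublicIpAddress -Name 'example-vpngw-pip' `
    -ResourceGroupName 'example-rg' -Location 'Australia Central' `
    -AllocationMethod Dynamic

# 2. Create the VPN gateway subnet. The subnet must be named 'GatewaySubnet'.
$vnet = Get-AzVirtualNetwork -Name 'example-vnet' -ResourceGroupName 'example-rg'
Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' `
    -VirtualNetwork $vnet -AddressPrefix '10.0.255.0/27'
$vnet = $vnet | Set-AzVirtualNetwork

# 3. Create the VPN gateway IP configuration.
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
$ipconfig = New-AzVirtualNetworkGatewayIpConfig -Name 'gwipconfig' `
    -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

# 4. Create the VPN gateway (route-based, as required for PROTECTED).
New-AzVirtualNetworkGateway -Name 'example-vpngw' `
    -ResourceGroupName 'example-rg' -Location 'Australia Central' `
    -IpConfigurations $ipconfig `
    -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1
```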
-### Enforce tunneling
-
-If forced tunneling is required, before you create the VPN gateway:
-
-1. Create a route table and route rules.
-2. Associate a route table with the appropriate subnets.
-
-After you create the VPN gateway:
-
-- Set GatewayDefaultSite to the on-premises environment on the VPN gateway.
-
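Setting the default site might look like the following sketch, assuming the VPN gateway and the local network gateway representing the on-premises environment already exist (placeholder names follow the article's convention):

```powershell
# Retrieve the existing gateways.
$vpngw = Get-AzVirtualNetworkGateway `
    -Name "<yourVPNGatewayName>" `
    -ResourceGroupName "<yourResourceGroupName>"
$localgw = Get-AzLocalNetworkGateway `
    -Name "<yourLocalGatewayName>" `
    -ResourceGroupName "<yourResourceGroupName>"

# Force internet-bound traffic back through the on-premises environment.
Set-AzVirtualNetworkGatewayDefaultSite `
    -VirtualNetworkGateway $vpngw `
    -GatewayDefaultSite $localgw
```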
-### Example PowerShell script
-
-The following example PowerShell script creates a custom IPsec/IKE policy that complies with the ISM controls for the Australian PROTECTED security classification.
-
-It assumes that the virtual network, VPN gateway, and local gateways exist.
-
-#### Create an IPsec/IKE policy
-
-The following sample script creates an IPsec/IKE policy with the following algorithms and parameters:
-
-- IKEv2: AES256, SHA256, DHGroup ECP256
-- IPsec: AES256, SHA256, PFS ECP256, SA Lifetime 14,400 seconds, and 102,400,000 KB
-
-```powershell
-$custompolicy = New-AzIpsecPolicy `
- -IkeEncryption AES256 `
- -IkeIntegrity SHA256 `
- -DhGroup ECP256 `
- -IpsecEncryption AES256 `
- -IpsecIntegrity SHA256 `
- -PfsGroup ECP256 `
- -SALifeTimeSeconds 14400 `
- -SADataSizeKilobytes 102400000
-```
-
-#### Create a S2S VPN connection with the custom IPsec/IKE policy
-
-```powershell
-$vpngw = Get-AzVirtualNetworkGateway `
- -Name "<yourVPNGatewayName>" `
- -ResourceGroupName "<yourResourceGroupName>"
-$localgw = Get-AzLocalNetworkGateway `
- -Name "<yourLocalGatewayName>" `
- -ResourceGroupName "<yourResourceGroupName>"
-
-New-AzVirtualNetworkGatewayConnection `
- -Name "ConnectionName" `
- -ResourceGroupName "<yourResourceGroupName>" `
- -VirtualNetworkGateway1 $vpngw `
- -LocalNetworkGateway2 $localgw `
- -Location "Australia Central" `
- -ConnectionType IPsec `
- -IpsecPolicies $custompolicy `
- -SharedKey "AzureA1b2C3"
-```
-
-## Next steps
-
-This article covered the specific configuration of VPN Gateway to meet the requirements specified in the Information Security Manual for securing Australian Government PROTECTED data while in transit. For steps on how to configure your VPN gateway, see:
-
-- [Azure virtual network gateway overview](../vpn-gateway/index.yml)
-- [What is VPN Gateway?](../vpn-gateway/vpn-gateway-about-vpngateways.md)
-- [Create a virtual network with a site-to-site VPN connection by using PowerShell](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md)
-- [Create and manage a VPN gateway](../vpn-gateway/tutorial-create-gateway-portal.md)
azure-cache-for-redis Cache Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-managed-identity.md
To use managed identity, you must have a premium-tier cache.
## Enable managed identity using the Azure CLI
-Use the Azure CLI for creating a new cache with managed identity or updating an existing cache to use managed identity. For more information, see [az redis create](/cli/azure/redis?view=azure-cli-latest.md) or [az redis identity](/cli/azure/redis/identity?view=azure-cli-latest).
+Use the Azure CLI for creating a new cache with managed identity or updating an existing cache to use managed identity. For more information, see [az redis create](/cli/azure/redis?view=azure-cli-latest&preserve-view=true) or [az redis identity](/cli/azure/redis/identity?view=azure-cli-latest&preserve-view=true).
For example, to update a cache to use system-managed identity, use the following CLI command:
az redis identity assign \--mi-system-assigned \--name MyCacheName \--resource-g
## Enable managed identity using Azure PowerShell
-Use Azure PowerShell for creating a new cache with managed identity or updating an existing cache to use managed identity. For more information, see [New-AzRedisCache](/powershell/module/az.rediscache/new-azrediscache?view=azps-7.1.0) or [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache?view=azps-7.1.0).
+Use Azure PowerShell for creating a new cache with managed identity or updating an existing cache to use managed identity. For more information, see [New-AzRedisCache](/powershell/module/az.rediscache/new-azrediscache?view=azps-7.1.0&preserve-view=true) or [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache?view=azps-7.1.0&preserve-view=true).
For example, to update a cache to use system-managed identity, use the following PowerShell command:
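The command itself is not shown in this digest; a hedged sketch of what it may look like, assuming a premium cache named `MyCacheName` already exists and that your Az.RedisCache version supports the `-IdentityType` parameter:

```powershell
# Enable a system-assigned managed identity on an existing cache.
# Cache and resource group names are illustrative placeholders.
Set-AzRedisCache -Name 'MyCacheName' `
    -ResourceGroupName 'MyResourceGroup' `
    -IdentityType 'SystemAssigned'
```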
azure-functions Event Driven Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-driven-scaling.md
$resource.Properties.functionAppScaleLimit = <SCALE_LIMIT>
$resource | Set-AzResource -Force ```
+## Scale-in behaviors
+
+Event-driven scaling automatically reduces capacity when demand for your functions is reduced. It does this by shutting down worker instances of your function app. Before an instance is shut down, new events stop being sent to the instance. Also, functions that are currently executing are given time to finish executing. This behavior is logged as drain mode. This shut-down period can extend up to 10 minutes for Consumption plan apps and up to 60 minutes for Premium plan apps. Event-driven scaling and this behavior don't apply to Dedicated plan apps.
+
+The following considerations apply for scale-in behaviors:
+
+* For Consumption plan function apps running on Windows, only apps created after May 2021 have drain mode behaviors enabled by default.
+* To enable graceful shutdown for functions using the Service Bus trigger, use version 4.2.0 or a later version of the [Service Bus Extension](functions-bindings-service-bus.md).
+
## Event Hubs trigger

This section describes how scaling behaves when your function uses an [Event Hubs trigger](functions-bindings-event-hubs-trigger.md) or an [IoT Hub trigger](functions-bindings-event-iot-trigger.md). In these cases, each instance of an event triggered function is backed by a single [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instance. The trigger (powered by Event Hubs) ensures that only one [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instance can get a lease on a given partition.
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
There are few exceptions to the retirement policy outlined above. Here is a list
|Node 6|30 April 2019|28 February 2022|
|Node 8|31 December 2019|28 February 2022|
|Node 10|30 April 2021|30 September 2022|
+|Node 12|30 April 2022|TBA|
|PowerShell Core 6|4 September 2020|30 September 2022|
|Python 3.6|23 December 2021|30 September 2022|
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 5/25/2022 Last updated : 6/6/2022
We strongly recommend updating to the latest version at all times, or opt in
## Version details

| Release Date | Release notes | Windows | Linux |
|:|:|:|:|
+| May 2022 | <ul><li>Fixed issue where agent stops functioning due to faulty XPath query. With this version, only query related Windows events will fail, other data types will continue to be collected</li><li>Collection of Windows network troubleshooting logs added to 'CollectAMAlogs.ps1' tool</li></ul> | 1.5.0.0 | Coming soon |
| April 2022 | <ul><li>Private IP information added in Log Analytics <i>Heartbeat</i> table for Windows and Linux</li><li>Fixed bugs in Windows IIS log collection (preview) <ul><li>Updated IIS site column name to match backend KQL transform</li><li>Added delay to IIS upload task to account for IIS buffering</li></ul></li><li>Fixed Linux CEF syslog forwarding for Sentinel</li><li>Removed 'error' message for Azure MSI token retrieval failure on Arc to show as 'Info' instead</li><li>Support added for Ubuntu 22.04, AlmaLinux and RockyLinux distros</li></ul> | 1.4.1.0<sup>Hotfix</sup> | 1.19.3 | | March 2022 | <ul><li>Fixed timestamp and XML format bugs in Windows Event logs</li><li>Full Windows OS information in Log Analytics Heartbeat table</li><li>Fixed Linux performance counters to collect instance values instead of 'total' only</li></ul> | 1.3.0.0 | 1.17.5.0 | | February 2022 | <ul><li>Bugfixes for the AMA Client installer (private preview)</li><li>Versioning fix to reflect appropriate Windows major/minor/hotfix versions</li><li>Internal test improvement on Linux</li></ul> | 1.2.0.0 | 1.15.3 |
azure-monitor Diagnostics Extension Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-overview.md
Title: Azure Diagnostics extension overview
description: Use Azure diagnostics for debugging, measuring performance, monitoring, traffic analysis in cloud services, virtual machines and service fabric Last updated 04/06/2022
+ms.reviewer: dalek
azure-monitor Auto Collect Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-collect-dependencies.md
ms.devlang: csharp, java, javascript Last updated 05/06/2020+ # Dependency auto-collection
azure-monitor Automate Custom Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/automate-custom-reports.md
Title: Automate custom reports with Application Insights data
description: Automate custom daily/weekly/monthly reports with Azure Monitor Application Insights data Last updated 05/20/2019-
-ms.pmowner: vitalyg
+ # Automate custom reports with Application Insights data
azure-monitor Availability Multistep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-multistep.md
Title: Monitor with multi-step web tests - Azure Application Insights
description: Set up multi-step web tests to monitor your web applications with Azure Application Insights Last updated 07/21/2021+ # Multi-step web tests
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
Title: Application Insights availability tests
description: Set up recurring web tests to monitor availability and responsiveness of your app or website. Last updated 07/13/2021+ # Application Insights availability tests
azure-monitor Availability Private Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-private-test.md
Title: Private availability testing - Azure Monitor Application Insights
description: Learn how to use availability tests on internal servers that run behind a firewall with private testing. Last updated 05/14/2021+ # Private testing
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
description: Learn how to enable Azure Active Directory (Azure AD) authenticatio
Last updated 08/02/2021 ms.devlang: csharp, java, javascript, python+ # Azure AD authentication for Application Insights (Preview)
azure-monitor Azure Functions Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-functions-supported-features.md
description: Application Insights Supported Features for Azure Functions
Last updated 4/23/2019 ms.devlang: csharp+ # Application Insights for Azure Functions supported features
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Last updated 08/26/2019 ms.devlang: csharp, java, javascript, python + # Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
Last updated 08/05/2021 ms.devlang: java + # Application Monitoring for Azure App Service and Java
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Last updated 08/05/2021 ms.devlang: csharp + # Application Monitoring for Azure App Service and ASP.NET Core
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Last updated 08/05/2021 ms.devlang: javascript + # Application Monitoring for Azure App Service and ASP.NET
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md
Last updated 08/05/2021 ms.devlang: javascript + # Application Monitoring for Azure App Service and Node.js
azure-monitor Cloudservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/cloudservices.md
description: Monitor your web and worker roles effectively with Application Insi
ms.devlang: csharp Previously updated : 09/05/2018 Last updated : 06/02/2022+ # Application Insights for Azure cloud services
If there is no data, do the following:
1. In the app, open various pages so that it generates some telemetry.
1. Wait a few seconds, and then click **Refresh**.
-For more information, see [Troubleshooting](https://docs.microsoft.com/azure/azure-monitor/faq#application-insights).
- ## View Azure Diagnostics events You can find the [Azure Diagnostics](../agents/diagnostics-extension-overview.md) information in Application Insights in the following locations:
-* Performance counters are displayed as custom metrics.
+* Performance counters are displayed as custom metrics.
* Windows event logs are shown as traces and custom events. * Application logs, ETW logs, and any diagnostics infrastructure logs appear as traces.
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Title: Monitor your apps without code changes - auto-instrumentation for Azure M
description: Overview of auto-instrumentation for Azure Monitor Application Insights - codeless application performance management Last updated 08/31/2021+ # What is auto-instrumentation for Azure Monitor application insights?
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Last updated 05/22/2019 ms.devlang: csharp
-ms.pmowner: casocha
+
azure-monitor Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md
Last updated 05/21/2020 ms.devlang: csharp -+ # Application Insights for .NET console applications
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/continuous-monitoring.md
Title: Continuous monitoring of your DevOps release pipeline with Azure Pipeline
description: Provides instructions to quickly set up continuous monitoring with Application Insights Last updated 05/01/2020+ # Add continuous monitoring to your release pipeline
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
description: Learn about the steps required to upgrade your Azure Monitor Applic
Last updated 09/23/2020 + # Migrate to workspace-based Application Insights resources
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
Last updated 06/07/2019 ms.devlang: csharp, java, javascript, python + # Telemetry correlation in Application Insights
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
description: Manually set up Application Insights monitoring for a new live appl
Last updated 02/10/2021 + # Create an Application Insights resource
azure-monitor Custom Data Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-data-correlation.md
Title: Azure Application Insights | Microsoft Docs
description: Correlate data from Application Insights to other datasets, such as data enrichment or lookup tables, non-Application Insights data sources, and custom data. Last updated 08/08/2018+ # Correlating Application Insights data with custom data sources
azure-monitor Custom Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-endpoints.md
Last updated 07/26/2019 ms.devlang: csharp, java, javascript, python + # Application Insights overriding default endpoints
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
ms.devlang: csharp Last updated 11/26/2019+ # Track custom operations with Application Insights .NET SDK
azure-monitor Data Model Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-context.md
Title: Azure Application Insights Telemetry Data Model - Telemetry Context | Mic
description: Application Insights telemetry context data model Last updated 05/15/2017+ # Telemetry context: Application Insights data model
azure-monitor Data Model Event Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-event-telemetry.md
Title: Azure Application Insights Telemetry Data Model - Event Telemetry | Micro
description: Application Insights data model for event telemetry Last updated 04/25/2017+ # Event telemetry: Application Insights data model
azure-monitor Data Model Exception Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-exception-telemetry.md
Title: Azure Application Insights Exception Telemetry Data model
description: Application Insights data model for exception telemetry Last updated 04/25/2017+ # Exception telemetry: Application Insights data model
azure-monitor Data Model Metric Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-metric-telemetry.md
Title: Data model for metric telemetry - Azure Application Insights
description: Application Insights data model for metric telemetry Last updated 04/25/2017+ # Metric telemetry: Application Insights data model
azure-monitor Data Model Pageview Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-pageview-telemetry.md
Title: Azure Application Insights Data Model - PageView Telemetry
description: Application Insights data model for page view telemetry Last updated 03/24/2022-+ # PageView telemetry: Application Insights data model
azure-monitor Data Model Request Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-request-telemetry.md
Title: Data model for request telemetry - Azure Application Insights
description: Application Insights data model for request telemetry Last updated 01/07/2019+ # Request telemetry: Application Insights data model
azure-monitor Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model.md
ibiza Last updated 10/14/2019+ # Application Insights telemetry data model
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
description: Retention and privacy policy statement
Last updated 06/30/2020 + # Data collection, retention, and storage in Application Insights
azure-monitor Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/devops.md
Title: Web app performance monitoring - Azure Application Insights
description: How Application Insights fits into the DevOps cycle Last updated 12/21/2018+ # Deep diagnostics for web apps and services with Application Insights
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
Title: Using Search in Azure Application Insights | Microsoft Docs
description: Search and filter raw telemetry sent by your web app. Last updated 07/30/2019++ # Using Search in Application Insights
azure-monitor Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing.md
description: Provides information about Microsoft's support for distributed trac
Last updated 09/17/2018+ # What is Distributed Tracing?
azure-monitor Eventcounters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/eventcounters.md
description: Monitor system and custom .NET/.NET Core EventCounters in Applicati
Last updated 09/20/2019 + # EventCounters introduction
azure-monitor Export Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-data-model.md
Title: Azure Application Insights Data Model | Microsoft Docs
description: Describes properties exported from continuous export in JSON, and used as filters. Last updated 01/08/2019+ # Application Insights Export Data Model
azure-monitor Export Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-power-bi.md
Title: Export to Power BI from Azure Application Insights | Microsoft Docs
description: Analytics queries can be displayed in Power BI. Last updated 08/10/2018+ # Feed Power BI from Application Insights
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
description: Export diagnostic and usage data to storage in Microsoft Azure, and
Last updated 02/19/2021 + # Export telemetry from Application Insights
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
Last updated 04/28/2020 ms.devlang: csharp+ # Custom metric collection in .NET and .NET Core
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
description: Learn how to use Application Insights with the ILogger interface in
Last updated 05/20/2021 ms.devlang: csharp+ # Application Insights logging with .NET
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor
description: Server firewall exceptions required by Application Insights Last updated 01/27/2020+ # IP addresses used by Azure Monitor
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
description: Understand how Application Insights handles IP addresses and geoloc
Last updated 09/23/2020 + # Geolocation and IP address handling
azure-monitor Java 2X Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-agent.md
Last updated 01/10/2019 ms.devlang: java + # Monitor dependencies, caught exceptions, and method execution times in Java web apps
azure-monitor Java 2X Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-collectd.md
Last updated 03/14/2019 ms.devlang: java + # collectd: Linux performance metrics in Application Insights [Deprecated]
azure-monitor Java 2X Filter Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-filter-telemetry.md
Last updated 3/14/2019 ms.devlang: java + # Filter telemetry in your Java web app
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Last updated 11/22/2020 ms.devlang: java + # Get started with Application Insights in a Java web project
azure-monitor Java 2X Micrometer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-micrometer.md
ms.devlang: java Last updated 11/01/2018+ # How to use Micrometer with Azure Application Insights Java SDK (not recommended)
azure-monitor Java 2X Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-trace-logs.md
Last updated 05/18/2019 ms.devlang: java + # Explore Java trace logs in Application Insights
azure-monitor Java 2X Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-troubleshoot.md
Last updated 03/14/2019 ms.devlang: java + # Troubleshooting and Q and A for Application Insights for Java SDK
azure-monitor Java Jmx Metrics Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-jmx-metrics-configuration.md
Last updated 03/16/2021 ms.devlang: java + # Configuring JMX metrics
azure-monitor Java On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-on-premises.md
ms.devlang: java Last updated 04/16/2020+ # Java codeless application monitoring on-premises - Azure Monitor Application Insights
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Last updated 04/16/2020 ms.devlang: java + # Tips for updating your JVM args - Azure Monitor Application Insights for Java
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Last updated 11/04/2020 ms.devlang: java + # Configuration options - Azure Monitor Application Insights for Java
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
Last updated 03/22/2021 ms.devlang: java + # Sampling overrides (preview) - Azure Monitor Application Insights for Java
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
Last updated 12/29/2020 ms.devlang: java + # Telemetry processor examples - Azure Monitor Application Insights for Java
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
Last updated 10/29/2020 ms.devlang: java + # Telemetry processors (preview) - Azure Monitor Application Insights for Java
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
Last updated 11/30/2020 ms.devlang: java + # Troubleshooting guide: Azure Monitor Application Insights for Java
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Last updated 11/25/2020 ms.devlang: java + # Upgrading from Application Insights Java 2.x SDK
azure-monitor Javascript Angular Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-angular-plugin.md
Last updated 10/07/2020 ms.devlang: javascript+ # Angular plugin for Application Insights JavaScript SDK
azure-monitor Javascript Click Analytics Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-click-analytics-plugin.md
Last updated 01/14/2021 ms.devlang: javascript+ # Click Analytics Auto-collection plugin for Application Insights JavaScript SDK
azure-monitor Javascript React Native Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-native-plugin.md
Last updated 08/06/2020 ms.devlang: javascript+ # React Native plugin for Application Insights JavaScript SDK
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md
Last updated 07/28/2020 ms.devlang: javascript+ # React plugin for Application Insights JavaScript SDK
azure-monitor Javascript Sdk Load Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-load-failure.md
Last updated 06/05/2020 ms.devlang: javascript + # Troubleshooting SDK load failure for JavaScript web apps
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Last updated 08/06/2020 ms.devlang: javascript + # Application Insights for web pages
azure-monitor Kubernetes Codeless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md
Title: Monitor applications on Azure Kubernetes Service (AKS) with Application I
description: Azure Monitor seamlessly integrates with your application running on Kubernetes, and allows you to spot the problems with your apps in no time. Last updated 05/13/2020+ # Zero instrumentation application monitoring for Kubernetes - Azure Monitor Application Insights
azure-monitor Legacy Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/legacy-pricing.md
Title: Application Insights legacy enterprise (per node) pricing tier
description: Describes the legacy pricing tier for Application Insights. Last updated 02/18/2022+ # Application Insights legacy enterprise (per node) pricing tier
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Title: Migrate from Application Insights instrumentation keys to connection stri
description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings Last updated 02/14/2022+ # Migrate from Application Insights instrumentation keys to connection strings
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Title: Monitor applications running on Azure Functions with Application Insights
description: Azure Monitor seamlessly integrates with your application running on Azure Functions, and allows you to monitor the performance and spot the problems with your apps in no time. Last updated 08/27/2021+ # Monitoring Azure Functions with Azure Monitor Application Insights
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Last updated 10/12/2021 ms.devlang: javascript + # Monitor your Node.js services and apps with Application Insights
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
Last updated 10/15/2019 ms.devlang: python + # Track incoming requests with OpenCensus Python
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Last updated 10/12/2021 ms.devlang: python + # Set up Azure Monitor for your Python application
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
description: This article provides guidance on how to enable Azure Monitor on ap
Last updated 10/11/2021 ms.devlang: csharp, javascript, python+ # Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)
-This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Preview offering. After you finish the instructions in this article, you'll be able to send OpenTelemetry traces to Azure Monitor Application Insights. To learn more about OpenTelemetry, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+The Azure Monitor OpenTelemetry Exporter is a component that sends traces (and eventually all application telemetry) to Azure Monitor Application Insights. To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+
+This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Preview offering. After you finish the instructions in this article, you'll be able to send OpenTelemetry traces to Azure Monitor Application Insights.
> [!IMPORTANT] > Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications is currently in preview.
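The exporter described above collects finished spans and ships them to the Application Insights ingestion service in batches. As a rough, purely illustrative sketch of that batch-and-flush flow (all class and method names below are hypothetical stand-ins, not the real `azure-monitor-opentelemetry-exporter` API):

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """A toy finished span: just a name plus start/end timestamps."""
    name: str
    start: float
    end: float = 0.0
    attributes: dict = field(default_factory=dict)

class ToyBatchExporter:
    """Buffers finished spans and flushes them in batches, the way a
    batch span processor feeds a real exporter."""
    def __init__(self, max_batch=3):
        self.max_batch = max_batch
        self.buffer = []
        self.sent_batches = []

    def on_end(self, span):
        """Called whenever a span finishes; flush once the buffer fills."""
        self.buffer.append(span)
        if len(self.buffer) >= self.max_batch:
            self.flush()

    def flush(self):
        """A real exporter would serialize the batch and POST it to the
        ingestion endpoint here; we just record it."""
        if self.buffer:
            self.sent_batches.append(self.buffer)
            self.buffer = []

exporter = ToyBatchExporter(max_batch=2)
for i in range(3):
    exporter.on_end(Span(name=f"op-{i}", start=0.0, end=0.0))
exporter.flush()
print(len(exporter.sent_batches))  # 2 batches: one full, one from the final flush
```

Batching amortizes the cost of each network call to the ingestion endpoint, which is why the real SDKs pair the exporter with a batch span processor rather than exporting each span as it ends.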
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Title: OpenTelemetry with Azure Monitor overview
description: Provides an overview of how to use OpenTelemetry with Azure Monitor. Last updated 10/11/2021+ # OpenTelemetry overview
Telemetry, the data collected to observe your application, can be broken into th
Initially the OpenTelemetry community took on Distributed Tracing. Metrics and Logs are still in progress. A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter **preview** offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) **only include Distributed Tracing**.
-There are several sources that explain the three pillars in detail including the [OpenTelemetry community website](https://opentelemetry.io/docs/concepts/data-sources/), [OpenTelemetry Specifications](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md), and [Distributed Systems Observability](https://www.oreilly.com/library/view/distributed-systems-observability/9781492033431/ch04.html) by Cindy Sridharan.
+There are several sources that explain the three pillars in detail including the [OpenTelemetry community website](https://opentelemetry.io/docs/concepts/data-collection/), [OpenTelemetry Specifications](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md), and [Distributed Systems Observability](https://www.oreilly.com/library/view/distributed-systems-observability/9781492033431/ch04.html) by Cindy Sridharan.
In the following sections, we'll cover some telemetry collection basics.
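Distributed tracing, the pillar the preview offerings cover, works by propagating a shared trace identifier across service boundaries, typically via the W3C Trace Context `traceparent` header (`version-traceid-spanid-flags`). A minimal stdlib sketch of building and parsing that header (helper names are illustrative):

```python
import os

def make_traceparent(trace_id=None, span_id=None):
    """Build a W3C Trace Context traceparent header.

    Format: 2-hex version, 32-hex trace id, 16-hex span id, 2-hex flags.
    The trace id is shared by every service in the request; the span id
    identifies the current operation.
    """
    trace_id = trace_id or os.urandom(16)  # 16 random bytes -> 32 hex chars
    span_id = span_id or os.urandom(8)     # 8 random bytes -> 16 hex chars
    return f"00-{trace_id.hex()}-{span_id.hex()}-01"

def parse_traceparent(header):
    """Split a traceparent header into its four fields."""
    version, trace_id, span_id, flags = header.split("-")
    assert len(trace_id) == 32 and len(span_id) == 16
    return version, trace_id, span_id, flags

header = make_traceparent()
version, trace_id, span_id, flags = parse_traceparent(header)
print(trace_id, span_id)
```

Because every hop forwards the same trace id while minting a new span id, a backend like Application Insights can stitch the spans from all services back into one end-to-end transaction.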
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
Last updated 12/13/2018 ms.devlang: csharp + # System performance counters in Application Insights
azure-monitor Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/platforms.md
Title: 'Application Insights: languages, platforms, and integrations | Microsoft
description: Languages, platforms, and integrations available for Application Insights Last updated 10/29/2021-+ # Supported languages
azure-monitor Powershell Azure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell-azure-diagnostics.md
description: Automate configuring Azure Diagnostics to pipe data to Application
Last updated 08/06/2019
+ms.reviewer: cogoodson
# Using PowerShell to set up Application Insights for Azure Cloud Services
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
description: Automate creating and managing resources, alerts, and availability
Last updated 05/02/2020 + # Manage Application Insights resources using PowerShell
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
Title: Log-based and pre-aggregated metrics in Azure Application Insights | Micr
description: Why to use log-based versus pre-aggregated metrics in Azure Application Insights Last updated 09/18/2018+ # Log-based and pre-aggregated metrics in Application Insights
azure-monitor Proactive Application Security Detection Pack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-application-security-detection-pack.md
Title: Security detection Pack with Azure Application Insights
description: Monitor application with Azure Application Insights and smart detection for potential security issues. Last updated 12/12/2017+ # Application security detection pack (preview)
azure-monitor Proactive Arm Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-arm-config.md
Title: Smart detection rule settings - Azure Application Insights
description: Automate management and configuration of Azure Application Insights smart detection rules with Azure Resource Manager Templates Last updated 02/14/2021+ # Manage Application Insights smart detection rules using Azure Resource Manager templates
azure-monitor Proactive Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-cloud-services.md
Title: Alert on issues in Azure Cloud Services using the Azure Diagnostics integ
description: Monitor for issues like startup failures, crashes, and role recycle loops in Azure Cloud Services with Azure Application Insights Last updated 06/07/2018-+ # Alert on issues in Azure Cloud Services using the Azure diagnostics integration with Azure Application Insights
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-diagnostics.md
Title: Smart detection in Azure Application Insights | Microsoft Docs
description: Application Insights performs automatic deep analysis of your app telemetry and warns you of potential problems. Last updated 02/07/2019+ # Smart detection in Application Insights
azure-monitor Proactive Email Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-email-notification.md
Title: Smart Detection notification change - Azure Application Insights
description: Change to the default notification recipients from Smart Detection. Smart Detection lets you monitor application traces with Azure Application Insights for unusual patterns in trace telemetry. Last updated 02/14/2021+ # Smart Detection e-mail notification change
azure-monitor Proactive Exception Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-exception-volume.md
Title: Abnormal rise in exception volume - Azure Application Insights
description: Monitor application exceptions with smart detection in Azure Application Insights for unusual patterns in exception volume. Last updated 12/08/2017+ # Abnormal rise in exception volume (preview)
azure-monitor Proactive Potential Memory Leak https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-potential-memory-leak.md
Title: Detect memory leak - Azure Application Insights smart detection
description: Monitor applications with Azure Application Insights for potential memory leaks. Last updated 12/12/2017+ # Memory leak detection (preview)
azure-monitor Proactive Trace Severity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-trace-severity.md
Title: Degradation in trace severity ratio - Azure Application Insights
description: Monitor application traces with Azure Application Insights for unusual patterns in trace telemetry with smart detection. Last updated 11/27/2017+ # Degradation in trace severity ratio (preview)
azure-monitor Remove Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/remove-application-insights.md
Title: Remove Application Insights in Visual Studio - Azure Monitor
description: How to remove Application Insights SDK for ASP.NET and ASP.NET Core in Visual Studio. Last updated 04/06/2020+ # How to remove Application Insights in Visual Studio
azure-monitor Resource Manager App Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-app-resource.md
description: Sample Azure Resource Manager templates to deploy Application Insig
Last updated 04/27/2022 + # Resource Manager template samples for creating Application Insights resources
azure-monitor Resource Manager Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-web-app.md
description: Sample Azure Resource Manager templates to deploy an Azure App Serv
Last updated 04/27/2022+ # Resource Manager template samples for creating Azure App Services web apps with Application Insights monitoring
azure-monitor Resources Roles Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resources-roles-access-control.md
description: Owners, contributors and readers of your organization's insights.
Last updated 02/14/2019 + # Resources, roles, and access control in Application Insights
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Title: Telemetry sampling in Azure Application Insights | Microsoft Docs
description: How to keep the volume of telemetry under control. Last updated 08/26/2021- + # Sampling in Application Insights
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
description: How to use connection strings.
Last updated 04/13/2022 + # Connection strings
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
Title: How to design your Application Insights deployment - One vs many resource
description: Direct telemetry to different resources for development, test, and production stamps. Last updated 05/11/2020+ # How many Application Insights resources should I deploy
azure-monitor Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sharepoint.md
Title: Monitor a SharePoint site with Application Insights
description: Start monitoring a new application with a new instrumentation key Last updated 09/08/2020+ # Monitor a SharePoint site with Application Insights
azure-monitor Sla Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sla-report.md
Title: Downtime, SLA, and outage workbook - Application Insights
description: Calculate and report SLA for Web Test through a single pane of glass across your Application Insights resources and Azure subscriptions. Last updated 05/4/2021
+ms.reviewer: casocha
# Downtime, SLA, and outages workbook
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-collector-release-notes.md
Title: Release Notes for Microsoft.ApplicationInsights.SnapshotCollector NuGet p
description: Release notes for the Microsoft.ApplicationInsights.SnapshotCollector NuGet package used by the Application Insights Snapshot Debugger. Last updated 11/10/2020+ # Release notes for Microsoft.ApplicationInsights.SnapshotCollector
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-function-app.md
Title: Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions |
description: Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions Last updated 12/18/2020+ # Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-troubleshoot.md
Title: Troubleshoot Azure Application Insights Snapshot Debugger
description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Snapshot Debugger. Last updated 03/07/2019+ # <a id="troubleshooting"></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots
azure-monitor Snapshot Debugger Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-upgrade.md
Title: Upgrading Azure Application Insights Snapshot Debugger
description: How to upgrade Snapshot Debugger for .NET apps to the latest version on Azure App Services, or via Nuget packages Last updated 03/28/2019+ # Upgrading the Snapshot Debugger
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-vm.md
Title: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Ser
description: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines Last updated 03/07/2019+ # Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger.md
description: Debug snapshots are automatically collected when exceptions are thr
Last updated 10/12/2021-+ # Debug snapshots on exceptions in .NET apps
azure-monitor Source Map Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/source-map-support.md
description: Learn how to upload source maps to your own storage account Blob co
Last updated 06/23/2020 + # Source map support for JavaScript applications
azure-monitor Standard Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/standard-metrics.md
description: This article lists Azure Application Insights metrics with supporte
Last updated 07/03/2019+ # Application Insights standard metrics
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
description: Statistics about Application Insights SDKs and Auto-Instrumentation
Last updated 09/20/2021
+ms.reviewer: heya
# Statsbeat in Azure Application Insights
azure-monitor Status Monitor V2 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-get-started.md
description: A quickstart guide for Application Insights Agent. Monitor website
Last updated 01/22/2021 + # Get started with Azure Monitor Application Insights Agent for on-premises servers
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Title: Azure Application Insights Agent overview | Microsoft Docs
description: An overview of Application Insights Agent. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. Last updated 09/16/2019+ # Deploy Azure Monitor Application Insights Agent for on-premises servers
azure-monitor Status Monitor V2 Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-troubleshoot.md
Title: Azure Application Insights Agent troubleshooting and known issues | Micro
description: The known issues of Application Insights Agent and troubleshooting examples. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. Last updated 04/23/2019+ # Troubleshooting Application Insights Agent (formerly named Status Monitor v2)
azure-monitor Telemetry Channels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/telemetry-channels.md
Last updated 05/14/2019 ms.devlang: csharp + # Telemetry channels in Application Insights
azure-monitor Troubleshoot Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/troubleshoot-availability.md
Title: Troubleshoot your Azure Application Insights availability tests
description: Troubleshoot web tests in Azure Application Insights. Get alerts if a website becomes unavailable or responds slowly. Last updated 02/14/2021-+ # Troubleshooting
azure-monitor Tutorial Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-alert.md
description: Tutorial to send alerts in response to errors in your application u
Last updated 04/10/2019 + # Monitor and alert on application health with Azure Application Insights
azure-monitor Tutorial App Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-app-dashboards.md
description: Tutorial to create custom KPI dashboards using Azure Application In
Last updated 09/30/2020 + # Create custom KPI dashboards using Azure Application Insights
azure-monitor Tutorial Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-performance.md
description: Tutorial to find and diagnose performance issues in your applicatio
Last updated 06/15/2020 + # Find and diagnose performance issues with Azure Application Insights
azure-monitor Tutorial Runtime Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-runtime-exceptions.md
description: Tutorial to find and diagnose run-time exceptions in your applicati
Last updated 09/19/2017 + # Find and diagnose run-time exceptions with Azure Application Insights
azure-monitor Tutorial Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-users.md
description: Tutorial on using Application Insights to understand how customers
Last updated 07/30/2021 + # Use Azure Application Insights to understand how customers are using your application
azure-monitor Usage Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-flows.md
Title: Application Insights User Flows analyzes navigation flows
description: Analyze how users navigate between the pages and features of your web app. Last updated 07/30/2021+ # Analyze user navigation patterns with User Flows in Application Insights
azure-monitor Usage Funnels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-funnels.md
Title: Application Insights Funnels
description: Learn how you can use Funnels to discover how customers are interacting with your application. Last updated 07/30/2021+ # Discover how customers are using your application with Application Insights Funnels
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
Title: HEART analytics workbook
description: Product teams use the HEART Workbook to measure success across five user-centric dimensions to deliver better software. Last updated 11/11/2021+ # Analyzing product usage with HEART
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
Title: Usage analysis with Application Insights | Azure Monitor
description: Understand your users and what they do with your app. Last updated 07/30/2021+ # Usage analysis with Application Insights
azure-monitor Usage Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-retention.md
Title: Analyze web app user retention with Application Insights
description: How many users return to your app? Last updated 07/30/2021+ # User retention analysis for web applications with Application Insights
azure-monitor Usage Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-segmentation.md
Title: User, session, and event analysis in Application Insights
description: Demographic analysis of users of your web app. Last updated 07/30/2021+ # Users, sessions, and events analysis in Application Insights
azure-monitor Usage Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-troubleshoot.md
Title: Troubleshoot user analytics tools - Application Insights
description: Troubleshooting guide - analyzing site and app usage with Application Insights. Last updated 07/30/2021+ # Troubleshoot user behavior analytics tools in Application Insights
azure-monitor Visual Studio Codelens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/visual-studio-codelens.md
description: Quickly access your Application Insights request and exception tele
Last updated 03/17/2017 + # Application Insights telemetry in Visual Studio CodeLens
azure-monitor Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/visual-studio.md
description: Web app performance analysis and diagnostics during debugging and i
Last updated 03/17/2017 + # Debug your applications with Azure Application Insights in Visual Studio
azure-monitor Web App Extension Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/web-app-extension-release-notes.md
Title: Release Notes for Azure web app extension - Application Insights
description: Releases notes for Azure Web Apps Extension for runtime instrumentation with Application Insights. Last updated 06/26/2020+ # Release notes for Azure Web App extension for Application Insights
azure-monitor Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/windows-desktop.md
Last updated 06/11/2020 ms.devlang: csharp + # Monitoring usage and performance in Classic Windows Desktop apps
azure-monitor Work Item Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/work-item-integration.md
Title: Work Item Integration - Application Insights
description: Learn how to create work items in GitHub or Azure DevOps with Application Insights data embedded in them. Last updated 06/27/2021+ # Work Item Integration
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
ms.devlang: csharp Last updated 05/12/2022+ # Application Insights for Worker Service applications (non-HTTP applications)
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
description: Autoscale patterns in Azure for Web Apps, Virtual Machine Scale set
Last updated 04/22/2022 + # Best practices for Autoscale

Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md).
azure-monitor Autoscale Common Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-metrics.md
Last updated 04/22/2022 ++ # Azure Monitor autoscaling common metrics
azure-monitor Autoscale Common Scale Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-scale-patterns.md
description: Learn some of the common patterns to auto scale your resource in Az
Last updated 04/22/2022 + # Overview of common autoscale patterns

This article describes some of the common patterns to scale your resource in Azure.
azure-monitor Autoscale Custom Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-custom-metric.md
description: Learn how to scale your resource by custom metric in Azure.
Last updated 05/07/2017 + # Get started with auto scale by custom metric in Azure

This article describes how to scale your resource by a custom metric in the Azure portal.
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
description: "Learn how to scale your resource web app, cloud service, virtual m
Last updated 04/05/2022 + # Get started with Autoscale in Azure

This article describes how to set up your Autoscale settings for your resource in the Microsoft Azure portal.
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
description: "Autoscale in Microsoft Azure"
Last updated 04/22/2022+
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
Last updated 01/24/2022 + # Use predictive autoscale to scale out before load demands in virtual machine scale sets (Preview)
azure-monitor Autoscale Resource Log Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-resource-log-schema.md
description: Format of logs for monitoring and troubleshooting autoscale actions
Last updated 11/14/2019 + # Azure Monitor autoscale actions resource log schema
azure-monitor Autoscale Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-troubleshoot.md
description: Tracking down problems with Azure Monitor autoscaling used in Servi
Last updated 11/4/2019 +
azure-monitor Autoscale Understanding Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-understanding-settings.md
description: "A detailed breakdown of autoscale settings and how they work. Appl
Last updated 12/18/2017 + # Understand Autoscale settings

Autoscale settings help ensure that you have the right amount of resources running to handle the fluctuating load of your application. You can configure Autoscale settings to be triggered based on metrics that indicate load or performance, or triggered at a scheduled date and time. This article takes a detailed look at the anatomy of an Autoscale setting. The article begins with the schema and properties of a setting, and then walks through the different profile types that can be configured. Finally, the article discusses how the Autoscale feature in Azure evaluates which profile to execute at any given time.
azure-monitor Autoscale Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-virtual-machine-scale-sets.md
Last updated 06/25/2020-+
azure-monitor Autoscale Webhook Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-webhook-email.md
description: Learn how to use autoscale actions to call web URLs or send email n
Last updated 04/03/2017 + # Use autoscale actions to send email and webhook alert notifications in Azure Monitor

This article shows you how to set up triggers so that you can call specific web URLs or send emails based on autoscale actions in Azure.
azure-monitor Tutorial Autoscale Performance Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/tutorial-autoscale-performance-schedule.md
Last updated 12/11/2017
+ # Create an Autoscale Setting for Azure resources based on performance data or a schedule
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
ms.contributor: cawa
Last updated 03/21/2022 + # Troubleshoot Azure Monitor's Change Analysis (preview)
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
ms.contributor: cawa
Last updated 04/18/2022 + # Visualizations for Change Analysis in Azure Monitor (preview)
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
ms.contributor: cawa
Last updated 05/20/2022 + # Use Change Analysis in Azure Monitor (preview)
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
description: Cost details for data stored in a Log Analytics workspace in Azure
Last updated 03/24/2022
+ms.reviwer: dalek git
# Azure Monitor Logs pricing details
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-aspnetcore-linux.md
ms.devlang: csharp Last updated 02/23/2018+ # Profile ASP.NET Core Azure Linux web apps with Application Insights Profiler
azure-monitor Profiler Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-azure-functions.md
Title: Profile Azure Functions app with Application Insights Profiler description: Enable Application Insights Profiler for Azure Functions app.- ms.contributor: charles.weininger Last updated 05/03/2022+ # Profile live Azure Functions app with Application Insights
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md
Title: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger
description: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger Last updated 01/14/2021+ # Configure Bring Your Own Storage (BYOS) for Application Insights Profiler and Snapshot Debugger
azure-monitor Profiler Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-containers.md
description: Enable Application Insights Profiler for Azure Containers.
ms.contributor: charles.weininger Last updated 05/26/2022+ # Profile live Azure containers with Application Insights
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-overview.md
description: Identify the hot path in your web server code with a low-footprint
ms.contributor: charles.weininger Last updated 05/26/2022-+ # Profile production applications in Azure with Application Insights
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-servicefabric.md
description: Enable Profiler for a Service Fabric application
Last updated 08/06/2018+ # Profile live Azure Service Fabric applications with Application Insights
azure-monitor Profiler Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-settings.md
description: Use the Azure Application Insights Profiler settings pane to see Pr
ms.contributor: Charles.Weininger Last updated 04/26/2022-+ # Configure Application Insights Profiler
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-trackrequests.md
description: Write code to track requests with Application Insights so you can g
Last updated 08/06/2018+ # Write code to track requests with Application Insights
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md
Title: Troubleshoot problems with Azure Application Insights Profiler
description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Profiler. Last updated 08/06/2018+ # Troubleshoot problems enabling or viewing Application Insights Profiler
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-vm.md
Title: Profile web apps on an Azure VM - Application Insights Profiler
description: Profile web apps on an Azure VM by using Application Insights Profiler. Last updated 11/08/2019+ # Profile web apps running on an Azure virtual machine or a virtual machine scale set by using Application Insights Profiler
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler.md
Title: Enable Profiler for Azure App Service apps | Microsoft Docs
description: Profile live apps on Azure App Service with Application Insights Profiler. Last updated 05/11/2022+ # Enable Profiler for Azure App Service apps
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 06/02/2022 Last updated : 06/07/2022
Azure NetApp Files is updated regularly. This article provides a summary about t
## June 2022
+* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) (Preview)
+
+ [Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity and performance beyond the limits of native vSAN built on top of the AVS nodes and lower your overall total cost of ownership.
+
+ Regional Coverage: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East US, France Central, Germany West Central, Japan West, North Central US, North Europe, South Central US, Southeast Asia, Switzerland West, UK South, UK West, West US. Regional coverage will expand as the preview progresses.
+ * [Azure Policy built-in definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions) Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. Azure NetApp Files already supports Azure Policy via custom policy definitions. Azure NetApp Files now also provides built-in policies to enable organization admins to restrict creation of insecure NFS volumes or audit existing volumes more easily.
azure-portal Get Subscription Tenant Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/get-subscription-tenant-id.md
Follow these steps to retrieve the ID for a subscription in the Azure portal.
1. Copy the **Subscription ID**. You can paste this value into a text document or other location.

> [!TIP]
-> You can also list your subscriptions and view their IDs programmatically by using [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription?view=latest) (Azure PowerShell) or [az account list](/cli/azure/account?view=azure-cli-latest) (Azure CLI).
+> You can also list your subscriptions and view their IDs programmatically by using [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription?view=latest&preserve-view=true) (Azure PowerShell) or [az account list](/cli/azure/account?view=azure-cli-latest&preserve-view=true) (Azure CLI).
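The tip above can be illustrated with a short Azure CLI sketch; the `--query` projection shown here is illustrative, not the command's default output:

```azurecli
# List the subscriptions available to the signed-in account,
# showing only the display name and subscription ID.
az account list --query "[].{Name:name, SubscriptionId:id}" --output table
```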
## Find your Azure AD tenant
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.ApiManagement/service/namedValues | [listValue](/rest/api/apimanagement/current-ga/named-value/list-value) | | Microsoft.ApiManagement/service/openidConnectProviders | [listSecrets](/rest/api/apimanagement/current-ga/openid-connect-provider/list-secrets) | | Microsoft.ApiManagement/service/subscriptions | [listSecrets](/rest/api/apimanagement/current-ga/subscription/list-secrets) |
-| Microsoft.AppConfiguration/configurationStores | [ListKeys](/rest/api/appconfiguration/configurationstores/listkeys) |
+| Microsoft.AppConfiguration/configurationStores | [ListKeys](/rest/api/appconfiguration/stable/configuration-stores/list-keys) |
| Microsoft.AppPlatform/Spring | [listTestKeys](/rest/api/azurespringapps/services/list-test-keys) | | Microsoft.Automation/automationAccounts | [listKeys](/rest/api/automation/keys/listbyautomationaccount) | | Microsoft.Batch/batchAccounts | [listkeys](/rest/api/batchmanagement/batchaccount/getkeys) |
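To illustrate the `list*` pattern the table above documents, here is a minimal Bicep sketch; the store name and API version are assumptions, and reading a secret into a variable this way is for illustration only:

```bicep
// Reference an existing App Configuration store (hypothetical name).
resource configStore 'Microsoft.AppConfiguration/configurationStores@2023-03-01' existing = {
  name: 'myAppConfigStore'
}

// listKeys returns the store's access keys; take the first connection string.
var connectionString = configStore.listKeys().value[0].connectionString
```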
azure-resource-manager Tutorial Resource Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md
In this tutorial, there are two pieces that need to be deployed: **the custom pr
The template will use these resources:
-* [Microsoft.CustomProviders/resourceProviders](/azure/templates/microsoft.customproviders/resourcproviders)
+* [Microsoft.CustomProviders/resourceProviders](/azure/templates/microsoft.customproviders/resourceproviders)
* [Microsoft.Logic/workflows](/azure/templates/microsoft.logic/workflows) * [Microsoft.CustomProviders/associations](/azure/templates/microsoft.customproviders/associations)
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | locks | scope of assignment | 1-90 | Alphanumerics, periods, underscores, hyphens, and parenthesis.<br><br>Can't end in period. | > | policyAssignments | scope of assignment | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. | > | policyDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
-> | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
+> | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name<br><br>1-64 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
> | roleAssignments | tenant | 36 | Must be a globally unique identifier (GUID). | > | roleDefinitions | tenant | 36 | Must be a globally unique identifier (GUID). |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.ApiManagement/service/namedValues | [listValue](/rest/api/apimanagement/current-ga/named-value/list-value) | | Microsoft.ApiManagement/service/openidConnectProviders | [listSecrets](/rest/api/apimanagement/current-ga/openid-connect-provider/list-secrets) | | Microsoft.ApiManagement/service/subscriptions | [listSecrets](/rest/api/apimanagement/current-ga/subscription/list-secrets) |
-| Microsoft.AppConfiguration/configurationStores | [ListKeys](/rest/api/appconfiguration/configurationstores/listkeys) |
+| Microsoft.AppConfiguration/configurationStores | [ListKeys](/rest/api/appconfiguration/stable/configuration-stores/list-keys) |
| Microsoft.AppPlatform/Spring | [listTestKeys](/rest/api/azurespringapps/services/list-test-keys) | | Microsoft.Automation/automationAccounts | [listKeys](/rest/api/automation/keys/listbyautomationaccount) | | Microsoft.Batch/batchAccounts | [listKeys](/rest/api/batchmanagement/batchaccount/getkeys) |
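The same `ListKeys` operation can be sketched in ARM template syntax; the store name and API version are assumptions, and emitting a secret as an output is shown only to keep the example short:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "connectionString": {
      "type": "string",
      "value": "[listKeys(resourceId('Microsoft.AppConfiguration/configurationStores', 'myAppConfigStore'), '2023-03-01').value[0].connectionString]"
    }
  }
}
```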
azure-signalr Signalr Howto Scale Multi Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-scale-multi-instances.md
private class CustomRouter : EndpointRouterDecorator
## Dynamic Scale ServiceEndpoints
-From SDK version 1.5.0, we're enabling dynamic scale ServiceEndpoints for ASP.NET Core version first. So you don't have to restart app server when you need to add/remove a ServiceEndpoint. As ASP.NET Core is supporting default configuration like `appsettings.json` with `reloadOnChange: true`, you don't need to change a code and it's supported by nature. And if you'd like to add some customized configuration and work with hot-reload, please refer to [this](/aspnet/core/fundamentals/configuration/?view=aspnetcore-3.1).
+From SDK version 1.5.0, we're enabling dynamic scale ServiceEndpoints for ASP.NET Core version first. So you don't have to restart app server when you need to add/remove a ServiceEndpoint. As ASP.NET Core is supporting default configuration like `appsettings.json` with `reloadOnChange: true`, you don't need to change a code and it's supported by nature. And if you'd like to add some customized configuration and work with hot-reload, please refer to [this](/aspnet/core/fundamentals/configuration/?view=aspnetcore-3.1&preserve-view=true).
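As a hedged sketch of the hot-reload setup described above, a hypothetical `appsettings.json` declaring named ServiceEndpoints might look like this (endpoint names are examples; connection strings are placeholders):

```json
{
  "Azure": {
    "SignalR": {
      "Endpoints": {
        "east-region-a": "<connection-string>",
        "backup:secondary": "<connection-string>"
      }
    }
  }
}
```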
> [!NOTE] >
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
# NSG service tags for Azure Video Indexer
-Azure Video Indexer (formerly Video Analyzer for Media) is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (i.e AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
+Azure Video Indexer (formerly Video Analyzer for Media) is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (i.e AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](/azure/virtual-network/service-tags-overview). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
## Get started with service tags
This tag contains the IP addresses of Azure Video Indexer services for all regio
## Using Azure CLI
-You can also use Azure CLI to create a new or update an existing NSG rule and add the **AzureVideoAnalyzerForMedia** service tag using the `--source-address-prefixes`. For a full list of CLI commands and parameters see [az network nsg](/cli/azure/network/nsg/rule?view=azure-cli-latest)
+You can also use Azure CLI to create a new or update an existing NSG rule and add the **AzureVideoAnalyzerForMedia** service tag using the `--source-address-prefixes`. For a full list of CLI commands and parameters see [az network nsg](/cli/azure/network/nsg/rule?view=azure-cli-latest&preserve-view=true)
Example of a security rule using service tags. For more details, visit https://aka.ms/servicetags
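A minimal Azure CLI sketch of such a rule, assuming hypothetical resource group, NSG, and rule names:

```azurecli
# Allow outbound HTTPS to Azure Video Indexer through its service tag.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowVideoIndexerOutbound \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureVideoAnalyzerForMedia \
  --destination-port-ranges 443
```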
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview) description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. + Last updated 05/10/2022
To attach an Azure NetApp Files volume to your private cloud using Portal, follo
1. Search for **Microsoft.AVS** and select it. 1. Select **Register**. 1. Under **Settings**, select **Preview features**.
- 1. Verify you're registered for both the `CloudSanExperience` and `AfnDatstoreExperience` features.
+ 1. Verify you're registered for both the `CloudSanExperience` and `AnfDatstoreExperience` features.
1. Navigate to your Azure VMware Solution. Under **Manage**, select **Storage (preview)**. 1. Select **Connect Azure NetApp Files volume**.
azure-vmware Attach Disk Pools To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md
Title: Attach Azure disk pools to Azure VMware Solution hosts (Preview) description: Learn how to attach an Azure disk pool surfaced through an iSCSI target as the VMware vSphere datastore of an Azure VMware Solution private cloud. Once the datastore is configured, you can create volumes on it and consume them from your Azure VMware Solution private cloud. + Last updated 11/02/2021 #Customer intent: As an Azure service administrator, I want to scale my AVS hosts using disk pools instead of scaling clusters. So that I can use block storage for active working sets and tier less frequently accessed data from vSAN to disks. I can also replicate data from on-premises or primary VMware vSphere environment to disk storage for the secondary site.
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-security-integration.md
Title: Integrate Microsoft Defender for Cloud with Azure VMware Solution description: Learn how to protect your Azure VMware Solution VMs with Azure's native security tools from the workload protection dashboard. + Last updated 06/14/2021
azure-vmware Azure Vmware Solution Citrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-citrix.md
Title: Deploy Citrix on Azure VMware Solution description: Learn how to deploy VMware Citrix on Azure VMware Solution. + Last updated 11/02/2021
azure-vmware Azure Vmware Solution Horizon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-horizon.md
Title: Deploy Horizon on Azure VMware Solution description: Learn how to deploy VMware Horizon on Azure VMware Solution. + Last updated 04/11/2022
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Title: Platform updates for Azure VMware Solution
description: Learn about the platform updates to Azure VMware Solution. + Last updated 12/22/2021
Last updated 12/22/2021
Azure VMware Solution will apply important updates starting in March 2021. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management). +
+## June 7, 2022
+
+All new Azure VMware Solution private clouds in regions (East US2, Canada Central, North Europe, and Japan East), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+
+Any existing private clouds in the regions mentioned above will also be upgraded to these versions. For more information, see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+ ## May 23, 2022 All new Azure VMware Solution private clouds in regions (Germany West Central, Australia East, Central US and UK West), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
azure-vmware Backup Azure Vmware Solution Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/backup-azure-vmware-solution-virtual-machines.md
Title: Back up Azure VMware Solution VMs with Azure Backup Server description: Configure your Azure VMware Solution environment to back up virtual machines by using Azure Backup Server. + Last updated 04/06/2022
azure-vmware Bitnami Appliances Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/bitnami-appliances-deployment.md
Title: Deploy Bitnami virtual appliances description: Learn about the virtual appliances packed by Bitnami to deploy in your Azure VMware Solution private cloud. + Last updated 04/11/2022- # Bitnami appliance deployment
azure-vmware Concepts Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-api-management.md
Title: Concepts - API Management description: Learn how API Management protects APIs running on Azure VMware Solution virtual machines (VMs) + Last updated 04/28/2021
azure-vmware Concepts Design Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-design-public-internet-access.md
Title: Concept - Internet connectivity design considerations (Preview) description: Options for Azure VMware Solution Internet Connectivity. + Last updated 5/12/2022 + # Internet connectivity design considerations (Preview)

There are three primary patterns for creating outbound access to the Internet from Azure VMware Solution and to enable inbound Internet access to resources on your Azure VMware Solution private cloud.
azure-vmware Concepts Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-hub-and-spoke.md
Title: Concept - Integrate an Azure VMware Solution deployment in a hub and spoke architecture description: Learn about integrating an Azure VMware Solution deployment in a hub and spoke architecture on Azure. + Last updated 10/26/2020
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md
Title: Concepts - Identity and access description: Learn about the identity and access concepts of Azure VMware Solution + Last updated 06/06/2022
azure-vmware Concepts Network Design Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-network-design-considerations.md
Title: Concepts - Network design considerations description: Learn about network design considerations for Azure VMware Solution + Last updated 03/04/2022
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md
Title: Concepts - Network interconnectivity description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution. + Last updated 06/28/2021
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters. + Last updated 08/25/2021
azure-vmware Concepts Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-run-command.md
Title: Concepts - Run command in Azure VMware Solution (Preview) description: Learn about using run commands in Azure VMware Solution. + Last updated 09/17/2021
azure-vmware Concepts Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-security-recommendations.md
Title: Concepts - Security recommendations for Azure VMware Solution description: Learn about tips and best practices to help protect Azure VMware Solution deployments from vulnerabilities and malicious actors. + Last updated 01/10/2022
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
Title: Concepts - Storage
description: Learn about storage capacity, storage policies, fault tolerance, and storage integration in Azure VMware Solution private clouds. + Last updated 05/02/2022
azure-vmware Configure Alerts For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-alerts-for-azure-vmware-solution.md
Title: Configure alerts and work with metrics in Azure VMware Solution description: Learn how to use alerts to receive notifications. Also learn how to work with metrics to gain deeper insights into your Azure VMware Solution private cloud. + Last updated 07/23/2021
azure-vmware Configure Dhcp Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md
Title: Configure DHCP for Azure VMware Solution
description: Learn how to configure DHCP by using either NSX-T Manager to host a DHCP server or use a third-party external DHCP server. + Last updated 04/08/2022 # Customer intent: As an Azure service administrator, I want to configure DHCP by using either NSX-T Manager to host a DHCP server or use a third-party external DHCP server.
azure-vmware Configure Dns Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-dns-azure-vmware-solution.md
Title: Configure DNS forwarder for Azure VMware Solution
description: Learn how to configure DNS forwarder for Azure VMware Solution using the Azure portal. + Last updated 04/11/2022 #Customer intent: As an Azure service administrator, I want to <define conditional forwarding rules for a desired domain name to a desired set of private DNS servers via the NSX-T Data Center DNS Service.>
azure-vmware Configure Github Enterprise Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-github-enterprise-server.md
Title: Configure GitHub Enterprise Server on Azure VMware Solution description: Learn how to Set up GitHub Enterprise Server on your Azure VMware Solution private cloud. + Last updated 07/07/2021
azure-vmware Configure Hcx Network Extension High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-hcx-network-extension-high-availability.md
Title: Configure HCX network extension high availability description: Learn how to configure HCX network extension high availability + Last updated 05/06/2022
azure-vmware Configure Hcx Network Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-hcx-network-extension.md
Title: Create an HCX network extension description: Learn how to extend any networks from your on-premises environment to Azure VMware Solution. + Last updated 09/07/2021
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
Title: Configure external identity source for vCenter Server description: Learn how to configure Active Directory over LDAP or LDAPS for vCenter Server as an external identity source. + Last updated 04/22/2022
azure-vmware Configure L2 Stretched Vmware Hcx Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-l2-stretched-vmware-hcx-networks.md
Title: Configure DHCP on L2 stretched VMware HCX networks
description: Learn how to send DHCP requests from your Azure VMware Solution VMs to a non-NSX-T DHCP server. + Last updated 04/11/2022 # Customer intent: As an Azure service administrator, I want to configure DHCP on L2 stretched VMware HCX networks to send DHCP requests from my Azure VMware Solution VMs to a non-NSX-T DHCP server.
azure-vmware Configure Nsx Network Components Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-nsx-network-components-azure-portal.md
Title: Configure NSX-T Data Center network components using Azure VMware Solution description: Learn how to use the Azure VMware Solution to configure NSX-T Data Center network segments. + Last updated 04/11/2022 # Customer intent: As an Azure service administrator, I want to configure NSX-T Data Center network components using a simplified view of NSX-T Data Center operations a VMware administrator needs daily. The simplified view is targeted at users unfamiliar with NSX-T Manager.
azure-vmware Configure Port Mirroring Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-port-mirroring-azure-vmware-solution.md
Title: Configure port mirroring for Azure VMware Solution
description: Learn how to configure port mirroring to monitor network traffic that involves forwarding a copy of each packet from one network switch port to another. + Last updated 04/11/2022 # Customer intent: As an Azure service administrator, I want to configure port mirroring to monitor network traffic that involves forwarding a copy of each packet from one network switch port to another.
azure-vmware Configure Site To Site Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-site-to-site-vpn-gateway.md
Title: Configure a site-to-site VPN in vWAN for Azure VMware Solution
description: Learn how to establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel into Azure VMware Solutions. + Last updated 04/11/2022
azure-vmware Configure Storage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-storage-policy.md
Title: Configure storage policy description: Learn how to configure storage policy for your Azure VMware Solution virtual machines. + Last updated 04/11/2022 #Customer intent: As an Azure service administrator, I want to set the VMware vSAN storage policies to determine how storage is allocated to the VM.
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
Title: Configure VMware HCX in Azure VMware Solution description: Configure the on-premises VMware HCX Connector for your Azure VMware Solution private cloud. + Last updated 09/07/2021
azure-vmware Configure Vmware Syslogs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-syslogs.md
Title: Configure VMware syslogs for Azure VMware Solution description: Learn how to configure diagnostic settings to collect VMware syslogs for your Azure VMware Solution private cloud. + Last updated 04/11/2022 #Customer intent: As an Azure service administrator, I want to collect VMware syslogs and store them in my storage account so that I can view the vCenter Server logs and analyze them for diagnostic purposes.
azure-vmware Configure Windows Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-windows-server-failover-cluster.md
Title: Configure Windows Server Failover Cluster on Azure VMware Solution vSAN description: Learn how to configure Windows Server Failover Cluster (WSFC) on Azure VMware Solution vSAN with native shared disks. + Last updated 04/11/2022
azure-vmware Connect Multiple Private Clouds Same Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/connect-multiple-private-clouds-same-region.md
Title: Connect multiple Azure VMware Solution private clouds in the same region description: Learn how to create a network connection between two or more Azure VMware Solution private clouds located in the same region. + Last updated 09/20/2021 #Customer intent: As an Azure service administrator, I want to create a network connection between two or more Azure VMware Solution private clouds located in the same region.
azure-vmware Create Placement Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/create-placement-policy.md
Title: Create placement policy description: Learn how to create a placement policy in Azure VMware Solution to control the placement of virtual machines (VMs) on hosts within a cluster through the Azure portal. + Last updated 04/07/2022 #Customer intent: As an Azure service administrator, I want to control the placement of virtual machines on hosts within a cluster in my private cloud.
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
Title: Deploy Arc for Azure VMware Solution (Preview) description: Learn how to set up and enable Arc for your Azure VMware Solution private cloud. + Last updated 04/11/2022
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-azure-vmware-solution.md
Title: Deploy and configure Azure VMware Solution
description: Learn how to use the information gathered in the planning stage to deploy and configure the Azure VMware Solution private cloud. + Last updated 07/28/2021
azure-vmware Deploy Disaster Recovery Using Jetstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md
Title: Deploy disaster recovery using JetStream DR description: Learn how to implement JetStream DR for your Azure VMware Solution private cloud and on-premises VMware workloads. + Last updated 04/11/2022
azure-vmware Deploy Disaster Recovery Using Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-vmware-hcx.md
Title: Deploy disaster recovery using VMware HCX description: Learn how to deploy disaster recovery of your virtual machines (VMs) with VMware HCX Disaster Recovery. Also learn how to use Azure VMware Solution as the recovery or target site. + Last updated 06/10/2021
azure-vmware Deploy Traffic Manager Balance Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-traffic-manager-balance-workloads.md
Title: Deploy Traffic Manager to balance Azure VMware Solution workloads description: Learn how to integrate Traffic Manager with Azure VMware Solution to balance application workloads across multiple endpoints in different regions. + Last updated 02/08/2021
azure-vmware Deploy Vm Content Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vm-content-library.md
Title: Create a content library to deploy VMs in Azure VMware Solution description: Create a content library to deploy a VM in an Azure VMware Solution private cloud. + Last updated 04/11/2022
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
Title: Deploy Zerto disaster recovery on Azure VMware Solution (Initial Availability) description: Learn how to implement Zerto disaster recovery for on-premises VMware or Azure VMware Solution virtual machines. + Last updated 10/25/2021
azure-vmware Disable Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disable-internet-access.md
Title: Disable internet access or enable a default route description: This article explains how to disable internet access for Azure VMware Solution and enable a default route for Azure VMware Solution. + Last updated 05/12/2022 + # Disable internet access or enable a default route In this article, you'll learn how to disable Internet access or enable a default route for your Azure VMware Solution private cloud. There are multiple ways to set up a default route. You can use a Virtual WAN hub, a Network Virtual Appliance in a Virtual Network, or a default route from on-premises. If you don't set up a default route, there will be no Internet access to your Azure VMware Solution private cloud.
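The role the default route plays in the excerpt above can be sketched with a minimal longest-prefix-match lookup (illustrative only; the route names and next hops are hypothetical, not Azure resource names). The `0.0.0.0/0` entry is the default route: it matches any destination no more specific route covers, and without it a lookup finds nothing, which mirrors "no Internet access":

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split(".").reduce((acc, o) => (acc << 8) | Number(o), 0) >>> 0;
}

// Does a CIDR route (e.g. "10.0.0.0/8") cover this destination address?
function matches(cidr, ip) {
  const [prefix, lenStr] = cidr.split("/");
  const len = Number(lenStr);
  const mask = len === 0 ? 0 : (~0 << (32 - len)) >>> 0; // /0 matches everything
  return (ipToInt(ip) & mask) === (ipToInt(prefix) & mask);
}

// Longest-prefix match: the most specific matching route wins.
// Returns null when no route matches -- i.e., no path to the Internet.
function lookup(routes, ip) {
  const hit = routes
    .filter((r) => matches(r.cidr, ip))
    .sort((a, b) => Number(b.cidr.split("/")[1]) - Number(a.cidr.split("/")[1]))[0];
  return hit ? hit.nextHop : null;
}

const table = [
  { cidr: "10.0.0.0/8", nextHop: "private-cloud" },
  { cidr: "0.0.0.0/0", nextHop: "virtual-wan-hub" }, // the default route
];

console.log(lookup(table, "93.184.216.34")); // → virtual-wan-hub
console.log(lookup([{ cidr: "10.0.0.0/8", nextHop: "private-cloud" }], "93.184.216.34")); // → null
```

Whether that `0.0.0.0/0` entry points at a Virtual WAN hub, a Network Virtual Appliance, or an on-premises edge is exactly the design choice the article describes.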
azure-vmware Disaster Recovery Using Vmware Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md
Title: Deploy disaster recovery with VMware Site Recovery Manager description: Deploy disaster recovery with VMware Site Recovery Manager (SRM) in your Azure VMware Solution private cloud. + Last updated 04/11/2022
azure-vmware Ecosystem App Monitoring Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-app-monitoring-solutions.md
Title: Application performance monitoring and troubleshooting solutions for Azure VMware Solution description: Learn about leading application monitoring and troubleshooting solutions for your Azure VMware Solution private cloud. + Last updated 04/11/2022
azure-vmware Ecosystem Back Up Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-back-up-vms.md
Title: Backup solutions for Azure VMware Solution virtual machines description: Learn about leading backup and restore solutions for your Azure VMware Solution virtual machines. + Last updated 04/21/2021
azure-vmware Ecosystem Disaster Recovery Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-disaster-recovery-vms.md
Title: Disaster recovery solutions for Azure VMware Solution virtual machines description: Learn about leading disaster recovery solutions for your Azure VMware Solution private cloud. + Last updated 11/29/2021 + # Disaster recovery solutions for Azure VMware Solution virtual machines (VMs) One of the most important aspects of any Azure VMware Solution deployment is disaster recovery, which can be achieved by creating disaster recovery plans between different Azure VMware Solution regions or between Azure and an on-premises vSphere environment.
azure-vmware Ecosystem Migration Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-migration-vms.md
Title: Migration solutions for Azure VMware Solution virtual machines description: Learn about leading migration solutions for your Azure VMware Solution virtual machines. + Last updated 03/22/2021
azure-vmware Ecosystem Os Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-os-vms.md
Title: Operating system support for Azure VMware Solution virtual machines description: Learn about operating system support for your Azure VMware Solution virtual machines. + Last updated 04/11/2022
azure-vmware Ecosystem Security Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-security-solutions.md
Title: Security solutions for Azure VMware Solution description: Learn about leading security solutions for your Azure VMware Solution private cloud. + Last updated 04/11/2022 + # Security solutions for Azure VMware Solution A fundamental part of Azure VMware Solution is security. It allows customers to run their VMware-based workloads in a safe and trustable environment.
azure-vmware Enable Managed Snat For Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-managed-snat-for-workloads.md
Title: Enable Managed SNAT for Azure VMware Solution Workloads description: This article explains how to enable Managed SNAT for Azure VMware Solution Workloads. + Last updated 05/12/2022 + # Enable Managed SNAT for Azure VMware Solution workloads In this article, you'll learn how to enable Azure VMware Solution's Managed Source NAT (SNAT) to connect to the Internet outbound. A SNAT service translates from RFC1918 space to the public Internet for simple outbound Internet access. The SNAT service won't work when you have a default route from Azure.
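For context on the "RFC1918 space" the SNAT excerpt refers to, a small helper (hypothetical, not part of Azure VMware Solution or any SDK) can check whether an IPv4 address falls in the private ranges the SNAT service translates:

```javascript
// Hypothetical helper: true if an IPv4 address is in RFC 1918 private space
// (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) -- the source range a SNAT
// service translates to public addresses for outbound Internet access.
function isRfc1918(ip) {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((n) => !Number.isInteger(n) || n < 0 || n > 255)) {
    return false;
  }
  const [a, b] = parts;
  return (
    a === 10 ||
    (a === 172 && b >= 16 && b <= 31) ||
    (a === 192 && b === 168)
  );
}

console.log(isRfc1918("192.168.1.10")); // true  (private; SNAT translates it)
console.log(isRfc1918("8.8.8.8"));      // false (already publicly routable)
```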
azure-vmware Enable Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-internet-access.md
Title: Enable public internet for Azure VMware Solution workloads description: This article explains how to use the public IP functionality in Azure Virtual WAN. + Last updated 06/25/2021 + # Enable public internet for Azure VMware Solution workloads Public IP is a feature in Azure VMware Solution connectivity. It makes resources, such as web servers, virtual machines (VMs), and hosts accessible through a public network.
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
Title: Enable Public IP to the NSX Edge for Azure VMware Solution (Preview) description: This article explains how to enable internet access for your Azure VMware Solution. + Last updated 05/12/2022 + # Enable Public IP to the NSX Edge for Azure VMware Solution (Preview) In this article, you'll learn how to enable Public IP to the NSX Edge for your Azure VMware Solution.
azure-vmware Fix Deployment Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/fix-deployment-failures.md
Title: Support for Azure VMware Solution deployment or provisioning failure description: Get information from your Azure VMware Solution private cloud to file a service request for an Azure VMware Solution deployment or provisioning failure. + Last updated 10/28/2020
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md
Title: Install VMware HCX in Azure VMware Solution description: Install VMware HCX in your Azure VMware Solution private cloud. + Last updated 03/29/2022
azure-vmware Integrate Azure Native Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/integrate-azure-native-services.md
Title: Monitor and protect VMs with Azure native services description: Learn how to integrate and deploy Microsoft Azure native tools to monitor and manage your Azure VMware Solution workloads. + Last updated 08/15/2021
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Azure VMware Solution SLA guarantees that Azure VMware management tools (vCenter Server and NSX Manager) will be available at least 99.9% of the time. + Last updated 04/20/2021
azure-vmware Move Azure Vmware Solution Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-azure-vmware-solution-across-regions.md
Title: Move Azure VMware Solution resources across regions
description: This article describes how to move Azure VMware Solution resources from one Azure region to another. + Last updated 04/11/2022 # Customer intent: As an Azure service administrator, I want to move my Azure VMware Solution resources from Azure Region A to Azure Region B.
azure-vmware Move Ea Csp Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-ea-csp-subscriptions.md
Title: Move Azure VMware Solution subscription to another subscription
description: This article describes how to move Azure VMware Solution subscription to another subscription. You might move your resources for various reasons, such as billing. + Last updated 04/26/2021 # Customer intent: As an Azure service administrator, I want to move my Azure VMware Solution subscription to another subscription.
azure-vmware Netapp Files With Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
Title: Attach Azure NetApp Files to Azure VMware Solution VMs description: Use Azure NetApp Files with Azure VMware Solution VMs to migrate and sync data across on-premises servers, Azure VMware Solution VMs, and cloud infrastructures. + Last updated 05/10/2022
azure-vmware Plan Private Cloud Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/plan-private-cloud-deployment.md
Title: Plan the Azure VMware Solution deployment
description: Learn how to plan your Azure VMware Solution deployment. + Last updated 09/27/2021
azure-vmware Protect Azure Vmware Solution With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
Title: Protect web apps on Azure VMware Solution with Azure Application Gateway description: Configure Azure Application Gateway to securely expose your web apps running on Azure VMware Solution. + Last updated 02/10/2021
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
Title: Request host quota for Azure VMware Solution
description: Learn how to request host quota/capacity for Azure VMware Solution. You can also request more hosts in an existing Azure VMware Solution private cloud. + Last updated 09/27/2021 #Customer intent: As an Azure service admin, I want to request hosts for either a new private cloud deployment or I want to have more hosts allocated in an existing private cloud.
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/reserved-instance.md
Title: Reserved instances of Azure VMware Solution description: Learn how to buy a reserved instance for Azure VMware Solution. The reserved instance covers only the compute part of your usage and includes software licensing costs. + Last updated 05/13/2021
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/rotate-cloudadmin-credentials.md
Title: Rotate the cloudadmin credentials for Azure VMware Solution description: Learn how to rotate the vCenter Server credentials for your Azure VMware Solution private cloud. + Last updated 04/11/2022 #Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter Server CloudAdmin credentials.
azure-vmware Set Up Backup Server For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/set-up-backup-server-for-azure-vmware-solution.md
Title: Set up Azure Backup Server for Azure VMware Solution description: Set up your Azure VMware Solution environment to back up virtual machines using Azure Backup Server. + Last updated 04/06/2022
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-access-private-cloud.md
Title: Tutorial - Access your private cloud description: Learn how to access an Azure VMware Solution private cloud + Last updated 08/13/2021
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-configure-networking.md
Title: Tutorial - Configure networking for your VMware private cloud in Azure
description: Learn to create and configure the networking needed to deploy your private cloud in Azure + Last updated 05/31/2022
azure-vmware Tutorial Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-create-private-cloud.md
Title: Tutorial - Deploy an Azure VMware Solution private cloud description: Learn how to create and deploy an Azure VMware Solution private cloud + Last updated 09/29/2021
azure-vmware Tutorial Delete Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-delete-private-cloud.md
Title: Tutorial - Delete an Azure VMware Solution private cloud description: Learn how to delete an Azure VMware Solution private cloud that you no longer need. + Last updated 03/13/2021
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Title: Peer on-premises environments to Azure VMware Solution
description: Learn how to create ExpressRoute Global Reach peering to a private cloud in Azure VMware Solution. + Last updated 07/28/2021
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-network-checklist.md
Title: Tutorial - Network planning checklist description: Learn about the network requirements for network connectivity and network ports on Azure VMware Solution. + Last updated 07/01/2021
azure-vmware Tutorial Nsx T Network Segment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-nsx-t-network-segment.md
Title: Tutorial - Add a network segment in Azure VMware Solution
description: Learn how to add a network segment to use for virtual machines (VMs) in vCenter Server. + Last updated 07/16/2021
azure-vmware Tutorial Scale Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-scale-private-cloud.md
Title: Tutorial - Scale clusters in a private cloud description: In this tutorial, you use the Azure portal to scale an Azure VMware Solution private cloud. + Last updated 08/03/2021 #Customer intent: As a VMware administrator, I want to learn how to scale an Azure VMware Solution private cloud in the Azure portal.
azure-vmware Vmware Hcx Mon Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vmware-hcx-mon-guidance.md
Title: VMware HCX Mobility Optimized Networking (MON) guidance description: Learn about Azure VMware Solution-specific use cases for Mobility Optimized Networking (MON). + Last updated 04/11/2022
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Title: Configure vRealize Operations for Azure VMware Solution description: Learn how to set up vRealize Operations for your Azure VMware Solution private cloud. + Last updated 04/11/2022
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md
When a WebSocket connection connects, the Web PubSub service transforms the conn
[Source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub-express) | [Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub-express) |
-[API reference documentation](/javascript/api/overview/azure/web-pubsub-express-readme?view=azure-node-latest) |
+[API reference documentation](/javascript/api/overview/azure/web-pubsub-express-readme?view=azure-node-latest&preserve-view=true) |
[Product documentation](./index.yml) | [Samples][samples_ref]
azure-web-pubsub Tutorial Serverless Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-static-web-app.md
Title: Tutorial - Create a serverless chat app using Azure Web PubSub service and Azure Static Web Apps
-description: A tutorial for how to use Azure Web PubSub service and Azure Static Web Apps to build a serverless chat application.
+ Title: Tutorial - Create a serverless chat app with Azure Web PubSub service and Azure Static Web Apps
+description: A tutorial about how to use Azure Web PubSub service and Azure Static Web Apps to build a serverless chat application.
Previously updated : 06/01/2022 Last updated : 06/03/2022
-# Tutorial: Create a serverless chat app using Azure Web PubSub service and Azure Static Web Apps
+# Tutorial: Create a serverless chat app with Azure Web PubSub service and Azure Static Web Apps
-The Azure Web PubSub service helps you build real-time messaging web applications using WebSockets. And with Azure Static Web Apps, you can automatically build and deploy full stack web apps to Azure from a code repository conveniently. In this tutorial, you learn how to use Azure Web PubSub service and Azure Static Web Apps to build a serverless real-time messaging application under chat room scenario.
+Azure Web PubSub service helps you build real-time messaging web applications using WebSockets. By using Azure Static Web Apps, you can automatically build and deploy full-stack web apps to Azure from a code repository. In this tutorial, you'll learn how to use Web PubSub service and Static Web Apps to build a serverless, real-time chat room messaging application.
-In this tutorial, you learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"] > * Build a serverless chat app
In this tutorial, you learn how to:
## Overview
-* GitHub along with DevOps provide source control and continuous delivery. So whenever there's code change to the source repo, Azure DevOps pipeline will soon apply it to Azure Static Web App and present to endpoint user.
-* When a new user is login, Functions `login` API will be triggered and generate Azure Web PubSub service client connection url.
-* When client init the connection request to Azure Web PubSub service, service will send a system `connect` event and Functions `connect` API will be triggered to auth the user.
-* When client send message to Azure Web PubSub service, service will send a user `message` event and Functions `message` API will be triggered and broadcast the message to all the connected clients.
-* Functions `validate` API will be triggered periodically for [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) purpose, when the events in Azure Web PubSub are configured with predefined parameter `{event}`, that is, https://$STATIC_WEB_APP/api/{event}.
+GitHub or Azure Repos provide source control for Static Web Apps. Azure monitors the repo branch you select, and every time there's a code change to the source repo, a new build of your web app is automatically run and deployed to Azure. Continuous delivery is provided by GitHub Actions and Azure Pipelines. Static Web Apps detects the new build and presents it to the endpoint user.
+
+The sample chat room application provided with this tutorial has the following workflow.
+
+1. When a user signs in to the app, the Azure Functions `login` API will be triggered to generate a Web PubSub service client connection URL.
+1. When the client initializes the connection request to Web PubSub, the service sends a system `connect` event that triggers the Functions `connect` API to authenticate the user.
+1. When a client sends a message to Azure Web PubSub service, the service will respond with a user `message` event and the Functions `message` API will be triggered to broadcast the message to all the connected clients.
+1. The Functions `validate` API is triggered periodically for [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) when the events in Azure Web PubSub are configured with predefined parameter `{event}`, that is, https://$STATIC_WEB_APP/api/{event}.
> [!NOTE]
-> Functions APIs `connect` and `message` will be triggered when Azure Web PubSub service is configured with these 2 events.
+> The Functions APIs `connect` and `message` are triggered when the Azure Web PubSub service is configured with these two events.
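The abuse-protection step above boils down to echoing the requesting origin back in a response header. As a minimal hypothetical sketch (not the sample repo's actual code), a `validate` handler only needs to implement this contract:

```javascript
// Hypothetical sketch of a CloudEvents abuse-protection responder.
// Per the CloudEvents HTTP webhook spec, the preflight request carries a
// WebHook-Request-Origin header, and the handler approves the origin by
// echoing it back in a WebHook-Allowed-Origin response header.
function handleValidate(req) {
  const origin = req.headers['webhook-request-origin']; // Node lowercases incoming header names
  if (!origin) {
    return { status: 400 }; // not a valid abuse-protection request
  }
  return {
    status: 200,
    headers: { 'WebHook-Allowed-Origin': origin },
  };
}
```

A real Azure Functions handler would wrap this logic in the Functions request/response types, but the header exchange is the essential part.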
## Prerequisites
-* [GitHub](https://github.com/) account
-* [Azure](https://portal.azure.com/) account
-* [Azure CLI](/cli/azure) (version 2.29.0 or higher) or [Azure Cloud Shell](../cloud-shell/quickstart.md) to manage Azure resources
+* A [GitHub](https://github.com/) account.
+* An [Azure](https://portal.azure.com/) account. If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+* [Azure CLI](/cli/azure/install-azure-cli) (version 2.29.0 or higher) or [Azure Cloud Shell](../cloud-shell/quickstart.md) to manage Azure resources.
## Create a Web PubSub resource
In this tutorial, you learn how to:
```azurecli-interactive AWPS_ACCESS_KEY=<YOUR_AWPS_ACCESS_KEY> ```
- Replace the placeholder `<YOUR_AWPS_ACCESS_KEY>` from previous result `primaryConnectionString`.
-## Create a repository
+ Replace the placeholder `<YOUR_AWPS_ACCESS_KEY>` with the value for `primaryConnectionString` from the previous step.
-This article uses a GitHub template repository to make it easy for you to get started. The template features a starter app used to deploy using Azure Static Web Apps.
+## Create a repository
-1. Navigate to the following template to create a new repository under your repo:
- 1. [https://github.com/Azure/awps-swa-sample/generate](https://github.com/login?return_to=/Azure/awps-swa-sample/generate)
-1. Name your repository **my-awps-swa-app**
+This article uses a GitHub template repository to make it easy for you to get started. The template features a starter app that you will deploy to Azure Static Web Apps.
-Select **`Create repository from template`**.
+1. Go to [https://github.com/Azure/awps-swa-sample/generate](https://github.com/login?return_to=/Azure/awps-swa-sample/generate) to create a new repo for this tutorial.
+1. Select yourself as **Owner** and name your repository **my-awps-swa-app**.
+1. You can create a **Public** or **Private** repo according to your preference. Both work for the tutorial.
+1. Select **Create repository from template**.
## Create a static web app
Now that the repository is created, you can create a static web app from the Azu
Replace the placeholder `<YOUR_GITHUB_USER_NAME>` with your GitHub user name.
-1. Create a new static web app from your repository. As you execute this command, the CLI starts GitHub interactive login experience. Following the message to complete authorization.
+1. Create a new static web app from your repository. When you run this command, the CLI starts a GitHub interactive sign-in. Follow the prompts to complete the authorization.
```azurecli-interactive az staticwebapp create \
Now that the repository is created, you can create a static web app from the Azu
--api-location "api" \ --login-with-github ```+ > [!IMPORTANT] > The URL passed to the `--source` parameter must not include the `.git` suffix.
-1. Navigate to **https://github.com/login/device**.
+1. Go to **https://github.com/login/device**.
1. Enter the user code as displayed in your console's message.
-1. Select the **Continue** button.
+1. Select **Continue**.
-1. Select the **Authorize AzureAppServiceCLI** button.
+1. Select **Authorize AzureAppServiceCLI**.
1. Configure the static web app settings.
Now that the repository is created, you can create a static web app from the Azu
## View the website
-There are two aspects to deploying a static app. The first operation creates the underlying Azure resources that make up your app. The second is a GitHub Actions workflow that builds and publishes your application.
+There are two aspects to deploying a static app: the first creates the underlying Azure resources that make up your app; the second is a GitHub Actions workflow that builds and publishes your application.
Before you can navigate to your new static site, the deployment build must first finish running.
Before you can navigate to your new static site, the deployment build must first
At this point, Azure is creating the resources to support your static web app. Wait until the icon next to the running workflow turns into a check mark with green background ✅. This operation may take a few minutes to complete.
- Once the success icon appears, the workflow is complete and you can return back to your console window.
+ Once the success icon appears, the workflow is complete and you can return to your console window.
-2. Run the following command to query for your website's URL.
+1. Run the following command to query for your website's URL.
```azurecli-interactive az staticwebapp show \
Before you can navigate to your new static site, the deployment build must first
## Configure the Web PubSub event handler
-Now you're very close to complete. The last step is to configure Web PubSub transfer client requests to your function APIs.
+You're almost done. The last step is to configure Web PubSub to transfer client requests to your function APIs.
-1. Run command to configure Web PubSub service events. It's mapping to some functions under the `api` folder in your repo.
+1. Run the following command to configure Web PubSub service events. It maps the service events to functions under the `api` folder in your repo.
```azurecli-interactive az webpubsub hub create \
Now you're very close to complete. The last step is to configure Web PubSub tran
--event-handler url-template=https://$STATIC_WEB_APP/api/{event} system-event="connect" ```
-Now you're ready to play with your website **<YOUR_STATIC_WEB_APP>**. Copy it to browser and click continue to start chatting with your friends.
+Now you're ready to play with your website **<YOUR_STATIC_WEB_APP>**. Copy the URL into a browser and select **Continue** to start chatting with your friends.
## Clean up resources
az group delete --name my-awps-swa-group
## Next steps
-In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application.
+In this quickstart, you learned how to run a serverless chat application. Now you can start building your own application.
> [!div class="nextstepaction"] > [Tutorial: Client streaming using subprotocol](tutorial-subprotocol.md)
backup Backup Azure Database Postgresql Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-troubleshoot.md
Title: Troubleshoot Azure Database for PostgreSQL backup description: Troubleshooting information for backing up Azure Database for PostgreSQL. Previously updated : 01/24/2022 Last updated : 06/07/2022
Establish network line of sight by enabling the **Allow access to Azure services
![Screenshot showing how to search for vault name.](./media/backup-azure-database-postgresql/search-for-vault-name.png)
+## UserErrorDBUserAuthFailed
+
+The Azure Backup service uses the credentials mentioned in the key vault to access the database as a database user. The relevant key vault and the secret are [provided during configuration of backup](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases). Ensure that the credentials stored as part of the secret value in the key vault are valid. Ensure that the specified database user has login access.
+
+## UserErrorInvalidSecret
+
+The Azure Backup service uses the credentials mentioned in the key vault to access the database as a database user. The relevant key vault and the secret are [provided during configuration of backup](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases). Ensure that the specified secret name is present in the key vault.
+
+## UserErrorMissingDBPermissions
+
+The Azure Backup service uses the credentials mentioned in the key vault to access the database as a database user. The relevant key vault and the secret are [provided during configuration of backup](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases). Grant appropriate permissions to the relevant backup or database user to perform this operation on the database.
+
+## UserErrorSecretValueInUnsupportedFormat
+
+The Azure Backup service uses the credentials mentioned in the key vault to access the database as a database user. The relevant key vault and the secret are [provided during configuration of backup](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases). However, the secret value is not in a format supported by Azure Backup. Check the supported format as documented [here](backup-azure-database-postgresql.md#create-secrets-in-the-key-vault).
+
+## UserErrorInvalidSecretStore
+
+The Azure Backup service uses the credentials mentioned in the key vault to access the database as a database user. The relevant key vault and the secret are [provided during configuration of backup](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases). Ensure that the given key vault exists and that the backup service has been given access as documented [here](backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-backup).
+
+## UserErrorMissingPermissionsOnSecretStore
+
+The Azure Backup service uses the credentials mentioned in the key vault to access the database as a database user. The relevant key vault and the secret are [provided during configuration of backup](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases). Ensure that the backup vault's MSI has been given access to the key vault as documented [here](backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-backup).
+
+## UserErrorSSLDisabled
+
+SSL needs to be enabled for connections to the server.
+
+## UserErrorDBNotFound
+
+Ensure that the database and the relevant server exist.
+
+## UserErrorDatabaseNameAlreadyInUse
+
+The name given for the restored database already exists, so the restore operation failed. Retry the restore operation with a different name.
+
+## UserErrorServerConnectionClosed
+
+The operation failed because the server closed the connection unexpectedly. Retry the operation. If the error persists, contact Microsoft Support.
++ ## Next steps [About Azure Database for PostgreSQL backup](backup-azure-database-postgresql-overview.md)
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
BareMetal Infrastructure offers these benefits:
- Certified hardware for specialized workloads - SAP (Refer to [SAP Note #1928533](https://launchpad.support.sap.com/#/notes/1928533). You'll need an SAP account for access.)
- - Oracle (Refer to [Oracle document ID #948372.1](https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=52088246571495&id=948372.1&_adf.ctrl-state=kwnkj1hzm_52). You'll need an Oracle account for access.)
+ - Oracle (You'll need an Oracle account for access.)
- Non-hypervised BareMetal instance, single tenant ownership - Low latency between Azure hosted application VMs to BareMetal instances (0.35 ms) - All Flash SSD and NVMe
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
Previously updated : 12/06/2021 Last updated : 06/06/2022 #Customer intent: As a website owner, I want to enable HTTPS on the custom domain of my CDN endpoint so that my users can use my custom domain to access my content securely.
Grant Azure CDN permission to access the certificates (secrets) in your Azure Ke
:::image type="content" source="./media/cdn-custom-ssl/cdn-access-policy-settings.png" alt-text="Select service principal of Azure CDN" border="true":::
-4. Select **Certificate permissions**. Select the check boxes for **Get** and **List** to allow CDN permissions to get and list the certificates.
+4. Select **Certificate permissions**. Select the check box for **Get** to allow CDN permissions to get the certificates.
-5. Select **Secret permissions**. Select the check boxes for **Get** and **List** to allow CDN permissions to get and list the secrets:
+5. Select **Secret permissions**. Select the check box for **Get** to allow CDN permissions to get the secrets:
:::image type="content" source="./media/cdn-custom-ssl/cdn-vault-permissions.png" alt-text="Select permissions for CDN to keyvault" border="true":::
cdn Cdn Manage Expiration Of Blob Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-expiration-of-blob-content.md
Title: Manage expiration of Azure Blob storage
description: Learn about the options for controlling time-to-live for blobs in Azure CDN caching.
+documentationcenter:
editor: ''
$blob.ICloudBlob.SetProperties()
> ## Setting Cache-Control headers by using .NET
-To specify a blob's `Cache-Control` header by using .NET code, use the [Azure Storage Client Library for .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md) to set the [BlobHttpHeaders.CacheControl](/dotnet/api/azure.storage.blobs.models.blobhttpheaders.cachecontrol?view=azure-dotnet) property.
+To specify a blob's `Cache-Control` header by using .NET code, use the [Azure Storage Client Library for .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md) to set the [BlobHttpHeaders.CacheControl](/dotnet/api/azure.storage.blobs.models.blobhttpheaders.cachecontrol?view=azure-dotnet&preserve-view=true) property.
For example:
certification How To Test Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-test-pnp.md
To meet the certification requirements, your device must:
## Test with the Azure IoT Extension CLI
-The [Azure IoT CLI extension](/cli/azure/ext/azure-iot/iot/product?view=azure-cli-latest) lets you validate that the device implementation matches the model before you submit the device for certification through the Azure Certified Device portal.
+The [Azure IoT CLI extension](/cli/azure/ext/azure-iot/iot/product?view=azure-cli-latest&preserve-view=true) lets you validate that the device implementation matches the model before you submit the device for certification through the Azure Certified Device portal.
The following steps show you how to prepare for and run the certification tests using the CLI: ### Install the Azure IoT extension for the Azure CLI
-Install the [Azure CLI](/cli/azure/install-azure-cli) and review the installation instructions to set up the [Azure CLI](/cli/azure/iot?view=azure-cli-latest) in your environment.
+Install the [Azure CLI](/cli/azure/install-azure-cli) and review the installation instructions to set up the [Azure CLI](/cli/azure/iot?view=azure-cli-latest&preserve-view=true) in your environment.
To install the Azure IoT Extension, run the following command:
To install the Azure IoT Extension, run the following command:
az extension add --name azure-iot ```
-To learn more, see [Azure CLI for Azure IoT](/cli/azure/iot/product?view=azure-cli-latest).
+To learn more, see [Azure CLI for Azure IoT](/cli/azure/iot/product?view=azure-cli-latest&preserve-view=true).
### Create a new product test
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
The following faults are available for use today. Visit the [Fault Providers](./
| Capability Name | CPUPressure-1.0 | | Target type | Microsoft-Agent | | Supported OS Types | Windows, Linux |
-| Description | Add CPU pressure up to the specified value on the VM where this fault is injected for the duration of the fault action. The artificial CPU pressure is removed at the end of the duration or if the experiment is canceled. |
+| Description | Add CPU pressure up to the specified value on the VM where this fault is injected for the duration of the fault action. The artificial CPU pressure is removed at the end of the duration or if the experiment is canceled. On Windows, the "% Processor Utility" performance counter is used at fault start to determine the current CPU percentage, which is subtracted from the pressureLevel defined in the fault so that % Processor Utility approximately reaches the pressureLevel defined in the fault parameters. |
| Prerequisites | **Linux:** Running the fault on a Linux VM requires the **stress-ng** utility to be installed. You can install it using the package manager for your Linux distro, </br> APT Command to install stress-ng: *sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng* </br> YUM Command to install stress-ng: *sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng* | | | **Windows:** None. | | Urn | urn:csci:microsoft:agent:cpuPressure/1.0 |
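The Windows adjustment described above can be sketched as simple arithmetic. This is a hypothetical illustration of the subtraction, not the agent's actual implementation:

```javascript
// Hypothetical sketch: the artificial load the agent would add is the
// requested pressureLevel minus the CPU the machine is already using
// (from the "% Processor Utility" counter), floored at zero.
function artificialLoad(pressureLevel, currentProcessorUtility) {
  return Math.max(0, pressureLevel - currentProcessorUtility);
}
```

For example, with a pressureLevel of 95 on a VM already at 20% utilization, roughly 75 points of artificial pressure would be added; a VM already above the target gets no additional load.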
cognitive-services Multivariate How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/multivariate-how-to.md
- Previously updated : 01/18/2022+ Last updated : 06/07/2022
The following are the basic steps needed to use MVAD:
1. Get model status. 1. Detect anomalies during the inference process with the trained MVAD model.
-To test out this feature, try this SDK [Notebook](https://github.com/Azure-Samples/AnomalyDetector/blob/master/ipython-notebook/API%20Sample/Multivariate%20API%20Demo%20Notebook.ipynb).
+To test out this feature, try this SDK [Notebook](https://github.com/Azure-Samples/AnomalyDetector/blob/master/ipython-notebook/API%20Sample/Multivariate%20API%20Demo%20Notebook.ipynb). For instructions on how to run a Jupyter notebook, see [Install and Run a Jupyter Notebook](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/install.html#).
## Multivariate Anomaly Detector APIs overview
The response contains the result status, variable information, inference paramet
* Error code `InsufficientHistoricalData`. This usually happens only with the first few timestamps because the model inferences data in a window-based manner and it needs historical data to make a decision. For the first few timestamps, there is insufficient historical data, so inference cannot be performed on them. In this case, the error message can be ignored. * `"isAnomaly": false` indicates the current timestamp is not an anomaly.
- * `severity ` indicates the relative severity of the anomaly and for normal data it is always 0.
+ * `severity` indicates the relative severity of the anomaly and for normal data it is always 0.
* `score` is the raw output of the model on which the model makes a decision, which could be non-zero even for normal data points. * `"isAnomaly": true` indicates an anomaly at the current timestamp.
- * `severity ` indicates the relative severity of the anomaly and for abnormal data it is always greater than 0.
+ * `severity` indicates the relative severity of the anomaly and for abnormal data it is always greater than 0.
* `score` is the raw output of the model on which the model makes a decision. `severity` is a derived value from `score`. Every data point has a `score`. * `contributors` is a list containing the contribution score of each variable. Higher contribution scores indicate higher possibility of the root cause. This list is often used for interpreting anomalies and diagnosing the root causes.
A sample request looks like following format, this case is detecting last two ti
"2021-01-01T00:00:00Z", "2021-01-01T00:01:00Z", "2021-01-01T00:02:00Z"
- //more variables
+ //more timestamps
], "values": [ 0.4551378545933972,
A sample request looks like following format, this case is detecting last two ti
"2021-01-01T00:00:00Z", "2021-01-01T00:01:00Z", "2021-01-01T00:02:00Z"
- //more variables
+ //more timestamps
], "values": [ 0.9617871613964145,
A sample request looks like following format, this case is detecting last two ti
"2021-01-01T00:00:00Z", "2021-01-01T00:01:00Z", "2021-01-01T00:02:00Z"
- //more variables
+ //more timestamps
], "values": [ 0.4030756879437628,
See the following example of a JSON response:
## Next steps
-* [What is the Multivariate Anomaly Detector API?](../overview-multivariate.md)
-* [Join us to get more supports!](https://aka.ms/adadvisorsjoin)
+* [Best practices for using the Multivariate Anomaly Detector API](../concepts/best-practices-multivariate.md)
+* [Join us to get more support!](https://aka.ms/adadvisorsjoin)
cognitive-services How To Specify Source Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-specify-source-language.md
- Title: Specify source language for speech to text-
-description: The Speech SDK allows you to specify the source language when you convert speech to text. This article describes how to use the FromConfig and SourceLanguageConfig methods to let the Speech service know the source language and provide a custom model target.
------ Previously updated : 05/19/2020-
-zone_pivot_groups: programming-languages-set-two
---
-# Specify source language for speech-to-text
-
-In this article, you'll learn how to specify the source language for an audio input passed to the Speech SDK for speech recognition. The example code that's provided specifies a custom speech model for improved recognition.
--
-## Specify source language in C#
-
-In the following example, the source language is provided explicitly as a parameter by using the `SpeechRecognizer` construct:
-
-```csharp
-var recognizer = new SpeechRecognizer(speechConfig, "de-DE", audioConfig);
-```
-
-In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
-
-```csharp
-var sourceLanguageConfig = SourceLanguageConfig.FromLanguage("de-DE");
-var recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
-```
-
-In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
-
-```csharp
-var sourceLanguageConfig = SourceLanguageConfig.FromLanguage("de-DE", "The Endpoint ID for your custom model.");
-var recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
-```
-
->[!Note]
-> The `SpeechRecognitionLanguage` and `EndpointId` set methods are deprecated from the `SpeechConfig` class in C#. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
---
-## Specify source language in C++
-
-In the following example, the source language is provided explicitly as a parameter by using the `FromConfig` method.
-
-```C++
-auto recognizer = SpeechRecognizer::FromConfig(speechConfig, "de-DE", audioConfig);
-```
-
-In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to `FromConfig` when you create the `recognizer` construct.
-
-```C++
-auto sourceLanguageConfig = SourceLanguageConfig::FromLanguage("de-DE");
-auto recognizer = SpeechRecognizer::FromConfig(speechConfig, sourceLanguageConfig, audioConfig);
-```
-
-In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to `FromConfig` when you create the `recognizer` construct.
-
-```C++
-auto sourceLanguageConfig = SourceLanguageConfig::FromLanguage("de-DE", "The Endpoint ID for your custom model.");
-auto recognizer = SpeechRecognizer::FromConfig(speechConfig, sourceLanguageConfig, audioConfig);
-```
-
->[!Note]
-> `SetSpeechRecognitionLanguage` and `SetEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
---
-## Specify source language in Java
-
-In the following example, the source language is provided explicitly when you create a new `SpeechRecognizer` construct.
-
-```Java
-SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, "de-DE", audioConfig);
-```
-
-In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter when you create a new `SpeechRecognizer` construct.
-
-```Java
-SourceLanguageConfig sourceLanguageConfig = SourceLanguageConfig.fromLanguage("de-DE");
-SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
-```
-
-In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter when you create a new `SpeechRecognizer` construct.
-
-```Java
-SourceLanguageConfig sourceLanguageConfig = SourceLanguageConfig.fromLanguage("de-DE", "The Endpoint ID for your custom model.");
-SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
-```
-
->[!Note]
-> `setSpeechRecognitionLanguage` and `setEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
---
-## Specify source language in Python
-
-In the following example, the source language is provided explicitly as a parameter by using the `SpeechRecognizer` construct.
-
-```Python
-speech_recognizer = speechsdk.SpeechRecognizer(
- speech_config=speech_config, language="de-DE", audio_config=audio_config)
-```
-
-In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `SourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
-
-```Python
-source_language_config = speechsdk.languageconfig.SourceLanguageConfig("de-DE")
-speech_recognizer = speechsdk.SpeechRecognizer(
- speech_config=speech_config, source_language_config=source_language_config, audio_config=audio_config)
-```
-
-In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `SourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
-
-```Python
-source_language_config = speechsdk.languageconfig.SourceLanguageConfig("de-DE", "The Endpoint ID for your custom model.")
-speech_recognizer = speechsdk.SpeechRecognizer(
- speech_config=speech_config, source_language_config=source_language_config, audio_config=audio_config)
-```
-
->[!Note]
-> The `speech_recognition_language` and `endpoint_id` properties are deprecated from the `SpeechConfig` class in Python. The use of these properties is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
---
-## Specify source language in JavaScript
-
-The first step is to create a `SpeechConfig` construct:
-
-```Javascript
-var speechConfig = sdk.SpeechConfig.fromSubscription("YourSubscriptionkey", "YourRegion");
-```
-
-Next, specify the source language of your audio with `speechRecognitionLanguage`:
-
-```Javascript
-speechConfig.speechRecognitionLanguage = "de-DE";
-```
-
-If you're using a custom model for recognition, you can specify the endpoint with `endpointId`:
-
-```Javascript
-speechConfig.endpointId = "The Endpoint ID for your custom model.";
-```
-
-## Specify source language in Objective-C
-
-In the following example, the source language is provided explicitly as a parameter by using the `SPXSpeechRecognizer` construct.
-
-```Objective-C
-SPXSpeechRecognizer* speechRecognizer = \
- [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig language:@"de-DE" audioConfiguration:audioConfig];
-```
-
-In the following example, the source language is provided by using `SPXSourceLanguageConfiguration`. Then, `SPXSourceLanguageConfiguration` is passed as a parameter to the `SPXSpeechRecognizer` construct.
-
-```Objective-C
-SPXSourceLanguageConfiguration* sourceLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"de-DE"];
-SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
- sourceLanguageConfiguration:sourceLanguageConfig
- audioConfiguration:audioConfig];
-```
-
-In the following example, the source language and custom endpoint are provided by using `SPXSourceLanguageConfiguration`. Then, `SPXSourceLanguageConfiguration` is passed as a parameter to the `SPXSpeechRecognizer` construct.
-
-```Objective-C
-SPXSourceLanguageConfiguration* sourceLanguageConfig = \
- [[SPXSourceLanguageConfiguration alloc]initWithLanguage:@"de-DE"
- endpointId:@"The Endpoint ID for your custom model."];
-SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
- sourceLanguageConfiguration:sourceLanguageConfig
- audioConfiguration:audioConfig];
-```
-
->[!Note]
-> The `speechRecognitionLanguage` and `endpointId` properties are deprecated from the `SPXSpeechConfiguration` class in Objective-C. The use of these properties is discouraged. Don't use them when you create a `SPXSpeechRecognizer` construct.
--
-## Next steps
--- [Language support](language-support.md)
cognitive-services Cognitive Services Apis Create Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account-cli.md
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services Previously updated : 03/02/2022 Last updated : 06/06/2022 ms.devlang: azurecli
ms.devlang: azurecli
# Quickstart: Create a Cognitive Services resource using the Azure CLI
-Use this quickstart to get started with Azure Cognitive Services using [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) commands.
+Use this quickstart to create a Cognitive Services resource using [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) commands. After creating the resource, use the keys and endpoint generated for you to authenticate your applications.
-Azure Cognitive Services are cloud-based services with REST APIs, and client library SDKs available to help developers build cognitive intelligence into applications without having direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
+Azure Cognitive Services is a cloud-based service with REST APIs, and client library SDKs available to help developers build cognitive intelligence into applications without having direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
-Cognitive Services are represented by Azure [resources](../azure-resource-manager/management/manage-resources-portal.md) that you create in your Azure subscription. After creating the resource, Use the keys and endpoint generated for you to authenticate your applications.
-
-In this quickstart, you'll learn how to sign up for Azure Cognitive Services and create an account that has a single-service or multi-service subscription via the [Azure CLI](/cli/azure/install-azure-cli). These services are represented by Azure [resources](../azure-resource-manager/management/manage-resources-portal.md), which enable you to connect to one or more of the Azure Cognitive Services APIs.
+## Types of Cognitive Services resource
[!INCLUDE [cognitive-services-subscription-types](../../includes/cognitive-services-subscription-types.md)]
You can also use the green **Try It** button to run these commands in your brows
## Create a new Azure Cognitive Services resource group
-Before creating a Cognitive Services resource, you must have an Azure resource group to contain the resource. When you create a new resource, you have the option to either create a new resource group, or use an existing one. This article shows how to create a new resource group.
+Before creating a Cognitive Services resource, you must have an Azure resource group to contain the resource. When you create a new resource, you can either create a new resource group, or use an existing one. This article shows how to create a new resource group.
### Choose your resource group location
az group create \
### Choose a cognitive service and pricing tier
-When creating a new resource, you will need to know the "kind" of service you want to use, along with the [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/) (or sku) you want. You will use this and other information as parameters when creating the resource.
+When creating a new resource, you'll need to know the "kind" of service you want to use, along with the [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/) (or sku) you want. You'll use this and other information as parameters when creating the resource.
[!INCLUDE [cognitive-services-subscription-types](../../includes/cognitive-services-subscription-types.md)]
az cognitiveservices account list-kinds
### Add a new resource to your resource group
-To create and subscribe to a new Cognitive Services resource, use the [az cognitiveservices account create](/cli/azure/cognitiveservices/account#az-cognitiveservices-account-create) command. This command adds a new billable resource to the resource group created earlier. When creating your new resource, you will need to know the "kind" of service you want to use, along with its pricing tier (or sku) and an Azure location:
+To create and subscribe to a new Cognitive Services resource, use the [az cognitiveservices account create](/cli/azure/cognitiveservices/account#az-cognitiveservices-account-create) command. This command adds a new billable resource to the resource group created earlier. When creating your new resource, you'll need to know the "kind" of service you want to use, along with its pricing tier (or sku) and an Azure location:
You can create an F0 (free) resource for Anomaly Detector, named `anomaly-detector-resource` with the command below.
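As a sketch, that create command might look like the following; the resource group name and location are placeholders and assume the resource group already exists:

```shell
# Sketch of the create command; "example-resource-group" and the location
# are placeholders -- substitute your own values.
az cognitiveservices account create \
    --name anomaly-detector-resource \
    --resource-group example-resource-group \
    --kind AnomalyDetector \
    --sku F0 \
    --location westus2 \
    --yes
```

After the command completes, retrieve the keys with `az cognitiveservices account keys list` using the same name and resource group.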
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account.md
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services Previously updated : 05/24/2022 Last updated : 06/06/2022 # Quickstart: Create a Cognitive Services resource using the Azure portal
-Use this quickstart to start using Azure Cognitive Services. After creating a Cognitive Service resource in the Azure portal, you'll get an endpoint and a key for authenticating your applications.
+Use this quickstart to create a Cognitive Services resource. After you create a Cognitive Service resource in the Azure portal, you'll get an endpoint and a key for authenticating your applications.
Azure Cognitive Services are cloud-based services with REST APIs, and client library SDKs available to help developers build cognitive intelligence into applications without having direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
+## Types of Cognitive Services resource
+ [!INCLUDE [cognitive-services-subscription-types](../../includes/cognitive-services-subscription-types.md)] ## Prerequisites
Azure Cognitive Services are cloud-based services with REST APIs, and client lib
* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
* [!INCLUDE [contributor-requirement](./includes/quickstarts/contributor-requirement.md)]

## Create a new Azure Cognitive Services resource

### [Multi-service](#tab/multiservice)
The multi-service resource is named **Cognitive Services** in the portal. The mu
:::image type="content" source="media/cognitive-services-apis-create-account/cognitive-services-resource-deployed.png" alt-text="Get resource keys screen"::: 1. From the quickstart pane that opens, you can access the resource endpoint and manage keys.-
+<!--
1. If you missed the previous steps or need to find your resource later, go to the [Azure services](https://ms.portal.azure.com/#home) home page. From here you can view recent resources, select **My resources**, or use the search box to find your resource by name. :::image type="content" source="media/cognitive-services-apis-create-account/home-my-resources.png" alt-text="Find resource keys from home screen":::
+-->
[!INCLUDE [cognitive-services-environment-variables](../../includes/cognitive-services-environment-variables.md)]
The multi-service resource is named **Cognitive Services** in the portal. The mu
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources contained in the group.

1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **Resource Groups** to display the list of your resource groups.
-1. Locate the resource group containing the resource to be deleted
-1. Right-click on the resource group listing. Select **Delete resource group**, and confirm.
+1. Locate the resource group containing the resource to be deleted.
+1. If you want to delete the entire resource group, select the resource group name. On the next page, select **Delete resource group**, and confirm.
+1. If you want to delete only the Cognitive Service resource, select the resource group to see all the resources within it. On the next page, select the resource that you want to delete, click the ellipsis menu for that row, and select **Delete**.
If you need to recover a deleted resource, see [Recover deleted Cognitive Services resources](manage-resources.md).
cognitive-services Adding Synonyms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/adding-synonyms.md
Let's add the following words and their alterations to improve the results:
|Word | Alterations| |--|--|
-| fix problems | `troubleshoot`, `trouble-shoot`|
-| whiteboard | `white-board`, `white board` |
-| bluetooth | `blue-tooth`, `blue tooth` |
+| fix problems | `troubleshoot`, `diagnostic`|
+| whiteboard | `white board`, `white canvas` |
+| bluetooth | `blue tooth`, `BT` |
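Conceptually, each alterations group tells the service to treat its entries as interchangeable when matching questions. A minimal sketch of that idea (not the service's implementation) looks like this:

```javascript
// Minimal sketch (not the service's implementation): treat the words in an
// alterations group as interchangeable by rewriting each one to the first
// (canonical) entry of its group.
const ALTERATIONS = [
  ["fix problems", "troubleshoot", "diagnostic"],
  ["whiteboard", "white board", "white canvas"],
  ["bluetooth", "blue tooth", "BT"],
];

function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function normalize(text) {
  for (const group of ALTERATIONS) {
    const canonical = group[0];
    for (const alt of group.slice(1)) {
      // Word boundaries keep short alterations like "BT" from
      // matching inside other words.
      text = text.replace(new RegExp(`\\b${escapeRegExp(alt)}\\b`, "gi"), canonical);
    }
  }
  return text;
}

console.log(normalize("my blue tooth mouse is broken")); // my bluetooth mouse is broken
```

The payload below expresses the same groups in the format the service actually accepts.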
```json {
Let's add the following words and their alterations to improve the results:
"alterations": [ "fix problems", "troubleshoot",
- "trouble-shoot",
+ "diagnostic",
] }, { "alterations": [ "whiteboard",
- "white-board",
- "white board"
+ "white board",
+ "white canvas"
] }, { "alterations": [ "bluetooth",
- "blue-tooth",
- "blue tooth"
+ "blue tooth",
+ "BT"
] } ]
communication-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/best-practices.md
You can request device permissions using the SDK:
#### Camera being used by another process - On Windows Chrome and Windows Edge, if you start/join/accept a call with video on while the camera device is being used by a process other than the browser that the Web SDK is running in, the call starts with audio only and no video. A cameraStartFailed UFD is raised because the camera failed to start while in use by another process. The same applies to turning video on mid-call. You can turn off the camera in the other process so that it releases the camera device, and then start video again from the call; video then turns on for the call and remote participants start seeing your video. -- This is not an issue in MacOS Chrome nor MacOS Safari because the OS will let processes/threads share the camera device.
+- This is not an issue in macOS Chrome nor macOS Safari because the OS will let processes/threads share the camera device.
- On mobile devices, if ProcessA requests the camera device while it is being used by ProcessB, ProcessA overtakes the camera device and ProcessB stops using it. - On iOS Safari, you can't have the camera on for multiple call clients within the same tab or across tabs. When any call client uses the camera, it overtakes the camera from any previous call client that was using it. The previous call client gets a cameraStoppedUnexpectedly UFD.
+### Screen sharing
+#### Closing out of application does not stop it from being shared
+For example, let's say that from Chromium you screen share the Microsoft Teams application. You then click the "X" button on the Teams application to close it. The Teams application isn't closed; it keeps running in the background, and you'll still see its icon on the right side of your desktop bar. Because the application is still running, it is still being screen shared, and remote participants in the call can still see it. To stop sharing the application, right-click its icon on the desktop bar and select quit, click the "Stop sharing" button in the browser, or call the SDK's Call.stopScreenSharing() API.
+
+#### Safari can only do full screen sharing
+Safari only allows sharing the entire screen, unlike Chromium, which lets you share the full screen, a specific desktop app, or a specific browser tab.
+
+#### Screen sharing permissions on macOS
+To screen share in macOS Safari or macOS Chrome, screen recording permission must be granted to the browser in the OS menu: "System Preferences" -> "Security & Privacy" -> "Screen Recording".
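The Call.stopScreenSharing() API mentioned above can be wrapped in a small guard. This is a sketch assuming an established `call` object from `@azure/communication-calling`, not a complete calling setup:

```javascript
// Sketch: stop an active screen share from code. `call` is assumed to be
// an established Call object from @azure/communication-calling.
async function stopShareIfActive(call) {
  // isScreenSharingOn reflects whether this client is currently sharing.
  if (call.isScreenSharingOn) {
    await call.stopScreenSharing();
    return true; // sharing was stopped
  }
  return false; // nothing to stop
}
```

Calling this when the user closes the shared application from your UI avoids the "still sharing in the background" situation described above.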
+ ## Next steps For more information, see the following articles:
communication-services Network Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md
The Network Diagnostics Tool enables Azure Communication Services developers to
![Network Diagnostic Tool home screen](../media/network-diagnostic-tool.png) As part of the diagnostics performed, the user is asked to enable permissions for the tool to access their devices. Next, the user is asked to record their voice, which is then played back using an echo bot to ensure that the microphone is working. The tool finally, performs a video test. The test uses the camera to detect video and measure the quality for sent and received frames. +
+If you are looking to build your own Network Diagnostic Tool or to perform deeper integration of this tool into your application, you can leverage [pre-call diagnostic APIs](../voice-video-calling/pre-call-diagnostics.md) for the calling SDK.
## Performed tests
When a user runs a network diagnostic, the tool collects and store service and c
## Next Steps
+- [Use Pre-Call Diagnostic APIs to build your own tech check](../voice-video-calling/pre-call-diagnostics.md)
- [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md) - [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md)-- [Add Real-Time Inspection tool to your application](./real-time-inspection.md)
+- [Debug your application with Monitoring tool](./real-time-inspection.md)
- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
communication-services Real Time Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/real-time-inspection.md
Communication Monitoring is compatible with the same browsers as the Calling SDK
## Get started with Communication Monitoring
-The tool can be accessed through an npm package `@azure/communication-monitoring`. The package contains the `CommunicationMonitoring` object that can be attached to a `Call`. The Call Inspector requires an `HTMLDivElement` as part of its constructor on which it will be rendered. The `HTMLDivElement` will dictate the size of the Call Inspector.
+The tool can be accessed through an npm package `@azure/communication-monitoring`. The package contains the `CommunicationMonitoring` object that can be attached to a `Call`. Instructions on how to initialize the required `CallClient` and `CallAgent` objects can be found [here](https://docs.microsoft.com/azure/communication-services/how-tos/calling-sdk/manage-calls?pivots=platform-web#initialize-required-objects). `CommunicationMonitoring` also requires an `HTMLDivElement` as part of its constructor on which it will be rendered. The `HTMLDivElement` will dictate the size of the rendered panel.
### Installing Communication Monitoring
npm i @azure/communication-monitoring
import { CallAgent, CallClient } from '@azure/communication-calling' import { CommunicationMonitoring } from '@azure/communication-monitoring'
-interface Options {
- callClient: CallClient
- callAgent: CallAgent
- divElement: HTMLDivElement
-}
- const selectedDiv = document.getElementById('selectedDiv') const options = {
- callClient = this.callClient,
- callAgent = this.callAgent,
+ callClient: {INSERT CALL CLIENT OBJECT},
+ callAgent: {INSERT CALL AGENT OBJECT},
divElement: selectedDiv, }
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
# Quickstart: Set up and manage access tokens for Teams users + In this quickstart, you'll build a .NET console application to authenticate a Microsoft 365 user by using the Microsoft Authentication Library (MSAL) and retrieve a Microsoft Azure Active Directory (Azure AD) user token. You'll then exchange that token for an access token for a Teams user with the Azure Communication Services Identity SDK. The access token for a Teams user can then be used by the Communication Services Calling SDK to build a custom Teams endpoint. > [!NOTE]
communication-services Get Started With Voice Video Calling Custom Teams Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md
# QuickStart: Add 1:1 video calling to your customized Teams application + [!INCLUDE [Video calling with JavaScript](./includes/custom-teams-endpoint/voice-video-calling-cte-javascript.md)] ## Clean up resources
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
The following steps use the Azure portal, but with the appropriate Azure Logic A
1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-1. Find and select the [SQL Server managed connector trigger](/connectors/sql) that you want to use.
+1. Find and select the [SQL Server trigger](/connectors/sql) that you want to use.
1. On the designer, under the search box, select **All**. 1. In the search box, enter **sql server**.
- 1. From the triggers list, select the SQL trigger that you want. This example continues with the trigger named **When an item is created**.
+ 1. From the triggers list, select the SQL trigger that you want.
+
+ This example continues with the trigger named **When an item is created**.
![Screenshot showing the Azure portal, Consumption logic app workflow designer, search box with "sql server", and "When an item is created" trigger selected.](./media/connectors-create-api-sqlazure/select-sql-server-trigger-consumption.png)
-1. If the designer prompts you for connection information, [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+1. Provide the [information for your connection](#create-connection). When you're done, select **Create**.
-1. In the trigger, specify the interval and frequency for how often the trigger checks the table.
+1. After the trigger information box appears, specify the interval and frequency for how often the trigger checks the table.
1. To add other properties available for this trigger, open the **Add new parameter** list and select those properties. This trigger returns only one row from the selected table, and nothing else. To perform other tasks, continue by adding either a [SQL Server connector action](#add-sql-action) or [another action](../connectors/apis-list.md) that performs the next task that you want in your logic app workflow.
- For example, to view the data in this row, you can add other actions that create a file that includes the fields from the returned row, and then send email alerts. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
+ For example, to view the data in this row, you can add other actions that create a file that includes the fields from the returned row, and then send email alerts. To learn about other available actions for this connector, see the [SQL Server managed connector reference](/connectors/sql/).
1. When you're done, save your workflow.
- Although this step automatically enables and publishes your logic app live in Azure, the only action that your logic app currently takes is to check your database based on your specified interval and frequency.
- ### [Standard](#tab/standard) In Standard logic app workflows, only the SQL Server managed connector has triggers. The SQL Server built-in connector doesn't have any triggers. 1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-1. Find and select the [SQL Server managed connector trigger](/connectors/sql) that you want to use.
+1. Find and select the [SQL Server trigger](/connectors/sql) that you want to use.
1. On the designer, select **Choose an operation**.
In Standard logic app workflows, only the SQL Server managed connector has trigg
1. In the search box, enter **sql server**.
- 1. From the triggers list, select the SQL trigger that you want. This example continues with the trigger named **When an item is created**.
+ 1. From the triggers list, select the SQL trigger that you want.
+
+ This example continues with the trigger named **When an item is created**.
![Screenshot showing Azure portal, Standard logic app workflow designer, search box with "sql server", and "When an item is created" trigger selected.](./media/connectors-create-api-sqlazure/select-sql-server-trigger-standard.png)
-1. If the designer prompts you for connection information, [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+1. Provide the [information for your connection](#create-connection). When you're done, select **Create**.
-1. In the trigger, specify the interval and frequency for how often the trigger checks the table.
+1. After the trigger information box appears, specify the interval and frequency for how often the trigger checks the table.
1. To add other properties available for this trigger, open the **Add new parameter** list and select those properties. This trigger returns only one row from the selected table, and nothing else. To perform other tasks, continue by adding either a [SQL Server connector action](#add-sql-action) or [another action](../connectors/apis-list.md) that performs the next task that you want in your logic app workflow.
- For example, to view the data in this row, you can add other actions that create a file that includes the fields from the returned row, and then send email alerts. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
+ For example, to view the data in this row, you can add other actions that create a file that includes the fields from the returned row, and then send email alerts. To learn about other available actions for this connector, see the [SQL Server managed connector reference](/connectors/sql/).
1. When you're done, save your workflow.
- Although this step automatically enables and publishes your logic app live in Azure, the only action that your logic app currently takes is to check your database based on your specified interval and frequency.
-
+When you save your workflow, this step automatically publishes your updates to your deployed logic app, which is live in Azure. With only a trigger, your workflow just checks the SQL database based on your specified schedule. You have to [add an action](#add-sql-action) that responds to the trigger.
+ <a name="trigger-recurrence-shift-drift"></a> ## Trigger recurrence shift and drift (daylight saving time)
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. Find and select the [SQL Server managed connector action](/connectors/sql) that you want to use. This example continues with the action named **Get row**.
+1. Find and select the [SQL Server action](/connectors/sql) that you want to use.
+
+ This example continues with the action named **Get row**.
1. Under the trigger or action where you want to add the SQL action, select **New step**.
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. In the search box, enter **sql server**.
- 1. From the actions list, select the SQL Server action that you want. This example uses the **Get row** action, which gets a single record.
+ 1. From the actions list, select the SQL Server action that you want.
+
+ This example uses the **Get row** action, which gets a single record.
![Screenshot showing the Azure portal, workflow designer for Consumption logic app, the search box with "sql server", and "Get row" selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-get-row-action-consumption.png)
-1. If the designer prompts you for connection information, [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+1. Provide the [information for your connection](#create-connection). When you're done, select **Create**.
1. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want.
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. Find and select the SQL Server connector action that you want to use.
+1. Find and select the SQL Server action that you want to use.
1. Under the trigger or action where you want to add the SQL Server action, select the plus sign (**+**), and then select **Add an action**.
In this example, the logic app workflow starts with the [Recurrence trigger](../
![Screenshot showing the designer search box with "sql server" and "Azure" selected underneath with the "Get row" action selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-get-row-action-standard.png)
-1. If the designer prompts you for connection information, [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+1. Provide the [information for your connection](#create-connection). When you're done, select **Create**.
1. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want.
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
Previously updated : 5/16/2022 Last updated : 06/07/2022 zone_pivot_groups: azure-cli-or-portal
zone_pivot_groups: azure-cli-or-portal
The following example shows you how to create a Container Apps environment in an existing virtual network. > [!IMPORTANT]
-> In order to ensure the environment deployment within your custom VNET is successful, configure your VNET with an "allow-all" configuration by default. The full list of traffic dependencies required to configure the VNET as "deny-all" is not yet available. For more information, see [Known issues for public preview](https://github.com/microsoft/azure-container-apps/wiki/Known-Issues-for-public-preview).
+> Container Apps environments are deployed on a virtual network. This network can be managed or custom (pre-configured by the user beforehand). In either case, the environment has dependencies on services outside of that virtual network. For a list of these dependencies see [Outbound FQDN dependencies](firewall-integration.md#outbound-fqdn-dependencies).
::: zone pivot="azure-portal"
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
Previously updated : 05/16/2022 Last updated : 06/07/2022 zone_pivot_groups: azure-cli-or-portal
zone_pivot_groups: azure-cli-or-portal
The following example shows you how to create a Container Apps environment in an existing virtual network. > [!IMPORTANT]
-> In order to ensure the environment deployment within your custom VNET is successful, configure your VNET with an "allow-all" configuration by default. The full list of traffic dependencies required to configure the VNET as "deny-all" is not yet available. For more information, see [Known issues for public preview](https://github.com/microsoft/azure-container-apps/wiki/Known-Issues-for-public-preview).
+> Container Apps environments are deployed on a virtual network. This network can be managed or custom (pre-configured by the user beforehand). In either case, the environment has dependencies on services outside of that virtual network. For a list of these dependencies see [Outbound FQDN dependencies](firewall-integration.md#outbound-fqdn-dependencies).
::: zone pivot="azure-portal"
container-instances Container Instances Reference Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-reference-yaml.md
Title: YAML reference for container group description: Reference for the YAML file supported by Azure Container Instances to configure a container group- Previously updated : 11/11/2021+++++ Last updated : 06/06/2022 # YAML reference: Azure Container Instances
-This article covers the syntax and properties for the YAML file supported by Azure Container Instances to configure a [container group](container-instances-container-groups.md). Use a YAML file to input the group configuration to the [az container create][az-container-create] command in the Azure CLI.
+This article covers the syntax and properties for the YAML file supported by Azure Container Instances to configure a [container group](container-instances-container-groups.md). Use a YAML file to input the group configuration to the [az container create][az-container-create] command in the Azure CLI.
-A YAML file is a convenient way to configure a container group for reproducible deployments. It is a concise alternative to using a [Resource Manager template](/azure/templates/Microsoft.ContainerInstance/2019-12-01/containerGroups) or the Azure Container Instances SDKs to create or update a container group.
+A YAML file is a convenient way to configure a container group for reproducible deployments. It's a concise alternative to using a [Resource Manager template](/azure/templates/Microsoft.ContainerInstance/2019-12-01/containerGroups) or the Azure Container Instances SDKs to create or update a container group.
> [!NOTE]
-> This reference applies to YAML files for Azure Container Instances REST API version `2021-07-01`.
+> This reference applies to YAML files for Azure Container Instances REST API version `2021-10-01`.
-## Schema
+## Schema
The schema for the YAML file follows, including comments to highlight key properties. For a description of the properties in this schema, see the [Property values](#property-values) section. -
-```yml
+```yaml
name: string # Name of the container group
-apiVersion: '2021-07-01'
+apiVersion: '2021-10-01'
location: string tags: {} identity:
properties: # Properties of container group
The following tables describe the values you need to set in the schema.

### Microsoft.ContainerInstance/containerGroups object

| Name | Type | Required | Value |
| - | - | - | - |
| name | string | Yes | The name of the container group. |
-| apiVersion | enum | Yes | 2018-10-01 |
+| apiVersion | enum | Yes | **2021-10-01 (latest)**, 2021-09-01, 2021-07-01, 2021-03-01, 2020-11-01, 2019-12-01, 2018-10-01, 2018-09-01, 2018-07-01, 2018-06-01, 2018-04-01 |
| location | string | No | The resource location. |
| tags | object | No | The resource tags. |
| identity | object | No | The identity of the container group, if configured. - [ContainerGroupIdentity object](#containergroupidentity-object) |
| properties | object | Yes | [ContainerGroupProperties object](#containergroupproperties-object) |

### ContainerGroupIdentity object

| Name | Type | Required | Value |
| - | - | - | - |
The following tables describe the values you need to set in the schema.
| type | enum | No | The type of identity used for the container group. The type 'SystemAssigned, UserAssigned' includes both an implicitly created identity and a set of user assigned identities. The type 'None' will remove any identities from the container group. - SystemAssigned, UserAssigned, SystemAssigned, UserAssigned, None |
| userAssignedIdentities | object | No | The list of user identities associated with the container group. The user identity dictionary key references will be Azure Resource Manager resource IDs in the form: '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'. |

### ContainerGroupProperties object

| Name | Type | Required | Value |
| - | - | - | - |
The following tables describe the values you need to set in the schema.
| encryptionProperties | object | No | The encryption properties for a container group. - [EncryptionProperties object](#encryptionproperties-object) |
| initContainers | array | No | The init containers for a container group. - [InitContainerDefinition object](#initcontainerdefinition-object) |

### Container object

| Name | Type | Required | Value |
| - | - | - | - |
The following tables describe the values you need to set in the schema.
| name | string | Yes | The user-provided name of the container instance. |
| properties | object | Yes | The properties of the container instance. - [ContainerProperties object](#containerproperties-object) |

### ImageRegistryCredential object

| Name | Type | Required | Value |
| - | - | - | - |
The following tables describe the values you need to set in the schema.
| identity | string | No | The resource ID of the user or system-assigned managed identity used to authenticate. |
| identityUrl | string | No | The identity URL for the private registry. |

### IpAddress object

| Name | Type | Required | Value |
| - | - | - | - |
| ip | string | No | The IP exposed to the public internet. |
| dnsNameLabel | string | No | The Dns name label for the IP. |

---

### Volume object

| Name | Type | Required | Value |
| - | - | - | - |
| secret | object | No | The secret volume. |
| gitRepo | object | No | The git repo volume. - [GitRepoVolume object](#gitrepovolume-object) |

---

### ContainerGroupDiagnostics object

| Name | Type | Required | Value |
| - | - | - | - |
| logAnalytics | object | No | Container group log analytics information. - [LogAnalytics object](#loganalytics-object) |

---

### ContainerGroupSubnetIds object

| Name | Type | Required | Value |
| - | - | - | - |
| id | string | Yes | The identifier for a subnet. |
| name | string | No | The name of the subnet. |

---

### DnsConfiguration object

| Name | Type | Required | Value |
| - | - | - | - |
| searchDomains | string | No | The DNS search domains for hostname lookup in the container group. |
| options | string | No | The DNS options for the container group. |

---

### EncryptionProperties object
| Name | Type | Required | Value |
| - | - | - | - |
| vaultBaseUrl | string | Yes | The keyvault base url. |
| keyName | string | Yes | The encryption key name. |
| keyVersion | string | Yes | The encryption key version. |
### InitContainerDefinition object
| Name | Type | Required | Value |
| - | - | - | - |
| name | string | Yes | The name for the init container. |
| properties | object | Yes | The properties for the init container. - [InitContainerPropertiesDefinition object](#initcontainerpropertiesdefinition-object) |
### ContainerProperties object
| Name | Type | Required | Value |
| - | - | - | - |
| livenessProbe | object | No | The liveness probe. - [ContainerProbe object](#containerprobe-object) |
| readinessProbe | object | No | The readiness probe. - [ContainerProbe object](#containerprobe-object) |

---

### Port object

| Name | Type | Required | Value |
| - | - | - | - |
| protocol | enum | No | The protocol associated with the port. - TCP or UDP |
| port | integer | Yes | The port number. |

---

### AzureFileVolume object

| Name | Type | Required | Value |
| - | - | - | - |
| storageAccountName | string | Yes | The name of the storage account that contains the Azure File share. |
| storageAccountKey | string | No | The storage account access key used to access the Azure File share. |

---

### GitRepoVolume object

| Name | Type | Required | Value |
| - | - | - | - |
| repository | string | Yes | Repository URL |
| revision | string | No | Commit hash for the specified revision. |

---

### LogAnalytics object

| Name | Type | Required | Value |
| - | - | - | - |
| logType | enum | No | The log type to be used. - ContainerInsights or ContainerInstanceLogs |
| metadata | object | No | Metadata for log analytics. |

---

### InitContainerPropertiesDefinition object
| Name | Type | Required | Value |
| - | - | - | - |
| image | string | No | The image of the init container. |
| command | array | No | The command to execute within the init container in exec form. - string |
| environmentVariables | array | No | The environment variables to set in the init container. - [EnvironmentVariable object](#environmentvariable-object) |
| volumeMounts | array | No | The volume mounts available to the init container. - [VolumeMount object](#volumemount-object) |
### ContainerPort object
| Name | Type | Required | Value |
| - | - | - | - |
| protocol | enum | No | The protocol associated with the port. - TCP or UDP |
| port | integer | Yes | The port number exposed within the container group. |

---

### EnvironmentVariable object

| Name | Type | Required | Value |
| - | - | - | - |
| value | string | No | The value of the environment variable. |
| secureValue | string | No | The value of the secure environment variable. |

---

### ResourceRequirements object

| Name | Type | Required | Value |
| - | - | - | - |
| requests | object | Yes | The resource requests of this container instance. - [ResourceRequests object](#resourcerequests-object) |
| limits | object | No | The resource limits of this container instance. - [ResourceLimits object](#resourcelimits-object) |

---

### VolumeMount object

| Name | Type | Required | Value |
| - | - | - | - |
| mountPath | string | Yes | The path within the container where the volume should be mounted. Must not contain colon (:). |
| readOnly | boolean | No | The flag indicating whether the volume mount is read-only. |

---

### ContainerProbe object

| Name | Type | Required | Value |
| - | - | - | - |
| successThreshold | integer | No | The success threshold. |
| timeoutSeconds | integer | No | The timeout seconds. |

---

### ResourceRequests object

| Name | Type | Required | Value |
| - | - | - | - |
| cpu | number | Yes | The CPU request of this container instance. |
| gpu | object | No | The GPU request of this container instance. - [GpuResource object](#gpuresource-object) |

---

### ResourceLimits object

| Name | Type | Required | Value |
| - | - | - | - |
| cpu | number | No | The CPU limit of this container instance. |
| gpu | object | No | The GPU limit of this container instance. - [GpuResource object](#gpuresource-object) |

---

### ContainerExec object

| Name | Type | Required | Value |
| - | - | - | - |
| command | array | No | The commands to execute within the container. - string |

---

### ContainerHttpGet object

| Name | Type | Required | Value |
| - | - | - | - |
### GpuResource object

| Name | Type | Required | Value |
| - | - | - | - |
| count | integer | Yes | The count of the GPU resource. |
| sku | enum | Yes | The SKU of the GPU resource. - K80, P100, V100 |

## Next steps

See the tutorial [Deploy a multi-container group using a YAML file](container-instances-multi-container-yaml.md).
cost-management-billing Cost Mgt Alerts Monitor Usage Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
description: This article describes how cost alerts help you monitor usage and spending in Cost Management. Previously updated : 05/12/2022 Last updated : 06/07/2022
Budget alerts notify you when spending, based on usage or cost, reaches or excee
In the Azure portal, budgets are defined by cost. Using the Azure Consumption API, budgets are defined by cost or by consumption usage. Budget alerts support both cost-based and usage-based budgets. Budget alerts are generated automatically whenever the budget alert conditions are met. You can view all cost alerts in the Azure portal. Whenever an alert is generated, it's shown in cost alerts. An alert email is also sent to the people in the alert recipients list of the budget.
+If you have an Enterprise Agreement, you can [Create and edit budgets with PowerShell](tutorial-acm-create-budgets.md#create-and-edit-budgets-with-powershell). However, we recommend that you use REST APIs to create and edit budgets because CLI commands might not support the latest version of the APIs.
+ You can use the Budget API to send email alerts in a different language. For more information, see [Supported locales for budget alert emails](manage-automation.md#supported-locales-for-budget-alert-emails). ## Credit alerts
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Title: Tutorial - Create and manage Azure budgets
description: This tutorial helps you plan and account for the costs of Azure services that you consume. Previously updated : 05/13/2022 Last updated : 06/07/2022
Budget integration with action groups works for action groups which have enabled
## Create and edit budgets with PowerShell
-If you're an EA customer, you can create and edit budgets programmatically using the Azure PowerShell module.
+If you're an EA customer, you can create and edit budgets programmatically using the Azure PowerShell module. However, we recommend that you use REST APIs to create and edit budgets because CLI commands might not support the latest version of the APIs.
->[!Note]
->Customers with a Microsoft Customer Agreement should use the [Budgets REST API](/rest/api/consumption/budgets/create-or-update) to create budgets programmatically because PowerShell and CLI aren't yet supported.
+> [!NOTE]
+> Customers with a Microsoft Customer Agreement should use the [Budgets REST API](/rest/api/consumption/budgets/create-or-update) to create budgets programmatically because PowerShell and CLI aren't yet supported.
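As a sketch of what an EA budget created with the Azure PowerShell module might look like, the following uses the `New-AzConsumptionBudget` cmdlet from the Az.Billing module. The budget name, amount, and dates are placeholders, and the exact parameter set may vary by module version, so treat this as an illustration rather than a definitive command.

```powershell
# Hypothetical example - placeholder name, amount, and dates.
Connect-AzAccount

# Create a monthly cost budget of 100 (in the billing currency)
# starting on the first day of the current month.
$start = (Get-Date -Day 1).Date
New-AzConsumptionBudget `
    -Name "TestPSBudget" `
    -Amount 100 `
    -Category Cost `
    -TimeGrain Monthly `
    -StartDate $start `
    -EndDate $start.AddYears(1)
```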
To download the latest version of Azure PowerShell, run the following command:
cost-management-billing Ea Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-azure-marketplace.md
Previously updated : 10/21/2021 Last updated : 06/07/2022
Some third-party reseller services available on Azure Marketplace now consume yo
### Partners
+> [!NOTE]
+> The Azure Marketplace price list feature in the EA portal is retired. The same feature is available in the Azure portal.
+ LSPs can download an Azure Marketplace price list from the price sheet page in the Azure Enterprise portal. Select the **Marketplace Price list** link in the upper right. Azure Marketplace price list shows all available services and their prices. To download the price list:
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Previously updated : 05/05/2022- Last updated : 06/06/2022+
When you create an Azure subscription programmatically, that subscription is gov
A user must have an Owner role on an Enrollment Account to create a subscription. There are two ways to get the role: * The Enterprise Administrator of your enrollment can [make you an Account Owner](https://ea.azure.com/helpdocs/addNewAccount) (sign in required) which makes you an Owner of the Enrollment Account.
-* An existing Owner of the Enrollment Account can [grant you access](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
+* An existing Owner of the Enrollment Account can [grant you access](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
To use a service principal (SPN) to create an EA subscription, an Owner of the Enrollment Account must [grant that service principal the ability to create subscriptions](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
For more information about the EA role assignment API request, see [Assign roles
> [!NOTE] > - Ensure that you use the correct API version to give the enrollment account owner permissions. For this article and for the APIs documented in it, use the [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put) API. > - If you're migrating to use the newer APIs, your previous configuration made with the [2015-07-01 version](grant-access-to-create-subscription.md) doesn't automatically convert for use with the newer APIs.
+> - The Enrollment Account information is only visible when the user's role is Account Owner. When a user has multiple roles, the API uses the user's least restrictive role.
## Find accounts you have access to
Using one of the following methods, you'll create a subscription alias name. We
An alias is used for simple substitution of a user-defined string instead of the subscription GUID. In other words, you can use it as a shortcut. You can learn more about alias at [Alias - Create](/rest/api/subscription/2020-09-01/alias/create). In the following examples, `sampleAlias` is created but you can use any string you like.
+If you have multiple user roles in addition to the Account Owner role, then you must retrieve the account ID from the Azure portal. Then you can use the ID to programmatically create subscriptions.
+ ### [REST](#tab/rest) Call the PUT API to create a subscription creation request/alias.
cost-management-billing Understand Reserved Instance Usage Ea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reserved-instance-usage-ea.md
tags: billing
Previously updated : 05/05/2022 Last updated : 06/07/2022
Other information available in Azure usage data has changed:
- Term - 12 months or 36 months.
- RINormalizationRatio - Available under AdditionalInfo. This is the ratio at which the reservation is applied to the usage record. If instance size flexibility is enabled for your reservation, it can apply to other sizes. The value shows the ratio at which the reservation was applied to the usage record.
-[See field definition](/rest/api/consumption/usagedetails/list#definitions)
+For more information, see the Usage details field [Definitions](/rest/api/consumption/usagedetails/list#definitions).
## Get Azure consumption and reservation usage data using API You can get the data using the API or download it from Azure portal.
+For information about permissions needed to view and manage reservations, see [Who can manage a reservation by default](view-reservations.md#who-can-manage-a-reservation-by-default).
+ You call the [Usage Details API](/rest/api/consumption/usagedetails/list) to get the new data. For details about terminology, see [usage terms](../understand/understand-usage.md). Here's an example call to the Usage Details API:
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
If you have Azure credits, they automatically apply to your invoice each billing
## Reserve Bank of India
-**The Reserve Bank of India has issued new regulations.**
+**The Reserve Bank of India has issued new directives.**
-On 1 October 2021, automatic payments in India may block some credit card transactions, especially transactions exceeding 5,000 INR. Because of this you may need to make payments manually in the Azure portal. These regulations won't affect the total amount you will be charged for your Azure usage.
+On 1 October 2021, automatic payments in India may block some credit card transactions, especially transactions exceeding 5,000 INR. Because of this, you may need to make payments manually in the Azure portal. This directive will not affect the total amount you will be charged for your Azure usage.
-[Learn more about the Reserve Bank of India regulation for recurring payments](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=11668&Mode=0)
+[Learn more about the Reserve Bank of India directive; Processing of e-mandate on cards for recurring transactions](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=11668&Mode=0)
On 1 July 2022, Microsoft and other online merchants will no longer be storing credit card information. To comply with this regulation, Microsoft will be removing all stored card details from Microsoft Azure. To avoid service interruption, you will need to add a payment method and make a one-time payment for all invoices.
-[Learn about the Reserve Bank of India regulation for card storage](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=12211)
+[Learn about the Reserve Bank of India directive; Restriction on storage of actual card data](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=12211)
## Pay by default payment method
databox-online Azure Stack Edge Gpu 2205 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2205-release-notes.md
Previously updated : 06/06/2022 Last updated : 06/07/2022
The following release notes identify the critical open issues and the resolved i
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2205** release, which maps to software version number **2.2.1981.5086**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
+This article applies to the **Azure Stack Edge 2205** release, which maps to software version number **2.2.1983.5094**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
## What's new
The following table lists the issues that were release noted in previous release
| No. | Feature | Issue | | | | | |**1.**|GPU Extension installation | In the previous releases, there were issues that caused the GPU extension installation to fail. These issues are described in [Troubleshooting GPU extension issues](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md). These are fixed in the 2205 release and both the Windows and Linux installation packages are updated. More information on 2205 specific installation changes is covered in [Install GPU extension on your Azure Stack Edge device](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md). |
+|**2.**|HPN VMs | In the previous release, the Standard_F12_HPN could only support one network interface and couldn't be used for Multi-Access Edge Computing (MEC) deployments. This issue is fixed in this release. |
## Known issues in 2205 release
The following table provides a summary of known issues in this release.
| No. | Feature | Issue | Workaround/comments | | | | | | |**1.**|Preview features |For this release, the following features are available in preview: <br> - Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. <br> - VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only. <br> - Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R. |These features will be generally available in later releases. |
-|**2.**|HPN VMs |For this release, the Standard_F12_HPN can only support one network interface and can't be used for Multi-Access Edge Computing (MEC) deployments. | |
- ## Known issues from previous releases
databox Data Box Customer Managed Encryption Key Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-customer-managed-encryption-key-portal.md
If you receive any errors related to your customer-managed key, use the followin
| SsemUserErrorKeyVaultBadRequestException | Applied a customer-managed key, but key access has not been granted or has been revoked, or the key vault couldn't be accessed because a firewall is enabled. | Add the identity selected to your key vault to enable access to the customer-managed key. If the key vault has a firewall enabled, switch to a system-assigned identity and then add a customer-managed key. For more information, see how to [Enable the key](#enable-key). | | SsemUserErrorEncryptionKeyTypeNotSupported | The encryption key type isn't supported for the operation. | Enable a supported encryption type on the key - for example, RSA or RSA-HSM. For more information, see [Key types, algorithms, and operations](../key-vault/keys/about-keys-details.md). | | SsemUserErrorSoftDeleteAndPurgeProtectionNotEnabled | Key vault does not have soft delete or purge protection enabled. | Ensure that both soft delete and purge protection are enabled on the key vault. |
-| SsemUserErrorInvalidKeyVaultUrl<br>(Command-line only) | An invalid key vault URI was used. | Get the correct key vault URI. To get the key vault URI, use [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault?view=azps-7.1.0) in PowerShell. |
+| SsemUserErrorInvalidKeyVaultUrl<br>(Command-line only) | An invalid key vault URI was used. | Get the correct key vault URI. To get the key vault URI, use [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault?view=azps-7.1.0&preserve-view=true) in PowerShell. |
| SsemUserErrorKeyVaultUrlWithInvalidScheme | Only HTTPS is supported for passing the key vault URI. | Pass the key vault URI over HTTPS. | | SsemUserErrorKeyVaultUrlInvalidHost | The key vault URI host is not an allowed host in the geographical region. | In the public cloud, the key vault URI should end with `vault.azure.net`. In the Azure Government cloud, the key vault URI should end with `vault.usgovcloudapi.net`. | | Generic error | Could not fetch the passkey. | This error is a generic error. Contact Microsoft Support to troubleshoot the error and determine the next steps.|
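The `SsemUserErrorInvalidKeyVaultUrl` remediation above points at `Get-AzKeyVault`; as a sketch, retrieving the key vault URI with that cmdlet might look like the following. The vault name is a placeholder.

```powershell
# Hypothetical example - "myKeyVault" is a placeholder vault name.
# Retrieve the key vault and read its URI (e.g. https://myKeyVault.vault.azure.net/).
$vault = Get-AzKeyVault -VaultName "myKeyVault"
$vault.VaultUri
```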
ddos-protection Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/alerts.md
na Previously updated : 3/11/2022 Last updated : 06/07/2022
There are two specific alerts that you will see for any DDoS attack detection an
- **DDoS Attack detected for Public IP**: This alert is generated when the DDoS protection service detects that one of your public IP addresses is the target of a DDoS attack. - **DDoS Attack mitigated for Public IP**: This alert is generated when an attack on the public IP address has been mitigated.
-To view the alerts, open **Defender for Cloud** in the Azure portal. Under **Threat Protection**, select **Security alerts**. The following screenshot shows an example of the DDoS attack alerts.
+To view the alerts, open **Defender for Cloud** in the Azure portal and select **Security alerts**. The following screenshot shows an example of the DDoS attack alerts.
![DDoS Alert in Microsoft Defender for Cloud](./media/manage-ddos-protection/ddos-alert-asc.png)
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
na Previously updated : 09/9/2020 Last updated : 06/07/2022
Azure DDoS Protection Standard, combined with application design best practices,
- **Always-on traffic monitoring:** Your application traffic patterns are monitored 24 hours a day, 7 days a week, looking for indicators of DDoS attacks. DDoS Protection Standard instantly and automatically mitigates the attack, once it is detected. - **Adaptive tuning:** Intelligent traffic profiling learns your application's traffic over time, and selects and updates the profile that is the most suitable for your service. The profile adjusts as traffic changes over time. - **Multi-Layered protection:** When deployed with a web application firewall (WAF), DDoS Protection Standard protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) as well as third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).-- **Extensive mitigation scale:** Over 60 different attack types can be mitigated, with global capacity, to protect against the largest known DDoS attacks.
+- **Extensive mitigation scale:** All L3/L4 attack vectors can be mitigated, with global capacity, to protect against the largest known DDoS attacks.
- **Attack analytics:** Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. - **Attack metrics:** Summarized metrics from each attack are accessible through Azure Monitor. - **Attack alerting:** Alerts can be configured at the start and stop of an attack, and over the attack's duration, using built-in attack metrics. Alerts integrate into your operational software like Microsoft Azure Monitor logs, Splunk, Azure Storage, Email, and the Azure portal.
ddos-protection Ddos Protection Partner Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-partner-onboarding.md
documentationcenter: na Previously updated : 08/28/2020 Last updated : 06/07/2022 # Partnering with Azure DDoS Protection Standard
The following steps are required for partners to configure integration with Azur
View existing partner integrations:

- [Barracuda WAF-as-a-service](https://www.barracuda.com/waf-as-a-service)
-- [Azure Cloud WAF from Radware](https://www.radware.com/resources/microsoft-azure/)
+
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Last updated 05/19/2022
# What is Microsoft Defender for Cloud?
-Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multi-cloud (Amazon AWS and Google GCP) resources. Defender for Cloud fills three vital needs as you manage the security of your resources and workloads in the cloud and on-premises:
+Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources. Defender for Cloud fills three vital needs as you manage the security of your resources and workloads in the cloud and on-premises:
:::image type="content" source="media/defender-for-cloud-introduction/defender-for-cloud-synopsis.png" alt-text="Understanding the core functionality of Microsoft Defender for Cloud.":::
As soon as you open Defender for Cloud for the first time, Defender for Cloud:
- **Generates a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Azure Security Benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements. You can also apply recommendations, and score based on the AWS Foundational Security Best practices standards. -- **Provides hardening recommendations** based on any identified security misconfigurations and weaknesses. Use these security recommendations to strengthen the security posture of your organization's Azure, hybrid, and multi-cloud resources.
+- **Provides hardening recommendations** based on any identified security misconfigurations and weaknesses. Use these security recommendations to strengthen the security posture of your organization's Azure, hybrid, and multicloud resources.
[Learn more about secure score](secure-score-security-controls.md).
If you would like to learn more about Defender for Cloud from a cybersecurity ex
You can also check out the following blogs: -- [A new name for multi-cloud security: Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/a-new-name-for-multi-cloud-security-microsoft-defender-for-cloud/ba-p/2943020)
+- [A new name for multicloud security: Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/a-new-name-for-multi-cloud-security-microsoft-defender-for-cloud/ba-p/2943020)
- [Microsoft Defender for Cloud - Use cases](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-use-cases/ba-p/2953619) - [Microsoft Defender for Cloud PoC Series - Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-poc-series-microsoft-defender-for/ba-p/3064644)
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
A full list of supported alerts is available in the [reference table of all Defe
## Learn More
-Learn more from the product manager about [Microsoft Defender for Containers in a multi-cloud environment](episode-nine.md).
+Learn more from the product manager about [Microsoft Defender for Containers in a multicloud environment](episode-nine.md).
You can also learn how to [Protect Containers in GCP with Defender for Containers](episode-ten.md). You can also check out the following blogs: - [Protect your Google Cloud workloads with Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/protect-your-google-cloud-workloads-with-microsoft-defender-for/ba-p/3073360) - [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317)-- [A new name for multi-cloud security: Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/a-new-name-for-multi-cloud-security-microsoft-defender-for-cloud/ba-p/2943020)
+- [A new name for multicloud security: Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/a-new-name-for-multi-cloud-security-microsoft-defender-for-cloud/ba-p/2943020)
## Next steps
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Defender for Containers provides real-time threat protection for your containeri
In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes **host-level threat detection** with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. Our global team of security researchers constantly monitor the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered.
-This solution monitors the growing attack surface of multi-cloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
+This solution monitors the growing attack surface of multicloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
The full list of available alerts can be found in the [Reference table of alerts](alerts-reference.md#alerts-k8scluster).
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
# Introduction to Microsoft Defender for SQL
-Microsoft Defender for SQL includes two Microsoft Defender plans that extend Microsoft Defender for Cloud's [data security package](/azure/azure-sql/database/azure-defender-for-sql) to protect your SQL estate regardless of where it is located (Azure, multi-cloud or Hybrid environments). Microsoft Defender for SQL includes functions that can be used to discover and mitigate potential database vulnerabilities. Defender for SQL can also detect anomalous activities that may be an indication of a threat to your databases.
+Microsoft Defender for SQL includes two Microsoft Defender plans that extend Microsoft Defender for Cloud's [data security package](/azure/azure-sql/database/azure-defender-for-sql) to protect your SQL estate regardless of where it is located (Azure, multicloud or Hybrid environments). Microsoft Defender for SQL includes functions that can be used to discover and mitigate potential database vulnerabilities. Defender for SQL can also detect anomalous activities that may be an indication of a threat to your databases.
-To protect SQL databases in hybrid and multi-cloud environments, Defender for Cloud uses Azure Arc. Azure ARC connects your hybrid and multi-cloud machines. You can check out the following articles for more information:
+To protect SQL databases in hybrid and multicloud environments, Defender for Cloud uses Azure Arc. Azure ARC connects your hybrid and multicloud machines. You can check out the following articles for more information:
- [Connect your non-Azure machines to Microsoft Defender for Cloud](quickstart-onboard-machines.md)
To protect SQL databases in hybrid and multi-cloud environments, Defender for Cl
- [SQL Server running on Windows machines without Azure Arc](../azure-monitor/agents/agent-windows.md)
- - Multi-cloud SQL servers:
+ - Multicloud SQL servers:
- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)
defender-for-cloud Episode Eight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eight.md
Learn more about [Defender for IoT](../defender-for-iot/index.yml).
## Next steps > [!div class="nextstepaction"]
-> [Microsoft Defender for Containers in a Multi-Cloud Environment](episode-nine.md)
+> [Microsoft Defender for Containers in a Multicloud Environment](episode-nine.md)
defender-for-cloud Episode Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-five.md
Last updated 06/01/2022
# Microsoft Defender for Servers
-**Episode description**: In this episode of Defender for Cloud in the field, Aviv Mor joins Yuri Diogenes to talk about Microsoft Defender for Servers updates, including the new integration with TVM. Aviv explains how this new integration with TVM works, the advantages of this integration, which includes software inventory and easy experience to onboard. Aviv also covers the integration with MDE for Linux and the Defender for Servers support for the new multi-cloud connector for AWS.
+**Episode description**: In this episode of Defender for Cloud in the field, Aviv Mor joins Yuri Diogenes to talk about Microsoft Defender for Servers updates, including the new integration with TVM. Aviv explains how this new integration with TVM works and its advantages, which include software inventory and an easy onboarding experience. Aviv also covers the integration with MDE for Linux and the Defender for Servers support for the new multicloud connector for AWS.
<br> <br>
defender-for-cloud Episode Nine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nine.md
Title: Microsoft Defender for Containers in a multi-cloud environment
+ Title: Microsoft Defender for Containers in a multicloud environment
description: Learn about Microsoft Defender for Containers implementation in AWS and GCP. Last updated 06/01/2022
-# Microsoft Defender for Containers in a Multi-Cloud Environment
+# Microsoft Defender for Containers in a Multicloud Environment
**Episode description**: In this episode of Defender for Cloud in the field, Maya Herskovic joins Yuri Diogenes to talk about Microsoft Defender for Containers implementation in AWS and GCP.
-Maya explains about the new workload protection capabilities related to Containers when they're deployed in a multi-cloud environment. Maya also demonstrates the onboarding experience in GCP and how to visualize security recommendations across AWS, GCP, and Azure in a single dashboard.
+Maya explains the new workload protection capabilities related to Containers when they're deployed in a multicloud environment. Maya also demonstrates the onboarding experience in GCP and how to visualize security recommendations across AWS, GCP, and Azure in a single dashboard.
<br> <br> <iframe src="https://aka.ms/docs/player?id=f9470496-abe3-4344-8160-d6a6b65c077f" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe> -- [01:12](/shows/mdc-in-the-field/containers-multi-cloud#time=01m12s) - Container protection in a multi-cloud environment
+- [01:12](/shows/mdc-in-the-field/containers-multi-cloud#time=01m12s) - Container protection in a multicloud environment
- [05:03](/shows/mdc-in-the-field/containers-multi-cloud#time=05m03s) - Workload protection capabilities for GCP
defender-for-cloud Episode Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-seven.md
Last updated 05/29/2022
# New GCP connector in Microsoft Defender for Cloud
-**Episode description**: In this episode of Defender for Cloud in the field, Or Serok joins Yuri Diogenes to share the new GCP Connector in Microsoft Defender for Cloud. Or explains the use case scenarios for the new connector and how the new connector works. She demonstrates the onboarding process to connect GCP with Microsoft Defender for Cloud and talks about custom assessment and the CSPM experience for multi-cloud.
+**Episode description**: In this episode of Defender for Cloud in the field, Or Serok joins Yuri Diogenes to share the new GCP Connector in Microsoft Defender for Cloud. Or explains the use case scenarios for the new connector and how the new connector works. She demonstrates the onboarding process to connect GCP with Microsoft Defender for Cloud and talks about custom assessment and the CSPM experience for multicloud.
<br> <br>
defender-for-cloud Episode Six https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-six.md
Last updated 05/25/2022
# Lessons learned from the field with Microsoft Defender for Cloud
-**Episode description**: In this episode Carlos Faria, Microsoft Cybersecurity Consultant joins Yuri to talk about lessons from the field and how customers are using Microsoft Defender for Cloud to improve their security posture and protect their workloads in a multi-cloud environment.
+**Episode description**: In this episode, Carlos Faria, Microsoft Cybersecurity Consultant, joins Yuri to talk about lessons from the field and how customers are using Microsoft Defender for Cloud to improve their security posture and protect their workloads in a multicloud environment.
Carlos also covers how Microsoft Defender for Cloud is used to fill the gap between cloud security posture management and cloud workload protection, and demonstrates some features related to this scenario.
Carlos also covers how Microsoft Defender for Cloud is used to fill the gap betw
- [2:58](/shows/mdc-in-the-field/lessons-from-the-field#time=02m58s) - How to fulfill the gap between CSPM and CWPP -- [4:42](/shows/mdc-in-the-field/lessons-from-the-field#time=04m42s) - How a multi-cloud affects the CSPM lifecycle and how Defender for Cloud fits in?
+- [4:42](/shows/mdc-in-the-field/lessons-from-the-field#time=04m42s) - How multicloud affects the CSPM lifecycle and where Defender for Cloud fits in
- [8:05](/shows/mdc-in-the-field/lessons-from-the-field#time=08m05s) - Demonstration
defender-for-cloud Episode Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-three.md
Last updated 05/29/2022
# Microsoft Defender for Containers
-**Episode description**: In this episode of Defender for Cloud in the field, Maya Herskovic joins Yuri Diogenes to talk about Microsoft Defender for Containers. Maya explains what's new in Microsoft Defender for Containers, the new capabilities that are available, the new pricing model, and the multi-cloud coverage. Maya also demonstrates the overall experience of Microsoft Defender for Containers from the recommendations to the alerts that you may receive.
+**Episode description**: In this episode of Defender for Cloud in the field, Maya Herskovic joins Yuri Diogenes to talk about Microsoft Defender for Containers. Maya explains what's new in Microsoft Defender for Containers, the new capabilities that are available, the new pricing model, and the multicloud coverage. Maya also demonstrates the overall experience of Microsoft Defender for Containers from the recommendations to the alerts that you may receive.
<br> <br>
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-security.md
For instructions on how to enable a system-managed identity for Azure Digital Tw
[Azure Private Link](../private-link/private-link-overview.md) is a service that enables you to access Azure resources (like [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Storage](../storage/common/storage-introduction.md), and [Azure Cosmos DB](../cosmos-db/introduction.md)) and Azure-hosted customer and partner services over a private endpoint in your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
-Similarly, you can use private endpoints for your Azure Digital Twin instance to allow clients located in your virtual network to securely access the instance over Private Link.
+Similarly, you can use private endpoints for your Azure Digital Twins instance to allow clients located in your virtual network to securely access the instance over Private Link. Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
The private endpoint uses an IP address from your Azure VNet address space. Network traffic between a client on your private network and the Azure Digital Twins instance traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. Here's a visual representation of this system:
digital-twins How To Enable Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-enable-private-link.md
-# Mandatory fields.
Title: Enable private access with Private Link
+ Title: Enable private access to Azure Digital Twins
-description: Learn how to enable private access for Azure Digital Twins solutions with Private Link.
+description: Learn how to enable private access to your Azure Digital Twins solutions, using Azure Private Link.
- Previously updated : 02/22/2022+ Last updated : 06/07/2022 -+ ms.devlang: azurecli-
-# Optional fields. Don't forget to remove # if you need a field.
-#
-#
-#
-# Enable private access with Private Link
+# Enable private access to Azure Digital Twins using Private Link
-This article describes the different ways to enable [Private Link with a private endpoint for an Azure Digital Twins instance](concepts-security.md#private-network-access-with-azure-private-link). Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
+By using Azure Digital Twins together with [Azure Private Link](../private-link/private-link-overview.md), you can enable private endpoints for your Azure Digital Twins instance, to eliminate public exposure and allow clients located in your virtual network to securely access the instance over Private Link. For more information about this security strategy for Azure Digital Twins, see [Private Link with a private endpoint for an Azure Digital Twins instance](concepts-security.md#private-network-access-with-azure-private-link).
Here are the steps that are covered in this article: 1. Turn on Private Link and configure a private endpoint for an Azure Digital Twins instance.
-1. View, edit, or delete a private endpoint from an instance.
-1. Disable or enable public network access flags, to restrict API access to Private Link connections only.
+1. View, edit, or delete a private endpoint from an Azure Digital Twins instance.
+1. Disable or enable public network access flags, to restrict API access for an Azure Digital Twins instance to Private Link connections only.
+
+This article also contains information for deploying Azure Digital Twins with Private Link using an ARM template, and troubleshooting the configuration.
## Prerequisites Before you can set up a private endpoint, you'll need an [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md) where the endpoint can be deployed. If you don't have a VNet already, you can follow one of the [Azure Virtual Network quickstarts](../virtual-network/quick-create-portal.md) to set this up.
-## Add a private endpoint to Azure Digital Twins
+## Add private endpoints to Azure Digital Twins
You can use either the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/what-is-azure-cli) to turn on Private Link with a private endpoint for an Azure Digital Twins instance.
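For orientation, the CLI route can be sketched as follows. This is a minimal, hedged example: the resource names and address prefixes are placeholders (not from the article), and flags such as `--group-id` and `--subnet-prefixes` vary slightly across Azure CLI versions, so check `az network private-endpoint create --help` before relying on it.

```shell
# Illustrative sketch only -- resource names, address prefixes, and values
# below are assumptions; adjust them for your environment.
# `az dt` requires the azure-iot CLI extension.
RG="myResourceGroup"
VNET="myVNet"
SUBNET="myEndpointSubnet"
DT="myDigitalTwinsInstance"

# Create a VNet with a subnet to host the private endpoint
az network vnet create --resource-group "$RG" --name "$VNET" \
  --address-prefix 10.0.0.0/16 \
  --subnet-name "$SUBNET" --subnet-prefixes 10.0.0.0/24

# Private endpoint network policies must be disabled on the subnet
az network vnet subnet update --resource-group "$RG" \
  --vnet-name "$VNET" --name "$SUBNET" \
  --disable-private-endpoint-network-policies true

# Create the private endpoint against the Azure Digital Twins instance;
# "API" is the target sub-resource (group ID) for Azure Digital Twins
DT_ID=$(az dt show --dt-name "$DT" --resource-group "$RG" \
  --query id --output tsv)
az network private-endpoint create --resource-group "$RG" \
  --name myPrivateEndpoint \
  --vnet-name "$VNET" --subnet "$SUBNET" \
  --private-connection-resource-id "$DT_ID" \
  --group-id API \
  --connection-name myPrivateLinkConnection
```

After the endpoint is created, you'd typically also configure a private DNS zone so the instance host name resolves to the endpoint's private IP.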
For a full list of required and optional parameters, as well as more private end
-## Manage private endpoint connections
+## Manage private endpoints
In this section, you'll see how to view, edit, and delete a private endpoint after it's been created.
For a sample template that allows an Azure function to connect to Azure Digital
This template creates an Azure Digital Twins instance, a virtual network, an Azure function connected to the virtual network, and a Private Link connection to make the Azure Digital Twins instance accessible to the Azure function through a private endpoint.
-## Troubleshoot Private Link with Azure Digital Twins
+## Troubleshoot
-Here are some common issues experienced with Private Link for Azure Digital Twins.
+Here are some common issues that might arise when using Private Link with Azure Digital Twins.
* **Issue:** When trying to access Azure Digital Twins APIs, you see an HTTP error code 403 with the following error in the response body: ```json
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
When you migrate database(s) using the Azure SQL migration extension for Azure D
- SSIS packages - Server roles - Server audit-- When migrating to SQL Server on Azure Virtual Machines, SQL Server 2014 and below as target versions are not supported currently.
+- When migrating to SQL Server on Azure Virtual Machines, SQL Server 2008 and below aren't currently supported as target versions.
+- If you are using SQL Server 2012 or SQL Server 2014, you need to store your source database backup files in an Azure Storage blob container instead of using the network share option. Store the backup files as page blobs, since block blobs are only supported in SQL Server 2016 and later.
- Migrating to Azure SQL Database isn't supported. - Azure storage accounts secured by specific firewall rules or configured with a private endpoint are not supported for migrations. - You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL migration extension in Azure Data Studio and can be reused for further database migrations.
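To illustrate the page-blob requirement noted in the limitations above, a backup file can be uploaded with AzCopy v10 by setting `--blob-type` explicitly. This is a sketch: the local path, storage account, container, and SAS token are placeholders, not values from this article.

```shell
# Upload a SQL Server 2012/2014 backup file as a *page* blob
# (block blobs are only supported for SQL Server 2016 and later).
# The path, storage account, container, and SAS token are placeholders.
azcopy copy "C:\backups\AdventureWorks.bak" \
  "https://mystorageaccount.blob.core.windows.net/migration-backups/AdventureWorks.bak?<SAS-token>" \
  --blob-type PageBlob
```

Without `--blob-type PageBlob`, AzCopy uploads files as block blobs by default, which the migration service would reject for these older SQL Server versions.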
dns Tutorial Public Dns Zones Child https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-public-dns-zones-child.md
Title: 'Tutorial: Creating an Azure child DNS zones'
+ Title: 'Tutorial: Create an Azure child DNS zone'
-description: Tutorial on how to create child DNS zones in Azure portal.
+description: In this tutorial, you learn how to create child DNS zones in Azure portal.
ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897 -+ Previously updated : 04/19/2021 Last updated : 06/07/2022
-# Tutorial: Creating a new Child DNS zone
+# Tutorial: Create a new Child DNS zone
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Signing in to Azure Portal.
-> * Creating child DNS zone via new DNS zone.
-> * Creating child DNS zone via parent DNS zone.
-> * Verifying NS Delegation for new Child DNS zone.
+> * Create a child DNS zone via parent DNS zone.
+> * Create a child DNS zone via new DNS zone.
+> * Verify NS Delegation for the new Child DNS zone.
## Prerequisites
-* An Azure account with an active subscription. If you don't have an account, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Existing parent Azure DNS zone.
+* An Azure account with an active subscription. If you don't have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* A parent Azure DNS zone. If you don't have one, you can [create a DNS zone](./dns-getstarted-portal.md#create-a-dns-zone).
-In this tutorial, we'll use contoso.com as the parent zone and subdomain.contoso.com as the child domain name. Replace *contoso.com* with your parent domain name and *subdomain* with your child domain. If you haven't created your parent DNS zone, see steps to [create DNS zone using Azure portal](./dns-getstarted-portal.md#create-a-dns-zone).
+In this tutorial, we'll use `contoso.com` as the parent zone and `subdomain.contoso.com` as the child domain name. Replace `contoso.com` with your parent domain name and `subdomain` with your child domain.
+There are two ways you can create your child DNS zone:
+1. Through the parent DNS zone's **Overview** page.
+1. Through the **Create DNS zone** page.
-## Sign in to Azure portal
+## Create a child DNS zone via parent DNS zone Overview page
-Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
-If you don't have an Azure subscription, create a free account before you begin.
+You'll create a new child DNS zone and delegate it to the parent DNS zone using the **Child Zone** button from the parent zone **Overview** page. Using this button, the parent parameters are automatically pre-populated.
-There are two ways you can do create your child DNS zone.
-1. Through the "Create DNS zone" portal page.
-1. Through the parent DNS zone's configuration page.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the Azure portal, enter *contoso.com* in the search box at the top of the portal and then select **contoso.com** DNS zone from the search results.
+1. In the **Overview** page, select the **+Child zone** button.
-## Create child DNS zone via create DNS zone
+ :::image type="content" source="./media/tutorial-public-dns-zones-child/child-zone-button.png" alt-text="Screenshot of D N S zone showing the child zone button.":::
-In this step, we'll create a new child DNS zone with name **subdomain.contoso.com** and delegate it to existing parent DNS zone **contoso.com**. You'll create the DNS zone using the tabs on the **Create DNS zone** page.
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**. The **New** window appears.
-1. Select **Networking**, then select **DNS zone** and then select **Add** button.
+1. In the **Create DNS zone**, enter or select this information in the **Basics** tab:
-1. On the **basics** tab, type or select the following values:
- * **Subscription**: Select a subscription to create the zone in.
- * **Resource group**: Enter your existing Resource group or create a new one by selecting **Create new**. Enter *MyResourceGroup*, and select **OK**. The resource group name must be unique within the Azure subscription.
- * Check this checkbox: **This zone is a child of an existing zone already hosted in Azure DNS**
- * **Parent zone subscription**: From this drop down, search or select the subscription name under which parent DNS zone *contoso.com* was created.
- * **Parent zone**: In the search bar type *contoso.com* to load it in dropdown list. Once loaded select *contoso.com* from dropdown list.
- * **Name:** Type *subdomain* for this tutorial example. Notice that your parent DNS zone name *contoso.com* is automatically added as suffix to name when we select parent zone from the above step.
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription.|
+ | Resource group | Select an existing resource group for the child zone or create a new one by selecting **Create new**. </br> In this tutorial, the resource group **MyResourceGroup** of the parent DNS zone is selected. |
+ | **Instance details** | |
+ | Name | Enter your child zone name. In this tutorial, *subdomain* is used. Notice that the parent DNS zone name `contoso.com` is automatically added as a suffix to **Name**. |
+ | Resource group location | The resource group location is selected for you if you selected an existing resource group for the child zone. </br> Select the resource group location if you created a new resource group for the child zone. </br> The resource group location doesn't affect your DNS zone service, which is global and not bound to a location. |
-1. Select **Next: Review + create**.
-1. On the **Review + create** tab, review the summary, correct any validation errors, and then select **Create**.
-It may take a few minutes to create the zone.
+ :::image type="content" source="./media/tutorial-public-dns-zones-child/child-zone-via-overview-page.png" alt-text="Screenshot of Create D N S zone page accessed via the Add child zone button.":::
- :::image type="content" source="./media/dns-delegate-domain-azure-dns/create-dns-zone-inline.png" alt-text="Screenshot of the create DNS zone page." lightbox="./media/dns-delegate-domain-azure-dns/create-dns-zone-expanded.png":::
+ > [!NOTE]
+ > Parent zone information is automatically pre-populated, with the child zone option box already checked.
-## Create child DNS zone via parent DNS zone overview page
-You can also create a new child DNS zone and delegate it into the parent DNS zone by using the **Child Zone** button from parent zone overview page. Using this button automatically pre-populates the parent parameters for the child zone automatically.
+1. Select the **Review + create** button.
+1. Select the **Create** button. It may take a few minutes to create the child zone.
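The portal steps above can also be approximated with a single Azure CLI command; this is an illustrative sketch using the tutorial's zone names (verify `--parent-name` support in your CLI version):

```shell
# Create the child zone and delegate it from the parent in one step.
# --parent-name makes Azure DNS add the child's NS record set to the
# parent zone automatically.
az network dns zone create \
  --resource-group MyResourceGroup \
  --name subdomain.contoso.com \
  --parent-name contoso.com
```

Both zones must be in the same subscription for the delegation records to be created automatically.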
-1. In the Azure portal, under **All resources**, open the *contoso.com* DNS zone in the **MyResourceGroup** resource group. You can enter *contoso.com* in the **Filter by name** box to find it more easily.
-1. On DNS zone overview page, select the **+Child Zone** button.
- :::image type="content" source="./media/dns-delegate-domain-azure-dns/create-child-zone-inline.png" alt-text="Screenshot child zone button." border="true" lightbox="./media/dns-delegate-domain-azure-dns/create-child-zone-expanded.png":::
+## Create a child DNS zone via Create DNS zone
-1. The create DNS zone page will then open. Child zone option is already checked, and parent zone subscription and parent zone gets populated for you on this page.
-1. Type the name as *child* for this tutorial example. Notice that you parent DNS zone name contoso.com is automatically added as prefix to name.
-1. Select **Next: Tags** and then **Next: Review + create**.
-1. On the **Review + create** tab, review the summary, correct any validation errors, and then select **Create**.
+You'll create a new child DNS zone and delegate it to the parent DNS zone using the **Create DNS zone** page.
- :::image type="content" source="./media/dns-delegate-domain-azure-dns/create-dns-zone-child-inline.png" alt-text="Screenshot of child zone selected" border="true" lightbox="./media/dns-delegate-domain-azure-dns/create-dns-zone-child-expanded.png":::
+1. On the Azure portal menu or from the **Home** page, select **Create a resource** and then select **Networking**.
+1. Select **DNS zone** and then select the **Create** button.
-## Verify child DNS zone
-Now that you have a new child DNS zone *subdomain.contoso.com* created. To verify that delegation happened correctly, you'll want to check the nameserver(NS) records for your child zone is in the parent zone as described below.
+1. In **Create DNS zone**, enter or select this information in the **Basics** tab:
-**Retrieve name servers of child DNS zone:**
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription.|
+ | Resource group | Select an existing resource group or create a new one by selecting **Create new**. </br> In this tutorial, the resource group **MyResourceGroup** of the parent DNS zone is selected. |
+ | **Instance details** | |
+ | This zone is a child of an existing zone already hosted in Azure DNS | Check this checkbox. |
+ | Parent zone subscription | Select your Azure subscription under which parent DNS zone `contoso.com` was created. |
+ | Parent zone | In the search bar, enter *contoso.com* to load it in dropdown list. Once loaded, select it from dropdown list. |
+ | Name | Enter your child zone name. In this tutorial, *subdomain* is used. Notice that the parent DNS zone name `contoso.com` is automatically added as a suffix to **Name** after you select the parent zone in the previous step. |
+ | Resource group location | The resource group location is selected for you if you selected an existing resource group for the child zone. </br> Select the resource group location if you created a new resource group for the child zone. </br> The resource group location doesn't affect your DNS zone service, which is global and not bound to a location. |
-1. In the Azure portal, under **All resources**, open the *subdomain.contoso.com* DNS zone in the **MyResourceGroup** resource group. You can enter *subdomain.contoso.com* in the **Filter by name** box to find it more easily.
-1. Retrieve the name servers from the DNS zone overview page. In this example, the zone contoso.com has been assigned name servers *ns1-08.azure-dns.com, ns2-08.azure-dns.net, ns3-08.azure-dns.org*, and *ns4-08.azure-dns.info*:
+ :::image type="content" source="./media/tutorial-public-dns-zones-child/child-zone-via-create-dns-zone-page.png" alt-text="Screenshot of Create D N S zone page accessed via the Create button of D N S zone page.":::
- :::image type="content" source="./media/dns-delegate-domain-azure-dns/create-child-zone-ns-inline.png" alt-text="Screenshot of child zone nameservers" border="true" lightbox="./media/dns-delegate-domain-azure-dns/create-child-zone-ns-expanded.png":::
-**Verify the NS record in parent DNS zone:**
+1. Select the **Review + create** button.
+1. Select the **Create** button. It may take a few minutes to create the zone.
-Now in this step we go the parent DNS zone *contoso.com* and check that its NS record set entry for the child zones nameservers has been created.
-1. In the Azure portal, under **All resources**, open the contoso.com DNS zone in the **MyResourceGroup** resource group. You can enter contoso.com in the **Filter by name** box to find it more easily.
-1. On the *contoso.com* DNS zones overview page, check for the record sets.
-1. You'll find that record set of type NS and name subdomain is already created in parent DNS zone. Check the values for this record set it's similar to the nameserver list we retrieved from child DNS zone in above step.
+## Verify the child DNS zone
+
+After the new child DNS zone `subdomain.contoso.com` is created, verify that the delegation is configured correctly. You'll need to check that your child zone name server (NS) records are in the parent zone as described below.
+
+### Retrieve name servers of child DNS zone
+
+1. In the Azure portal, enter *subdomain.contoso.com* in the search box at the top of the portal and then select **subdomain.contoso.com** DNS zone from the search results.
+
+1. Retrieve the name servers from the DNS zone **Overview** page. In this example, the zone `subdomain.contoso.com` has been assigned name servers `ns1-05.azure-dns.com.`, `ns2-05.azure-dns.net.`, `ns3-05.azure-dns.org.`, and `ns4-05.azure-dns.info.`:
+
+ :::image type="content" source="./media/tutorial-public-dns-zones-child/child-zone-name-servers-inline.png" alt-text="Screenshot of child D N S zone Overview page showing its name servers." lightbox="./media/tutorial-public-dns-zones-child/child-zone-name-servers-expanded.png":::
+
+### Check the NS record set in parent DNS zone
+
+After retrieving the name servers from the child DNS zone, check that the parent DNS zone `contoso.com` has the NS record set entry for its child zone name servers.
+
+1. In the Azure portal, enter *contoso.com* in the search box at the top of the portal and then select **contoso.com** DNS zone from the search results.
+1. Check the record sets in **Overview** page of **contoso.com** DNS zone.
+1. You'll find a record set of type **NS** and name **subdomain** created in the parent DNS zone. Compare the name servers in this record set with the ones you retrieved from the **Overview** page of the child DNS zone.
+
+ :::image type="content" source="./media/tutorial-public-dns-zones-child/parent-zone-name-servers-inline.png" alt-text="Screenshot of child zone name servers validation in the parent D N S zone Overview page." lightbox="./media/tutorial-public-dns-zones-child/parent-zone-name-servers-expanded.png":::
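The same delegation check can be sketched from a command line, assuming the zones are publicly resolvable; the name server shown is a placeholder, so substitute one of the name servers assigned to your parent zone:

```shell
# Ask public DNS for the child zone's NS records
nslookup -type=NS subdomain.contoso.com

# Or query one of the parent zone's name servers directly with dig
# (replace ns1-05.azure-dns.com with a name server from your parent zone)
dig NS subdomain.contoso.com @ns1-05.azure-dns.com
```

The name servers returned should match the ones shown on the child zone's **Overview** page.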
- :::image type="content" source="./media/dns-delegate-domain-azure-dns/create-child-zone-ns-validate-inline.png" alt-text="Screenshot of Child zone nameservers validation" border="true" lightbox="./media/dns-delegate-domain-azure-dns/create-child-zone-ns-validate-expanded.png":::
## Clean up resources
-When you no longer need the resources you created in this tutorial, remove them by deleting the **MyResourceGroup** resource group. Open the **MyResourceGroup** resource group, and select **Delete resource group**.
+
+When no longer needed, you can delete all resources created in this tutorial by following these steps to delete the resource group **MyResourceGroup**:
+
+1. On the Azure portal menu, select **Resource groups**.
+
+2. Select the **MyResourceGroup** resource group.
+
+3. Select **Delete resource group**.
+
+4. Enter *MyResourceGroup* and select **Delete**.
+ ## Next steps
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
na Previously updated : 12/06/2021 Last updated : 06/06/2022 #Customer intent: As a website owner, I want to enable HTTPS on the custom domain in my Front Door so that my users can use my custom domain to access their content securely.
Before you can complete the steps in this tutorial, you must first create a Fron
To enable the HTTPS protocol for securely delivering content on a Front Door custom domain, you must use a TLS/SSL certificate. You can choose to use a certificate that is managed by Azure Front Door or use your own certificate. - ### Option 1 (default): Use a certificate managed by Front Door When you use a certificate managed by Azure Front Door, the HTTPS feature can be turned on with just a few clicks. Azure Front Door completely handles certificate management tasks such as procurement and renewal. After you enable the feature, the process starts immediately. If the custom domain is already mapped to the Front Door's default frontend host (`{hostname}.azurefd.net`), no further action is required. Front Door will process the steps and complete your request automatically. However, if your custom domain is mapped elsewhere, you must use email to validate your domain ownership.
To enable HTTPS on a custom domain, follow these steps:
You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure Front Door uses this secure mechanism to get your certificate and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. If a certificate without a complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected.
-#### Prepare your Azure Key vault account and certificate
+#### Prepare your key vault and certificate
+
+- You must have a key vault in the same Azure subscription as your Front Door. Create a key vault if you don't have one.
+
+ > [!WARNING]
+ > Azure Front Door currently only supports Key Vault accounts in the same subscription as the Front Door configuration. Choosing a Key Vault under a different subscription than your Front Door will result in a failure.
-1. Azure Key Vault: You must have a running Azure Key Vault account under the same subscription as your Front Door that you want to enable custom HTTPS. Create an Azure Key Vault account if you don't have one.
+- If your key vault has network access restrictions enabled, you must configure your key vault to allow trusted Microsoft services to bypass the firewall.
-> [!WARNING]
-> Azure Front Door currently only supports Key Vault accounts in the same subscription as the Front Door configuration. Choosing a Key Vault under a different subscription than your Front Door will result in a failure.
+- Your key vault must be configured to use the *Key Vault access policy* permission model.
-2. Azure Key Vault certificates: If you already have a certificate, you can upload it directly to your Azure Key Vault account or you can create a new certificate directly through Azure Key Vault from one of the partner CAs that Azure Key Vault integrates with. Upload your certificate as a **certificate** object, rather than a **secret**.
+- If you already have a certificate, you can upload it directly to your key vault. Otherwise, create a new certificate directly through Azure Key Vault from one of the partner certificate authorities (CAs) that Azure Key Vault integrates with. Upload your certificate as a **certificate** object, rather than a **secret**.
> [!NOTE]
-> For your own TLS/SSL certificate, Front Door doesn't support certificates with EC cryptography algorithms. The certificate must have a complete certificate chain with leaf and intermediate certificates, and root CA must be part of the [Microsoft Trusted CA list](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT).
+> Front Door doesn't support certificates with elliptic curve (EC) cryptography algorithms. The certificate must have a complete certificate chain with leaf and intermediate certificates, and root CA must be part of the [Microsoft Trusted CA list](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT).
#### Register Azure Front Door
-Register the service principal for Azure Front Door as an app in your Azure Active Directory using Azure PowerShell or Azure CLI.
+Register the service principal for Azure Front Door as an app in your Azure Active Directory (Azure AD) by using Azure PowerShell or the Azure CLI.
> [!NOTE]
-> This action requires Global Administrator permissions, and needs to be performed only **once** per tenant.
+> This action requires you to have Global Administrator permissions in Azure AD. The registration only needs to be performed **once per Azure AD tenant**.
##### Azure PowerShell
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In PowerShell, run the following command: ```azurepowershell-interactive
- New-AzADServicePrincipal -ApplicationId "ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037" -Role Contributor
+ New-AzADServicePrincipal -ApplicationId 'ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037' -Role Contributor
``` ##### Azure CLI
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In CLI, run the following command: ```azurecli-interactive
- SP_ID=$(az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 --query objectId -o tsv)
+ SP_ID=$(az ad sp create --id ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037 --query objectId -o tsv)
az role assignment create --assignee $SP_ID --role Contributor ``` #### Grant Azure Front Door access to your key vault
-Grant Azure Front Door permission to access the certificates in your Azure Key Vault account.
+Grant Azure Front Door permission to access the certificates in your Azure Key Vault account.
-1. In your key vault account, under SETTINGS, select **Access policies**, then select **Add new** to create a new policy.
+1. In your key vault account, select **Access policies**.
-2. In **Select principal**, search for **ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037**, and choose **Microsoft.Azure.Frontdoor**. Click **Select**.
+1. Select **Create** to create a new access policy.
-3. In **Secret permissions**, select **Get** to allow Front Door to retrieve the certificate.
+1. In **Secret permissions**, select **Get** to allow Front Door to retrieve the certificate.
-4. In **Certificate permissions**, select **Get** to allow Front Door to retrieve the certificate.
+1. In **Certificate permissions**, select **Get** to allow Front Door to retrieve the certificate.
-5. Select **Add**.
+1. In **Select principal**, search for **ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037**, and select **Microsoft.Azure.Frontdoor**. Select **Next**.
-6. On the **Access policies** page, select **Save**.
+1. In **Application**, select **Next**.
+
+1. In **Review + create**, select **Create**.
+
+> [!NOTE]
+> If your key vault is protected with network access restrictions, make sure to allow trusted Microsoft services to access your key vault.
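The portal steps above can instead be done with the Azure CLI. A minimal sketch, granting the Azure Front Door service principal (the **ad0e1c7e** application ID from this article) *Get* access to secrets and certificates; substitute your own key vault name:

```azurecli-interactive
# Grant the Azure Front Door service principal Get access to
# certificates and secrets in your key vault.
az keyvault set-policy \
  --name <your-key-vault-name> \
  --spn ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037 \
  --secret-permissions get \
  --certificate-permissions get
```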
-Azure Front Door can now access this Key Vault and the certificates that are stored in this Key Vault.
+Azure Front Door can now access this key vault and the certificates it contains.
#### Select the certificate for Azure Front Door to deploy
Azure Front Door can now access this Key Vault and the certificates that are sto
- The available secret versions. > [!NOTE]
- > In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 24 hours for the new version of the certificate/secret to be deployed.
+ > In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 48 hours for the new version of the certificate/secret to be deployed.
> > :::image type="content" source="./media/front-door-custom-domain-https/certificate-version.png" alt-text="Screenshot of selecting secret version on update custom domain page.":::
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md
For a comparison of supported features in Azure Front Door, see [Tier comparison
## Where is the service available?
-Azure Front Door is available in Microsoft Azure (Commercial) and Microsoft Azure Government (US).
+Azure Front Door Classic, Standard, and Premium SKUs are available in Microsoft Azure (Commercial). The Azure Front Door Classic SKU is also available in Microsoft Azure Government (US).
## Pricing
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
Azure Front Door private link is available in the following regions:
| East US | Sweden Central | Japan East | | East US 2 | UK South | Korea Central | | South Central US | West Europe | |
-| West US 2 | | |
| West US 3 | | | ## Limitations Origin support for direct private endpoint connectivity is limited to Storage (Azure Blobs), App Services, and internal load balancers.
-For the best latency, you should always pick an Azure region closest to your origin when choosing to enable Azure Front Door private link endpoint.
+The Azure Front Door Private Link feature is region agnostic, but for the best latency you should always pick the Azure region closest to your origin when enabling an Azure Front Door Private Link endpoint.
## Next steps
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Previously updated : 03/18/2022 Last updated : 06/06/2022 #Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
Azure Front Door supports both Azure managed certificate and customer-managed ce
You can also choose to use your own TLS certificate. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. The root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected. This certificate must be imported into an Azure Key Vault before you can use it with Azure Front Door Standard/Premium. See how to [import a certificate](../../key-vault/certificates/tutorial-import-certificate.md) to Azure Key Vault.
-#### Prepare your Azure Key vault account and certificate
+#### Prepare your key vault and certificate
-1. You must have a running Azure Key Vault account under the same subscription as your Azure Front Door Standard/Premium that you want to enable custom HTTPS. Create an Azure Key Vault account if you don't have one.
+- You must have a key vault in the same Azure subscription as your Azure Front Door Standard/Premium profile. Create a key vault if you don't have one.
> [!WARNING]
- > Azure Front Door currently only supports Key Vault accounts in the same subscription as the Front Door configuration. Choosing a Key Vault under a different subscription than your Azure Front Door Standard/Premium will result in a failure.
+ > Azure Front Door currently only supports key vaults in the same subscription as the Front Door profile. Choosing a key vault under a different subscription than your Azure Front Door Standard/Premium profile will result in a failure.
-1. If you already have a certificate, you can upload it directly to your Azure Key Vault account. Otherwise, create a new certificate directly through Azure Key Vault from one of the partner Certificate Authorities that Azure Key Vault integrates with. Upload your certificate as a **certificate** object, rather than a **secret**.
+- If your key vault has network access restrictions enabled, you must configure your key vault to allow trusted Microsoft services to bypass the firewall.
- > [!NOTE]
- > For your own TLS/SSL certificate, Front Door doesn't support certificates with EC cryptography algorithms. The certificate must have a complete certificate chain with leaf and intermediate certificates, and root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT).
+- Your key vault must be configured to use the *Key Vault access policy* permission model.
+
+- If you already have a certificate, you can upload it to your key vault. Otherwise, create a new certificate directly through Azure Key Vault from one of the partner certificate authorities (CAs) that Azure Key Vault integrates with. Upload your certificate as a **certificate** object, rather than a **secret**.
+
+> [!NOTE]
+> Front Door doesn't support certificates with elliptic curve (EC) cryptography algorithms. The certificate must have a complete certificate chain with leaf and intermediate certificates, and root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT).
#### Register Azure Front Door
-Register the service principal for Azure Front Door as an app in your Azure Active Directory via PowerShell.
+Register the service principal for Azure Front Door as an app in your Azure Active Directory (Azure AD) by using Azure PowerShell or the Azure CLI.
> [!NOTE]
-> This action requires Global Administrator permissions, and needs to be performed only **once** per tenant.
+> This action requires you to have Global Administrator permissions in Azure AD. The registration only needs to be performed **once per Azure AD tenant**.
+
+##### Azure PowerShell
1. If needed, install [Azure PowerShell](/powershell/azure/install-az-ps) in PowerShell on your local machine.
-1. In PowerShell, run the following command:
+2. In PowerShell, run the following command:
+
+ ```azurepowershell-interactive
+ New-AzADServicePrincipal -ApplicationId '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8' -Role Contributor
+ ```
+
+##### Azure CLI
- `New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8" -Role Contributor`
+1. If needed, install [Azure CLI](/cli/azure/install-azure-cli) on your local machine.
+
+2. In CLI, run the following command:
+
+ ```azurecli-interactive
+ SP_ID=$(az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 --query objectId -o tsv)
+ az role assignment create --assignee $SP_ID --role Contributor
+ ```
#### Grant Azure Front Door access to your key vault
-Grant Azure Front Door permission to access the certificates in your Azure Key Vault account.
+Grant Azure Front Door permission to access the certificates in your Azure Key Vault account.
-1. In your key vault account, under SETTINGS, select **Access policies**. Then select **Add new** to create a new policy.
+1. In your key vault account, select **Access policies**.
-1. In **Select principal**, search for **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8**, and choose **Microsoft.AzureFrontDoor-Cdn**. Select **Select**.
+1. Select **Add new** or **Create** to create a new access policy.
1. In **Secret permissions**, select **Get** to allow Front Door to retrieve the certificate. 1. In **Certificate permissions**, select **Get** to allow Front Door to retrieve the certificate.
-1. Select **OK**.
+1. In **Select principal**, search for **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8**, and select **Microsoft.AzureFrontDoor-Cdn**. Select **Next**.
+
+1. In **Application**, select **Next**.
+
+1. In **Review + create**, select **Create**.
> [!NOTE]
-> If your Azure Key Vault is being protected with Firewall, make sure to allow Azure Front Door to access your Azure Key Vault account.
+> If your key vault is protected with network access restrictions, make sure to allow trusted Microsoft services to access your key vault.
+
+Azure Front Door can now access this key vault and the certificates it contains.
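The access policy steps above can instead be done with the Azure CLI. A minimal sketch, granting the Azure Front Door Standard/Premium service principal (the **205478c0** application ID from this article) *Get* access to secrets and certificates; substitute your own key vault name:

```azurecli-interactive
# Grant the Azure Front Door Standard/Premium service principal
# Get access to certificates and secrets in your key vault.
az keyvault set-policy \
  --name <your-key-vault-name> \
  --spn 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 \
  --secret-permissions get \
  --certificate-permissions get
```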
#### Select the certificate for Azure Front Door to deploy
Grant Azure Front Door permission to access the certificates in your Azure Key
:::image type="content" source="../media/how-to-configure-https-custom-domain/add-certificate.png" alt-text="Screenshot of Azure Front Door secret landing page.":::
-1. On the **Add certificate** page, select the checkbox for the certificate you want to add to Azure Front Door Standard/Premium. Leave the version selection as "Latest" and select **Add**.
+1. On the **Add certificate** page, select the checkbox for the certificate you want to add to Azure Front Door Standard/Premium.
+
+1. When you select a certificate, you must [select the certificate version](#rotate-own-certificate). If you select **Latest**, Azure Front Door will automatically update whenever the certificate is rotated (renewed). Alternatively, you can select a specific certificate version if you prefer to manage certificate rotation yourself.
+
+ Leave the version selection as "Latest" and select **Add**.
:::image type="content" source="../media/how-to-configure-https-custom-domain/add-certificate-page.png" alt-text="Screenshot of add certificate page.":::
Grant Azure Front Door permission to access the certificates in your Azure Key
"Bring Your Own Certificate (BYOC)" for *HTTPS*. For *Secret*, select the certificate you want to use from the drop-down. > [!NOTE]
- > The selected certificate must have a common name (CN) same as the custom domain being added.
+ > The common name (CN) of the selected certificate must match the custom domain being added.
:::image type="content" source="../media/how-to-configure-https-custom-domain/add-custom-domain-https.png" alt-text="Screenshot of add a custom domain page with HTTPS.":::
Grant Azure Front Door permission to access the certificates in your Azure Key
## Certificate renewal and changing certificate types
-### Azure managed certificate
+### Azure-managed certificate
-Azure managed certificate will be automatically rotated when your custom domain has the CNAME record to an Azure Front Door standard or premium endpoint. The auto rotation won't happen for the two scenarios below
+Azure-managed certificates are automatically rotated when your custom domain uses a CNAME record that points to an Azure Front Door standard or premium endpoint.
-* If the custom domain CNAME record is pointing to other DNS resources
+Front Door won't automatically rotate certificates in the following scenarios:
-* If your custom domain points to Azure Front Door through a long chain, for example, putting an Azure Traffic Manager before Azure Front Door and other CDN providers, the CNAME chain is contoso.com CNAME in `contoso.trafficmanager.net` CNAME in `contoso.z01.azurefd.net`.
+* The custom domain's CNAME record is pointing to other DNS resources.
+* The custom domain points to Azure Front Door through a long chain. For example, if you put Azure Traffic Manager before Azure Front Door, the CNAME chain is `contoso.com` CNAME in `contoso.trafficmanager.net` CNAME in `contoso.z01.azurefd.net`.
-The domain validation state will become ΓÇÿPending RevalidationΓÇÖ 45 days before managed certificate expiry or ΓÇÿRejectedΓÇÖ if the managed certificate issuance is rejected by the certificate authority. Refer to [Add a custom domain](how-to-add-custom-domain.md#domain-validation-state) for actions for different domain state.
+The domain validation state will become *Pending Revalidation* 45 days before the managed certificate expires, or *Rejected* if the managed certificate issuance is rejected by the certificate authority. Refer to [Add a custom domain](how-to-add-custom-domain.md#domain-validation-state) for actions for each of the domain states.
-### Use your own certificate
+### <a name="rotate-own-certificate"></a>Use your own certificate
-In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, set the secret version to 'Latest'. If a specific version is selected, you have to reselect the new version manually for certificate rotation. It takes up to 24 hours for the new version of the certificate/secret to be automatically deployed.
+In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your key vault, set the secret version to 'Latest'. If a specific version is selected, you have to reselect the new version manually for certificate rotation. It takes up to 24 hours for the new version of the certificate/secret to be automatically deployed.
If you want to change the secret version from 'Latest' to a specified version or vice versa, add a new certificate.
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/guest-configuration.md
Management Groups.
The guest configuration extension writes log files to the following locations:
-Windows: `C:\ProgramData\GuestConfig\gc_agent_logs\gc_agent.log`
+Windows
+
+- Azure VM: `C:\ProgramData\GuestConfig\gc_agent_logs\gc_agent.log`
+- Arc-enabled server: `C:\ProgramData\GuestConfig\arc_policy_logs\gc_agent.log`
Linux
governance Policy As Code Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/policy-as-code-github.md
Title: "Tutorial: Implement Azure Policy as Code with GitHub" description: In this tutorial, you implement an Azure Policy as Code workflow with export, GitHub actions, and GitHub workflows Previously updated : 08/17/2021 Last updated : 06/07/2022 ++ # Tutorial: Implement Azure Policy as Code with GitHub
resources, the quickstart articles explain how to do so.
[free account](https://azure.microsoft.com/free/) before you begin. - Review [Design an Azure Policy as Code workflow](../concepts/policy-as-code.md) to have an understanding of the design patterns used in this tutorial.
+- Your account must be assigned the **Owner** role at the management group or subscription scope. For more information on Azure RBAC permissions in Azure Policy, see [Overview of Azure Policy](../overview.md).
### Export Azure Policy objects from the Azure portal
hdinsight Hdinsight For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-for-vscode.md
With Spark & Hive Tools for Visual Studio Code, you can submit interactive Hive
- **MESSAGES** panel: When you select a **Line** number, it jumps to the first line of the running script.
-## Submit interactive PySpark queries
+## Submit interactive PySpark queries
-Users can perform PySpark interactive in the following ways. Note here that Jupyter Extension version (ms-jupyter): v2022.1.1001614873 and Python Extension version (ms-python): v2021.12.1559732655, python 3.6.x and 3.7.x are only for HDInsight interactive PySpark queries.
+### Prerequisites for PySpark interactive
+
+Note that Jupyter Extension (ms-jupyter) v2022.1.1001614873, Python Extension (ms-python) v2021.12.1559732655, and Python 3.6.x or 3.7.x are required for HDInsight interactive PySpark queries.
+
+Users can perform PySpark interactive in the following ways.
### Using the PySpark interactive command in PY file Using the PySpark interactive command to submit the queries, follow these steps:
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 03/10/2022 Last updated : 06/07/2022 # Archived release notes ## Summary
-Azure HDInsight is one of the most popular services among enterprise customers for open-source Apache Hadoop and Apache Spark analytics on Azure.
+Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure.
+If you would like to subscribe to release notes, watch releases on [this GitHub repository](https://github.com/hdinsight/release-notes/releases).
+
+## Release date: 03/10/2022
+
+This release applies to HDInsight 4.0. HDInsight releases are made available to all regions over several days. The release date here indicates the first region's release date. If you don't see the changes below, wait for the release to go live in your region over several days.
+
+The OS versions for this release are:
+- HDInsight 4.0: Ubuntu 18.04.5
+
+## Spark 3.1 is now generally available
+
+Spark 3.1 is now generally available with the HDInsight 4.0 release. This release includes:
+
+* Adaptive Query Execution
+* Convert Sort Merge Join to Broadcast Hash Join
+* Spark Catalyst Optimizer
+* Dynamic Partition Pruning
+* Customers can create new Spark 3.1 clusters, but not Spark 3.0 (preview) clusters
+
+For more details, see [Spark 3.1 is now Generally Available on HDInsight](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/spark-3-1-is-now-generally-available-on-hdinsight/ba-p/3253679) on the Microsoft Tech Community blog.
+
+For a complete list of improvements, see the [Apache Spark 3.1 release notes.](https://spark.apache.org/releases/spark-release-3-1-2.html)
+
+For more details on migration, see the [migration guide.](https://spark.apache.org/docs/latest/migration-guide.html)
+
+## Kafka 2.4 is now generally available
+
+Kafka 2.4.1 is now Generally Available. For more information, please see [Kafka 2.4.1 Release Notes.](http://kafka.apache.org/24/documentation.html)
+Other features include MirrorMaker 2 availability, the new AtMinIsr topic partition metric category, improved broker start-up time through lazy on-demand mmap of index files, and more consumer metrics to observe user poll behavior.
+
+## Map Datatype in HWC is now supported in HDInsight 4.0
+
+This release includes Map datatype support for HWC 1.0 (Spark 2.4) via the spark-shell application and all other Spark clients that HWC supports. As with other data types, the following improvements are included:
+
+A user can
+* Create a Hive table with any column(s) containing Map datatype, insert data into it and read the results from it.
+* Create an Apache Spark dataframe with Map Type and do batch/stream reads and writes.
+
+### New regions
+
+HDInsight has now expanded its geographical presence to two new regions: China East 3 and China North 3.
+
+### OSS backport changes
+
+OSS backports are included in Hive, including HWC 1.0 (Spark 2.4), which supports the Map data type.
+
+### OSS backported Apache JIRAs for this release
+
+| Impacted Feature | Apache JIRA |
+||--|
+| Metastore direct sql queries with IN/(NOT IN) should be split based on max parameters allowed by SQL DB | [HIVE-25659](https://issues.apache.org/jira/browse/HIVE-25659) |
+| Upgrade log4j 2.16.0 to 2.17.0 | [HIVE-25825](https://issues.apache.org/jira/browse/HIVE-25825) |
+| Update Flatbuffer version | [HIVE-22827](https://issues.apache.org/jira/browse/HIVE-22827) |
+| Support Map data-type natively in Arrow format | [HIVE-25553](https://issues.apache.org/jira/browse/HIVE-25553) |
+| LLAP external client - Handle nested values when the parent struct is null | [HIVE-25243](https://issues.apache.org/jira/browse/HIVE-25243) |
+| Upgrade arrow version to 0.11.0 | [HIVE-23987](https://issues.apache.org/jira/browse/HIVE-23987) |
+
+## Deprecation notices
+### Azure Virtual Machine Scale Sets on HDInsight
+
+HDInsight will no longer use Azure Virtual Machine Scale Sets to provision clusters; no breaking change is expected. Existing HDInsight clusters on virtual machine scale sets aren't affected, and any new clusters on the latest images will no longer use Virtual Machine Scale Sets.
+
+### Scaling of Azure HDInsight HBase workloads will now be supported only using manual scale
+
+Starting from March 01, 2022, HDInsight only supports manual scale for HBase; there's no impact on running clusters. New HBase clusters can't enable schedule-based autoscaling. For more information on how to manually scale your HBase cluster, refer to our documentation on [Manually scaling Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
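A manual scale operation can be done from the Azure CLI. A minimal sketch, with placeholder cluster and resource group names (substitute your own):

```azurecli-interactive
# Manually scale an HBase cluster to four worker nodes.
az hdinsight resize \
  --name <your-hbase-cluster> \
  --resource-group <your-resource-group> \
  --workernode-count 4
```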
+ ## Release date: 12/27/2021
HDInsight now uses Azure virtual machines to provision the cluster. The service
### Deprecation #### Disabled VM sizes
-Starting form January 9 2021, HDInsight will block all customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
+Starting from January 9 2021, HDInsight will block all customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
### Behavior changes #### Default cluster VM size changes to Ev3-series
Starting February 2021, the default version of HDInsight cluster will be changed
HDInsight is upgrading OS version from Ubuntu 16.04 to 18.04. The upgrade will complete before April 2021. #### HDInsight 3.6 end of support on June 30 2021
-HDInsight 3.6 will be end of support. Starting form June 30 2021, customers can't create new HDInsight 3.6 clusters. Existing clusters will run as is without the support from Microsoft. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
+HDInsight 3.6 will be end of support. Starting from June 30 2021, customers can't create new HDInsight 3.6 clusters. Existing clusters will run as is without the support from Microsoft. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
### Component version change No component version change for this release. You can find the current component versions for HDInsight 4.0 and HDInsight 3.6 in [this doc](./hdinsight-component-versioning.md).
HDInsight now uses Azure virtual machines to provision the cluster. Starting fro
HDInsight 3.6 ML Services cluster type will be end of support by December 31 2020. Customers won't be able to create new 3.6 ML Services clusters after December 31 2020. Existing clusters will run as is without the support from Microsoft. Check the support expiration for HDInsight versions and cluster types [here](./hdinsight-component-versioning.md). #### Disabled VM sizes
-Starting from November 16 2020, HDInsight will block new customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing customers who have used these VM sizes in the past three months won't be affected. Starting form January 9 2021, HDInsight will block all customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
+Starting from November 16 2020, HDInsight will block new customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing customers who have used these VM sizes in the past three months won't be affected. Starting from January 9 2021, HDInsight will block all customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
### Behavior changes #### Add NSG rule checking before scaling operation
HDInsight now uses Azure virtual machines to provision the cluster. Starting fro
HDInsight 3.6 ML Services cluster type will be end of support by December 31 2020. Customers won't be able to create new 3.6 ML Services clusters after December 31 2020. Existing clusters will run as is without the support from Microsoft. Check the support expiration for HDInsight versions and cluster types [here](./hdinsight-component-versioning.md#supported-hdinsight-versions). #### Disabled VM sizes
-Starting from November 16 2020, HDInsight will block new customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing customers who have used these VM sizes in the past three months won't be affected. Starting form January 9 2021, HDInsight will block all customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
+Starting from November 16 2020, HDInsight will block new customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing customers who have used these VM sizes in the past three months won't be affected. Starting from January 9 2021, HDInsight will block all customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
### Behavior changes No behavior change for this release.
HDInsight today doesn't support customizing Zookeeper node size for Spark, Hadoo
Starting February 2021, the default version of HDInsight cluster will be changed from 3.6 to 4.0. For more information about available versions, see [supported versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Learn more about what is new in [HDInsight 4.0](./hdinsight-version-release.md) #### HDInsight 3.6 end of support on June 30 2021
-HDInsight 3.6 will be end of support. Starting form June 30 2021, customers can't create new HDInsight 3.6 clusters. Existing clusters will run as is without the support from Microsoft. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
+HDInsight 3.6 will be end of support. Starting from June 30 2021, customers can't create new HDInsight 3.6 clusters. Existing clusters will run as is without the support from Microsoft. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
### Bug fixes HDInsight continues to make cluster reliability and performance improvements.
A minimum 4-core VM is required for Head Node to ensure the high availability an
#### Cluster worker node provisioning change When 80% of the worker nodes are ready, the cluster enters **operational** stage. At this stage, customers can do all the data plane operations like running scripts and jobs. But customers can't do any control plane operation like scaling up/down. Only deletion is supported.
-After the **operational** stage, the cluster waits another 60 minutes for the remaining 20% worker nodes. At the end of this 60 minutes, the cluster moves to the **running** stage, even if all of worker nodes are still not available. Once a cluster enters the **running** stage, you can use it as normal. Both control plan operations like scaling up/down, and data plan operations like running scripts and jobs are accepted. If some of the requested worker nodes are not available, the cluster will be marked as partial success. You are charged for the nodes that were deployed successfully.
+After the **operational** stage, the cluster waits another 60 minutes for the remaining 20% of worker nodes. At the end of these 60 minutes, the cluster moves to the **running** stage, even if not all worker nodes are available yet. Once a cluster enters the **running** stage, you can use it as normal. Both control plane operations like scaling up/down, and data plane operations like running scripts and jobs are accepted. If some of the requested worker nodes are not available, the cluster is marked as partial success. You are charged for the nodes that were deployed successfully.
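The staged provisioning behavior described above can be sketched as a small state function. This is a toy model for illustration only, not HDInsight code; the thresholds (80% ready, 60-minute grace period) come from the text.

```python
from typing import Optional

def cluster_stage(ready_nodes: int, requested_nodes: int,
                  minutes_since_operational: Optional[float]) -> str:
    """Return the provisioning stage implied by the rules above (toy model)."""
    if ready_nodes < 0.8 * requested_nodes:
        return "provisioning"   # cluster not yet usable
    if minutes_since_operational is None or minutes_since_operational < 60:
        return "operational"    # data plane ops only; scaling blocked, deletion allowed
    return "running"            # control and data plane operations both accepted

print(cluster_stage(79, 100, None))  # still provisioning
print(cluster_stage(80, 100, 10))    # operational
print(cluster_stage(90, 100, 60))    # running (possibly "partial success")
```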
#### Create new service principal through HDInsight Previously, during cluster creation, customers could create a new service principal to access the connected ADLS Gen 1 account in the Azure portal. Starting June 15 2020, customers can't create a new service principal in the HDInsight creation workflow; only existing service principals are supported. See [Create Service Principal and Certificates using Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md).
This release applies both for HDInsight 3.6 and 4.0. HDInsight release is made a
### New features #### TLS 1.2 enforcement
-Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are cryptographic protocols that provide communications security over a computer network. Learn more about [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1.0.2C_2.0_and_3.0). HDInsight uses TLS 1.2 on public HTTPs endpoints but TLS 1.1 is still supported for backward compatibility.
+Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are cryptographic protocols that provide communications security over a computer network. Learn more about [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1.0.2C_2.0_and_3.0). HDInsight uses TLS 1.2 on public HTTPS endpoints, but TLS 1.1 is still supported for backward compatibility.
With this release, customers can opt into TLS 1.2 only for all connections through the public cluster endpoint. To support this, the new property **minSupportedTlsVersion** is introduced and can be specified during cluster creation. If the property is not set, the cluster still supports TLS 1.0, 1.1 and 1.2, which is the same as today's behavior. Customers can set the value for this property to "1.2", which means that the cluster only supports TLS 1.2 and above. For more information, see [Transport Layer Security](./transport-layer-security.md).
The following changes will happen in upcoming releases.
#### Transport Layer Security (TLS) 1.2 enforcement Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are cryptographic protocols that provide communications security over a computer network. For more information, see [Transport Layer Security](https://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1.0.2C_2.0_and_3.0). While Azure HDInsight clusters accept TLS 1.2 connections on public HTTPS endpoints, TLS 1.1 is still supported for backward compatibility with older clients.
-Starting from the next release, you will be able to opt-in and configure your new HDInsight clusters to only accept TLS 1.2 connections.
+Starting from the next release, you will be able to opt in and configure your new HDInsight clusters to only accept TLS 1.2 connections.
Later in the year, starting on 6/30/2020, Azure HDInsight will enforce TLS 1.2 or later versions for all HTTPS connections. We recommend that you ensure that all your clients are ready to handle TLS 1.2 or later versions.
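As a client-readiness check for the TLS 1.2 enforcement described above, a client can require TLS 1.2 or later before connecting. This is a generic sketch using Python's standard `ssl` module, not HDInsight-specific code:

```python
import ssl

# Build a client-side TLS context that refuses TLS 1.0/1.1 handshakes,
# matching the minimum version the cluster will enforce.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any socket wrapped with this context will only negotiate TLS 1.2 or later.
print(context.minimum_version)
```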
HDInsight now offers a new capacity to enable customers to use their own SQL DB
#### F-series virtual machines are now available with HDInsight
-F-series virtual machines(VMs) are good choice to get started with HDInsight with light processing requirements. At a lower per-hour list price, the F-series is the best value in price-performance in the Azure portfolio based on the Azure Compute Unit (ACU) per vCPU. For more information, see [Selecting the right VM size for your Azure HDInsight cluster](./hdinsight-selecting-vm-size.md).
+F-series virtual machines (VMs) are a good choice to get started with HDInsight with light processing requirements. At a lower per-hour list price, the F-series is the best value in price-performance in the Azure portfolio based on the Azure Compute Unit (ACU) per vCPU. For more information, see [Selecting the right VM size for your Azure HDInsight cluster](./hdinsight-selecting-vm-size.md).
### Deprecation
HDP 2.6.4 provided Hadoop Common 2.7.3 and the following Apache patches:
- [YARN-5641](https://issues.apache.org/jira/browse/YARN-5641): Localizer leaves behind tarballs after container is complete. -- [YARN-6004](https://issues.apache.org/jira/browse/YARN-6004): Refactor TestResourceLocalizationService\#testDownloadingResourcesOnContainer so that it is less than 150 lines.
+- [YARN-6004](https://issues.apache.org/jira/browse/YARN-6004): Refactor TestResourceLocalizationService\#testDownloadingResourcesOnContainer so that it is fewer than 150 lines.
- [YARN-6078](https://issues.apache.org/jira/browse/YARN-6078): Containers stuck in Localizing state.
This release provides HBase 1.1.2 and the following Apache patches.
- [HBASE-18164](https://issues.apache.org/jira/browse/HBASE-18164): Much faster locality cost function and candidate generator. -- [HBASE-18212](https://issues.apache.org/jira/browse/HBASE-18212): In Standalone mode with local filesystem HBase logs Warning message: Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream.
+- [HBASE-18212](https://issues.apache.org/jira/browse/HBASE-18212): In Standalone mode with local filesystem HBase logs Warning message: Failed to invoke 'unbuffer' method in class org.apache.hadoop.fs.FSDataInputStream.
-- [HBASE-18808](https://issues.apache.org/jira/browse/HBASE-18808): Ineffective config check in BackupLogCleaner\#getDeletableFiles().
+- [HBASE-18808](https://issues.apache.org/jira/browse/HBASE-18808): Ineffective config check in BackupLogCleaner\#getDeletableFiles().
- [HBASE-19052](https://issues.apache.org/jira/browse/HBASE-19052): FixedFileTrailer should recognize CellComparatorImpl class in branch-1.x.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-17013*](https://issues.apache.org/jira/browse/HIVE-17013): Delete request with a subquery based on select over a view. -- [*HIVE-17063*](https://issues.apache.org/jira/browse/HIVE-17063): insert overwrite partition onto an external table fail when drop partition first.
+- [*HIVE-17063*](https://issues.apache.org/jira/browse/HIVE-17063): insert overwrite partition onto an external table fails when drop partition first.
- [*HIVE-17259*](https://issues.apache.org/jira/browse/HIVE-17259): Hive JDBC does not recognize UNIONTYPE columns.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18352*](https://issues.apache.org/jira/browse/HIVE-18352): introduce a METADATAONLY option while doing REPL DUMP to allow integrations of other tools. -- [*HIVE-18353*](https://issues.apache.org/jira/browse/HIVE-18353): CompactorMR should call jobclient.close() to trigger cleanup (Prabhu Joseph via Thejas Nair).
+- [*HIVE-18353*](https://issues.apache.org/jira/browse/HIVE-18353): CompactorMR should call jobclient.close() to trigger cleanup.
- [*HIVE-18390*](https://issues.apache.org/jira/browse/HIVE-18390): IndexOutOfBoundsException when query a partitioned view in ColumnPruner.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18327*](https://issues.apache.org/jira/browse/HIVE-18327): Remove the unnecessary HiveConf dependency for MiniHiveKdc. -- [*HIVE-18331*](https://issues.apache.org/jira/browse/HIVE-18331): Add relogin when TGT expire and some logging/lambda.
+- [*HIVE-18331*](https://issues.apache.org/jira/browse/HIVE-18331): Add relogin when TGT expires and some logging/lambda.
- [*HIVE-18341*](https://issues.apache.org/jira/browse/HIVE-18341): Add repl load support for adding "raw" namespace for TDE with same encryption keys.
This release provides Storm 1.1.1 and the following Apache patches:
- [STORM-2841](https://issues.apache.org/jira/browse/STORM-2841): testNoAcksIfFlushFails UT fails with NullPointerException. -- [STORM-2854](https://issues.apache.org/jira/browse/STORM-2854): Expose IEventLogger to make event logging pluggable.
+- [STORM-2854](https://issues.apache.org/jira/browse/STORM-2854): Expose IEventLogger to make event logging pluggable.
- [STORM-2870](https://issues.apache.org/jira/browse/STORM-2870): FileBasedEventLogger leaks non-daemon ExecutorService which prevents process to be finished.
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 03/10/2022 Last updated : 06/03/2022 + # Azure HDInsight release notes This article provides information about the **most recent** Azure HDInsight release updates. For information on earlier releases, see [HDInsight Release Notes Archive](hdinsight-release-notes-archive.md).
This article provides information about the **most recent** Azure HDInsight rele
Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe on release notes, watch releases on [this GitHub repository](https://github.com/hdinsight/release-notes/releases).
-## Release date: 03/10/2022
+## Release date: 06/03/2022
This release applies to HDInsight 4.0. The HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see the changes below, wait for the release to go live in your region over several days.
-The OS versions for this release are:
-- HDInsight 4.0: Ubuntu 18.04.5 -
-## Spark 3.1 is now generally available
-
-Spark 3.1 is now Generally Available on HDInsight 4.0 release. This release includes
-
-* Adaptive Query Execution,
-* Convert Sort Merge Join to Broadcast Hash Join,
-* Spark Catalyst Optimizer,
-* Dynamic Partition Pruning,
-* Customers will be able to create new Spark 3.1 clusters and not Spark 3.0 (preview) clusters.
-
-For more details, see the [Apache Spark 3.1](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/spark-3-1-is-now-generally-available-on-hdinsight/ba-p/3253679) is now Generally Available on HDInsight - Microsoft Tech Community.
-
-For a complete list of improvements, see the [Apache Spark 3.1 release notes.](https://spark.apache.org/releases/spark-release-3-1-2.html)
-
-For more details on migration, see the [migration guide.](https://spark.apache.org/docs/latest/migration-guide.html)
-
-## Kafka 2.4 is now generally available
-
-Kafka 2.4.1 is now Generally Available. For more information, please see [Kafka 2.4.1 Release Notes.](http://kafka.apache.org/24/documentation.html)
-Other features include MirrorMaker 2 availability, new metric category AtMinIsr topic partition, Improved broker start-up time by lazy on demand mmap of index files, More consumer metrics to observe user poll behavior.
-
-## Map Datatype in HWC is now supported in HDInsight 4.0
-
-This release includes Map Datatype Support for HWC 1.0 (Spark 2.4) Via the spark-shell application, and all other all spark clients that HWC supports. Following improvements are included like any other data types:
-
-A user can
-* Create a Hive table with any column(s) containing Map datatype, insert data into it and read the results from it.
-* Create an Apache Spark dataframe with Map Type and do batch/stream reads and writes.
-
-### New regions
-
-HDInsight has now expanded its geographical presence to two new regions: China East 3 and China North 3.
-
-### OSS backport changes
-
-OSS backports that are included in Hive including HWC 1.0 (Spark 2.4) which supports Map data type.
-
-### Here are the OSS backported Apache JIRAs for this release:
-
-| Impacted Feature | Apache JIRA |
-||--|
-| Metastore direct sql queries with IN/(NOT IN) should be split based on max parameters allowed by SQL DB | [HIVE-25659](https://issues.apache.org/jira/browse/HIVE-25659) |
-| Upgrade log4j 2.16.0 to 2.17.0 | [HIVE-25825](https://issues.apache.org/jira/browse/HIVE-25825) |
-| Update Flatbuffer version | [HIVE-22827](https://issues.apache.org/jira/browse/HIVE-22827) |
-| Support Map data-type natively in Arrow format | [HIVE-25553](https://issues.apache.org/jira/browse/HIVE-25553) |
-| LLAP external client - Handle nested values when the parent struct is null | [HIVE-25243](https://issues.apache.org/jira/browse/HIVE-25243) |
-| Upgrade arrow version to 0.11.0 | [HIVE-23987](https://issues.apache.org/jira/browse/HIVE-23987) |
-
-## Deprecation notices
-### Azure Virtual Machine Scale Sets on HDInsight
-
-HDInsight will no longer use Azure Virtual Machine Scale Sets to provision the clusters, no breaking change is expected. Existing HDInsight clusters on virtual machine scale sets will have no impact, any new clusters on latest images will no longer use Virtual Machine Scale Sets.
-
-### Scaling of Azure HDInsight HBase workloads will now be supported only using manual scale
-
-Starting from March 01, 2022, HDInsight will only support manual scale for HBase, there's no impact on running clusters. New HBase clusters won't be able to enable schedule based Autoscaling. For more information on how to  manually scale your HBase cluster, refer our documentation on [Manually scaling Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
-
-## HDInsight 3.6 end of support extension
-
-HDInsight 3.6 end of support is extended until September 30, 2022.
-
-Starting from September 30, 2022, customers can't create new HDInsight 3.6 clusters. Existing clusters will run as is without the support from Microsoft. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
-
-Customers who are on Azure HDInsight 3.6 clusters will continue to get [Basic support](./hdinsight-component-versioning.md#support-options-for-hdinsight-versions) until September 30, 2022. After September 30, 2022 customers won't be able to create new HDInsight 3.6 clusters.
+## Release highlights
+
+**The Hive Warehouse Connector (HWC) on Spark v3.1.2**
+
+The Hive Warehouse Connector (HWC) allows you to take advantage of the unique features of Hive and Spark to build powerful big-data applications. Previously, HWC was supported for Spark v2.4 only. This feature adds business value by allowing ACID transactions on Hive tables using Spark. This feature is useful for customers who use both Hive and Spark in their data estate.
+For more information, see [Apache Spark & Hive - Hive Warehouse Connector - Azure HDInsight | Microsoft Docs](/azure/hdinsight/interactive-query/apache-hive-warehouse-connector)
+
+## Ambari
+
+* Scaling and provisioning improvement changes
+* HDI Hive is now compatible with OSS version 3.1.2
+
+HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes and features available in open source Hive 3.1.2 version.
+
+> [!NOTE]
+> **Spark**
+>
+> * If you are using the Azure user interface to create a Spark cluster for HDInsight, the dropdown list shows an additional version, Spark 3.1 (HDI 5.0), along with the older versions. This version is a renamed version of Spark 3.1 (HDI 4.0). This is only a UI-level change, which doesn't impact existing users or users who are already using the ARM template.
+
+![Screenshot_of spark 3.1 for HDI 5.0.](media/hdinsight-release-notes/spark-3-1-for-hdi-5-0.png)
+
+> [!NOTE]
+> **Interactive Query**
+>
+> * If you are creating an Interactive Query Cluster, you will see from the dropdown list an additional version as Interactive Query 3.1 (HDI 5.0).
+> * If you are going to use the Spark 3.1 version along with Hive, which requires ACID support, you need to select the Interactive Query 3.1 (HDI 5.0) version.
+
+![Screenshot_of interactive query 3.1 for HDI 5.0.](media/hdinsight-release-notes/interactive-query-3-1-for-hdi-5-0.png)
+
+## TEZ bug fixes
+
+| Bug Fixes|Apache JIRA|
+|||
+|TezUtils.createConfFromByteString on Configuration larger than 32 MB throws com.google.protobuf.CodedInputStream exception |[TEZ-4142](https://issues.apache.org/jira/browse/TEZ-4142)|
+|TezUtils createByteStringFromConf should use snappy instead of DeflaterOutputStream|[TEZ-4113](https://issues.apache.org/jira/browse/TEZ-4113)|
+
+## HBase bug fixes
+
+| Bug Fixes|Apache JIRA|
+|||
+|TableSnapshotInputFormat should use ReadType.STREAM for scanning HFiles |[HBASE-26273](https://issues.apache.org/jira/browse/HBASE-26273)|
+|Add option to disable scanMetrics in TableSnapshotInputFormat |[HBASE-26330](https://issues.apache.org/jira/browse/HBASE-26330)|
+|Fix for ArrayIndexOutOfBoundsException when balancer is executed |[HBASE-22739](https://issues.apache.org/jira/browse/HBASE-22739)|
+
+## Hive bug fixes
+
+|Bug Fixes|Apache JIRA|
+|||
+| NPE when inserting data with 'distribute by' clause with dynpart sort optimization|[HIVE-18284](https://issues.apache.org/jira/browse/HIVE-18284)|
+| MSCK REPAIR Command with Partition Filtering Fails While Dropping Partitions|[HIVE-23851](https://issues.apache.org/jira/browse/HIVE-23851)|
+| Wrong exception thrown if capacity<=0|[HIVE-25446](https://issues.apache.org/jira/browse/HIVE-25446)|
+| Support parallel load for HastTables - Interfaces|[HIVE-25583](https://issues.apache.org/jira/browse/HIVE-25583)|
+| Include MultiDelimitSerDe in HiveServer2 By Default|[HIVE-20619](https://issues.apache.org/jira/browse/HIVE-20619)|
+| Remove glassfish.jersey and mssql-jdbc classes from jdbc-standalone jar|[HIVE-22134](https://issues.apache.org/jira/browse/HIVE-22134)|
+| Null pointer exception on running compaction against an MM table.|[HIVE-21280 ](https://issues.apache.org/jira/browse/HIVE-21280)|
+| Hive query with large size via knox fails with Broken pipe Write failed|[HIVE-22231](https://issues.apache.org/jira/browse/HIVE-22231)|
+| Adding ability for user to set bind user|[HIVE-21009](https://issues.apache.org/jira/browse/HIVE-21009)|
+| Implement UDF to interpret date/timestamp using its internal representation and Gregorian-Julian hybrid calendar|[HIVE-22241](https://issues.apache.org/jira/browse/HIVE-22241)|
+| Beeline option to show/not show execution report|[HIVE-22204](https://issues.apache.org/jira/browse/HIVE-22204)|
+| Tez: SplitGenerator tries to look for plan files, which won't exist for Tez|[HIVE-22169 ](https://issues.apache.org/jira/browse/HIVE-22169)|
+| Remove expensive logging from the LLAP cache hotpath|[HIVE-22168](https://issues.apache.org/jira/browse/HIVE-22168)|
+| UDF: FunctionRegistry synchronizes on org.apache.hadoop.hive.ql.udf.UDFType class|[HIVE-22161](https://issues.apache.org/jira/browse/HIVE-22161)|
+| Prevent the creation of query routing appender if property is set to false|[HIVE-22115](https://issues.apache.org/jira/browse/HIVE-22115)|
+| Remove cross-query synchronization for the partition-eval|[HIVE-22106](https://issues.apache.org/jira/browse/HIVE-22106)|
+| Skip setting up hive scratch dir during planning|[HIVE-21182](https://issues.apache.org/jira/browse/HIVE-21182)|
+| Skip creating scratch dirs for tez if RPC is on|[HIVE-21171](https://issues.apache.org/jira/browse/HIVE-21171)|
+| switch Hive UDFs to use Re2J regex engine|[HIVE-19661 ](https://issues.apache.org/jira/browse/HIVE-19661)|
+| Migrated clustered tables using bucketing_version 1 on hive 3 uses bucketing_version 2 for inserts|[HIVE-22429](https://issues.apache.org/jira/browse/HIVE-22429)|
+| Bucketing: Bucketing version 1 is incorrectly partitioning data|[HIVE-21167 ](https://issues.apache.org/jira/browse/HIVE-21167)|
+| Adding ASF License header to the newly added file|[HIVE-22498](https://issues.apache.org/jira/browse/HIVE-22498)|
+| Schema tool enhancements to support mergeCatalog|[HIVE-22498](https://issues.apache.org/jira/browse/HIVE-22498)|
+| Hive with TEZ UNION ALL and UDTF results in data loss|[HIVE-21915](https://issues.apache.org/jira/browse/HIVE-21915)|
+| Split text files even if header/footer exists|[HIVE-21924](https://issues.apache.org/jira/browse/HIVE-21924)|
+| MultiDelimitSerDe returns wrong results in last column when the loaded file has more columns than the ones present in the table schema|[HIVE-22360](https://issues.apache.org/jira/browse/HIVE-22360)|
+| LLAP external client - Need to reduce LlapBaseInputFormat#getSplits() footprint|[HIVE-22221](https://issues.apache.org/jira/browse/HIVE-22221)|
+| Column name with reserved keyword is unescaped when query including join on table with mask column is rewritten (Zoltan Matyus via Zoltan Haindrich)|[HIVE-22208](https://issues.apache.org/jira/browse/HIVE-22208)|
+|Prevent LLAP shutdown on AMReporter related RuntimeException|[HIVE-22113](https://issues.apache.org/jira/browse/HIVE-22113)|
+| LLAP status service driver may get stuck with wrong Yarn app ID|[HIVE-21866](https://issues.apache.org/jira/browse/HIVE-21866)|
+| OperationManager.queryIdOperation doesn't properly clean up multiple queryIds|[HIVE-22275](https://issues.apache.org/jira/browse/HIVE-22275)|
+| Bringing a node manager down blocks restart of LLAP service|[HIVE-22219](https://issues.apache.org/jira/browse/HIVE-22219)|
+| StackOverflowError when drop lots of partitions|[HIVE-15956](https://issues.apache.org/jira/browse/HIVE-15956)|
+| Access check is failed when a temporary directory is removed|[HIVE-22273](https://issues.apache.org/jira/browse/HIVE-22273)|
+| Fix wrong results/ArrayOutOfBound exception in left outer map joins on specific boundary conditions|[HIVE-22120](https://issues.apache.org/jira/browse/HIVE-22120)|
+| Remove distribution management tag from pom.xml|[HIVE-19667](https://issues.apache.org/jira/browse/HIVE-19667)|
+| Parsing time can be high if there's deeply nested subqueries|[HIVE-21980](https://issues.apache.org/jira/browse/HIVE-21980)|
+| For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute changes not reflecting for non-CAPS|[HIVE-20057 ](https://issues.apache.org/jira/browse/HIVE-20057)|
+| JDBC: HiveConnection shades log4j interfaces|[HIVE-18874](https://issues.apache.org/jira/browse/HIVE-18874)|
+| Update repo URLs in poms - branch 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)|
+| DBInstall tests broken on master and branch-3.1|[HIVE-21758](https://issues.apache.org/jira/browse/HIVE-21758)|
+| Load data into a bucketed table is ignoring partitions specs and loads data into default partition|[HIVE-21564](https://issues.apache.org/jira/browse/HIVE-21564)|
+| Queries with join condition having timestamp or timestamp with local time zone literal throw SemanticException|[HIVE-21613](https://issues.apache.org/jira/browse/HIVE-21613)|
+| Analyze compute stats for column leave behind staging dir on HDFS|[HIVE-21342](https://issues.apache.org/jira/browse/HIVE-21342)|
+| Incompatible change in Hive bucket computation|[HIVE-21376](https://issues.apache.org/jira/browse/HIVE-21376)|
+| Provide a fallback authorizer when no other authorizer is in use|[HIVE-20420](https://issues.apache.org/jira/browse/HIVE-20420)|
+| Some alterPartitions invocations throw 'NumberFormatException: null'|[HIVE-18767](https://issues.apache.org/jira/browse/HIVE-18767)|
+| HiveServer2: Preauthenticated subject for http transport isn't retained for entire duration of http communication in some cases|[HIVE-20555](https://issues.apache.org/jira/browse/HIVE-20555)|
healthcare-apis Access Healthcare Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/access-healthcare-apis.md
Previously updated : 05/03/2022 Last updated : 06/06/2022
In this document, you learned about the tools and programming languages that you
>[!div class="nextstepaction"] >[Deploy Azure Health Data Services workspace using the Azure portal](healthcare-apis-quickstart.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/authentication-authorization.md
Previously updated : 03/22/2022 Last updated : 06/06/2022
In this document, you learned the authentication and authorization of Azure Heal
>[!div class="nextstepaction"] >[Deploy Azure Health Data Services workspace using the Azure portal](healthcare-apis-quickstart.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Autoscale Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/autoscale-azure-api-fhir.md
Previously updated : 05/03/2022 Last updated : 06/02/2022 # Autoscale for Azure API for FHIR
-The Azure API for FHIR as a managed service allows customers to persist with FHIR compliant healthcare data and exchange it securely through the service API. To accommodate different transaction workloads, customers can use manual scale or autoscale.
+Azure API for FHIR, as a managed service, allows customers to persist with Fast Healthcare Interoperability Resources (FHIR&#174;) compliant healthcare data and exchange it securely through the service API. To accommodate different transaction workloads, customers can use manual scale or autoscale.
## What is autoscale?
-By default, the Azure API for FHIR is set to manual scale. This option works well when the transaction workloads are known and consistent. Customers can adjust the throughput `RU/s` through the portal up to 10,000 and submit a request to increase the limit.
+By default, Azure API for FHIR is set to manual scale. This option works well when the transaction workloads are known and consistent. Customers can adjust the throughput `RU/s` through the portal up to 10,000 and submit a request to increase the limit.
The autoscale feature is designed to scale computing resources including the database throughput `RU/s` up and down automatically according to the workloads, thus eliminating the manual steps of adjusting allocated computing resources.
Keep in mind that this is only an estimate based on data size and that there are
The autoscale feature incurs costs because of managing the provisioned throughput units automatically. The actual costs depend on hourly usage, but keep in mind that there are minimum costs of 10% of `Tmax` for reserved throughput RU/s. However, this cost increase doesn't apply to storage and runtime costs. For information about pricing, see [Azure API for FHIR pricing](https://azure.microsoft.com/pricing/details/azure-api-for-fhir/).
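The 10%-of-`Tmax` minimum above can be made concrete with a quick calculation. This is a back-of-envelope sketch for illustration only; consult the pricing page for actual billing rules.

```python
def min_billed_rus(t_max: int) -> float:
    """Minimum reserved throughput (RU/s) billed under autoscale: 10% of Tmax."""
    return 0.10 * t_max

# With Tmax set to 10,000 RU/s, at least 1,000 RU/s of reserved
# throughput is billed per hour, even when the workload is idle.
print(min_billed_rus(10_000))
```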
+## Next steps
+
+In this document, you learned about the autoscale feature for Azure API for FHIR. For an overview about Azure API for FHIR, see
+ >[!div class="nextstepaction"] >[About Azure API for FHIR](overview.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Azure Active Directory Identity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-active-directory-identity-configuration.md
Previously updated : 02/15/2022 Last updated : 06/02/2022 # Azure Active Directory identity configuration for Azure API for FHIR
-When you're working with healthcare data, it's important to ensure that the data is secure, and it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. The [Azure API for FHIR](https://azure.microsoft.com/services/azure-api-for-fhir/) is secured using [Azure Active Directory](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. This article provides an overview of FHIR server authorization and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, we'll walk through Azure API for FHIR as the FHIR server and Azure Active Directory (Azure AD) as our identity provider in this article.
+When you're working with healthcare data, it's important to ensure that the data is secure, and it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. [Azure API for FHIR](https://azure.microsoft.com/services/azure-api-for-fhir/) is secured using [Azure Active Directory](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. This article provides an overview of FHIR server authorization and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, we'll walk through Azure API for FHIR as the FHIR server and Azure Active Directory (Azure AD) as our identity provider in this article.
## Access control overview
In order for a client application to access Azure API for FHIR, it must present
There are many ways to obtain a token, but the Azure API for FHIR doesn't care how the token is obtained as long as it's an appropriately signed token with the correct claims.
-Using [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md) as an example, accessing a FHIR server goes through the four steps:
+For example, when you use [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md), accessing a FHIR server goes through the following four steps:
![FHIR Authorization](media/azure-ad-hcapi/fhir-authorization.png) 1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration. 1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When you request a token, the client application may have to provide a client secret (the applications password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token).
-1. The client makes a request to the Azure API for FHIR, for example `GET /Patient` to search all patients. When making the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token.
-1. The Azure API for FHIR validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
+1. The client makes a request to Azure API for FHIR, for example `GET /Patient`, to search all patients. When the client makes the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token.
+1. Azure API for FHIR validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
-It's important to note that the Azure API for FHIR isn't involved in validating user credentials and it doesn't issue the token. The authentication and token creation is done by Azure AD. The Azure API for FHIR simply validates that the token is signed correctly (it's authentic) and that it has appropriate claims.
+It's important to note that Azure API for FHIR isn't involved in validating user credentials and it doesn't issue the token. The authentication and token creation is done by Azure AD. Azure API for FHIR simply validates that the token is signed correctly (it's authentic) and that it has appropriate claims.
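The request described in step 3 above can be sketched in a few lines. This is a minimal illustration only: the service URL and token value are hypothetical placeholders, and a real access token would be obtained from the Azure AD `/token` endpoint first.

```python
import urllib.request

# Hypothetical values for illustration; a real token is issued by Azure AD.
fhir_url = "https://myfhirservice.azurehealthcareapis.com"  # placeholder service URL
access_token = "eyJ0e..."  # placeholder for the Base64-encoded JWT

# Step 3: include the access token in the Authorization header of the request.
request = urllib.request.Request(
    url=f"{fhir_url}/Patient",
    headers={"Authorization": f"Bearer {access_token}"},
    method="GET",
)

print(request.get_header("Authorization"))  # Bearer eyJ0e...
```

Sending this request (for example with `urllib.request.urlopen`) would trigger step 4, where the server validates the token before returning a FHIR bundle.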
## Structure of an access token
-Development of FHIR applications often involves debugging access issues. If a client is denied access to the Azure API for FHIR, it's useful to understand the structure of the access token and how it can be decoded to inspect the contents (the claims) of the token.
+Development of Fast Healthcare Interoperability Resources (FHIR&#174;) applications often involves debugging access issues. If a client is denied access to Azure API for FHIR, it's useful to understand the structure of the access token and how it can be decoded to inspect the contents (the claims) of the token.
FHIR servers typically expect a [JSON Web Token](https://en.wikipedia.org/wiki/JSON_Web_Token) (JWT, sometimes pronounced "jot"). It consists of three parts:
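Because each JWT part is base64url-encoded JSON, the claims can be inspected without any special tooling. The sketch below builds a toy, unsigned token purely for illustration (real tokens are issued and signed by Azure AD) and then decodes its payload:

```python
import base64
import json

def encode_part(obj: dict) -> str:
    # base64url-encode a JSON object, dropping the trailing "=" padding as JWTs do.
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

def decode_part(part: str) -> dict:
    # base64url decoding requires padding back to a multiple of 4 characters.
    padded = part + "=" * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Toy token: header.payload.signature (signature is fake; this is not a valid credential).
header = encode_part({"alg": "RS256", "typ": "JWT"})
payload = encode_part({"aud": "https://myfhirservice.azurehealthcareapis.com", "oid": "123"})
token = f"{header}.{payload}.fake-signature"

claims = decode_part(token.split(".")[1])
print(claims["aud"])  # https://myfhirservice.azurehealthcareapis.com
```

Online tools such as jwt.ms perform the same decoding; note that decoding only reveals the claims, it does not verify the signature.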
The pertinent sections of the Azure AD documentation are:
* [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md). * [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
-There are other variations (for example on behalf of flow) for obtaining a token. Check the Azure AD documentation for details. When you use Azure API for FHIR, there are some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
+There are other variations (for example, the on-behalf-of flow) for obtaining a token. Refer to the [Azure AD documentation](../../active-directory/index.yml) for details. When you use Azure API for FHIR, there are some shortcuts for obtaining an access token (such as for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
## Next steps
-In this document, you learned some of the basic concepts involved in securing access to the Azure API for FHIR using Azure AD. For information about how to deploy the Azure API for FHIR service, see.
+In this document, you learned some of the basic concepts involved in securing access to the Azure API for FHIR using Azure AD. For information about how to deploy the Azure API for FHIR service, see
>[!div class="nextstepaction"]
->[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
+>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Azure Api Fhir Access Token Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-access-token-validation.md
Previously updated : 02/15/2022 Last updated : 06/02/2022 # Azure API for FHIR access token validation
How Azure API for FHIR validates the access token will depend on implementation
## Validate token has no issues with identity provider
-The first step in the token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider known as the authority `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When you use Azure AD, the full URL is:
+The first step in the token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider known as the authority `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When you use Azure Active Directory (Azure AD), the full URL is:
``` GET https://login.microsoftonline.com/<TENANT-ID>/.well-known/openid-configuration
The important properties for the FHIR server are `jwks_uri`, which tells the ser
Once the server has verified the authenticity of the token, the FHIR server will then proceed to validate that the client has the required claims to access the token.
-When using the Azure API for FHIR, the server will validate:
+When you use Azure API for FHIR, the server will validate:
1. The token has the right `Audience` (`aud` claim). 1. The user or principal that the token was issued for is allowed to access the FHIR server data plane. The `oid` claim of the token contains an identity object ID, which uniquely identifies the user or principal.
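These two checks can be sketched as a simple predicate over the decoded claims. The claim values and allowed object IDs below are hypothetical, and this is an illustration of the logic only, not the service's actual implementation:

```python
# Hypothetical expected values, for illustration only.
expected_audience = "https://myfhirservice.azurehealthcareapis.com"
allowed_object_ids = {"aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"}  # identities granted data plane access

def is_authorized(claims: dict) -> bool:
    # 1. The token must carry the right audience (aud claim).
    if claims.get("aud") != expected_audience:
        return False
    # 2. The oid claim must identify a user or principal allowed on the data plane.
    return claims.get("oid") in allowed_object_ids

claims = {
    "aud": "https://myfhirservice.azurehealthcareapis.com",
    "oid": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
}
print(is_authorized(claims))  # True
```

In practice the set of allowed identities comes from the role assignments configured through Azure RBAC or local RBAC, as described below.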
-We recommend that the FHIR service be [configured to use Azure RBAC](configure-azure-rbac.md) to manage data plane role assignments. But you can also [configure local RBAC](configure-local-rbac.md) if your FHIR service uses an external or secondary Azure Active Directory tenant.
+We recommend that the FHIR service be [configured to use Azure RBAC](configure-azure-rbac.md) to manage data plane role assignments. However, you can also [configure local RBAC](configure-local-rbac.md) if your FHIR service uses an external or secondary Azure AD tenant.
When you use the OSS Microsoft FHIR server for Azure, the server will validate:
Consult details on how to [define roles on the FHIR server](https://github.com/microsoft/fhir-server/blob/master/docs/Roles.md).
-A FHIR server may also validate that an access token has the scopes (in token claim `scp`) to access the part of the FHIR API that a client is trying to access. Currently, the Azure API for FHIR and the FHIR server for Azure don't validate token scopes.
+A FHIR server may also validate that an access token has the scopes (in token claim `scp`) to access the part of the FHIR API that a client is trying to access. Currently, Azure API for FHIR and the FHIR server for Azure don't validate token scopes.
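For a server that did enforce scopes, the `scp` check could look roughly like the sketch below. The scope names are made-up SMART-style examples, and as noted above, Azure API for FHIR does not perform this validation today:

```python
# Hypothetical scope check; the scope strings are examples, not enforced values.
def has_scope(claims: dict, scope: str) -> bool:
    # The scp claim is a space-delimited string of granted scopes.
    granted = claims.get("scp", "").split()
    return scope in granted

token_claims = {"scp": "patient/Patient.read launch/patient"}
print(has_scope(token_claims, "patient/Patient.read"))  # True
```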
## Next steps
-Now that you know how to walk through token validation, you can complete the tutorial to create a JavaScript application and read FHIR data.
+Now that you know how to walk through token validation, you can complete the tutorial to create a JavaScript application and read Fast Healthcare Interoperability Resources (FHIR&#174;) data.
>[!div class="nextstepaction"]
->[Web application tutorial](tutorial-web-app-fhir-server.md)
+>[Web application tutorial](tutorial-web-app-fhir-server.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
Previously updated : 05/03/2022 Last updated : 06/03/2022 # Quickstart: Use an ARM template to deploy Azure API for FHIR
In this quickstart guide, you've deployed the Azure API for FHIR into your subsc
>[Configure CORS](configure-cross-origin-resource-sharing.md) >[!div class="nextstepaction"]
->[Configure Private Link](configure-private-link.md)
+>[Configure Private Link](configure-private-link.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Azure Api For Fhir Additional Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-for-fhir-additional-settings.md
Previously updated : 02/15/2022 Last updated : 06/02/2022 # Additional settings for Azure API for FHIR
For more information on how to change the default settings, see [configure datab
## Access control
-The Azure API for FHIR will only allow authorized users to access the FHIR API. You can configure authorized users through two different mechanisms. The primary and recommended way to configure access control is using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml), which is accessible through the **Access control (IAM)** blade. Azure RBAC only works if you want to secure data plane access using the Azure Active Directory tenant associated with your subscription. If you wish to use a different tenant, the Azure API for FHIR offers a local FHIR data plane access control mechanism. The configuration options aren't as rich when using the local RBAC mechanism. For details, choose one of the following options:
+Azure API for FHIR will only allow authorized users to access the FHIR API. You can configure authorized users through two different mechanisms. The primary and recommended way to configure access control is using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml), which is accessible through the **Access control (IAM)** blade. Azure RBAC only works if you want to secure data plane access using the Azure Active Directory tenant associated with your subscription. If you wish to use a different tenant, the Azure API for FHIR offers a local FHIR data plane access control mechanism. The configuration options aren't as rich when using the local RBAC mechanism. For details, choose one of the following options:
* [Azure RBAC for FHIR data plane](configure-azure-rbac.md). This is the preferred option when you're using the Azure Active Directory tenant associated with your subscription. * [Local FHIR data plane access control](configure-local-rbac.md). Use this option only when you need to use an external Azure Active Directory tenant for data plane access control.
In this how-to guide, you set up additional settings for the Azure API for FHIR.
Next check out the series of tutorials to create a web application that reads FHIR data. >[!div class="nextstepaction"]
->[Deploy JavaScript application](tutorial-web-app-fhir-server.md)
+>[Deploy JavaScript application](tutorial-web-app-fhir-server.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/carin-implementation-guide-blue-button-tutorial.md
Previously updated : 02/15/2022 Last updated : 06/02/2022 # CARIN Implementation Guide for Blue Button&#174; for Azure API for FHIR
The final test we'll walk through is testing [error handling](https://touchstone
In this tutorial, we walked through how to pass the CARIN IG for Blue Button tests in Touchstone. Next, you can review how to test the Da Vinci formulary tests. >[!div class="nextstepaction"]
->[DaVinci Drug Formulary](davinci-drug-formulary-tutorial.md)
+>[DaVinci Drug Formulary](davinci-drug-formulary-tutorial.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/centers-for-medicare-tutorial-introduction.md
Previously updated : 02/15/2022 Last updated : 06/02/2022 # Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule introduction
In this series of tutorials, we'll cover a high-level summary of the Center for
The CMS released the [Interoperability and Patient Access rule](https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index) on May 1, 2020. This rule requires free and secure data flow between all parties involved in patient care (patients, providers, and payers) to allow patients to access their health information when they need it. Interoperability has plagued the healthcare industry for decades, resulting in siloed data that causes negative health outcomes with higher and unpredictable costs for care. CMS is using their authority to regulate Medicare Advantage (MA), Medicaid, Children's Health Insurance Program (CHIP), and Qualified Health Plan (QHP) issuers on the Federally Facilitated Exchanges (FFEs) to enforce this rule.
-In August 2020, CMS detailed how organizations can meet the mandate. To ensure that data can be exchanged securely and in a standardized manner, CMS identified FHIR version release 4 (R4) as the foundational standard required for the data exchange.
+In August 2020, CMS detailed how organizations can meet the mandate. To ensure that data can be exchanged securely and in a standardized manner, CMS identified Fast Healthcare Interoperability Resources (FHIR&#174;) version release 4 (R4) as the foundational standard required for the data exchange.
There are three main pieces to the Interoperability and Patient Access ruling:
* **Provider Directory API (Required July 1, 2021)** – CMS-regulated payers are required by this portion of the rule to make provider directory information publicly available via a standards-based API. Through making this information available, third-party application developers will be able to create services that help patients find providers for specific care needs and clinicians find other providers for care coordination.
-* **Payer-to-Payer Data Exchange (Originally required Jan 1, 2022 - [Currently Delayed](https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index))** – CMS-regulated payers are required to exchange certain patient clinical data at the patient's request with other payers. While there's no requirement to follow any kind of standard, applying FHIR to exchange this data is encouraged.
+* **Payer-to-Payer Data Exchange (Originally required Jan 1, 2022 - [Currently Delayed](https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index))** – CMS-regulated payers are required to exchange certain patient clinical data at the patient's request with other payers. While there's no requirement to follow any kind of standard, applying FHIR&#174; to exchange this data is encouraged.
## Key FHIR concepts
To test adherence to the various implementation guides, [Touchstone](https://tou
Now that you have a basic understanding of the Interoperability and Patient Access rule, implementation guides, and available testing tool (Touchstone), we'll walk through setting up the Azure API for FHIR for the CARIN IG for Blue Button. >[!div class="nextstepaction"]
->[CARIN Implementation Guide for Blue Button](carin-implementation-guide-blue-button-tutorial.md)
+>[CARIN Implementation Guide for Blue Button](carin-implementation-guide-blue-button-tutorial.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md
Previously updated : 05/03/2022 Last updated : 06/02/2022
In this article, you learned how to assign Azure roles for the FHIR data plane.
>[!div class="nextstepaction"] >[Configure Private Link](configure-private-link.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in Azure API for FHIR
description: This article describes how to configure cross-origin resource sharing in Azure API for FHIR. Previously updated : 05/03/2022 Last updated : 06/03/2022 # Configure cross-origin resource sharing in Azure API for FHIR
-Azure API for Fast Healthcare Interoperability Resources (FHIR) supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request.
+Azure API for FHIR supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request.
CORS is often used in a single-page app that must call a RESTful API to a different domain.
To configure a CORS setting in the Azure API for FHIR, specify the following set
## Next steps
-In this article, you learned how to configure cross-origin sharing in Azure API for FHIR. Next deploy a fully managed Azure API for FHIR:
+In this article, you learned how to configure cross-origin resource sharing in Azure API for FHIR. For more information about deploying Azure API for FHIR, see
>[!div class="nextstepaction"]
->[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
+>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-database.md
Previously updated : 05/03/2022 Last updated : 06/03/2022 # Configure database settings
In this article, you learned how to update your RUs for Azure API for FHIR. To l
Or you can deploy a fully managed Azure API for FHIR: >[!div class="nextstepaction"]
->[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
+>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-export-data.md
Previously updated : 02/15/2022 Last updated : 06/03/2022
After you've completed this final step, you're now ready to export the data us
In this article, you learned the steps in configuring export settings that allow you to export data out the Azure API for FHIR to a storage account. For more information about configuring database settings, access control, enabling diagnostic logging, and using custom headers to add data to audit logs, see >[!div class="nextstepaction"]
->[Additional Settings](azure-api-for-fhir-additional-settings.md)
+>[Additional Settings](azure-api-for-fhir-additional-settings.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Local Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-local-rbac.md
Previously updated : 05/03/2022 Last updated : 06/03/2022 ms.devlang: azurecli
In this article, you learned how to assign FHIR data plane access using an exter
>[!div class="nextstepaction"] >[Configure Private Link](configure-private-link.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-private-link.md
Previously updated : 05/03/2022 Last updated : 06/03/2022
Based on your private link setup and for more information about registering your
* [Register a public client application](register-public-azure-ad-client-app.md) * [Register a service application](register-service-azure-ad-client-app.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
Previously updated : 03/21/2022 Last updated : 06/03/2022
In this article, you learned about data conversion for Azure API for FHIR. For m
>[!div class="nextstepaction"] >[Related GitHub Projects for Azure API for FHIR](fhir-github-projects.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/copy-to-synapse.md
Previously updated : 03/16/2022 Last updated : 06/03/2022
In this article, you learned three different ways to copy your FHIR data into Sy
Next, you can learn about how you can de-identify your FHIR data while exporting it to Synapse in order to protect PHI. >[!div class="nextstepaction"]
->[Exporting de-identified data](de-identified-export.md)
+>[Exporting de-identified data](de-identified-export.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/customer-managed-key.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 ms.devlang: azurecli
When you create a new Azure API for FHIR account, your data is encrypted using Microsoft-managed keys by default. Now, you can add a second layer of encryption for the data using your own key that you choose and manage yourself.
-In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Cosmos DB. When you create an account, you'll have the option to specify an Azure Key Vault key URI. This key will be passed on to Cosmos DB when the DB account is provisioned. When a FHIR request is made, Cosmos DB fetches your key and uses it to encrypt/decrypt the data.
+In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Cosmos DB. When you create an account, you'll have the option to specify an Azure Key Vault key URI. This key will be passed on to Cosmos DB when the DB account is provisioned. When a Fast Healthcare Interoperability Resources (FHIR&#174;) request is made, Cosmos DB fetches your key and uses it to encrypt/decrypt the data.
To get started, refer to the following links:
In this article, you learned how to configure customer-managed keys at rest usin
>[!div class="nextstepaction"] >[Cosmos DB: how to setup CMK](../../cosmos-db/how-to-setup-cmk.md#frequently-asked-questions)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-drug-formulary-tutorial.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Tutorial for Da Vinci Drug Formulary for Azure API for FHIR
In this tutorial, we'll walk through setting up Azure API for FHIR to pass the [
## Touchstone capability statement The first test that we'll focus on is testing Azure API for FHIR against the [Da Vinci Drug Formulary capability
-statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test without any updates, the test will fail due to
-missing search parameters and missing profiles.
+statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test without any updates, the test will fail due to missing search parameters and missing profiles.
### Define search parameters
The second test is the [query capabilities](https://touchstone.aegis.net/touchst
In this tutorial, we walked through how to pass the Da Vinci Payer Data Exchange US Drug Formulary in Touchstone. Next, you can learn how to test the Da Vinci PDex Implementation Guide in Touchstone. >[!div class="nextstepaction"]
->[Da Vinci PDex](davinci-pdex-tutorial.md)
+>[Da Vinci PDex](davinci-pdex-tutorial.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-pdex-tutorial.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Da Vinci PDex for Azure API for FHIR
The final test we'll walk through is testing patient-everything. For this test,
In this tutorial, we walked through how to pass the Payer Exchange tests in Touchstone. Next, you can learn how to test the Da Vinci PDEX Payer Network (Plan-Net) Implementation Guide. >[!div class="nextstepaction"]
->[Da Vinci Plan Net](davinci-plan-net.md)
+>[Da Vinci Plan Net](davinci-plan-net.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Davinci Plan Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-plan-net.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Da Vinci Plan Net for Azure API for FHIR
In this tutorial, we walked through setting up Azure API for FHIR to pass the To
>[!div class="nextstepaction"] >[Supported features](fhir-features-supported.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/de-identified-export.md
Previously updated : 02/28/2022 Last updated : 06/03/2022 # Exporting de-identified data for Azure API for FHIR
The $export command can also be used to export de-identified data from the FHIR
In this article, you've learned how to set up and use de-identified export. Next, to learn how to export FHIR data using $export for Azure API for FHIR, see >[!div class="nextstepaction"]
->[Export data](export-data.md)
+>[Export data](export-data.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/device-data-through-iot-hub.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Receive device data through Azure IoT Hub
+> [!IMPORTANT]
+> As of September 2022, the IoT Connector feature within Azure API for FHIR will be retired and replaced with the [MedTech service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md) for enhanced service quality and functionality.
+>
+> All new users are directed to deploy and use the MedTech service feature within the Azure Health Data Services. For more information about the MedTech service, see [What is the MedTech service?](../../healthcare-apis/iot/iot-connector-overview.md).
+ Azure IoT Connector for Fast Healthcare Interoperability Resources (FHIR&#174;)* provides you with the capability to ingest data from Internet of Medical Things (IoMT) devices into Azure API for FHIR. The [Deploy Azure IoT Connector for FHIR (preview) using Azure portal](iot-fhir-portal-quickstart.md) quickstart showed an example of a device managed by Azure IoT Central [sending telemetry](iot-fhir-portal-quickstart.md#connect-your-devices-to-iot) to Azure IoT Connector for FHIR. Azure IoT Connector for FHIR can also work with devices provisioned and managed through Azure IoT Hub. This tutorial provides the procedure to connect and route device data from Azure IoT Hub to Azure IoT Connector for FHIR. ## Prerequisites
Learn how to configure IoT Connector using device and FHIR mapping templates.
>[!div class="nextstepaction"] >[Azure IoT Connector for FHIR mapping templates](iot-mapping-templates.md)
-*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md
Previously updated : 05/03/2022 Last updated : 06/03/2022 # Disaster recovery for Azure API for FHIR
-The Azure API for FHIR® is a fully managed service, based on Fast Healthcare Interoperability Resources (FHIR®). To meet business and compliance requirements you can use the disaster recovery (DR) feature for Azure API for FHIR.
+Azure API for FHIR is a fully managed service, based on Fast Healthcare Interoperability Resources (FHIR®). To meet business and compliance requirements you can use the disaster recovery (DR) feature for Azure API for FHIR.
The DR feature provides a Recovery Point Objective (RPO) of 15 minutes and a Recovery Time Objective (RTO) of 60 minutes.
The DR process involves the following steps:
### Data replication in the secondary region
-By default, the Azure API for FHIR offers data protection through backup and restore. When the disaster recovery feature is enabled, data replication begins. A data replica is automatically created and synchronized in the secondary Azure region. The initial data replication can take a few minutes to a few hours, or longer, depending on the amount of data. The secondary data replica is a replication of the primary data. It's used directly to recover the service, and it helps speed up the recovery process.
+By default, Azure API for FHIR offers data protection through backup and restore. When the disaster recovery feature is enabled, data replication begins. A data replica is automatically created and synchronized in the secondary Azure region. The initial data replication can take a few minutes to a few hours, or longer, depending on the amount of data. The secondary data replica is a replication of the primary data. It's used directly to recover the service, and it helps speed up the recovery process.
It's worth noting that the throughput RU/s must have the same values in the primary and secondary regions.
The private link feature should continue to work during a regional outage and af
### CMK
-Your access to the Azure API for FHIR will be maintained if the key vault hosting the managed key in your subscription is accessible. There's a possible temporary downtime as Key Vault can take up to 20 minutes to re-establish its connection. For more information, see [Azure Key Vault availability and redundancy](../../key-vault/general/disaster-recovery-guidance.md).
+Your access to Azure API for FHIR will be maintained if the key vault hosting the managed key in your subscription is accessible. There's a possible temporary downtime as Key Vault can take up to 20 minutes to re-establish its connection. For more information, see [Azure Key Vault availability and redundancy](../../key-vault/general/disaster-recovery-guidance.md).
### $export
The disaster recovery feature incurs extra costs because data of the compute and
## Next steps
-In this article, you've learned how DR for Azure API for FHIR works and how to enable it. To learn about Azure API for FHIR's other supported features, see:
+In this article, you've learned how DR for Azure API for FHIR works and how to enable it. To learn about Azure API for FHIR's other supported features, see
>[!div class="nextstepaction"] >[FHIR supported features](fhir-features-supported.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Enable Diagnostic Logging in Azure API for FHIR
In this article, you learned how to enable Audit Logs for Azure API for FHIR. Fo
>[Configure CORS](configure-cross-origin-resource-sharing.md) >[!div class="nextstepaction"]
->[Configure Private Link](configure-private-link.md)
+>[Configure Private Link](configure-private-link.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
Previously updated : 02/15/2022 Last updated : 06/03/2022
address range in CIDR format is used instead, 100.64.0.0/10. The reason why the
In this article, you've learned how to export FHIR resources using $export command. Next, to learn how to export de-identified data, see >[!div class="nextstepaction"]
->[Export de-identified data](de-identified-export.md)
+>[Export de-identified data](de-identified-export.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
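For reference, the `$export` call discussed above follows the FHIR Bulk Data request shape: an asynchronous `GET` with a `Prefer: respond-async` header. The sketch below only assembles the URL and headers; the server and storage container names are hypothetical, and no request is actually sent.

```python
# Hypothetical server and storage container names; nothing is sent over the network.
FHIR_URL = "https://myfhirserver.azurewebsites.net"

export_url = f"{FHIR_URL}/$export?_container=exportcontainer"
headers = {
    "Accept": "application/fhir+json",
    "Prefer": "respond-async",  # $export runs as an asynchronous job
}
print(export_url)
```

In practice the request must also carry an OAuth bearer token, and the job status is polled via the `Content-Location` URL returned in the initial response.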
healthcare-apis Fhir App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-app-registration.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Register the Azure Active Directory apps for Azure API for FHIR
In order for an application to interact with Azure AD, it needs to be registered
In this overview, you've gone through the types of application registrations you may need in order to work with a FHIR API.
-Based on your setup, please see the how-to-guides to register your applications
+Based on your setup, refer to the how-to-guides to register your applications:
* [Register a resource application](register-resource-azure-ad-client-app.md) * [Register a confidential client application](register-confidential-azure-ad-client-app.md) * [Register a public client application](register-public-azure-ad-client-app.md) * [Register a service application](register-service-azure-ad-client-app.md)
-Once you've registered your applications, you can deploy the Azure API for FHIR.
+After you've registered your applications, you can deploy Azure API for FHIR.
>[!div class="nextstepaction"]
->[Deploy Azure API for FHIR](fhir-paas-powershell-quickstart.md)
+>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Previously updated : 05/05/2022 Last updated : 06/03/2022
In this article, you've read about the supported FHIR features in Azure API for
>[!div class="nextstepaction"] >[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Github Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-github-projects.md
Previously updated : 02/28/2022 Last updated : 06/03/2022
We have many open-source projects on GitHub that provide you the source code and
In this article, you've learned about the related GitHub Projects for Azure API for FHIR that provide source code and instructions to let you experiment and deploy services for various uses. For more information about Azure API for FHIR, see >[!div class="nextstepaction"]
->[What is Azure API for FHIR?](overview.md)
+>[What is Azure API for FHIR?](overview.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Paas Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-cli-quickstart.md
Previously updated : 05/03/2022 Last updated : 06/03/2022
In this quickstart guide, you've deployed the Azure API for FHIR into your subsc
>[Configure CORS](configure-cross-origin-resource-sharing.md) >[!div class="nextstepaction"]
->[Configure Private Link](configure-private-link.md)
+>[Configure Private Link](configure-private-link.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Paas Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-portal-quickstart.md
Previously updated : 03/21/2022 Last updated : 06/03/2022
In this quickstart guide, you've deployed the Azure API for FHIR into your subsc
>[Configure CORS](configure-cross-origin-resource-sharing.md) >[!div class="nextstepaction"]
->[Configure Private Link](configure-private-link.md)
+>[Configure Private Link](configure-private-link.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Paas Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-powershell-quickstart.md
Previously updated : 02/15/2022 Last updated : 06/03/2022
In this quickstart guide, you've deployed the Azure API for FHIR into your subsc
>[Configure CORS](configure-cross-origin-resource-sharing.md) >[!div class="nextstepaction"]
->[Configure Private Link](configure-private-link.md)
+>[Configure Private Link](configure-private-link.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
Previously updated : 02/15/2022 Last updated : 06/03/2022
healthcare-apis Find Identity Object Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md
Previously updated : 05/03/2022 Last updated : 06/03/2022
In this article, you've learned how to find identity object IDs needed to config
>[!div class="nextstepaction"] >[Configure local RBAC settings](configure-local-rbac.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-healthcare-apis-access-token-cli.md
Previously updated : 05/03/2022 Last updated : 06/03/2022
In this article, you've learned how to obtain an access token for the Azure API
>[!div class="nextstepaction"] >[Access the FHIR service using Postman](./../fhir/use-postman.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Get Started With Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-started-with-azure-api-fhir.md
Previously updated : 05/17/2022 Last updated : 06/03/2022
This article described the basic steps to get started using Azure API for FHIR.
>[What is Azure API for FHIR?](overview.md) >[!div class="nextstepaction"]
->[Frequently asked questions about Azure API for FHIR](fhir-faq.yml)
+>[Frequently asked questions about Azure API for FHIR](fhir-faq.yml)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-do-custom-search.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Defining custom search parameters for Azure API for FHIR
-The FHIR specification defines a set of search parameters for all resources and search parameters that are specific to a resource(s). However, there are scenarios where you might want to search against an element in a resource that isn't defined by the FHIR specification as a standard search parameter. This article describes how you can define your own [search parameters](https://www.hl7.org/fhir/searchparameter.html) to be used in the Azure API for FHIR.
+The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines a set of search parameters for all resources and search parameters that are specific to a resource(s). However, there are scenarios where you might want to search against an element in a resource that isn't defined by the FHIR specification as a standard search parameter. This article describes how you can define your own [search parameters](https://www.hl7.org/fhir/searchparameter.html) to be used in the Azure API for FHIR.
> [!NOTE] > Each time you create, update, or delete a search parameter, you'll need to run a [reindex job](how-to-run-a-reindex.md) to enable the search parameter to be used in production. Below we will outline how you can test search parameters before reindexing the entire FHIR server.
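To make the shape of a custom search parameter concrete, here is a hedged sketch of a `SearchParameter` resource body you might POST to the server. The canonical URL, code, and FHIRPath expression below are illustrative assumptions, not values from this article.

```python
import json

# Illustrative SearchParameter resource; every value here is a hypothetical
# example, not a parameter defined by this article.
search_parameter = {
    "resourceType": "SearchParameter",
    "url": "http://example.org/fhir/SearchParameter/patient-birthplace",  # hypothetical canonical URL
    "name": "birthplace",
    "status": "active",
    "description": "Search patients by a birthplace extension",
    "code": "birthplace",   # the name used in query strings
    "base": ["Patient"],    # resource type(s) the parameter applies to
    "type": "string",
    "expression": "Patient.extension.where(url='http://example.org/birthplace').value",  # FHIRPath, illustrative
}

body = json.dumps(search_parameter, indent=2)
print(body)
```

After creating a parameter like this, remember the reindex job noted above before relying on it in production.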
In this article, you've learned how to create a search parameter. Next you can
>[!div class="nextstepaction"] >[How to run a reindex job](how-to-run-a-reindex.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-run-a-reindex.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Running a reindex job in Azure API for FHIR
In this article, you've learned how to start a reindex job. To learn how to de
>[!div class="nextstepaction"] >[Defining custom search parameters](how-to-do-custom-search.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Azure Resource Manager Template Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-azure-resource-manager-template-quickstart.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Quickstart: Use an Azure Resource Manager (ARM) template to deploy Azure IoT Connector for FHIR (preview)
+> [!IMPORTANT]
+> As of September 2022, the IoT Connector feature within Azure API for FHIR will be retired and replaced with the [MedTech service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md) for enhanced service quality and functionality.
+>
+> All new users are directed to deploy and use the MedTech service feature within the Azure Health Data Services. For more information about the MedTech service, see [What is the MedTech service?](../../healthcare-apis/iot/iot-connector-overview.md).
+ In this quickstart, you'll learn how to use an Azure Resource Manager template (ARM template) to deploy Azure IoT Connector for Fast Healthcare Interoperability Resources (FHIR&#174;)*, a feature of Azure API for FHIR. To deploy a working instance of Azure IoT Connector for FHIR, this template also deploys a parent Azure API for FHIR service and an Azure IoT Central application that exports telemetry from a device simulator to Azure IoT Connector for FHIR. You can execute the ARM template to deploy Azure IoT Connector for FHIR through the Azure portal, PowerShell, or CLI. [!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
Learn how to configure IoT Connector using device and FHIR mapping templates.
>[!div class="nextstepaction"] >[Azure IoT Connector for FHIR mapping templates](iot-mapping-templates.md)
-*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-data-flow.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Azure IoT Connector for FHIR (preview) data flow
+> [!IMPORTANT]
+> As of September 2022, the IoT Connector feature within Azure API for FHIR will be retired and replaced with the [MedTech service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md) for enhanced service quality and functionality.
+>
+> All new users are directed to deploy and use the MedTech service feature within the Azure Health Data Services. For more information about the MedTech service, see [What is the MedTech service?](../../healthcare-apis/iot/iot-connector-overview.md).
+ This article provides an overview of data flow in Azure IoT Connector for Fast Healthcare Interoperability Resources (FHIR&#174;)*. You'll learn about different data processing stages within Azure IoT Connector for FHIR that transform device data into FHIR-based [Observation](https://www.hl7.org/fhir/observation.html) resources. ![Azure IoT Connector for FHIR data flow](media/concepts-iot-data-flow/iot-connector-data-flow.png)
For more information about how to create device and FHIR mapping templates, see
>[!div class="nextstepaction"] >[Azure IoT Connector for FHIR mapping templates](iot-mapping-templates.md)
-*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-fhir-portal-quickstart.md
Previously updated : 04/11/2022 Last updated : 06/03/2022 # Quickstart: Deploy Azure IoT Connector for FHIR (preview) using Azure portal
+> [!IMPORTANT]
+> As of September 2022, the IoT Connector feature within Azure API for FHIR will be retired and replaced with the [MedTech service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md) for enhanced service quality and functionality.
+>
+> All new users are directed to deploy and use the MedTech service feature within the Azure Health Data Services. For more information about the MedTech service, see [What is the MedTech service?](../../healthcare-apis/iot/iot-connector-overview.md).
+ Azure IoT Connector for Fast Healthcare Interoperability Resources (FHIR&#174;)* is an optional feature of Azure API for FHIR that provides the capability to ingest data from Internet of Medical Things (IoMT) devices. During the preview phase, the Azure IoT Connector for FHIR feature is available for free. In this quickstart, you'll learn how to: - Deploy and configure Azure IoT Connector for FHIR using the Azure portal - Use a simulated device to send data to Azure IoT Connector for FHIR
Learn how to configure IoT Connector using device and FHIR mapping templates.
>[!div class="nextstepaction"] >[Azure IoT Connector for FHIR mapping templates](iot-mapping-templates.md)
-*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Mapping Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-mapping-templates.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Azure IoT Connector for FHIR (preview) mapping templates+
+> [!IMPORTANT]
+> As of September 2022, the IoT Connector feature within Azure API for FHIR will be retired and replaced with the [MedTech service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md) for enhanced service quality and functionality.
+>
+> All new users are directed to deploy and use the MedTech service feature within the Azure Health Data Services. For more information about the MedTech service, see [What is the MedTech service?](../../healthcare-apis/iot/iot-connector-overview.md).
+ This article details how to configure Azure IoT Connector for Fast Healthcare Interoperability Resources (FHIR&#174;)* using mapping templates. The Azure IoT Connector for FHIR requires two types of JSON-based mapping templates. The first type, **Device mapping**, is responsible for mapping the device payloads sent to the `devicedata` Azure Event Hub endpoint. It extracts types, device identifiers, measurement date time, and the measurement value(s). The second type, **FHIR mapping**, controls the mapping to the FHIR resource. It allows configuration of the length of the observation period, the FHIR data type used to store the values, and terminology code(s).
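Conceptually, the Device mapping stage pulls the device identifier, measurement timestamp, and value(s) out of each raw payload. The sketch below illustrates that extraction in plain Python; the field names and output shape are assumptions for illustration, not the connector's actual template syntax.

```python
import json

def extract_measurement(raw_payload: str) -> dict:
    """Illustrative device-mapping step: normalize a raw device message
    into a device id, timestamp, and named measurement values."""
    msg = json.loads(raw_payload)
    return {
        "deviceId": msg["deviceId"],
        "timestamp": msg["timestamp"],
        "values": [{"name": "heartRate", "value": msg["heartRate"]}],
    }

# Hypothetical device message, matching the assumed field names above.
sample = json.dumps({
    "deviceId": "device-01",
    "timestamp": "2022-06-03T12:00:00Z",
    "heartRate": 72,
})
print(extract_measurement(sample))
```

A FHIR mapping stage would then turn a normalized record like this into an [Observation](https://www.hl7.org/fhir/observation.html) resource, grouping values by the configured observation period.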
Check out frequently asked questions on Azure IoT Connector for FHIR (preview).
>[!div class="nextstepaction"] >[Azure IoT Connector for FHIR FAQs](fhir-faq.yml)
-*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Metrics Diagnostics Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-metrics-diagnostics-export.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Export IoT connector for FHIR (preview) Metrics through Diagnostic settings
+> [!IMPORTANT]
+> As of September 2022, the IoT Connector feature within Azure API for FHIR will be retired and replaced with the [MedTech service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md) for enhanced service quality and functionality.
+>
+> All new users are directed to deploy and use the MedTech service feature within the Azure Health Data Services. For more information about the MedTech service, see [What is the MedTech service?](../../healthcare-apis/iot/iot-connector-overview.md).
+ In this article, you'll learn how to export Azure IoT connector for Fast Healthcare Interoperability Resources (FHIR&#174;) Metrics logs. The feature that enables Metrics logging is the [**Diagnostic settings**](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal. > [!TIP]
For more information about the frequently asked questions of Azure IoT connector
>[!div class="nextstepaction"] >[Frequently asked questions about IoT connector](../../healthcare-apis/iot/iot-connector-faqs.md)
-*In the Azure portal, Azure IoT connector for FHIR is referred to as IoT connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
+*In the Azure portal, Azure IoT connector for FHIR is referred to as IoT connector (preview). FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Metrics Display https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-metrics-display.md
Previously updated : 02/15/2022 Last updated : 06/03/2022
-# Display and configure IoT Connector for FHIR (preview) metrics
+# Display and configure IoT Connector for FHIR (preview) metrics
+
+> [!IMPORTANT]
+> As of September 2022, the IoT Connector feature within Azure API for FHIR will be retired and replaced with the [MedTech service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md) for enhanced service quality and functionality.
+>
+> All new users are directed to deploy and use the MedTech service feature within the Azure Health Data Services. For more information about the MedTech service, see [What is the MedTech service?](../../healthcare-apis/iot/iot-connector-overview.md).
In this article, you'll learn how to display and configure Azure IoT Connector for Fast Healthcare Interoperability Resources (FHIR&#174;)* metrics.
Get answers to frequently asked questions about Azure IoT Connector for FHIR.
>[!div class="nextstepaction"] >[Azure IoT Connector for FHIR FAQ](fhir-faq.yml)
-*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-troubleshoot-guide.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 + # IoT Connector for FHIR (preview) troubleshooting guide
+> [!IMPORTANT]
+> As of September 2022, the IoT Connector feature within Azure API for FHIR will be retired and replaced with the [MedTech service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md) for enhanced service quality and functionality.
+>
+> All new users are directed to deploy and use the MedTech service feature within the Azure Health Data Services. For more information about the MedTech service, see [What is the MedTech service?](../../healthcare-apis/iot/iot-connector-overview.md).
++ This article provides steps for troubleshooting common Azure IoT Connector for Fast Healthcare Interoperability Resources (FHIR&#174;)* error messages and conditions. You'll also learn how to create copies of the Azure IoT Connector for FHIR conversion mappings JSON (for example: Device and FHIR).
Check out frequently asked questions about the Azure IoT Connector for FHIR.
>[!div class="nextstepaction"] >[Azure IoT Connector for FHIR FAQs](fhir-faq.yml)
-*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Move Fhir Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/move-fhir-service.md
Previously updated : 05/03/2022 Last updated : 06/03/2022
In this article, you've learned how to move the Azure API for FHIR instance. For
>[!div class="nextstepaction"] >[Supported FHIR features](fhir-features-supported.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview-of-search.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Overview of search in Azure API for FHIR
-The FHIR specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<FHIRSERVERNAME>.azurewebsites.net`. In the examples, we'll use the placeholder {{FHIR_URL}} for this URL.
+The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects of searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<FHIRSERVERNAME>.azurewebsites.net`. In the examples, we'll use the placeholder {{FHIR_URL}} for this URL.
FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all patients in the database, you could use the following request:
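As a concrete illustration of the `GET`-based search syntax above, here is a small Python sketch that assembles FHIR search URLs. The server name is a hypothetical placeholder standing in for {{FHIR_URL}}, and no request is sent.

```python
from urllib.parse import urlencode

# Hypothetical server name standing in for the article's {{FHIR_URL}} placeholder.
FHIR_URL = "https://myfhirserver.azurewebsites.net"

def build_search_url(base: str, resource_type: str = None, **params) -> str:
    """Assemble a FHIR search URL; omitting the resource type searches all resources."""
    path = f"{base}/{resource_type}" if resource_type else base
    return f"{path}?{urlencode(params)}" if params else path

# Pull all patients: GET {{FHIR_URL}}/Patient
print(build_search_url(FHIR_URL, "Patient"))
# Narrow the search with a standard search parameter, e.g. family name:
print(build_search_url(FHIR_URL, "Patient", family="Smith"))
```

Issuing the actual search is then an authenticated `GET` against the assembled URL.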
Now that you've learned about the basics of search, see the search samples page
>[!div class="nextstepaction"] >[FHIR search examples](search-samples.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview.md
Previously updated : 03/21/2022 Last updated : 06/03/2022
-# What is Azure API for FHIR&reg;?
+# What is Azure API for FHIR?
Azure API for FHIR enables rapid exchange of data through Fast Healthcare Interoperability Resources (FHIR®) APIs, backed by a managed Platform-as-a-Service (PaaS) offering in the cloud. It makes it easier for anyone working with health data to ingest, manage, and persist Protected Health Information [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html) in the cloud:
For use cases that require extending or customizing the FHIR server, or requires
## Azure IoT Connector for FHIR (preview)
+> [!IMPORTANT]
+> As of September 2022, the IoT Connector feature within Azure API for FHIR will be retired and replaced with the [MedTech service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md) for enhanced service quality and functionality.
+>
+> All new users are directed to deploy and use the MedTech service feature within the Azure Health Data Services. For more information about the MedTech service, see [What is the MedTech service?](../../healthcare-apis/iot/iot-connector-overview.md).
+ Azure IoT Connector for Fast Healthcare Interoperability Resources (FHIR&#174;)* is an optional feature of Azure API for FHIR that provides the capability to ingest data from Internet of Medical Things (IoMT) devices. The Internet of Medical Things is a category of IoT devices that capture and exchange health & wellness data with other healthcare IT systems over a network. Some examples of IoMT devices include fitness and clinical wearables, monitoring sensors, activity trackers, point of care kiosks, or even a smart pill. The Azure IoT Connector for FHIR feature enables you to quickly set up a service to ingest IoMT data into Azure API for FHIR in a scalable, secure, and compliant manner. Azure IoT Connector for FHIR can accept any JSON-based messages sent out by an IoMT device. This data is first transformed into appropriate FHIR-based [Observation](https://www.hl7.org/fhir/observation.html) resources and then persisted into Azure API for FHIR. The data transformation logic is defined through a pair of mapping templates that you configure based on your message schema and FHIR requirements. Device data can be pushed directly to Azure IoT Connector for FHIR or seamlessly used in concert with other Azure IoT solutions ([Azure IoT Hub](../../iot-hub/index.yml) and [Azure IoT Central](../../iot-central/index.yml)). Azure IoT Connector for FHIR provides a secure data pipeline while allowing the Azure IoT solutions to manage provisioning and maintenance of the physical devices.
Use of IoMT devices is rapidly expanding in healthcare and Azure IoT Connector f
## Next Steps
-To start working with the Azure API for FHIR, follow the 5-minute quickstart to deploy the Azure API for FHIR.
+To start working with Azure API for FHIR, follow the 5-minute quickstart to deploy Azure API for FHIR.
>[!div class="nextstepaction"] >[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
-To try out the Azure IoT Connector for FHIR feature, check out the quickstart to deploy Azure IoT Connector for FHIR using the Azure portal.
+To try out the Azure IoT Connector for FHIR feature, check out the quickstart to deploy Azure IoT Connector for FHIR using the Azure portal.
>[!div class="nextstepaction"] >[Deploy Azure IoT Connector for FHIR](iot-fhir-portal-quickstart.md)
-*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/patient-everything.md
Previously updated : 02/15/2022 Last updated : 06/03/2022
Now that you know how to use the Patient-everything operation, you can learn abo
>[!div class="nextstepaction"] >[Overview of search in Azure API for FHIR](overview-of-search.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 05/03/2022 Last updated : 06/03/2022
the link in the **Version** column to view the source on the
- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). - Review the [Azure Policy definition structure](../../governance/policy/concepts/definition-structure.md). - Review [Understanding policy effects](../../governance/policy/concepts/effects.md).
+- FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/purge-history.md
Previously updated : 05/05/2022 Last updated : 06/03/2022
In this article, you learned how to purge the history for resources in Azure API
>[Supported FHIR features](fhir-features-supported.md) >[!div class="nextstepaction"]
->[FHIR REST API capabilities for Azure API for FHIR](fhir-rest-api-capabilities.md)
+>[FHIR REST API capabilities for Azure API for FHIR](fhir-rest-api-capabilities.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Register Confidential Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-confidential-azure-ad-client-app.md
Previously updated : 02/15/2022 Last updated : 06/03/2022
In this article, you were guided through the steps of how to register a confiden
>[!div class="nextstepaction"] >[Access the FHIR service using Postman](./../fhir/use-postman.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Register Public Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app.md
Previously updated : 03/21/2022 Last updated : 06/03/2022
If you configure your client application in a different Azure AD tenant from you
## Next steps
-In this article, you've learned how to register a public client application in Azure Active Directory. Next, test access to your FHIR server using Postman.
+In this article, you've learned how to register a public client application in Azure AD. Next, test access to your FHIR server using Postman.
>[!div class="nextstepaction"] >[Access the FHIR service using Postman](./../fhir/use-postman.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Register Resource Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-resource-azure-ad-client-app.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Register a resource application in Azure Active Directory for Azure API for FHIR
-In this article, you'll learn how to register a resource (or API) application in Azure Active Directory. A resource application is an Azure Active Directory representation of the FHIR server API itself and client applications can request access to the resource when authenticating. The resource application is also known as the *audience* in OAuth parlance.
+In this article, you'll learn how to register a resource (or API) application in Azure Active Directory (Azure AD). A resource application is an Azure AD representation of the FHIR server API itself and client applications can request access to the resource when authenticating. The resource application is also known as the *audience* in OAuth parlance.
## Azure API for FHIR
-If you're using the Azure API for FHIR, a resource application is automatically created when you deploy the service. As long as you're using the Azure API for FHIR in the same Azure Active Directory tenant as you're deploying your application, you can skip this how-to-guide and instead deploy your Azure API for FHIR to get started.
+If you're using the Azure API for FHIR, a resource application is automatically created when you deploy the service. As long as you're using the Azure API for FHIR in the same Azure AD tenant as you're deploying your application, you can skip this how-to-guide and instead deploy your Azure API for FHIR to get started.
-If you're using a different Azure Active Directory tenant (not associated with your subscription), you can import the Azure API for FHIR resource application into your tenant with
+If you're using a different Azure AD tenant (not associated with your subscription), you can import the Azure API for FHIR resource application into your tenant with
PowerShell: ```azurepowershell-interactive
If you're using the open source FHIR Server for Azure, follow the steps on the [
## Next steps
-In this article, you've learned how to register a resource application in Azure Active Directory. Next, register your confidential client application.
+In this article, you've learned how to register a resource application in Azure AD. Next, register your confidential client application.
>[!div class="nextstepaction"]
->[Register Confidential Client Application](register-confidential-azure-ad-client-app.md)
+>[Register Confidential Client Application](register-confidential-azure-ad-client-app.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Register Service Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-service-azure-ad-client-app.md
Previously updated : 03/21/2022 Last updated : 06/03/2022
In this article, you've learned how to register a service client application in
>[!div class="nextstepaction"] >[Access the FHIR service using Postman](./../fhir/use-postman.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/search-samples.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # FHIR search examples for Azure API for FHIR
-Below are some examples of using FHIR search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a `POST` request. For more information about search, see [Overview of FHIR Search](overview-of-search.md).
+Below are some examples of using Fast Healthcare Interoperability Resources (FHIR&#174;) search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a `POST` request. For more information about search, see [Overview of FHIR Search](overview-of-search.md).
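The search features listed above can be sketched as plain request URLs (a minimal sketch; the base URL is a placeholder for your Azure API for FHIR endpoint, and the query shapes follow the FHIR search syntax):

```python
# Sketch of composing FHIR search requests as URLs. Nothing is sent here;
# the strings are the request lines you would issue with GET.
base = "https://<FHIR-SERVER-NAME>.azurehealthcareapis.com"

# Search parameter with a modifier: patients whose name is exactly "John".
exact_name = f"{base}/Patient?name:exact=John"

# Chained search: encounters whose subject is a patient named John.
chained = f"{base}/Encounter?subject:Patient.name=John"

# Reverse chained search (_has): patients referenced by a heart-rate
# (LOINC 8867-4) Observation.
reverse_chain = f"{base}/Patient?_has:Observation:patient:code=8867-4"

print(exact_name)
print(chained)
print(reverse_chain)
```

The same queries can also be sent as a `POST` to `Patient/_search` with the parameters in a form-encoded body, as the article's `POST` example shows.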
## Search result parameters
name=John
``` ## Next steps
+In this article, you learned how to search using different search parameters, modifiers, and FHIR search tools. For more information about FHIR Search, see
+ >[!div class="nextstepaction"]
->[Overview of FHIR Search](overview-of-search.md)
+>[Overview of FHIR Search](overview-of-search.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/03/2022
compliant with the specific standard.
## Next steps
+In this article, you learned about the Azure Policy Regulatory Compliance controls for Azure API for FHIR. For more information, see
+ - Learn more about [Azure Policy Regulatory Compliance](../../governance/policy/concepts/regulatory-compliance.md). - See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Store profiles in Azure API for FHIR
-HL7 FHIR defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context that FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications.
+HL7 Fast Healthcare Interoperability Resources (FHIR&#174;) defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context that FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications.
[FHIR profile](https://www.hl7.org/fhir/profiling.html) allows you to narrow down and customize resource definitions using constraints and extensions. Azure API for FHIR allows validating resources against profiles to see if the resources conform to the profiles. This article guides you through the basics of FHIR profiles and how to store them. For more information about FHIR profiles outside of this article, visit [HL7.org](https://www.hl7.org/fhir/profiling.html).
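A resource declares the profiles it claims to conform to in its `meta.profile` element, which is what validation checks against. A minimal sketch (the canonical URL is the published US Core Patient profile; the rest of the resource is an illustrative fragment):

```python
# Sketch: a FHIR resource asserting conformance to a profile via meta.profile.
# Validation can then test the resource against that profile's constraints.
patient = {
    "resourceType": "Patient",
    "meta": {
        "profile": [
            "http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient"
        ]
    },
    "name": [{"family": "Kirk", "given": ["James"]}],
}
print(patient["meta"]["profile"][0])
```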
In this article, you've learned about FHIR profiles. Next, you'll learn how you
>[!div class="nextstepaction"] >[Validate FHIR resources against profiles](validation-against-profiles.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Tutorial Member Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-member-match.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # $member-match operation for Azure API for FHIR
If the $member-match can't find a unique match, you'll receive a 422 response wi
In this guide, you've learned about the $member-match operation. Next, you can learn about testing the Da Vinci Payer Data Exchange IG in Touchstone, which requires the $member-match operation. >[!div class="nextstepaction"]
->[DaVinci PDex](../fhir/davinci-pdex-tutorial.md)
+>[DaVinci PDex](../fhir/davinci-pdex-tutorial.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Tutorial Web App Fhir Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-fhir-server.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Deploy JavaScript app to read data from Azure API for FHIR+ In this tutorial, you'll deploy a small JavaScript app, which reads data from a FHIR service. The steps in this tutorial are:+ 1. Deploy a FHIR server 1. Register a public client application 1. Test access to the application 1. Create a web application that reads this FHIR data ## Prerequisites+ Before starting this set of tutorials, you'll need the following items: 1. An Azure subscription 1. An Azure Active Directory tenant
Before starting this set of tutorials, you'll need the following items:
> For this tutorial, the FHIR service, Azure AD application, and Azure AD users are all in the same Azure AD tenant. If this is not the case, you can still follow along with this tutorial, but may need to dive into some of the referenced documents to do additional steps. ## Deploy Azure API for FHIR+ The first step in the tutorial is to get your Azure API for FHIR setup correctly. 1. If you haven't already, deploy the [Azure API for FHIR](fhir-paas-portal-quickstart.md).
The first step in the tutorial is to get your Azure API for FHIR setup correctly
1. Set the **Max age** to **600** ## Next Steps
-Now that you have your Azure API for FHIR deployed, you're ready to register a public client application.
+
+Now that you have your Azure API for FHIR deployed, you're ready to register a public client application. For more information, see
>[!div class="nextstepaction"] >[Register public client application](tutorial-web-app-public-app-reg.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Tutorial Web App Public App Reg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-public-app-reg.md
Previously updated : 03/22/2022 Last updated : 06/03/2022 # Client application registration for Azure API for FHIR+ In the previous tutorial, you deployed and set up your Azure API for FHIR. Now that you have your Azure API for FHIR setup, we'll register a public client application. You can read through the full [register a public client app](register-public-azure-ad-client-app.md) how-to guide for more details or troubleshooting, but we've called out the major steps for this tutorial in this article. 1. Navigate to Azure Active Directory
Now that you have set up the correct authentication, set the API permissions:
:::image type="content" source="media/tutorial-web-app/api-permissions.png" alt-text="Screenshot of the Add API permissions blade, with the steps to add API permissions highlighted."::: ## Next Steps+ You now have a public client application. In the next tutorial, we'll walk through testing and gaining access to this application through Postman. >[!div class="nextstepaction"] >[Test client application in Postman](tutorial-web-app-test-postman.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Tutorial Web App Test Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-test-postman.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Testing the FHIR API on Azure API for FHIR
Last updated 02/15/2022
In the previous tutorial, you deployed the Azure API for FHIR and registered your client application. You're now ready to test your Azure API for FHIR. ## Retrieve capability statement+ First we'll get the capability statement for your Azure API for FHIR. 1. Open Postman. 1. Retrieve the capability statement by doing `GET https://\<FHIR-SERVER-NAME>.azurehealthcareapis.com/metadata`. In the image below, the FHIR server name is **fhirserver**.
First we'll get the capability statement for your Azure API for FHIR.
Next we'll attempt to retrieve a patient. To retrieve a patient, enter `GET https://\<FHIR-SERVER-NAME>.azurehealthcareapis.com/Patient`. You'll receive a 401 Unauthorized error. This error is because you haven't proven that you should have access to patient data. ## Get patient from FHIR server+ ![Failed Patient](media/tutorial-web-app/postman-patient-authorization-failed.png) In order to gain access, you need an access token.
In order to gain access, you need an access token.
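The token request that Postman sends under the hood can be sketched as follows. This is a hedged illustration of the Azure AD (v1) client-credentials flow used in this tutorial series; only request construction is shown, nothing is sent, and every bracketed value is a placeholder.

```python
from urllib.parse import urlencode

# Sketch of the OAuth2 client-credentials token request against Azure AD,
# followed by the authorized FHIR call. All <bracketed> values are
# placeholders you would fill in from your own tenant and app registration.
tenant = "<your-tenant-id>"
token_url = f"https://login.microsoftonline.com/{tenant}/oauth2/token"
token_body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "<client-id>",
    "client_secret": "<client-secret>",
    # The audience (resource) is the FHIR server itself.
    "resource": "https://<FHIR-SERVER-NAME>.azurehealthcareapis.com",
})

# The returned token is then presented as a Bearer header on each request.
headers = {"Authorization": "Bearer <access-token>",
           "Content-Type": "application/json"}
patient_url = "https://<FHIR-SERVER-NAME>.azurehealthcareapis.com/Patient"
print(token_url)
print(patient_url)
```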
![Success Patient](media/tutorial-web-app/postman-patient-authorization-success.png) ## Post patient into FHIR server+ Now that you have access, you can create a new patient. Here's a sample of a simple patient you can add into your FHIR server. Enter this `json` into the **Body** section of Postman. ``` json
If you do the GET command to retrieve a patient again, you'll see James Tiberiou
> When sending requests to the Azure API for FHIR, you need to ensure that you've set the content-type header to `application/json` ## Troubleshooting access issues+ If you ran into issues during any of these steps, review the documents we have put together on Azure Active Directory and the Azure API for FHIR. * [Azure AD and Azure API for FHIR](azure-active-directory-identity-configuration.md) - This document outlines some of the basic principles of Azure Active Directory and how it interacts with the Azure API for FHIR. * [Access token validation](azure-api-fhir-access-token-validation.md) - This how-to guide gives more specific details on access token validation and steps to take to resolve access issues. ## Next Steps+ Now that you can successfully connect to your client application, you're ready to write your web application. >[!div class="nextstepaction"] >[Write a web application](tutorial-web-app-write-web-app.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Tutorial Web App Write Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-write-web-app.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Write Azure web application to read FHIR data in Azure API for FHIR+ Now that you're able to connect to your FHIR server and POST data, you're ready to write a web application that will read FHIR data. In this final step of the tutorial, we'll walk through writing and accessing the web application. ## Create web application
Included is the code that you can input into **https://docsupdatetracker.net/index.html**. You'll need to up
From here, you can go back to your web application resource and open the URL found on the Overview page. Sign in to see the patient James Tiberious Kirk that you previously created. ## Next Steps+ You've successfully deployed the Azure API for FHIR, registered a public client application, tested access, and created a small web application. Check out the Azure API for FHIR supported features as a next step. >[!div class="nextstepaction"] >[Supported Features](fhir-features-supported.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Use Custom Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-custom-headers.md
Previously updated : 05/03/2022 Last updated : 06/03/2022 # Add data to audit logs by using custom HTTP headers in Azure API for FHIR
client.OnBeforeRequest += (object sender, BeforeRequestEventArgs e) =>
client.Get("Patient"); ``` ## Next steps+ In this article, you learned how to add data to audit logs by using custom headers in the Azure API for FHIR. For information about Azure API for FHIR configuration settings, see >[!div class="nextstepaction"]
In this article, you learned how to add data to audit logs by using custom heade
>[Configure CORS](configure-cross-origin-resource-sharing.md) >[!div class="nextstepaction"]
->[Configure Private Link](configure-private-link.md)
+>[Configure Private Link](configure-private-link.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Use Smart On Fhir Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-smart-on-fhir-proxy.md
Previously updated : 05/03/2022 Last updated : 06/03/2022 # Tutorial: Azure Active Directory SMART on FHIR proxy
-[SMART on FHIR](https://docs.smarthealthit.org/) is a set of open specifications to integrate partner applications with FHIR servers and electronic medical records systems that have FHIR interfaces. One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for an FHIR server and start an authentication sequence.
+[SMART on FHIR](https://docs.smarthealthit.org/) is a set of open specifications to integrate partner applications with FHIR servers and electronic medical records systems that have Fast Healthcare Interoperability Resources (FHIR&#174;) interfaces. One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for a FHIR server and start an authentication sequence.
Authentication is based on OAuth2. But because SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
-This tutorial describes how to use the proxy to enable SMART on FHIR applications with the Azure API for FHIR.
+This tutorial describes how to use the proxy to enable SMART on FHIR applications with Azure API for FHIR.
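A hedged sketch of what a SMART launch request through the proxy looks like. The `AadSmartOnFhirProxy` path segment is recalled from the service documentation and should be treated as an assumption, as should the exact parameter set; only URL construction is shown.

```python
from urllib.parse import urlencode

# Assumed sketch: the SMART on FHIR proxy exposes authorize/token endpoints
# under the FHIR server's own URL. Path and parameters below are assumptions
# based on the service docs; <bracketed> values are placeholders.
fhir_base = "https://<FHIR-SERVER-NAME>.azurehealthcareapis.com"
authorize_url = f"{fhir_base}/AadSmartOnFhirProxy/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "<client-id>",
    "redirect_uri": "https://<app>/callback",
    "scope": "launch patient/*.read",
    "aud": fhir_base,  # SMART's audience parameter, translated by the proxy
    "launch": "<launch-context>",
})
print(authorize_url)
```

The point of the proxy is exactly this translation: SMART parameter names such as `aud` are mapped to the names Azure AD expects before the request is forwarded.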
## Prerequisites
In this tutorial, you've configured the Azure Active Directory SMART on FHIR pro
>[!div class="nextstepaction"] >[FHIR server samples](https://github.com/Microsoft/fhir-server-samples)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/validation-against-profiles.md
Previously updated : 02/15/2022 Last updated : 06/03/2022 # Validate FHIR resources against profiles in Azure API for FHIR
-`$validate` is an operation in FHIR that allows you to ensure that a FHIR resource conforms to the base resource requirements or a specified profile. This is a valuable operation to ensure that the data in Azure API for FHIR has the expected attributes and values.
+`$validate` is an operation in Fast Healthcare Interoperability Resources (FHIR&#174;) that allows you to ensure that a FHIR resource conforms to the base resource requirements or a specified profile. This is a valuable operation to ensure that the data in Azure API for FHIR has the expected attributes and values.
In the [store profiles in Azure API for FHIR](store-profiles-in-fhir.md) article, you walked through the basics of FHIR profiles and storing them. This article will guide you through how to use `$validate` for validating resources against profiles. For more information about FHIR profiles outside of this article, visit [HL7.org](https://www.hl7.org/fhir/profiling.html).
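The request shapes for `$validate` can be sketched as URLs (a minimal sketch; URL forms follow the FHIR specification, the base URL is a placeholder, and the profile canonical shown is the published US Core Patient profile used as an example):

```python
# Sketch of $validate request URLs against Azure API for FHIR. Nothing is
# sent here; these are the endpoints you would call.
base = "https://<FHIR-SERVER-NAME>.azurehealthcareapis.com"

# Validate an existing resource instance against the base Patient definition.
validate_existing = f"{base}/Patient/<patient-id>/$validate"

# Validate a resource carried in the request body against a named profile.
validate_profile = (
    f"{base}/Patient/$validate"
    "?profile=http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient")

print(validate_existing)
print(validate_profile)
```

In both cases the server responds with an `OperationOutcome` describing any validation issues it found.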
In this article, you learned how to validate resources against profiles using `$
>[!div class="nextstepaction"] >[Azure API for FHIR supported features](fhir-features-supported.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Azure Rbac Using Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac-using-scripts.md
Previously updated : 05/03/2022 Last updated : 06/06/2022
In this article, you learned how to grant permissions to client applications usi
>[!div class="nextstepaction"] >[Access using REST Client](./fhir/using-rest-client.md) +
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac.md
description: This article describes how to configure Azure RBAC role for FHIR.
Previously updated : 05/03/2022 Last updated : 06/06/2022
In this article, you've learned how to assign Azure roles for the FHIR service a
- [Access using Postman](./fhir/use-postman.md) - [Access using the REST Client](./fhir/using-rest-client.md) - [Access using cURL](./fhir/using-curl.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Deploy Healthcare Apis Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deploy-healthcare-apis-using-bicep.md
Previously updated : 05/03/2022 Last updated : 06/06/2022
In this article, you learned how to create Azure Health Data Services, including
>[!div class="nextstepaction"] >[What is Azure Health Data Services](healthcare-apis-overview.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Dicom Cast Access Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-access-request.md
Previously updated : 03/22/2022 Last updated : 06/03/2022
For more information about DICOMcast, see
>[!div class="nextstepaction"] >[DICOMcast overview](dicom-cast-overview.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Dicom Cast Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-overview.md
Previously updated : 03/22/2022 Last updated : 06/03/2022
To get started using the DICOM service, see
>[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md) >[!div class="nextstepaction"]
->[Using DICOMweb&trade;Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md)
+>[Using DICOMweb&trade; Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Dicom Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-overview.md
Previously updated : 03/22/2022 Last updated : 06/03/2022
This conceptual article provided you with an overview of DICOM and the DICOM ser
## Next steps
-To get started using the DICOM service, see:
+To get started using the DICOM service, see
>[!div class="nextstepaction"] >[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
For more information about how to use the DICOMweb&trade; Standard APIs with th
>[!div class="nextstepaction"] >[Using DICOMweb&trade; Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md
Previously updated : 05/03/2022 Last updated : 06/03/2022
This article described the basic steps to get started using the DICOM service. F
>[!div class="nextstepaction"] >[Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
Previously updated : 03/21/2022 Last updated : 06/03/2022
For more information about using the DICOM service, see
For more information about DICOM cast, see >[!div class="nextstepaction"]
->[DICOM cast overview](dicom-cast-overview.md)
+>[DICOM cast overview](dicom-cast-overview.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Azure Active Directory Identity Configuration Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/azure-active-directory-identity-configuration-old.md
Previously updated : 03/01/2022 Last updated : 06/03/2022
In this document, you learned some of the basic concepts involved in securing ac
>[!div class="nextstepaction"] >[Deploy FHIR service](fhir-portal-quickstart.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/carin-implementation-guide-blue-button-tutorial.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # CARIN Implementation Guide for Blue Button&#174;
The final test we'll walk through is testing [error handling](https://touchstone
In this tutorial, we walked through how to pass the CARIN IG for Blue Button tests in Touchstone. Next, you can review how to test the Da Vinci formulary tests. >[!div class="nextstepaction"]
->[DaVinci Drug Formulary](davinci-drug-formulary-tutorial.md)
+>[DaVinci Drug Formulary](davinci-drug-formulary-tutorial.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/centers-for-medicare-tutorial-introduction.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # Introduction: Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule
To test adherence to the various implementation guides, [Touchstone](https://tou
Now that you have a basic understanding of the Interoperability and Patient Access rule, implementation guides, and available testing tool (Touchstone), we'll walk through setting up FHIR service for the CARIN IG for Blue Button. >[!div class="nextstepaction"]
->[CARIN Implementation Guide for Blue Button](carin-implementation-guide-blue-button-tutorial.md)
+>[CARIN Implementation Guide for Blue Button](carin-implementation-guide-blue-button-tutorial.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in FHIR service
description: This article describes how to configure cross-origin resource sharing in FHIR service Previously updated : 03/02/2022 Last updated : 06/06/2022
To configure a CORS setting in the FHIR service, specify the following settings:
In this tutorial, we walked through how to configure a CORS setting in the FHIR service. Next, you can review how to pass the CARIN IG for Blue Button tests in Touchstone. >[!div class="nextstepaction"]
->[CARIN Implementation Guide for Blue Button&#174;](carin-implementation-guide-blue-button-tutorial.md)
+>[CARIN Implementation Guide for Blue Button&#174;](carin-implementation-guide-blue-button-tutorial.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
Previously updated : 03/01/2022 Last updated : 06/06/2022
In this article, you learned about the three steps in configuring export setting
>[!div class="nextstepaction"] >[How to export FHIR data](export-data.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
Previously updated : 04/20/2022 Last updated : 06/06/2022
In this article, you've learned the FHIR service supports $import operation and
>[Configure export settings and set up a storage account](configure-export-data.md) >[!div class="nextstepaction"]
->[Copy data from FHIR service to Azure Synapse Analytics](copy-to-synapse.md)
+>[Copy data from FHIR service to Azure Synapse Analytics](copy-to-synapse.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md
Previously updated : 03/21/2022 Last updated : 06/06/2022
In this article, you've learned about the $convert-data endpoint and customize-c
>[!div class="nextstepaction"] >[Export data](export-data.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/copy-to-synapse.md
Previously updated : 03/16/2022 Last updated : 06/06/2022 # Copy data from FHIR service to Azure Synapse Analytics
In this article, you learned three different ways to copy your FHIR data into Sy
Next, you can learn about how you can de-identify your FHIR data while exporting it to Synapse in order to protect PHI. >[!div class="nextstepaction"]
->[Exporting de-identified data](./de-identified-export.md)
+>[Exporting de-identified data](./de-identified-export.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-drug-formulary-tutorial.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # Tutorial for Da Vinci Drug Formulary
The second test is the [query capabilities](https://touchstone.aegis.net/touchst
In this tutorial, we walked through how to pass the Da Vinci Payer Data Exchange US Drug Formulary in Touchstone. Next, you can learn how to test the Da Vinci PDex Implementation Guide in Touchstone. >[!div class="nextstepaction"]
->[Da Vinci PDex](davinci-pdex-tutorial.md)
+>[Da Vinci PDex](davinci-pdex-tutorial.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-pdex-tutorial.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # Da Vinci PDex
The final test we'll walk through is testing patient-everything. For this test,
In this tutorial, we walked through how to pass the Payer Exchange tests in Touchstone. Next, you can learn how to test the Da Vinci PDEX Payer Network (Plan-Net) Implementation Guide. >[!div class="nextstepaction"]
->[Da Vinci Plan Net](davinci-plan-net.md)
+>[Da Vinci Plan Net](davinci-plan-net.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Davinci Plan Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-plan-net.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # Da Vinci Plan Net
In this tutorial, we walked through setting up the Azure API for FHIR to pass th
>[!div class="nextstepaction"] >[Supported features](fhir-features-supported.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/de-identified-export.md
Previously updated : 02/15/2022 Last updated : 06/06/2022 # Exporting de-identified data
In this article, you've learned how to set up and use de-identified export. For
>[!div class="nextstepaction"] >[Export data](export-data.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
Previously updated : 02/15/2022 Last updated : 06/06/2022 # How to export FHIR data
In this article, you've learned how to export FHIR resources using the $export c
>[!div class="nextstepaction"] >[Copy data from the FHIR service to Azure Synapse Analytics](copy-to-synapse.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
Previously updated : 03/01/2022 Last updated : 06/06/2022
We have a collection of reference architectures available on the [Health Archite
In this article, you've learned the answers to frequently asked questions about FHIR service. To see the frequently asked questions about FHIR service in Azure API for FHIR, see >[!div class="nextstepaction"]
->[FAQs about Azure API for FHIR](../azure-api-for-fhir/fhir-faq.yml)
+>[FAQs about Azure API for FHIR](../azure-api-for-fhir/fhir-faq.yml)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-features-supported.md
Previously updated : 05/05/2022 Last updated : 06/06/2022
In this article, you've read about the supported FHIR features in the FHIR servi
>[!div class="nextstepaction"] >[Deploy FHIR service](fhir-portal-quickstart.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-portal-quickstart.md
description: This article teaches users how to deploy a FHIR service in the Azur
Previously updated : 05/03/2022 Last updated : 06/06/2022
In this article, you learned how to deploy FHIR service within Azure Health Data
>[!div class="nextstepaction"] >[Access FHIR service using Postman](../fhir/use-postman.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-rest-api-capabilities.md
Previously updated : 03/09/2022 Last updated : 06/06/2022
healthcare-apis Fhir Service Access Token Validation Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-access-token-validation-old.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # FHIR service access token validation
In this article, you learned about the FHIR service access token validation step
>[!div class="nextstepaction"] >[Supported FHIR Features](fhir-portal-quickstart.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Service Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-autoscale.md
Previously updated : 05/03/2022 Last updated : 06/06/2022
In this article, you've learned about the FHIR service autoscale feature in Azur
>[!div class="nextstepaction"] >[Supported FHIR Features](fhir-features-supported.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Service Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-diagnostic-logs.md
Previously updated : 05/03/2022 Last updated : 06/06/2022
healthcare-apis Fhir Service Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-resource-manager-template.md
Previously updated : 05/03/2022 Last updated : 06/06/2022 # Deploy a FHIR service within Azure Health Data Services - using ARM template
az group delete --name $resourceGroupName
In this quickstart guide, you've deployed the FHIR service within Azure Health Data Services using an ARM template. For more information about FHIR service supported features, see. >[!div class="nextstepaction"]
->[Supported FHIR Features](fhir-features-supported.md)
+>[Supported FHIR Features](fhir-features-supported.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Versioning Policy And History Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-versioning-policy-and-history-management.md
Previously updated : 05/06/2022 Last updated : 06/06/2022
In this article, you learned how to purge the history for resources in the FHIR
>[!div class="nextstepaction"] >[Purge history operation](purge-history.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Get Started With Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/get-started-with-fhir.md
Previously updated : 05/03/2022 Last updated : 06/06/2022
This article described the basic steps to get started using the FHIR service. Fo
>[!div class="nextstepaction"] >[Deploy a FHIR service within Azure Health Data Services](fhir-portal-quickstart.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-do-custom-search.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # Defining custom search parameters
In this article, you've learned how to create a search parameter. Next you can
>[!div class="nextstepaction"] >[How to run a reindex job](how-to-run-a-reindex.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-run-a-reindex.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # Running a reindex job
In this article, you've learned how to start a reindex job. To learn how to defi
>[!div class="nextstepaction"] >[Defining custom search parameters](how-to-do-custom-search.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
Previously updated : 04/22/2022 Last updated : 06/06/2022 # Bulk-import FHIR data (Preview)
-The bulk-import feature enables importing FHIR data to the FHIR server at high throughput using the $import operation. This feature is suitable for initial data load into the FHIR server.
+The bulk-import feature enables importing Fast Healthcare Interoperability Resources (FHIR&#174;) data to the FHIR server at high throughput using the $import operation. This feature is suitable for initial data load into the FHIR server.
> [!NOTE] > You must have the **FHIR Data Importer** role on the FHIR server to use $import.
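As a sketch, an `$import` call is commonly shaped as below. The parameter names (`inputFormat`, `mode`, `input`) and the example blob URL are assumptions drawn from the public preview documentation as I recall it — confirm against the linked article before use:

```
POST {{FHIR_URL}}/$import
Prefer: respond-async
Content-Type: application/fhir+json

{
  "resourceType": "Parameters",
  "parameter": [
    { "name": "inputFormat", "valueString": "application/fhir+ndjson" },
    { "name": "mode", "valueString": "InitialLoad" },
    { "name": "input", "part": [
      { "name": "type", "valueString": "Patient" },
      { "name": "url", "valueUri": "https://example.blob.core.windows.net/fhirimport/Patient.ndjson" }
    ]}
  ]
}
```

The `Prefer: respond-async` header requests asynchronous processing, which is the usual pattern for FHIR bulk operations.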
In this article, you've learned about how the Bulk import feature enables import
>[Configure export settings and set up a storage account](configure-export-data.md) >[!div class="nextstepaction"]
->[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md)
+>[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # Overview of FHIR search
-The FHIR specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<WORKSPACE NAME>-<ACCOUNT-NAME>.fhir.azurehealthcareapis.com`. In the examples, we'll use the placeholder {{FHIR_URL}} for this URL.
+The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<WORKSPACE NAME>-<ACCOUNT-NAME>.fhir.azurehealthcareapis.com`. In the examples, we'll use the placeholder {{FHIR_URL}} for this URL.
FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all patients in the database, you could use the following request:
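The pull-all-patients request, and a parameterized variant, can be sketched as follows (using the article's `{{FHIR_URL}}` placeholder; `name` and `_count` are standard FHIR search parameters, and in practice each request carries an Azure AD bearer token):

```
GET {{FHIR_URL}}/Patient
GET {{FHIR_URL}}/Patient?name=Jane&_count=10
```

The second request narrows the result to patients whose name matches "Jane" and caps the returned bundle at 10 entries.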
Now that you've learned about the basics of search, see the search samples page
>[!div class="nextstepaction"] >[FHIR search examples](search-samples.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md
Previously updated : 05/16/2022 Last updated : 06/06/2022
To start working with the FHIR service, follow the 5-minute quickstart to deploy
>[!div class="nextstepaction"] >[Deploy FHIR service](fhir-portal-quickstart.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/patient-everything.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # Using Patient-everything in FHIR service
-The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a view of all resources related to a patient. This operation can be useful to give patients' access to their entire record or for a provider or other user to perform a bulk data download related to a patient. According to the FHIR specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the FHIR service in Azure Health Data Services(hereby called FHIR service), Patient-everything is available to pull data related to a specific patient.
+The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a view of all resources related to a patient. This operation can be useful to give patients access to their entire record or for a provider or other user to perform a bulk data download related to a patient. According to the Fast Healthcare Interoperability Resources (FHIR&#174;) specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the FHIR service in Azure Health Data Services (hereby called FHIR service), Patient-everything is available to pull data related to a specific patient.
## Use Patient-everything To call Patient-everything, use the following command:
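Based on the operation's definition in the FHIR specification, the call takes this shape, where `{id}` stands for a Patient resource ID (the article's exact command may add optional parameters such as `start` and `end` to bound the date range):

```
GET {{FHIR_URL}}/Patient/{id}/$everything
```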
Now that you know how to use the Patient-everything operation, you can learn abo
>[!div class="nextstepaction"] >[Overview of FHIR search](overview-of-search.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/purge-history.md
Previously updated : 05/05/2022 Last updated : 06/06/2022 # Purge history operation
-`$purge-history` is an operation that allows you to delete the history of a single FHIR resource. This operation isn't defined in the FHIR specification, but it's useful for [history management](fhir-versioning-policy-and-history-management.md) in large FHIR service instances.
+`$purge-history` is an operation that allows you to delete the history of a single Fast Healthcare Interoperability Resources (FHIR&#174;) resource. This operation isn't defined in the FHIR specification, but it's useful for [history management](fhir-versioning-policy-and-history-management.md) in large FHIR service instances.
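A sketch of invoking the operation on a single resource instance — the HTTP verb and path shown here are assumptions based on the service's conventions for custom operations; the linked history-management article documents the exact form:

```
DELETE {{FHIR_URL}}/{resource-type}/{resource-id}/$purge-history
```

This removes the historical versions of one resource while leaving its current version in place.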
## Overview of purge history
In this article, you learned how to purge the history for resources in the FHIR
>[Supported FHIR features](fhir-features-supported.md) >[!div class="nextstepaction"]
->[FHIR REST API capabilities for Azure Health Data Services FHIR service](fhir-rest-api-capabilities.md)
+>[FHIR REST API capabilities for Azure Health Data Services FHIR service](fhir-rest-api-capabilities.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/search-samples.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # FHIR search examples
-Below are some examples of using FHIR search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a `POST` request. For more information about search, see [Overview of FHIR Search](overview-of-search.md).
+Below are some examples of using Fast Healthcare Interoperability Resources (FHIR&#174;) search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a `POST` request. For more information about search, see [Overview of FHIR Search](overview-of-search.md).
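As an illustration of the chained and reverse-chained searches mentioned above, the following sketches use the syntax defined in the FHIR specification (the parameter values `Sarah` and `Jane` are illustrative only):

```
GET {{FHIR_URL}}/Patient?general-practitioner:Practitioner.name=Sarah
GET {{FHIR_URL}}/Practitioner?_has:Patient:general-practitioner:name=Jane
```

The first finds patients whose general practitioner is named Sarah; the second reverses the chain with `_has`, finding practitioners who are referenced as general practitioner by a patient named Jane.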
## Search result parameters
In this article, you learned about how to search using different search paramete
>[!div class="nextstepaction"] >[Overview of FHIR Search](overview-of-search.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/store-profiles-in-fhir.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # Store profiles in FHIR service
-HL7 FHIR defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context that FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications.
+HL7 Fast Healthcare Interoperability Resources (FHIR&#174;) defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context in which FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications.
[FHIR profile](https://www.hl7.org/fhir/profiling.html) allows you to narrow down and customize resource definitions using constraints and extensions. The FHIR service in Azure Health Data Services (hereby called FHIR service) allows validating resources against profiles to see if the resources conform to the profiles. This article guides you through the basics of FHIR profiles and how to store them. For more information about FHIR profiles outside of this article, visit [HL7.org](https://www.hl7.org/fhir/profiling.html).
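Because profiles are themselves FHIR resources (`StructureDefinition`), storing one is an ordinary REST create. A minimal sketch, using a `{{FHIR_URL}}` placeholder for the service endpoint and eliding the profile's body:

```
POST {{FHIR_URL}}/StructureDefinition
Content-Type: application/fhir+json

{ "resourceType": "StructureDefinition", ... }
```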
In this article, you've learned about FHIR profiles. Next, you'll learn how you
>[!div class="nextstepaction"] >[Validate FHIR resources against profiles](validation-against-profiles.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Tutorial Member Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/tutorial-member-match.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # $member-match operation in FHIR service
In this guide, you've learned about the $member-match operation. Next, you can l
>[!div class="nextstepaction"] >[DaVinci PDex](davinci-pdex-tutorial.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-postman.md
Previously updated : 05/03/2022 Last updated : 06/06/2022
In this article, you learned how to access the FHIR service in Azure Health Data
>[!div class="nextstepaction"] >[What is FHIR service?](overview.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Using Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-curl.md
Previously updated : 03/22/2022 Last updated : 06/06/2022
In this article, you learned how to access Azure Health Data Services data using
To learn about how to access Azure Health Data Services data using REST Client extension in Visual Studio Code, see >[!div class="nextstepaction"]
->[Access Azure Health Data Services using REST Client](using-rest-client.md)
+>[Access Azure Health Data Services using REST Client](using-rest-client.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Using Rest Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-rest-client.md
Previously updated : 03/01/2022 Last updated : 06/06/2022
In this article, you learned how to access Azure Health Data Services data using
To learn about how to validate FHIR resources against profiles in Azure Health Data Services, see >[!div class="nextstepaction"]
->[Validate FHIR resources against profiles in Azure Health Data Services](validation-against-profiles.md)
+>[Validate FHIR resources against profiles in Azure Health Data Services](validation-against-profiles.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/validation-against-profiles.md
Previously updated : 03/01/2022 Last updated : 06/06/2022 # Validate FHIR resources against profiles in Azure Health Data Services
-`$validate` is an operation in FHIR that allows you to ensure that a FHIR resource conforms to the base resource requirements or a specified profile. This is a valuable operation to ensure that the data in the FHIR server has the expected attributes and values.
+`$validate` is an operation in Fast Healthcare Interoperability Resources (FHIR&#174;) that allows you to ensure that a FHIR resource conforms to the base resource requirements or a specified profile. This is a valuable operation to ensure that the data in the FHIR server has the expected attributes and values.
In the [store profiles in the FHIR service](store-profiles-in-fhir.md) article, you walked through the basics of FHIR profiles and storing them. The FHIR service in Azure Health Data Services (hereby called the FHIR service) allows validating resources against profiles to see if the resources conform to the profiles. This article will guide you through how to use `$validate` for validating resources against profiles. For more information about FHIR profiles outside of this article, visit [HL7.org](https://www.hl7.org/fhir/profiling.html).
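A minimal sketch of a `$validate` call as defined in the FHIR specification — the resource to check goes in the request body, and the resource type in the path must match it:

```
POST {{FHIR_URL}}/Patient/$validate
Content-Type: application/fhir+json

{ "resourceType": "Patient", ... }
```

Per the specification, the operation's `profile` parameter can additionally name a StructureDefinition to validate against, rather than only the base resource requirements.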
In this article, you learned how to validate resources against profiles using `$
>[!div class="nextstepaction"] >[Supported FHIR features](fhir-features-supported.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/get-access-token.md
Previously updated : 05/03/2022 Last updated : 06/06/2022 ms.devlang: azurecli
In this article, you learned how to obtain an access token for the FHIR service
>[!div class="nextstepaction"] >[Access DICOM service using cURL](dicom/dicomweb-standard-apis-curl.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Get Started With Health Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/get-started-with-health-data-services.md
Previously updated : 05/17/2022 Last updated : 06/06/2022
This article described the basic steps to get started using Azure Health Data Se
>[!div class="nextstepaction"] >[Frequently asked questions about Azure Health Data Services](healthcare-apis-faqs.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Github Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/github-projects.md
Previously updated : 03/22/2022 Last updated : 06/06/2022 # GitHub Projects
healthcare-apis Healthcare Apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-configure-private-link.md
Previously updated : 05/03/2022 Last updated : 06/06/2022
Private Link enables you to access Azure Health Data Services over a private end
## Prerequisites
-Before creating a private endpoint, the following Azure resources must be created first:
+Before you create a private endpoint, the following Azure resources must be created first:
- **Resource Group** – The Azure resource group that will contain the virtual network and private endpoint.
- **Workspace** – This is a logical container for FHIR and DICOM service instances.
In this article, you've learned how to configure Private Link for Azure Health D
>[!div class="nextstepaction"] >[Overview of Azure Health Data Services](healthcare-apis-overview.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Healthcare Apis Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-faqs.md
Previously updated : 03/22/2022 Last updated : 06/06/2022
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Previously updated : 03/22/2022 Last updated : 06/03/2022
Azure Health Data Services provides the following benefits:
* Empower new workloads to leverage PHI by enabling the data to be collected and accessed in one place, in a consistent way. * Discover new insight by bringing disparate PHI together and connecting it end-to-end with tools for machine learning, analytics, and AI. * Build on a trusted cloud with confidence in how Protected Health Information is managed, stored, and made available.
-The new Microsoft Azure Health Data Services will, in addition to FHIR, support other healthcare industry data standards, like DICOM, extending healthcare data interoperability. The business model and infrastructure platform have been redesigned to accommodate the expansion and introduction of different and future healthcare data standards. Customers can use health data of different types across healthcare standards under the same compliance umbrella. Tools have been built into the managed service that allow customers to transform data from legacy or device proprietary formats, to FHIR. Some of these tools have been previously developed and open-sourced; others will be net new.
+The new Microsoft Azure Health Data Services will, in addition to Fast Healthcare Interoperability Resources (FHIR&#174;), support other healthcare industry data standards, like DICOM, extending healthcare data interoperability. The business model and infrastructure platform have been redesigned to accommodate the expansion and introduction of different and future healthcare data standards. Customers can use health data of different types across healthcare standards under the same compliance umbrella. Tools have been built into the managed service that allow customers to transform data from legacy or device proprietary formats, to FHIR. Some of these tools have been previously developed and open-sourced; others will be net new.
Azure Health Data Services enables you to: * Quickly connect disparate health data sources and formats such as structured, imaging, and device data and normalize it to be persisted in the cloud.
To start working with the Azure Health Data Services, follow the 5-minute quick
> [!div class="nextstepaction"] > [Workspace overview](workspace-overview.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Healthcare Apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-quickstart.md
Previously updated : 05/03/2022 Last updated : 06/06/2022
For more information about Azure Health Data Services workspace, see
>[!div class="nextstepaction"] >[Workspace overview](workspace-overview.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/logging.md
Previously updated : 03/22/2022 Last updated : 06/06/2022
For more information about service logs and metrics for the DICOM service and Me
>[!div class="nextstepaction"] >[How to display MedTech service metrics](./../healthcare-apis/iot/how-to-display-metrics.md)
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application.md
Previously updated : 05/27/2022 Last updated : 06/06/2022
In this article, you learned how to register a client application in the Azure A
>[!div class="nextstepaction"] >[Overview of Azure Health Data Services](healthcare-apis-overview.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/06/2022
page lists the **compliance domains** and **security controls** for the FHIR ser
- For more information, see [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md). - For more information, see built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/workspace-overview.md
Previously updated : 03/28/2022 Last updated : 06/06/2022
To start working with Azure Health Data Services, follow the 5-minute quick star
>[!div class="nextstepaction"] >[Deploy workspace in the Azure portal](healthcare-apis-quickstart.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
iot-central Tutorial In Store Analytics Export Data Visualize Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights.md
Previously updated : 11/12/2019 Last updated : 06/07/2022 # Tutorial: Export data from Azure IoT Central and visualize insights in Power BI In the two previous tutorials, you created and customized an IoT Central application using the **In-store analytics - checkout** application template. In this tutorial, you configure your IoT Central application to export telemetry collected from the devices. You then use Power BI to create a custom dashboard for the store manager to visualize the insights derived from the telemetry.
-In this tutorial, you will learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"] > * Configure an IoT Central application to export telemetry to an event hub.
Before you create your event hub and logic app, you need to create a resource gr
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the left navigation, select **Resource groups**. Then select **Add**. 1. For **Subscription**, select the name of the Azure subscription you used to create your IoT Central application.
-1. For the **Resource group** name, enter _retail-store-analysis_*.
+1. For the **Resource group** name, enter _retail-store-analysis_.
1. For the **Region**, select the same region you chose for the IoT Central application. 1. Select **Review + Create**. 1. On the **Review + Create** page, select **Create**.
Before you can configure the retail monitoring application to export telemetry,
1. In the portal, navigate to the **retail-store-analysis** resource group. Wait for the deployment to complete. You may need to select **Refresh** to update the deployment status. You can also check the status of the event hub namespace creation in the **Notifications**. 1. In the **retail-store-analysis** resource group, select the **Event Hubs Namespace**. You see the home page for your **Event Hubs Namespace** in the portal.
-Now you have an **Event Hubs Namespace**, you can create an **Event Hub** to use with your IoT Central application:
+You need a connection string with send permissions to connect from IoT Central. To create a connection string:
+
+1. In your Event Hubs namespace in the Azure portal, select **Shared access policies**. The list of policies includes the default **RootManageSharedAccessKey** policy.
+1. Select **+ Add**.
+1. Enter *SendPolicy* as the policy name, select **Send**, and then select **Create**.
+1. Select **SendPolicy** in the list of policies.
+1. Make a note of the **Connection string-primary key** value. You use it when you configure the export destination in IoT Central.
+
+You need a connection string with manage and listen permissions to connect to the event hub from your logic app. To retrieve a connection string:
+
+1. In your Event Hubs namespace in the Azure portal, select **Shared access policies**. The list of policies includes the default **RootManageSharedAccessKey** policy.
+1. Select **RootManageSharedAccessKey** in the list of policies.
+1. Make a note of the **Connection string-primary key** value. You use it when you configure the logic app to fetch telemetry from your event hub.
+
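Both values you noted are standard Event Hubs connection strings: semicolon-separated `key=value` pairs that identify the namespace endpoint, the policy name, and the policy key. A minimal sketch of pulling those parts out (the namespace name and key below are illustrative placeholders, not real credentials):

```python
# Hedged sketch: split an Event Hubs connection string into its parts.
# The namespace name and key are illustrative placeholders.
conn_str = (
    "Endpoint=sb://retail-store-analysis-ns.servicebus.windows.net/;"
    "SharedAccessKeyName=SendPolicy;"
    "SharedAccessKey=aBcD1234exampleKey=="
)

# Split on ";" to get each pair, then on the first "=" to keep the
# base64 key (which itself contains "=") intact.
parts = dict(pair.split("=", 1) for pair in conn_str.split(";") if pair)

print(parts["SharedAccessKeyName"])  # which policy the string belongs to
print(parts["Endpoint"])             # the namespace endpoint
```

The policy name embedded in the string tells you which permissions it grants, which is why the send-only and manage/listen strings are kept separate in this tutorial.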
+Now that you have an **Event Hubs Namespace**, you can create an event hub to use with your IoT Central application:
1. On the home page for your **Event Hubs Namespace** in the portal, select **+ Event Hub**. 1. On the **Create Event Hub** page, enter _store-telemetry_ as the name, and then select **Create**. You now have an event hub you can use when you configure data export from your IoT Central application: ## Configure data export
Now you have an event hub, you can configure your **In-store analytics - checkou
1. Sign in to your **In-store analytics - checkout** IoT Central application. 1. Select **Data export** in the left pane.
-1. Enter _Telemetry export_ as the **export Name**.
+1. Select **+ New export**.
+1. Enter _Telemetry export_ as the **export name**.
1. Select **Telemetry** as type of data to export.
-1. Select **create new one** under Destinations.
-1. Enter **Destination name**
-1. Select your **Event Hubs namespace**.
-1. Select the **store-telemetry** event hub.
-1. Switch off **Devices** and **Device Templates** in the **Data to export** section.
-1. Select **Save**.
+1. In the **Destinations** section, select **create a new one**.
+1. Enter _Store data event hub_ as the **Destination name**.
+1. Select **Azure Event Hubs** as the destination type.
+1. Select **Connection string** as the authorization type.
+1. Paste in the connection string for the **SendPolicy** that you made a note of previously.
+1. Enter *store-telemetry* as the **Event Hub**.
+1. Select **Create** and then **Save**.
+1. On the **Telemetry export** page, wait for the export status to change to **Healthy**.
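Each message the export writes to the event hub is a JSON body carrying the telemetry plus metadata such as the device ID and enqueued time. A hedged sketch of parsing one such message (field names follow the **Parse_Telemetry** schema used later in this tutorial; the device ID, template ID, and values are illustrative):

```python
import json

# Hedged sketch: an illustrative example of the JSON body that the
# telemetry export writes to the event hub for one message.
raw = """
{
  "deviceId": "p7g7h8qqax",
  "enqueuedTime": "2022-06-07T10:00:00Z",
  "templateId": "example-template-id",
  "telemetry": { "temperature": 21.5, "humidity": 48.0 }
}
"""

msg = json.loads(raw)
print(msg["deviceId"], msg["telemetry"]["temperature"])
```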
The data export may take a few minutes to start sending telemetry to your event hub. You can see the status of the export on the **Data exports** page: ## Create the Power BI datasets
Your Power BI dashboard will display data from your retail monitoring applicatio
1. Sign in to your **Power BI** account. 1. Select **Workspaces**, and then select **Create a workspace**.
-1. On the **Create a workspace** page, enter _In-store analytics - checkout_ as the **Workspace name**.
-1. Scroll to the bottom of the **Welcome to the In-store analytics - checkout workspace** page, and select **Skip**.
-1. On the workspace page, select **Create > Streaming dataset**.
+1. On the **Create a workspace** page, enter _In-store analytics - checkout_ as the **Workspace name**. Select **Save**.
+1. On the workspace page, select **+ New > Streaming dataset**.
1. On the **New streaming dataset** page, choose **API**, and then select **Next**. 1. Enter _Zone 1 sensor_ as the **Dataset name**. 1. Enter the three **Values from stream** in the following table:
Your Power BI dashboard will display data from your retail monitoring applicatio
You now have two streaming datasets. The logic app will route telemetry from the two environmental sensors connected to your **In-store analytics - checkout** application to these two datasets: - This solution uses one streaming dataset for each sensor because it's not possible to apply filters to streaming data in Power BI.
You also need a streaming dataset for the occupancy telemetry:
You now have a third streaming dataset that stores values from the simulated occupancy sensor. This sensor reports the queue length at the two checkouts in the store, and how long customers are waiting in these queues: ## Create a logic app
Before you create the logic app, you need the device IDs of the two RuuviTag sen
1. Sign in to your **In-store analytics - checkout** IoT Central application. 1. Select **Devices** in the left pane. Then select **RuuviTag**.
-1. Make a note of the **Device IDs**. In the following screenshot, the IDs are **f5dcf4ac32e8** and **e29ffc8d5326**:
+1. Make a note of the **Device IDs**. In the following screenshot, the IDs are **p7g7h8qqax** and **2ngij10dibe**:
The following steps show you how to create the logic app in the Azure portal:
The following steps show you how to create the logic app in the Azure portal:
* Enter a unique name for your logic app such as _yourname-retail-store-analysis_. * Select the same **Subscription** you used to create your IoT Central application. * Select the **retail-store-analysis** resource group.
- * Select the **Type** as **Consumption**.
+ * Select the **Type** as **Consumption**.
* Select the same location you used for your IoT Central application. * Select **Create**. You may have to wait a few minutes for the system to provision the resources. 1. In the Azure portal, navigate to your new logic app. 1. On the **Logic Apps Designer** page, scroll down and select **Blank Logic App**. 1. In **Search connectors and triggers**, enter _Event Hubs_. 1. In **Triggers**, select **When events are available in Event Hub**.
-1. Enter _Store telemetry_ as the **Connection name**, and select your **Event Hubs Namespace**.
-1. Select the **RootManageSharedAccess** policy, and select **Create**.
+1. Enter _Store telemetry_ as the **Connection name**.
+1. Select **Access key** as the authentication type.
+1. Paste in the event hub connection string for the **RootManageSharedAccessKey** policy you made a note of previously, and select **Create**.
1. In the **When events are available in Event Hub** action: * In **Event Hub name**, select **store-telemetry**. * In **Content type**, select **application/json**.
The following steps show you how to create the logic app in the Azure portal:
To add the logic to your logic app design, select **Code view**:
-1. Replace `"actions": {},` with the following JSON. Replace the two placeholders `[YOUR RUUVITAG DEVICE ID 1]` and `[YOUR RUUVITAG DEVICE ID 2]` with the IDs you noted of your two RuuviTag devices:
+1. Replace `"actions": {},` with the following JSON. Then replace the two placeholders `[YOUR RUUVITAG DEVICE ID 1]` and `[YOUR RUUVITAG DEVICE ID 2]` with the IDs of your two RuuviTag devices. You made a note of these IDs previously:
```json "actions": {
To add the logic to your logic app design, select **Code view**:
"runAfter": {}, "type": "InitializeVariable" },
- "Initialize_Interface_ID_variable": {
- "inputs": {
- "variables": [
- {
- "name": "InterfaceID",
- "type": "String",
- "value": "Other"
- }
- ]
- },
- "runAfter": {
- "Initialize_Device_ID_variable": [
- "Succeeded"
- ]
- },
- "type": "InitializeVariable"
- },
- "Parse_Properties": {
+ "Parse_Telemetry": {
"inputs": {
- "content": "@triggerBody()?['Properties']",
+ "content": "@triggerBody()?['ContentData']",
"schema": { "properties": {
- "iothub-connection-auth-generation-id": {
+ "deviceId": {
"type": "string" },
- "iothub-connection-auth-method": {
+ "enqueuedTime": {
"type": "string" },
- "iothub-connection-device-id": {
- "type": "string"
- },
- "iothub-enqueuedtime": {
- "type": "string"
- },
- "iothub-interface-name": {
- "type": "string"
- },
- "iothub-message-source": {
- "type": "string"
- },
- "x-opt-enqueued-time": {
- "type": "string"
+ "telemetry": {
+ "properties": {
+ "DwellTime1": {
+ "type": "number"
+ },
+ "DwellTime2": {
+ "type": "number"
+ },
+ "count1": {
+ "type": "integer"
+ },
+ "count2": {
+ "type": "integer"
+ },
+ "humidity": {
+ "type": "number"
+ },
+ "temperature": {
+ "type": "number"
+ }
+ },
+ "type": "object"
},
- "x-opt-offset": {
+ "templateId": {
"type": "string"
- },
- "x-opt-sequence-number": {
- "type": "integer"
} }, "type": "object" } }, "runAfter": {
- "Initialize_Interface_ID_variable": [
- "Succeeded"
- ]
- },
- "type": "ParseJson"
- },
- "Parse_Telemetry": {
- "inputs": {
- "content": "@triggerBody()?['ContentData']",
- "schema": {
- "properties": {
- "DwellTime1": {
- "type": "number"
- },
- "DwellTime2": {
- "type": "number"
- },
- "count1": {
- "type": "number"
- },
- "count2": {
- "type": "number"
- },
- "humidity": {
- "type": "number"
- },
- "temperature": {
- "type": "number"
- }
- },
- "type": "object"
- }
- },
- "runAfter": {
- "Initialize_Interface_ID_variable": [
+ "Initialize_Device_ID_variable": [
"Succeeded" ] },
To add the logic to your logic app design, select **Code view**:
"Set_Device_ID_variable": { "inputs": { "name": "DeviceID",
- "value": "@body('Parse_Properties')?['iothub-connection-device-id']"
+ "value": "@body('Parse_Telemetry')?['deviceId']"
}, "runAfter": {
- "Parse_Properties": [
- "Succeeded"
- ]
- },
- "type": "SetVariable"
- },
- "Set_Interface_ID_variable": {
- "inputs": {
- "name": "InterfaceID",
- "value": "@body('Parse_Properties')?['iothub-interface-name']"
- },
- "runAfter": {
- "Set_Device_ID_variable": [
+ "Parse_Telemetry": [
"Succeeded" ] },
To add the logic to your logic app design, select **Code view**:
"Switch_by_DeviceID": { "cases": { "Occupancy": {
- "actions": {
- "Switch_by_InterfaceID": {
- "cases": {
- "Dwell_Time_interface": {
- "actions": {},
- "case": "RS40_Occupancy_Sensor_v2_1l0"
- },
- "People_Count_interface": {
- "actions": {},
- "case": "RS40_Occupancy_Sensor_iv"
- }
- },
- "default": {
- "actions": {}
- },
- "expression": "@variables('InterfaceID')",
- "runAfter": {},
- "type": "Switch"
- }
- },
+ "actions": {},
"case": "Occupancy" }, "Zone 2 environment": {
To add the logic to your logic app design, select **Code view**:
}, "expression": "@variables('DeviceID')", "runAfter": {
- "Parse_Telemetry": [
- "Succeeded"
- ],
- "Set_Interface_ID_variable": [
+ "Set_Device_ID_variable": [
"Succeeded" ] },
To add the logic to your logic app design, select **Code view**:
1. Select **Save** and then select **Designer** to see the visual version of the logic you added:
- :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/logic-app.png" alt-text="Logic app design.":::
+ :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/logic-app.png" alt-text="Screenshot of the Logic Apps Designer in the Azure portal with the initial logic app.":::
1. Select **Switch by DeviceID** to expand the action. Then select **Zone 1 environment**, and select **Add an action**.
-1. In **Search connectors and actions**, enter **Power BI**, and then press **Enter**.
-1. Select the **Add rows to a dataset (preview)** action.
+1. In **Search connectors and actions**, enter **Add rows to a dataset**.
+1. Select the Power BI **Add rows to a dataset** action.
1. Select **Sign in** and follow the prompts to sign in to your Power BI account. 1. After the sign-in process is complete, in the **Add rows to a dataset** action: * Select **In-store analytics - checkout** as the workspace. * Select **Zone 1 sensor** as the dataset. * Select **RealTimeData** as the table. * Select **Add new parameter** and then select the **Timestamp**, **Humidity**, and **Temperature** fields.
- * Select the **Timestamp** field, and then select **x-opt-enqueuedtime** from the **Dynamic content** list.
+ * Select the **Timestamp** field, and then select **enqueuedTime** from the **Dynamic content** list.
* Select the **Humidity** field, and then select **See more** next to **Parse Telemetry**. Then select **humidity**. * Select the **Temperature** field, and then select **See more** next to **Parse Telemetry**. Then select **temperature**.
- * Select **Save** to save your changes. The **Zone 1 environment** action looks like the following screenshot:
- :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/zone-1-action.png" alt-text="Zone 1 environment.":::
+
+ Select **Save** to save your changes. The **Zone 1 environment** action looks like the following screenshot:
+
+ :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/zone-1-action.png" alt-text="Screenshot that shows the zone one environment action in the Logic Apps Designer.":::
+ 1. Select the **Zone 2 environment** action, and select **Add an action**.
-1. In **Search connectors and actions**, enter **Power BI**, and then press **Enter**.
-1. Select the **Add rows to a dataset (preview)** action.
+1. In **Search connectors and actions**, enter **Add rows to a dataset**.
+1. Select the Power BI **Add rows to a dataset** action.
1. In the **Add rows to a dataset 2** action: * Select **In-store analytics - checkout** as the workspace. * Select **Zone 2 sensor** as the dataset. * Select **RealTimeData** as the table. * Select **Add new parameter** and then select the **Timestamp**, **Humidity**, and **Temperature** fields.
- * Select the **Timestamp** field, and then select **x-opt-enqueuedtime** from the **Dynamic content** list.
+ * Select the **Timestamp** field, and then select **enqueuedTime** from the **Dynamic content** list.
* Select the **Humidity** field, and then select **See more** next to **Parse Telemetry**. Then select **humidity**. * Select the **Temperature** field, and then select **See more** next to **Parse Telemetry**. Then select **temperature**.
- Select **Save** to save your changes. The **Zone 2 environment** action looks like the following screenshot:
- :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/zone-2-action.png" alt-text="Zone 2 environment.":::
-1. Select the **Occupancy** action, and then select the **Switch by Interface ID** action.
-1. Select the **Dwell Time interface** action, and select **Add an action**.
-1. In **Search connectors and actions**, enter **Power BI**, and then press **Enter**.
-1. Select the **Add rows to a dataset (preview)** action.
-1. In the **Add rows to a dataset** action:
- * Select **In-store analytics - checkout** as the workspace.
- * Select **Occupancy Sensor** as the dataset.
- * Select **RealTimeData** as the table.
- * Select **Add new parameter** and then select the **Timestamp**, **Dwell Time 1**, and **Dwell Time 2** fields.
- * Select the **Timestamp** field, and then select **x-opt-enqueuedtime** from the **Dynamic content** list.
- * Select the **Dwell Time 1** field, and then select **See more** next to **Parse Telemetry**. Then select **DwellTime1**.
- * Select the **Dwell Time 2** field, and then select **See more** next to **Parse Telemetry**. Then select **DwellTime2**.
- * Select **Save** to save your changes. The **Dwell Time interface** action looks like the following screenshot:
- :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/occupancy-action-1.png" alt-text="Dwell Time interface.":::
-1. Select the **People Count interface** action, and select **Add an action**.
-1. In **Search connectors and actions**, enter **Power BI**, and then press **Enter**.
-1. Select the **Add rows to a dataset (preview)** action.
-1. In the **Add rows to a dataset** action:
+
+ Select **Save** to save your changes.
+
+1. Select the **Occupancy** action, and select **Add an action**.
+1. In **Search connectors and actions**, enter **Add rows to a dataset**.
+1. Select the Power BI **Add rows to a dataset** action.
+1. In the **Add rows to a dataset 3** action:
* Select **In-store analytics - checkout** as the workspace.
- * Select **Occupancy Sensor** as the dataset.
+ * Select **Occupancy sensor** as the dataset.
* Select **RealTimeData** as the table.
- * Select **Add new parameter** and then select the **Timestamp**, **Queue Length 1**, and **Queue Length 2** fields.
- * Select the **Timestamp** field, and then select **x-opt-enqueuedtime** from the **Dynamic content** list.
+ * Select **Add new parameter** and then select the **Timestamp**, **Queue Length 1**, **Queue Length 2**, **Dwell Time 1**, and **Dwell Time 2** fields.
+ * Select the **Timestamp** field, and then select **enqueuedTime** from the **Dynamic content** list.
* Select the **Queue Length 1** field, and then select **See more** next to **Parse Telemetry**. Then select **count1**. * Select the **Queue Length 2** field, and then select **See more** next to **Parse Telemetry**. Then select **count2**.
- * Select **Save** to save your changes. The **People Count interface** action looks like the following screenshot:
- :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/occupancy-action-2.png" alt-text="Occupancy action.":::
+ * Select the **Dwell Time 1** field, and then select **See more** next to **Parse Telemetry**. Then select **DwellTime1**.
+ * Select the **Dwell Time 2** field, and then select **See more** next to **Parse Telemetry**. Then select **DwellTime2**.
+
+ Select **Save** to save your changes. The **Occupancy** action looks like the following screenshot:
+
+ :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/occupancy-action.png" alt-text="Screenshot that shows the occupancy action in the Logic Apps Designer.":::
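The routing that the designer steps above build can be sketched as follows: each parsed telemetry message is switched on its device ID and turned into a row for the matching Power BI streaming dataset. This is an illustrative sketch, not the logic app itself; the device IDs are the example values from this tutorial, so replace them with your own.

```python
# Hedged sketch of the Switch by DeviceID routing built above.
# Device IDs below are the tutorial's example values.
ZONE_DATASETS = {
    "p7g7h8qqax": "Zone 1 sensor",
    "2ngij10dibe": "Zone 2 sensor",
}

def to_row(msg):
    """Return (dataset, row) for one parsed telemetry message."""
    telemetry = msg["telemetry"]
    device_id = msg["deviceId"]
    if device_id in ZONE_DATASETS:
        # Environmental sensor: temperature and humidity row.
        return ZONE_DATASETS[device_id], {
            "Timestamp": msg["enqueuedTime"],
            "Temperature": telemetry["temperature"],
            "Humidity": telemetry["humidity"],
        }
    if device_id == "Occupancy":
        # Simulated occupancy sensor: queue lengths and dwell times.
        return "Occupancy sensor", {
            "Timestamp": msg["enqueuedTime"],
            "Queue Length 1": telemetry["count1"],
            "Queue Length 2": telemetry["count2"],
            "Dwell Time 1": telemetry["DwellTime1"],
            "Dwell Time 2": telemetry["DwellTime2"],
        }
    # Unknown device: drop the message, like the switch's default case.
    return None, None

dataset, row = to_row({
    "deviceId": "p7g7h8qqax",
    "enqueuedTime": "2022-06-07T10:00:00Z",
    "telemetry": {"temperature": 21.5, "humidity": 48.0},
})
print(dataset)  # Zone 1 sensor
```

Keeping one dataset per environmental sensor mirrors the tutorial's note that streaming data can't be filtered in Power BI, so the filtering has to happen here, before the rows are pushed.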
-The logic app runs automatically. To see the status of each run, navigate to the **Overview** page for the logic app in the Azure portal:
+The logic app runs automatically. To see the status of each run, navigate to the **Overview** page for the logic app in the Azure portal and select **Runs history**. Select **Refresh** to update the list of runs.
## Create a Power BI dashboard
Now you have telemetry flowing from your IoT Central application through your ev
1. Sign in to your **Power BI** account. 1. Select **Workspaces > In-store analytics - checkout**.
-1. Select **Create > Dashboard**.
+1. Select **+ New > Dashboard**.
1. Enter **Store analytics** as the dashboard name, and select **Create**. ### Add line charts
-Add four line chart tiles to show the temperature and humidity from the two environmental sensors. Use the information in the following table to create the tiles. To add each tile, start by selecting **...(More options) > Add Tile**. Select **Custom Streaming Data**, and then select **Next**:
+Add four line chart tiles to show the temperature and humidity from the two environmental sensors. Use the information in the following table to create the tiles. To add each tile, start by selecting **Edit > Add a tile**. Select **Custom Streaming Data**, and then select **Next**:
| Setting | Chart #1 | Chart #2 | Chart #3 | Chart #4 | | - | -- | -- | -- | -- |
Add four line chart tiles to show the temperature and humidity from the two envi
The following screenshot shows the settings for the first chart: ### Add cards to show environmental data
-Add four card tiles to show the most recent temperature and humidity values from the two environmental sensors. Use the information in the following table to create the tiles. To add each tile, start by selecting **...(More options) > Add Tile**. Select **Custom Streaming Data**, and then select **Next**:
+Add four card tiles to show the most recent temperature and humidity values from the two environmental sensors. Use the information in the following table to create the tiles. To add each tile, start by selecting **Edit > Add a tile**. Select **Custom Streaming Data**, and then select **Next**:
| Setting | Card #1 | Card #2 | Card #3 | Card #4 | | - | - | - | - | - |
Add four card tiles to show the most recent temperature and humidity values from
The following screenshot shows the settings for the first card: ### Add tiles to show checkout occupancy data
-Add four card tiles to show the queue length and dwell time for the two checkouts in the store. Use the information in the following table to create the tiles. To add each tile, start by selecting **...(More options) > Add Tile**. Select **Custom Streaming Data**, and then select **Next**:
+Add four card tiles to show the queue length and dwell time for the two checkouts in the store. Use the information in the following table to create the tiles. To add each tile, start by selecting **Edit > Add a tile**. Select **Custom Streaming Data**, and then select **Next**:
| Setting | Card #1 | Card #2 | Card #3 | Card #4 | | - | - | - | - | - |
Add four card tiles to show the queue length and dwell time for the two checkout
Resize and rearrange the tiles on your dashboard to look like the following screenshot:
-You could add some addition graphics resources to further customize the dashboard:
+You could add some graphics resources to further customize the dashboard:
## Clean up resources
iot-edge Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/gpu-acceleration.md
-# GPU acceleration for Azure IoT Edge for Linux on Windows (Preview)
+# GPU acceleration for Azure IoT Edge for Linux on Windows
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-GPUs are a popular choice for artificial intelligence computations, because they offer parallel processing capabilities and can often execute vision-based inferencing faster than CPUs. To better support artificial intelligence and machine learning applications, Azure IoT Edge for Linux on Windows can expose a GPU to the virtual machine's Linux module.
-
-> [!NOTE]
-> The GPU acceleration features detailed below are in preview and are subject to change.
+GPUs are a popular choice for artificial intelligence computations, because they offer parallel processing capabilities and can often execute vision-based inferencing faster than CPUs. To better support artificial intelligence and machine learning applications, Azure IoT Edge for Linux on Windows (EFLOW) can expose a GPU to the virtual machine's Linux module.
Azure IoT Edge for Linux on Windows supports several GPU passthrough technologies, including:
You must select the appropriate passthrough method during deployment to match th
## Prerequisites
-The GPU acceleration features of Azure IoT Edge for Linux on Windows currently supports a select set of GPU hardware. Additionally, use of this feature may require specific versions of Windows, depending on your configuration.
+The GPU acceleration features of Azure IoT Edge for Linux on Windows currently support a select set of GPU hardware. Additionally, use of this feature may require specific versions of Windows.
The supported GPUs and required Windows versions are listed below:
-* NVIDIA T4 (supports DDA)
-
- * Windows Server 2019, build 17763 with all current cumulative updates installed
- * Windows Server 2022
- * Windows 11 (Pro, Enterprise, IoT Enterprise)
-
-* NVIDIA GeForce/Quadro (supports GPU-PV)
-
- * Windows 10 (Pro, Enterprise, IoT Enterprise), minimum build 19044.1263 or later
- * Windows 11 (Pro, Enterprise, IoT Enterprise)
+| Supported GPUs | GPU Passthrough Type | Supported Windows Versions |
+| | | -- |
+| NVIDIA T4, A2 | DDA | Windows Server 2019 <br> Windows Server 2022 <br> Windows 10/11 (Pro, Enterprise, IoT Enterprise) |
+| NVIDIA GeForce, Quadro, RTX | GPU-PV | Windows 10/11 (Pro, Enterprise, IoT Enterprise) |
+| Intel iGPU | GPU-PV | Windows 10/11 (Pro, Enterprise, IoT Enterprise) |
-* Select Intel Integrated GPUs (supports GPU-PV)
-
- * Windows 10 (Pro, Enterprise, IoT Enterprise), minimum build 19044.1263 or later
- * Windows 11 (Pro, Enterprise, IoT Enterprise)
-
-Intel iGPU support is available for select processors. For more information, see the Intel driver documentation.
+> [!IMPORTANT]
+>GPU-PV support may be limited to certain generations of processors or GPU architectures as determined by the GPU vendor. For more information, see [Intel's iGPU driver documentation](https://www.intel.com/content/www/us/en/download/19344/intel-graphics-windows-dch-drivers.html) or [NVIDIA's CUDA for WSL Documentation](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#wsl2-system-requirements).
+>
+>Windows Server 2019 users must use minimum build 17763 with all current cumulative updates installed.
+>
+>Windows 10 users must use the [November 2021 update](https://blogs.windows.com/windowsexperience/2021/11/16/how-to-get-the-windows-10-november-2021-update/) build 19044.1620 or higher. After installation, you can verify your build version by running `winver` at the command prompt.
+>
+> GPU passthrough is not supported with nested virtualization, such as running EFLOW in a Windows virtual machine.
-Windows 10 users must use the [November 2021 update](https://blogs.windows.com/windowsexperience/2021/11/16/how-to-get-the-windows-10-november-2021-update/). After installation, you can verify your build version by running `winver` at the command prompt.
## System setup and installation
-The following sections contain information related to setup and installation.
+The following sections contain setup and installation information, according to your GPU.
-### NVIDIA T4 GPUs
+### NVIDIA T4/A2 GPUs
-For **T4 GPUs**, Microsoft recommends installing a device mitigation driver from your GPU's vendor. While optional, installing a mitigation driver may improve the security of your deployment. For more information, see [Deploy graphics devices using direct device assignment](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda#optionalinstall-the-partitioning-driver).
+For **T4/A2 GPUs**, Microsoft recommends installing a device mitigation driver from your GPU's vendor. While optional, installing a mitigation driver may improve the security of your deployment. For more information, see [Deploy graphics devices using direct device assignment](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda#optionalinstall-the-partitioning-driver).
> [!WARNING] > Enabling hardware device passthrough may increase security risks. Microsoft recommends a device mitigation driver from your GPU's vendor, when applicable. For more information, see [Deploy graphics devices using discrete device assignment](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda).
-### NVIDIA GeForce/Quadro GPUs
+### NVIDIA GeForce/Quadro/RTX GPUs
-For **GeForce/Quadro GPUs**, download and install the [NVIDIA CUDA-enabled driver for Windows Subsystem for Linux (WSL)](https://developer.nvidia.com/cuda/wsl) to use with your existing CUDA ML workflows. Originally developed for WSL, the CUDA for WSL drivers are also used for Azure IoT Edge for Linux on Windows.
+For **NVIDIA GeForce/Quadro/RTX GPUs**, download and install the [NVIDIA CUDA-enabled driver for Windows Subsystem for Linux (WSL)](https://developer.nvidia.com/cuda/wsl) to use with your existing CUDA ML workflows. Originally developed for WSL, the CUDA for WSL drivers are also used for Azure IoT Edge for Linux on Windows.
Windows 10 users must also [install WSL](/windows/wsl/install) because some of the libraries are shared between WSL and Azure IoT Edge for Linux on Windows. ### Intel iGPUs
-For **Intel iGPUs**, download and install the [Intel Graphics Driver with WSL GPU support](https://www.intel.com/content/www/us/en/download-center/home.html?wapkw=quicklink:download-center).
+For **Intel iGPUs**, download and install the [Intel Graphics Driver with WSL GPU support](https://www.intel.com/content/www/us/en/download/19344/intel-graphics-windows-dch-drivers.html).
Windows 10 users must also [install WSL](/windows/wsl/install) because some of the libraries are shared between WSL and Azure IoT Edge for Linux on Windows.
-## Using GPU acceleration for your Linux on Windows deployment
+## Enable GPU acceleration in your Azure IoT Edge for Linux on Windows deployment
+Once system setup is complete, you are ready to [create your deployment of Azure IoT Edge for Linux on Windows](how-to-install-iot-edge-on-windows.md). During this process, you must [enable GPU](reference-iot-edge-for-linux-on-windows-functions.md#deploy-eflow) as part of EFLOW deployment.
-Now you are ready to deploy and run GPU-accelerated Linux modules in your Windows environment through Azure IoT Edge for Linux on Windows. More details on the deployment process can be found in [guide for provisioning a single IoT Edge for Linux on Windows device using symmetric keys](how-to-provision-single-device-linux-on-windows-symmetric.md) or [using X.509 certificates](how-to-provision-single-device-linux-on-windows-x509.md).
+For example, the command below creates a virtual machine with an NVIDIA A2 GPU assigned.
-## Next steps
+```powershell
+Deploy-Eflow -gpuPassthroughType "DirectDeviceAssignment" -gpuCount 1 -gpuName "NVIDIA A2"
+```
+
+Once installation is complete, you are ready to deploy and run GPU-accelerated Linux modules through Azure IoT Edge for Linux on Windows.
-* [Create your deployment of Azure IoT Edge for Linux on Windows](how-to-install-iot-edge-on-windows.md)
+
+## Next steps
* Try our [GPU-enabled sample featuring Vision on Edge](https://github.com/Azure-Samples/azure-intelligent-edge-patterns/blob/master/factory-ai-vision/Tutorial/Eflow.md), a solution template illustrating how to build your own vision-based machine learning application.
+* Discover how to run Intel OpenVINO™ applications on EFLOW by following [Intel's guide on iGPU with Azure IoT Edge for Linux on Windows (EFLOW) & OpenVINO™ Toolkit](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Witness-the-power-of-Intel-iGPU-with-Azure-IoT-Edge-for-Linux-on/post/1382405) and [reference implementations](https://www.intel.com/content/www/us/en/developer/articles/technical/deploy-reference-implementation-to-azure-iot-eflow.html).
+* Learn more about GPU passthrough technologies by visiting the [DDA documentation](/windows-server/virtualization/hyper-v/plan/plan-for-gpu-acceleration-in-windows-server#discrete-device-assignment-dda) and [GPU-PV blog post](https://devblogs.microsoft.com/directx/directx-heart-linux/#gpu-virtualization).
iot-edge How To Create Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-alerts.md
To access the example alert queries, use the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your IoT hub.
1. Select **Logs** from the **Monitoring** section of the menu.
-1. Select **Queries** to open the example query browser.
+1. The **Queries** example query browser opens automatically. If this is your first visit to **Logs**, you may have to close a video tutorial before you can see the query browser. If you don't see the browser, select the **Queries** tab to bring it up again.
The [metrics-collector module](how-to-collect-and-transport-metrics.md#metrics-collector-module) ingests all data into the standard [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table. You can create alert rules based on metrics data from custom modules by querying the same table.
iot-edge How To Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-at-scale.md
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-Create an **IoT Edge automatic deployment** in the Azure portal to manage ongoing deployments for many devices at once. Automatic deployments for IoT Edge are part of the [automatic device management](../iot-hub/iot-hub-automatic-device-management.md) feature of IoT Hub. Deployments are dynamic processes that enable you to deploy multiple modules to multiple devices, track the status and health of the modules, and make changes when necessary.
+Create an **IoT Edge automatic deployment** in the Azure portal to manage ongoing deployments for many devices at once. Automatic deployments for IoT Edge are part of the [device management](../iot-hub/iot-hub-automatic-device-management.md) feature of IoT Hub. Deployments are dynamic processes that enable you to deploy multiple modules to multiple devices, track the status and health of the modules, and make changes when necessary.
For more information, see [Understand IoT Edge automatic deployments for single devices or at scale](module-deployment-monitoring.md).
iot-edge How To Deploy Cli At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-cli-at-scale.md
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-Create an [Azure IoT Edge automatic deployment](module-deployment-monitoring.md) by using the Azure CLI to manage ongoing deployments for many devices at once. Automatic deployments for IoT Edge are part of the [automatic device management](../iot-hub/iot-hub-automatic-device-management.md) feature of Azure IoT Hub. Deployments are dynamic processes that enable you to deploy multiple modules to multiple devices, track the status and health of the modules, and make changes when necessary.
+Create an [Azure IoT Edge automatic deployment](module-deployment-monitoring.md) by using the Azure CLI to manage ongoing deployments for many devices at once. Automatic deployments for IoT Edge are part of the [device management](../iot-hub/iot-hub-automatic-device-management.md) feature of Azure IoT Hub. Deployments are dynamic processes that enable you to deploy multiple modules to multiple devices, track the status and health of the modules, and make changes when necessary.
In this article, you set up the Azure CLI and the IoT extension. You then learn how to deploy modules to a set of IoT Edge devices and monitor the progress by using the available CLI commands.
iot-edge How To Deploy Modules Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-portal.md
Verify that the module is deployed in your IoT Hub in the Azure portal. Select y
You can quickly deploy a module from the Azure Marketplace onto your device in your IoT Hub in the Azure portal.

1. In the Azure portal, navigate to your IoT Hub.
-1. On the left pane, under **Automatic Device Management**, select **IoT Edge**.
+1. On the left pane, under **Device Management**, select **IoT Edge**.
1. Select the IoT Edge device that is to receive the deployment.
1. On the upper bar, select **Set Modules**.
1. In the **IoT Edge Modules** section, click **Add**, and select **Marketplace Module** from the drop-down menu.
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
This method accepts a JSON payload with the following schema:
|-|-|-|
| schemaVersion | string | Set to `1.0` |
| items | JSON array | An array with `id` and `filter` tuples. |
-| ID | string | A regular expression that supplies the module name. It can match multiple modules on an edge device. [.NET Regular Expressions](/dotnet/standard/base-types/regular-expressions) format is expected. |
+| id | string | A regular expression that supplies the module name. It can match multiple modules on an edge device. [.NET Regular Expressions](/dotnet/standard/base-types/regular-expressions) format is expected. In case there are multiple items whose ID matches the same module, only the filter options of the first matching ID will be applied to that module. |
| filter | JSON section | Log filters to apply to the modules matching the `id` regular expression in the tuple. |
| tail | integer | Number of log lines in the past to retrieve starting from the latest. OPTIONAL. |
| since | string | Only return logs since this time, as a duration (1 d, 90 m, 2 days 3 hours 2 minutes), rfc3339 timestamp, or UNIX timestamp. If both `tail` and `since` are specified, the logs are retrieved using the `since` value first. Then, the `tail` value is applied to the result, and the final result is returned. OPTIONAL. |
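For illustration, a payload matching this schema might look like the following sketch (the module-name pattern and filter values are made up):

```json
{
    "schemaVersion": "1.0",
    "items": [
        {
            "id": "edgeAgent|edgeHub",
            "filter": {
                "tail": 100,
                "since": "2 days 3 hours 2 minutes"
            }
        }
    ]
}
```

Because `id` is a regular expression, the single item above matches both runtime modules; each additional item can target other modules with its own filter options.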
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
Supported query syntax:
Not supported query syntax:

* [Message routing query based on device twin](../iot-hub/iot-hub-devguide-routing-query-syntax.md#message-routing-query-based-on-device-twin)
+### Restart policies
+ Don't use `on-unhealthy` or `on-failure` as values in modules' `restartPolicy` because they are unimplemented and won't initiate a restart. Only `never` and `always` restart policies are implemented.
+
+The recommended way to automatically restart unhealthy IoT Edge modules is noted in [this workaround](https://github.com/Azure/iotedge/issues/6358#issuecomment-1144022920). Configure the `Healthcheck` property in the module's `createOptions` to handle a failed health check.
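As a sketch of that workaround, a module's `createOptions` could carry a Docker `Healthcheck` section like the following (the probe command and timings are illustrative; Docker expects the time values in nanoseconds):

```json
{
    "Healthcheck": {
        "Test": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
        "Interval": 30000000000,
        "Timeout": 10000000000,
        "StartPeriod": 60000000000,
        "Retries": 3
    }
}
```

The health probe endpoint shown here is an assumption; your module would need to expose its own check that fails when the module is unhealthy.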
+
### File upload

IoT Hub only supports file upload APIs for device identities, not module identities. Since IoT Edge exclusively uses modules, file upload isn't natively supported in IoT Edge.
iot-edge Module Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-development.md
With the IoT Edge MQTT broker, you can publish messages on any user-defined topi
With the IoT Edge MQTT broker, receiving messages is similar. First make sure that your module is authorized to subscribe to specific topics, then get a token from the workload API to use as a password when connecting to the MQTT broker, and finally subscribe to messages on the authorized topics with the MQTT client of your choice.
+> [!NOTE]
+> IoT Edge MQTT broker (currently in preview) will not move to general availability and will be removed from the future version of IoT Edge Hub. We appreciate the feedback we received on the preview, and we are continuing to refine our plans for an MQTT broker. In the meantime, if you need a standards-compliant MQTT broker on IoT Edge, consider deploying an open-source broker like [Mosquitto](https://mosquitto.org/) as an IoT Edge module.
+
::: moniker-end

### IoT Hub primitives
iot-edge Module Edgeagent Edgehub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-edgeagent-edgehub.md
The module twin for the IoT Edge agent is called `$edgeAgent` and coordinates th
| modules.{moduleId}.version | A user-defined string representing the version of this module. | Yes |
| modules.{moduleId}.type | Has to be "docker" | Yes |
| modules.{moduleId}.status | {"running" \| "stopped"} | Yes |
-| modules.{moduleId}.restartPolicy | {"never" \| "on-failure" \| "on-unhealthy" \| "always"} | Yes |
+| modules.{moduleId}.restartPolicy | {"never" \| "always"} | Yes |
| modules.{moduleId}.startupOrder | An integer value for which spot a module has in the startup order. 0 is first and max integer (4294967295) is last. If a value isn't provided, the default is max integer. | No |
| modules.{moduleId}.imagePullPolicy | {"on-create" \| "never"} | No |
| modules.{moduleId}.env | A list of environment variables to pass to the module. Takes the format `"<name>": {"value": "<value>"}` | No |
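Putting those properties together, a single module entry in the `$edgeAgent` desired properties might look like this sketch (the module name, image, and environment variable are placeholders; the `settings` section belongs to the full schema beyond the rows shown above):

```json
{
    "myModule": {
        "version": "1.0",
        "type": "docker",
        "status": "running",
        "restartPolicy": "always",
        "startupOrder": 1,
        "imagePullPolicy": "on-create",
        "env": {
            "LOG_LEVEL": { "value": "debug" }
        },
        "settings": {
            "image": "myregistry.azurecr.io/my-module:1.0",
            "createOptions": "{}"
        }
    }
}
```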
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart-linux.md
Follow these steps to start the **Set Modules** wizard to deploy your first modu
1. Sign in to the [Azure portal](https://portal.azure.com) and go to your IoT hub.
-1. From the menu on the left, under **Automatic Device Management**, select **IoT Edge**.
+1. From the menu on the left, under **Device Management**, select **IoT Edge**.
1. Select the device ID of the target device from the list of devices.
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart.md
Follow these steps to deploy your first module from Azure Marketplace.
1. Sign in to the [Azure portal](https://portal.azure.com) and go to your IoT hub.
-1. From the menu on the left, under **Automatic Device Management**, select **IoT Edge**.
+1. From the menu on the left, under **Device Management**, select **IoT Edge**.
1. Select the device ID of the target device from the list of devices.
iot-edge Tutorial Machine Learning Edge 02 Prepare Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-02-prepare-environment.md
As part of creating the IoT hub, the script that we ran in the previous section
1. In the list of resources, select the IoT Hub that the script created. It will have a name ending with random characters such as `IotEdgeAndMlHub-jrujej6de6i7w`.
-1. From the left pane menu, under **Messaging**, select **Message routing**.
+1. From the left pane menu, under **Hub settings**, select **Message routing**.
1. On the **Message routing** page, select the **Custom endpoints** tab.
1. Expand the **Storage** section:
- ![Verify turbofanDeviceStorage is in the custom endpoints list](media/tutorial-machine-learning-edge-02-prepare-environment/custom-endpoints.png)
+ :::image type="content" source="./media/tutorial-machine-learning-edge-02-prepare-environment/custom-endpoints.png" alt-text="Screenshot of the storage called turbofanDeviceStorage in the custom endpoints list in the I o T Hub portal." lightbox="media/tutorial-machine-learning-edge-02-prepare-environment/custom-endpoints.png":::
We see **turbofanDeviceStorage** is in the custom endpoints list. Note the following characteristics about this endpoint:

* It points to the blob storage container you created named `devicedata` as indicated by **Container name**.
- * Its **Filename format** has partition as the last element in the name. We find this format is more convenient for the file operations we will do with Azure Notebooks later in the tutorial.
+ * Its **Filename format** has the word "partition" in the name. We find this format is more convenient for the file operations we'll do with Azure Notebooks later in this tutorial.
* Its **Status** should be healthy.

1. Select the **Routes** tab.
As part of creating the IoT hub, the script that we ran in the previous section
1. On the **Routes details** page, note that the route's endpoint is the **turbofanDeviceStorage** endpoint.
- ![Review details about the turbofanDeviceDataToStorage route](media/tutorial-machine-learning-edge-02-prepare-environment/route-details.png)
+ :::image type="content" source="./media/tutorial-machine-learning-edge-02-prepare-environment/route-details.png" alt-text="Screenshot that shows detail about the turbofanDeviceDataToStorage route.":::
1. Look at the **Routing query**, which is set to **true**. This setting means that all device telemetry messages will match this route; and therefore all messages will be sent to the **turbofanDeviceStorage** endpoint.
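For contrast, a routing query narrower than **true** could match only a subset of messages; for example, a filter on a message application property (the property name and value here are illustrative, not part of this tutorial):

```
processingPath = 'hot'
```

With such a query, only messages whose application properties include `processingPath` set to `hot` would be sent to the endpoint.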
iot-edge Tutorial Machine Learning Edge 06 Custom Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-06-custom-modules.md
With the router and classifier in place, we expect to receive regular messages c
1. In the Azure portal, navigate to your IoT Hub.
-1. From the menu on the left pane, under **Messaging**, select **Message routing**.
+1. From the menu on the left pane, under **Hub settings**, select **Message routing**.
1. On the **Routes** tab, select **Add**.
We don't want to route the new prediction data to our old storage location, so u
Configure the IoT Hub file upload feature to enable the file writer module to upload files to storage.
-1. From the left pane menu in your IoT Hub, under **Messaging**, choose **File upload**.
+1. From the left pane menu in your IoT Hub, under **Hub settings**, choose **File upload**.
1. Select **Azure Storage Container**.
iot-hub-device-update Device Update Proxy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-proxy-updates.md
# Proxy Updates and multi-component updating
-Proxy Updates can support updating multiple **component(s)** on a target IoT device connected to IoT Hub. With Proxy updates, you can (1) target over-the-air updates to multiple components on the IoT device or (2) target over-the-air updates to multiple sensors connected to the IoT device. Use cases where proxy updates is applicable include:
+With Proxy updates, you can (1) target over-the-air updates to multiple components on the IoT device or (2) target over-the-air updates to multiple sensors connected to the IoT device. Use cases where proxy updates are applicable include:
* Targeting specific update files to different partitions on the device.
* Targeting specific update files to different apps/components on the device.
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
Here's a list of trusted services that are allowed to access a key vault if the
|Azure Import/Export| [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md)
|Azure Container Registry|[Registry encryption using customer-managed keys](../../container-registry/container-registry-customer-managed-keys.md)
|Azure Application Gateway |[Using Key Vault certificates for HTTPS-enabled listeners](../../application-gateway/key-vault-certs.md)
-|Azure Front Door|[Using Key Vault certificates for HTTPS](../../frontdoor/front-door-custom-domain-https.md#prepare-your-azure-key-vault-account-and-certificate)
+|Azure Front Door Standard/Premium|[Using Key Vault certificates for HTTPS](../../frontdoor/standard-premium/how-to-configure-https-custom-domain.md#prepare-your-key-vault-and-certificate)
+|Azure Front Door Classic|[Using Key Vault certificates for HTTPS](../../frontdoor/front-door-custom-domain-https.md#prepare-your-key-vault-and-certificate)
|Microsoft Purview|[Using credentials for source authentication in Microsoft Purview](../../purview/manage-credentials.md)
|Azure Machine Learning|[Secure Azure Machine Learning in a virtual network](../../machine-learning/how-to-secure-workspace-vnet.md)|
lab-services Quick Create Lab Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-bicep.md
+
+ Title: Azure Lab Services Quickstart - Create a lab using Bicep
+description: In this quickstart, you learn how to create an Azure Lab Services lab using Bicep
+ Last updated : 05/23/2022+++
+# Quickstart: Create a lab using a Bicep file
+
+In this quickstart, you, as the educator, create a lab using a Bicep file. For a detailed overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/lab/).
++
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-username>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-username\>** with a unique username. You'll also be prompted to enter adminPassword. The minimum password length is 12 characters.
+
 When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the VM and all of the resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you deployed a lab using a Bicep file. To learn more about Azure Lab Services, continue to the tutorial on creating and managing the template VM.
+
+> [!div class="nextstepaction"]
+> [Configure a template VM](how-to-create-manage-template.md)
lab-services Quick Create Lab Plan Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-bicep.md
+
+ Title: Azure Lab Services Quickstart - Create a lab plan using Bicep
+description: In this quickstart, you learn how to create an Azure Lab Services lab plan using Bicep
+ Last updated : 05/23/2022+++
+# Quickstart: Create a lab plan using a Bicep file
+
+In this quickstart, you, as the educator, create a lab plan using a Bicep file. For a detailed overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/lab-plan/).
++
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-username>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-username\>** with a unique username. You'll also be prompted to enter adminPassword. The minimum password length is 12 characters.
+
 When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the VM and all of the resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you deployed a lab plan using a Bicep file. To learn more about Azure Lab Services, continue to the article on managing labs.
+
+> [!div class="nextstepaction"]
+> [Managing Labs](how-to-manage-labs.md)
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
Previously updated : 05/26/2022 Last updated : 06/06/2022
The following diagram shows how communications flow through private endpoints to
* You must have an Azure Machine Learning workspace, and the workspace must use a private endpoint. If you don't have one, the steps in this article create an example workspace, VNet, and VM. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](/azure/machine-learning/how-to-configure-private-link).
-* The Azure Container Registry for your workspace must be configured for __Premium__ tier. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md).
+ The workspace can be configured to allow or disallow public network access. If you plan on using managed online endpoint deployments that use __public outbound__, then you must also [configure the workspace to allow public access](how-to-configure-private-link.md#enable-public-access).
+
+ Outbound communication from managed online endpoint deployment is to the _workspace API_. When the endpoint is configured to use __public outbound__, then the workspace must be able to accept that public communication (allow public access).
+
+* When the workspace is configured with a private endpoint, the Azure Container Registry for the workspace must be configured for __Premium__ tier. For more information, see [Azure Container Registry service tiers](/azure/container-registry/container-registry-skus).
* The Azure Container Registry and Azure Storage Account must be in the same Azure Resource Group as the workspace.
The following diagram shows how communications flow through private endpoints to
## Limitations * The `v1_legacy_mode` flag must be disabled (false) on your Azure Machine Learning workspace. If this flag is enabled, you won't be able to create a managed online endpoint. For more information, see [Network isolation with v2 API](how-to-configure-network-isolation-with-v2.md).
-* If your Azure Machine Learning workspace has a private endpoint that was created before May 24, 2022, you must recreate the workspace's private endpoint before configuring your online endpoints to use a private endpoint. For more information on creating a private endpoint for your workspace, see [How to configure a private endpoint for Azure Machine Learning workspace](/azure/machine-learning/how-to-configure-private-link).
+
+* If your Azure Machine Learning workspace has a private endpoint that was created before May 24, 2022, you must recreate the workspace's private endpoint before configuring your online endpoints to use a private endpoint. For more information on creating a private endpoint for your workspace, see [How to configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md).
* Secure outbound communication creates three private endpoints per deployment. One to Azure Blob storage, one to Azure Container Registry, and one to your workspace.
* Azure Log Analytics and Application Insights aren't supported when using network isolation with a deployment. To see the logs for the deployment, use the [az ml online-deployment get_logs](/cli/azure/ml/online-deployment#az-ml-online-deployment-get-logs) command instead.
+* You can configure public access to a __managed online endpoint__ (_inbound_ and _outbound_). You can also configure [public access to an Azure Machine Learning workspace](how-to-configure-private-link.md#enable-public-access).
+
+ Outbound communication from managed online endpoint deployment is to the _workspace API_. When the endpoint is configured to use __public outbound__, then the workspace must be able to accept that public communication (allow public access).
+
> [!NOTE]
> Requests to create, update, or retrieve the authentication keys are sent to the Azure Resource Manager over the public network.
The following table lists the supported configurations when configuring inbound
| Configuration | Inbound </br> (Endpoint property) | Outbound </br> (Deployment property) | Supported? |
| -- | -- | -- | -- |
| secure inbound with secure outbound | `public_network_access` is disabled | `egress_public_network_access` is disabled | Yes |
-| secure inbound with public outbound | `public_network_access` is disabled | `egress_public_network_access` is enabled | Yes |
+| secure inbound with public outbound | `public_network_access` is disabled</br>The workspace must also allow public access. | `egress_public_network_access` is enabled | Yes |
| public inbound with secure outbound | `public_network_access` is enabled | `egress_public_network_access` is disabled | Yes |
-| public inbound with public outbound | `public_network_access` is enabled | `egress_public_network_access` is enabled | Yes |
+| public inbound with public outbound | `public_network_access` is enabled</br>The workspace must also allow public access. | `egress_public_network_access` is enabled | Yes |
+
+> [!IMPORTANT]
+> Outbound communication from managed online endpoint deployment is to the _workspace API_. When the endpoint is configured to use __public outbound__, then the workspace must be able to accept that public communication (allow public access).
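As a rough sketch, the two flags from the table map onto the endpoint and deployment YAML definitions along these lines (a partial example; the names shown and any omitted fields are illustrative):

```yaml
# endpoint.yml -- inbound access is controlled on the endpoint
name: my-endpoint
auth_mode: key
public_network_access: disabled

# deployment.yml -- outbound access is controlled per deployment
name: blue
endpoint_name: my-endpoint
egress_public_network_access: disabled
```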
## End-to-end example
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-pipeline-python-sdk.md
The code that you've executed so far has created and controlled Azure resources.
If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`.
-If you're working from scratch, create a subdirectory called `kera-mnist-fashion/`. Create a new file, add the following code to it, and name the file `prepare.py`.
+If you're working from scratch, create a subdirectory called `keras-mnist-fashion/`. Create a new file, add the following code to it, and name the file `prepare.py`.
```python
# prepare.py
marketplace Private Offers Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/private-offers-api.md
+
+ Title: Private offer APIs in the commercial marketplace
+description: Private offer APIs in the commercial marketplace (Azure Marketplace).
+++++ Last updated : 06/07/2022++
+# Create and manage private offers via API (preview)
+
+> [!NOTE]
+> This API is in preview. If you have any questions about the preview program, contact [privateofferspreview@microsoft.com](mailto:privateofferspreview@microsoft.com).
+
+Private offers allow publishers and customers to transact one or more products in Azure Marketplace by creating time-bound pricing with customized terms. The private offers API enables ISVs to programmatically create and manage private offers for customers and resellers. This API is useful if your account manages many private offers and you want to automate and optimize their management workflows. This API uses Azure Active Directory (Azure AD) to authenticate the calls from your app or service.
+
+## Terminology
+
+- **Private offer** – A custom deal between an ISV and a specific customer with customized terms and pricing for a specific product in Azure Marketplace.
+- **Product** – A single unit representing an offer in Azure Marketplace. There's one product per listing page.
+- **Plan** – A single version of a particular product. There can be multiple plans for a given product that represent various levels of pricing or terms.
+- **Job** – A task created when making a request in this API. When using this API to manage private offers, a job is created to complete the request. Once the job is completed, you can get more information about the relevant private offer.
+
+## Supported scenarios
+
+- Create a private offer for a customer
+- Create a private offer for a reseller
+- Delete a private offer
+- Withdraw a private offer
+- Upgrade a private offer
+- Query for a list of private offers
+- Query for a list of products and plans
+
+## Scenarios not supported via API
+
+These scenarios are only available through Partner Center:
+
+- **Creating in draft state** – All private offers created through the API will be published.
+- **Republishing** – Private offers withdrawn via API can't be republished via API.
+- **Publishing drafts** – Private offers in draft state can't be published via API.
+
+## Get ready to use this API
+
+Before you write code to call the private offers API, ensure you've completed the following prerequisites.
+
+### Step 1: Complete prerequisites for using the Microsoft Product Ingestion API (one-time)
+
+You or your organization must have an Azure AD directory and global administrator permission. If you already use Microsoft 365 or other business services from Microsoft, you already have an Azure AD directory. If not, you can create a new Azure AD directory in Partner Center for free.
+
+You must [associate an Azure AD](https://aka.ms/PCtoAzureAD) application with your Partner Center account and obtain your tenant ID, client ID, and key. You need these values to obtain the Azure AD access token you'll use in calls to the private offers API.
+
+### Step 2: Obtain an Azure AD access token (every time)
+
+Before you call any of the methods in the private offers API, you need an Azure AD access token to pass to the authorization header of each method. You have 60 minutes to use a token before it expires. After expiration, you can request a new token to continue making calls to the API.
+
+To obtain the access token, see [Service to Service Calls Using Client Credentials](https://aka.ms/AADAccesstoken) to send an HTTP POST to the `https://login.microsoftonline.com/<tenant_id>/oauth2/token` endpoint. Here's a sample request:
+
+```http
+POST https://login.microsoftonline.com/<tenant_id>/oauth2/token HTTP/1.1
+Host: login.microsoftonline.com
+Content-Type: application/x-www-form-urlencoded; charset=utf-8
+grant_type=client_credentials
+&client_id=<your_client_id>
+&client_secret=<your_client_secret>
+&resource=https://graph.microsoft.com/
+```
+
+For the tenant_id value in the POST URI and the client_id and client_secret parameters, specify the tenant ID, client ID, and key for your application that you retrieved from Partner Center in the previous section. For the resource parameter, you must specify `https://graph.microsoft.com/`.
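The token request above can be sketched in Python. This is a minimal, illustrative helper (not from an official SDK) that assembles the POST URL and form-encoded body for the client-credentials request; sending it and reading the `access_token` from the JSON response is left to your HTTP client of choice.

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str, client_secret: str,
                        resource: str = "https://graph.microsoft.com/"):
    """Build the POST URL and form body for the client-credentials token request."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,
    })
    return url, body
```

POST the returned body with a `Content-Type: application/x-www-form-urlencoded` header, as in the sample request above.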
+
+### Find product, plan, and private offer IDs
+
+To use this API, you may need to reference several different types of IDs associated with your seller account.
+
+| ID | Where to find them |
+| --- | --- |
+| client_id | See [Associate an Azure AD application with your Partner Center account](https://aka.ms/PCtoAzureAD) |
+| tenant_id | See [Associate an Azure AD application with your Partner Center account](https://aka.ms/PCtoAzureAD) |
+| client_secret | See [Associate an Azure AD application with your Partner Center account](https://aka.ms/PCtoAzureAD) |
+| productId | See [Retrieve products](#retrieve-products) below |
+| planId | See [Retrieve plans for a specific product](#retrieve-plans-for-a-specific-product) below |
+| privateofferId | See [Retrieve private offers](#retrieve-private-offers) below |
+
+#### Retrieve products
+
+A private offer is based on an existing product in your Partner Center account. To see a list of products associated with your Partner Center account, use this API call:
+
+`GET https://graph.microsoft.com/rp/product-ingestion/product/`
+
+The response appears in the following sample format:
+
+```json
+{
+ "value": [
+ {
+ "$schema": "https://product-ingestion.azureedge.net/schema/product/2022-03-01-preview2",
+ "id": "string",
+ "identity": {
+ "externalId": "string"
+ },
+ "type": "enum",
+ "alias": "string"
+ }
+ ],
+ "@nextLink": "opaque_uri"
+}
+```
+
+#### Retrieve plans for a specific product
+
+For products that contain more than one plan, you may want to create a private offer based on one specific plan. If so, you'll need that plan's ID. Obtain a list of the plans (such as variants or SKUs) for the product using the following API call:
+
+`GET https://graph.microsoft.com/rp/product-ingestion/plan/?product=<product-id>`
+
+The response appears in the following sample format:
+
+```json
+{
+ "value": [
+ {
+ "$schema": "https://product-ingestion.azureedge.net/schema/plan/2022-03-01-preview2",
+ "product": "string",
+ "id": "string",
+ "identity": {
+ "externalId": "string"
+ },
+ "alias": "string"
+ }
+ ]
+}
+```
+
+#### Retrieve private offers
+
+To see a list of all private offers associated with your seller account, use the following API call:
+
+`GET https://graph.microsoft.com/rp/product-ingestion/private-offer/query?`
+
+## How to use the API
+
+The private offers API lets you create and manage private offers associated with products and plans within your Partner Center account. Here's a summary of the typical calling pattern when using this API.
+
+![Illustrates a three-step flow of the typical calling pattern when using this API.](media/api-call-pattern.svg)
+
+### Step 1. Make request
+
+When you make an API call to create, delete, withdraw, or upgrade a private offer, a new job is created to complete the requested task. The API response will contain a **jobId** associated with the job.
+
+### Step 2. Poll for job status
+
+Using the **jobId** from the initial API response, poll to get the status of the job. The status of the job will either be **running** or **completed**. Once the job is completed, the result will either be **succeeded** or **failed**. To avoid performance issues, don't poll a job more than once per minute.
+
+| jobStatus | Description |
+| --- | --- |
+| NotStarted | The job hasn't yet started; this is part of the response on the initial request. |
+| Running | The job is still running. |
+| Completed | The job has completed. See jobResult for more details. |
+
+| jobResult | Description |
+| --- | --- |
+| Pending | The job hasn't yet completed. |
+| Succeeded | The job has completed successfully. This will also return a resourceURI that refers to the private offer related to the job. Use this resourceURI to obtain the full details of a private offer. |
+| Failed | The job has failed. This will also return any relevant errors to help determine the cause of failure. |
+
+For more information, see [Querying the status of an existing job](#query-the-status-of-an-existing-job) later in this article.
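The polling step above can be sketched as follows. This is an illustrative Python helper, not part of any official SDK; `fetch_status` stands in for whatever HTTP GET you run against the status endpoint, and the default one-minute interval follows the guidance above.

```python
import time

def poll_job(fetch_status, job_id, interval=60, max_polls=60):
    """Poll a job until jobStatus is 'completed', then return the final status.

    `fetch_status` is any callable returning the configure-status JSON as a
    dict, so it can be backed by a real HTTP GET or by a stub in tests.
    """
    for _ in range(max_polls):
        status = fetch_status(job_id)
        if status.get("jobStatus") == "completed":
            return status  # jobResult is 'succeeded' or 'failed'
        time.sleep(interval)  # don't poll more than once per minute
    raise TimeoutError(f"job {job_id} did not complete after {max_polls} polls")
```

On success, read `resourceUri` from the returned dict; on failure, inspect the `errors` array.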
+
+### Step 3. Obtain information from completed jobs
+
+A successful job will return a resourceUri referencing the relevant private offer. Use this resourceUri to obtain more details about the private offer later, such as the privateofferId.
+
+A failed job will contain errors that provide detail on why the job failed and how to resolve the issue.
+
+For more information, see [Obtaining details of an existing private offer](#obtaining-details-of-an-existing-private-offer) later in this article.
+
+## Create a private offer for a customer
+
+Use this method to create a private offer for a customer.
+
+### Request
+
+`POST https://graph.microsoft.com/rp/product-ingestion/configure`
+
+#### Request Header
+
+| Header | Type | Description |
+| --- | --- | --- |
+| Authorization | String | Required. The Azure AD access token in the form **`Bearer <token>`**. |
+
+Optional: clientID
+
+#### Request parameters
+
+There are no parameters for this method.
+
+#### Request body
+
+Provide the details of the private offer using the ISV to Customer private offer schema. This must include a name.
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure/2022-03-01-preview2",
+ "resources": [
+ {
+ "$schema": "https://product-ingestion.azureedge.net/schema/private-offer/2022-03-01-preview2",
+ "name": "privateOffercustomer1705",
+ "state": "live",
+ "privateOfferType": "customerPromotion",
+ "variableStartDate": true,
+ "end": "2022-01-31",
+ "acceptBy": "2022-02-28",
+ "preparedBy": "amy@xyz.com",
+ "termsAndConditionsDocSasUrl": "https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4rFOA",
+ "notificationContacts": [ "amy@xyz.com" ],
+ "beneficiaries": [
+ { "id": "xxxxxx-2163-5eea-ae4e-d6e88627c26b:6ea018a9-da9d-4eae-8610-22b51ebe260b_2019-05-31", "description": "Top First Customer"}
+ ],
+ "pricing": [
+ { "product": "product/34771906-9711-4196-9f60-4af380fd5042", "plan":"plan/123456","discountType": "percentage", "discountPercentage": 5 }
+ ]
+ }
+ ]
+}
+```
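The request body above can also be assembled programmatically. The following Python sketch is illustrative only (the helper name and default values are made up, not part of any SDK); it mirrors the ISV to customer private offer sample shown above.

```python
def customer_offer_payload(name, beneficiary_id, pricing, *,
                           end, accept_by, prepared_by,
                           variable_start_date=True):
    """Assemble a configure request body for an ISV-to-customer private offer."""
    return {
        "$schema": "https://product-ingestion.azureedge.net/schema/configure/2022-03-01-preview2",
        "resources": [{
            "$schema": "https://product-ingestion.azureedge.net/schema/private-offer/2022-03-01-preview2",
            "name": name,
            "state": "live",
            "privateOfferType": "customerPromotion",
            "variableStartDate": variable_start_date,
            "end": end,
            "acceptBy": accept_by,
            "preparedBy": prepared_by,
            "notificationContacts": [prepared_by],
            "beneficiaries": [{"id": beneficiary_id, "description": "Customer"}],
            "pricing": pricing,  # list of product/plan discount entries
        }],
    }
```

Serialize the returned dict as JSON and POST it to the configure endpoint.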
+
+##### Sample request body using absolute pricing
+
+If you're using absolute pricing instead of percentage-based discounting, create a new resource that defines the absolute pricing, then include that resource as an additional object in the resources list of the configure schema.
+
+Use this method to obtain the pricing resource for your existing public plan, edit the prices, and then use the edited resource for your private offer.
+
+`GET https://graph.microsoft.com/rp/product-ingestion/price-and-availability-private-offer-plan/{productId}?plan={planId}`
+
+Sample absolute pricing resource:
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/price-and-availability-private-offer-plan/2022-03-01-preview2",
+ "resourceName": "newSimpleAbsolutePricing",
+ "product": "product/7ba807c8-386a-4efe-80f1-b97bf8a554f8",
+ "plan": "plan/987654",
+ "pricing": {
+ "recurrentPrice": {
+ "priceInputOption": "usd",
+ "prices": [
+ {
+ "pricePerPaymentInUsd": 1,
+ "billingTerm": {
+ "type": "month",
+ "value": 1
+ }
+ },
+ {
+ "pricePerPaymentInUsd": 2,
+ "paymentOption": {
+ "type": "month",
+ "value": 1
+ },
+ "billingTerm": {
+ "type": "year",
+ "value": 1
+ }
+ }
+ ]
+ },
+ "customMeters": {
+ "priceInputOption": "usd",
+ "meters": {
+ "meter1": {
+ "pricePerPaymentInUsd": 1
+ }
+ }
+ }
+ }
+}
+
+```
+
+Include that resource as an object in the pricing module:
+
+```json
+[
+ {
+ "product": "product/34771906-9711-4196-9f60-4af380fd5042",
+ "plan": "plan/123456",
+ "discountType": "percentage",
+ "discountPercentage": 5
+ },
+ {
+ "product": "product/7ba807c8-386a-4efe-80f1-b97bf8a554f8",
+ "plan": "plan/987654",
+ "discountType": "absolute",
+ "priceDetails": {
+ "resourceName": "newSimpleAbsolutePricing"
+ }
+ }
+]
+```
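Putting the two snippets together: a hypothetical Python helper (not an official client) that appends the absolute-pricing resource to the configure body and adds the pricing entry referencing it by `resourceName`.

```python
def add_absolute_pricing(configure_body, pricing_resource, product, plan):
    """Attach an absolute-pricing resource and reference it from the offer's pricing list."""
    offer = configure_body["resources"][0]
    # The pricing resource becomes a sibling object in the resources list.
    configure_body["resources"].append(pricing_resource)
    # The offer's pricing entry points at it via resourceName.
    offer.setdefault("pricing", []).append({
        "product": product,
        "plan": plan,
        "discountType": "absolute",
        "priceDetails": {"resourceName": pricing_resource["resourceName"]},
    })
    return configure_body
```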
+
+#### Response
+
+The response will contain the jobId you can use later to poll the status:
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure-status/2022-03-01-preview2",
+ "jobId": "c32dd7e8-8619-462d-a96b-0ac1974bace5",
+ "jobStatus": "notStarted",
+ "jobResult": "pending",
+ "jobStart": "2021-12-21T21:29:54.9702903Z",
+ "jobEnd": "0001-01-01",
+ "errors": []
+}
+```
+
+#### Error codes
+
+| HTTP Status Code | Description |
+| --- | --- |
+| 401 | Authentication Error: Ensure you're using a valid Azure AD access token. |
+| 400 | Schema Validation. Ensure your request body is following the correct schema and includes all required fields. |
+
+## Create a private offer for a reseller
+
+Use this method to create a new private offer for a reseller.
+
+### Request
+
+`POST https://graph.microsoft.com/rp/product-ingestion/configure`
+
+#### Request header
+
+| Header | Type | Description |
+| --- | --- | --- |
+| Authorization | String | Required. The Azure AD access token in the form **`Bearer <token>`**. |
+
+#### Request parameters
+
+There are no parameters for this method.
+
+#### Request body
+
+Provide the details of the private offer using the **ISV to reseller margin private offer** schema. You must include a name.
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure/2022-03-01-preview2",
+ "resources": [
+ {
+ "$schema": "https://product-ingestion.azureedge.net/schema/private-offer/2022-03-01-preview2",
+ "privateOfferType": "cspPromotion",
+ "name": "privateOffercsp1034",
+ "state": "live",
+ "variableStartDate": false,
+ "start": "2022-01-31",
+ "end": "2022-02-28",
+ "preparedBy": "amy@xyz.com",
+ "notificationContacts": [ "amy@xyz.com" ],
+ "beneficiaries": [
+ { "id": "xxxxxxx-0a32-4b44-b904-39dd964dd790", "description": "Top First CSP"}
+ ],
+ "pricing": [
+ { "product": "product/34771906-9711-4196-9f60-4af380fd5042", "plan":"plan/123456","discountType": "percentage","discountPercentage": 5 }
+ ]
+ }
+ ]
+}
+```
+
+#### Sample request for a reseller offer restricted to a specified beneficiary
+
+If you're creating a margin for a reseller that applies to a specific customer, add that information as an object in the `beneficiaryRecipients` parameter array under beneficiaries.
+
+The request body will look like the sample below:
+
+```json
+[
+ {
+ "id": "xxxxxxx-0a32-4b44-b904-39dd964dd790",
+ "description": "Top First CSP",
+ "beneficiaryRecipients": [
+ {
+ "id": "xxxxxxx-48b4-af80-66333cd9c609",
+ "recipientType": "cspCustomer"
+ }
+ ]
+ }
+],
+```
+
+### Response
+
+The response will contain the jobId you can use later to poll the status.
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure-status/2022-03-01-preview2",
+ "jobId": "c32dd7e8-8619-462d-a96b-0ac1974bace5",
+ "jobStatus": "notStarted",
+ "jobResult": "pending",
+ "jobStart": "2021-12-21T21:29:54.9702903Z",
+ "jobEnd": "0001-01-01",
+ "errors": []
+}
+```
+
+### Error codes
+
+| Error code | Description |
+| --- | --- |
+| 401 | Authentication Error: Ensure you're using a valid Azure AD access token. |
+| 400 | Schema Validation. Ensure your request body is following the correct schema and includes all required fields. |
+
+## Delete an existing private offer
+
+Use this method to delete an existing private offer while it's still in draft state. You must use the private offer ID to specify which private offer to delete.
+
+### Request
+
+`POST https://graph.microsoft.com/rp/product-ingestion/configure`
+
+#### Request header
+
+| Header | Type | Description |
+| --- | --- | --- |
+| Authorization | String | Required. The Azure AD access token in the form **`Bearer <token>`**. |
+
+#### Request parameters
+
+There are no parameters for this method.
+
+#### Request body
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure/2022-03-01-preview2",
+ "resources": [
+ {
+ "$schema": "https://product-ingestion.azureedge.net/schema/private-offer/2022-03-01-preview2",
+ "id": "private-offer/456e-a345-c457-1234",
+ "name": "privateOffercustomer1705",
+ "state": "deleted"
+ }
+ ]
+}
+```
+
+### Response
+
+The response will contain the jobId you can use later to poll the status.
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure-status/2022-03-01-preview2",
+ "jobId": "c32dd7e8-8619-462d-a96b-0ac1974bace5",
+ "jobStatus": "notStarted",
+ "jobResult": "pending",
+ "jobStart": "2021-12-21T21:29:54.9702903Z",
+ "jobEnd": "0001-01-01",
+ "errors": []
+}
+```
+
+### Error codes
+
+| Error code | Description |
+| --- | --- |
+| 401 | Authentication Error: Ensure you're using a valid Azure AD access token. |
+| 400 | Schema Validation. Ensure your request body is following the correct schema and includes all required fields. |
+
+## Withdraw an existing private offer
+
+Use this method to withdraw an existing private offer. Withdrawing a private offer means your customer will no longer be able to access it. A private offer can only be withdrawn if your customer hasn't accepted it.
+
+You must use the private offer ID to specify which private offer you want to withdraw.
+
+### Request
+
+`POST https://graph.microsoft.com/rp/product-ingestion/configure`
+
+#### Request header
+
+| Header | Type | Description |
+| --- | --- | --- |
+| Authorization | String | Required. The Azure AD access token in the form **`Bearer <token>`**. |
+
+#### Request parameters
+
+There are no parameters for this method.
+
+#### Request body
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure/2022-03-01-preview2",
+ "resources": [
+ {
+ "$schema": "https://product-ingestion.azureedge.net/schema/private-offer/2022-03-01-preview2",
+ "id": "private-offer/456e-a345-c457-1234",
+ "name": "privateOffercustomer1705",
+ "state": "withdrawn"
+ }
+ ]
+}
+```
+
+### Response
+
+The response will contain the jobId you can later use to poll the status.
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure-status/2022-03-01-preview2",
+ "jobId": "c32dd7e8-8619-462d-a96b-0ac1974bace5",
+ "jobStatus": "notStarted",
+ "jobResult": "pending",
+ "jobStart": "2021-12-21T21:29:54.9702903Z",
+ "jobEnd": "0001-01-01",
+ "errors": []
+}
+```
+
+### Error Codes
+
+| Error code | Description |
+| --- | --- |
+| 401 | Authentication Error: Ensure you're using a valid Azure AD access token. |
+| 400 | Schema Validation. Ensure your request body is following the correct schema and includes all required fields. |
+
+## Upgrade an existing customer private offer
+
+Use this method to upgrade an existing customer private offer. You must provide the ID of the customer private offer you wish to use as the basis for the upgrade as well as the new name of the offer.
+
+### Request
+
+`POST https://graph.microsoft.com/rp/product-ingestion/configure`
+
+#### Request header
+
+| Header | Type | Description |
+| --- | --- | --- |
+| Authorization | String | Required. The Azure AD access token in the form **`Bearer <token>`**. |
+
+#### Request parameters
+
+There are no parameters for this method.
+
+#### Request body
+
+You can use the same schemas as the two methods for creating a new private offer, depending on whether the offer is for a customer or a reseller margin. When upgrading, you must specify the existing private offer to use as the basis for the upgrade in the `upgradedFrom` property.
+
+> [!NOTE]
+> If you provide pricing information in the upgrade request for a given product or plan, it will override the pricing information from the original private offer for that product or plan. If you do not provide new pricing information, the pricing information from the original private offer will be carried over.
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure/2022-03-01-preview2",
+ "resources": [
+ {
+ "$schema": "https://product-ingestion.azureedge.net/schema/private-offer/2022-03-01-preview2",
+ "name": "publicApiCustAPIUpgrade1",
+ "state": "live",
+ "privateOfferType": "customerPromotion",
+ "upgradedFrom": {
+ "name": "publicApiCustAPI",
+ "id": "private-offer/97ac19ce-04f9-40e7-934d-af41124a079d"
+ },
+ "variableStartDate": false,
+ "start":"2022-11-01",
+ "end": "2022-12-31",
+ "acceptBy": "2022-10-31",
+ "pricing": [
+ { "product": "product/4ce67c07-614f-4a5b-8627-95b16dbdbf2b", "discountType": "percentage", "discountPercentage": 20 },
+ { "product": "product/92931a1c-f8ac-4bb8-a66f-4abcb9145852", "discountType": "percentage", "discountPercentage": 20 }
+ ]
+ }
+ ]
+ }
+```
+
+### Response
+
+The response will contain the jobId you can use later to poll the status.
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure-status/2022-03-01-preview2",
+ "jobId": "c32dd7e8-8619-462d-a96b-0ac1974bace5",
+ "jobStatus": "notStarted",
+ "jobResult": "pending",
+ "jobStart": "2021-12-21T21:29:54.9702903Z",
+ "jobEnd": "0001-01-01",
+ "errors": []
+}
+```
+
+### Error codes
+
+| Error code | Description |
+| --- | --- |
+| 401 | Authentication Error: Ensure you're using a valid Azure AD access token. |
+| 400 | Schema Validation. Ensure your request body is following the correct schema and includes all required fields. |
+
+## Query the status of an existing job
+
+Use this method to query the status of an existing job. You can poll the status of an existing job at a maximum frequency of one request per minute.
+
+### Request
+
+`GET https://graph.microsoft.com/rp/product-ingestion/configure/<jobId>/status`
+
+#### Request header
+
+| Header | Type | Description |
+| --- | --- | --- |
+| Authorization | String | Required. The Azure AD access token in the form **`Bearer <token>`**. |
+
+#### Request parameters
+
+jobId – required. This is the ID of the job you want to query. It's available in the response data generated during a previous request to create, delete, withdraw, or upgrade a private offer.
+
+#### Request body
+
+Don't provide a request body for this method.
+
+### Response
+
+There are three possible responses when polling a job.
+
+| Response | Description |
+| --- | --- |
+| Running | The job hasn't yet completed. |
+| Succeeded | The job completed successfully. This will also return a resourceURI that refers to the private offer related to the job. Use this resourceURI to obtain the full details of a private offer. |
+| Failed | The job failed. This will also return any relevant errors to help determine the cause of failure. |
+
+Sample outputs:
+
+**Running**
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure-status/2022-03-01-preview2",
+ "jobId": "c32dd7e8-8619-462d-a96b-0ac1974bace5",
+ "jobStatus": "running",
+ "jobResult": "pending",
+ "jobStart": "2021-12-21T21:29:54.9702903Z",
+ "jobEnd": "2021-12-21T21:30:10.3649551Z",
+ "errors": []
+}
+```
+
+**Succeeded**
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure-status/2022-03-01-preview2",
+ "jobId": "b3f49dff-381f-480d-a10e-17f4ce49b65f",
+ "jobStatus": "completed",
+ "jobResult": "succeeded",
+ "jobStart": "2021-12-21T21:29:54.9702903Z",
+ "jobEnd": "2021-12-21T21:30:10.3649551Z",
+ "resourceUri": "https://product-ingestion.mp.microsoft.com/configure/b3f49dff-381f-480d-a10e-17f4ce49b65f",
+ "errors": []
+}
+```
+
+> [!NOTE]
+> If the job was created by a request to delete a private offer, then there will be no resourceURI in the response.
+
+**Failed**
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure-status/2022-03-01-preview2",
+ "jobId": "c32dd7e8-8619-462d-a96b-0ac1974bace5",
+ "jobStatus": "completed",
+ "jobResult": "failed",
+ "jobStart": "2021-12-21T21:29:54.9702903Z",
+ "jobEnd": "2021-12-21T21:30:10.3649551Z",
+ "errors": [
+ {
+ "code": "Conflict",
+ "message": "The start date should be defined"
+ }
+ ]
+}
+```
+
+### Error codes
+
+| Error code | Description |
+| --- | --- |
+| 401 | Authentication Error: ensure you're using a valid Azure AD access token. |
+
+## Obtaining details of an existing private offer
+
+There are two methods to do this, depending on whether you have the private offer ID or the jobId.
+
+### Request
+
+`GET https://graph.microsoft.com/rp/product-ingestion/private-offer/<id>`
+
+or
+
+`GET https://graph.microsoft.com/rp/product-ingestion/configure/<jobId>`
+
+#### Request header
+
+| Header | Type | Description |
+| --- | --- | --- |
+| Authorization | String | Required. The Azure AD access token in the form **`Bearer <token>`**. |
+
+#### Request parameters
+
+ID - required for the first method. This is the ID of the private offer you want the full details of.
+
+jobId - required for the second method. This is the ID of the job you want the full details of. It's available in the response data generated during a previous request to create, delete, withdraw, or upgrade a private offer.
+
+#### Request body
+
+Don't provide a request body for this method.
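A small illustrative helper (hypothetical, not from any SDK) that picks the right retrieval URL depending on which identifier you hold:

```python
def private_offer_details_url(offer_id=None, job_id=None):
    """Return the GET endpoint for private offer details, by offer ID or job ID."""
    base = "https://graph.microsoft.com/rp/product-ingestion"
    if offer_id is not None:
        return f"{base}/private-offer/{offer_id}"
    if job_id is not None:
        return f"{base}/configure/{job_id}"
    raise ValueError("provide either offer_id or job_id")
```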
+
+### Response
+
+You'll receive the full details of the private offer.
+
+```json
+{
+ "$schema": "https://product-ingestion.azureedge.net/schema/configure/2022-03-01-preview2",
+ "resources": [
+ {
+ "id": "private-offer/07380dd9-bcbb-cccbb-bbccbc",
+ "name": "privateOffercsp1015",
+ "privateOfferType": "cspPromotion",
+ "upgradedFrom": null,
+ "variableStartDate": false,
+ "start": "2021-12-01",
+ "end": "2022-01-31",
+ "acceptBy": null,
+ "preparedBy": "amy@xyz.com",
+ "notificationContacts": [
+ "amy@xyz.com"
+ ],
+ "state": "Live",
+ "termsAndConditionsDocSasUrl": null,
+ "beneficiaries": [
+ {
+ "id": "xxxxyyyzz",
+ "description": "Top First CSP",
+ "beneficiaryRecipients": null
+ }
+ ],
+ "pricing": [
+ {
+ "product": "product/xxxxxyyyyyyzzzzz",
+ "plan": "plan/123456",
+ "discountType": "Percentage",
+ "discountPercentage": 5.0,
+ "featureAvailabilityId": null,
+ "availabilityInstanceId": null
+ }
+ ],
+ "lastModified": "0001-01-01",
+ "acceptanceLinks": null,
+ "_etag": "\"9600487b-0000-0800-0000-61c24c7f0000\"",
+ "schema": null,
+ "resourceName": null,
+ "validations": null
+ }
+ ]
+}
+```
+
+### Error codes
+
+| Error code | Description |
+| --- | --- |
+| 401 | Authentication Error: Ensure you're using a valid Azure AD access token. |
+| 404 | Resource not found. Ensure you're using the correct ID in the request. |
+
+## How to parse error messages in the response body
+
+![Screenshot showing error messages in a response body.](media/parsing-error-messages.png)
+
+## Schemas
+
+[Private offer](https://aka.ms/POSchema)
+
+[ISV to customer private offer](https://aka.ms/POCustomerSchema)
+
+[ISV to reseller margin private offer](https://aka.ms/POCSPSchema)
+
+[Private offer acceptance link](https://aka.ms/POacceptlinkschema)
+
+[Private offer beneficiary](https://aka.ms/PObeneficiary)
+
+[Private offer pricing](https://aka.ms/POpricing)
+
+[Private offer promotion reference](https://aka.ms/POpromoref)
+
+## Next steps
+
+- To start using private offers, follow the steps in [ISV to customer private offers](isv-customer.md).
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-network-connectivity.md
If you have enabled the appliance for private endpoint connectivity, use the fol
|**DNS mappings containing Private endpoint URLs** | **Details** |
| --- | --- |
-|*.disc.privatelink.test.migration.windowsazure.com | Azure Migrate Discovery service endpoint
-|*.asm.privatelink.test.migration.windowsazure.com | Azure Migrate Assessment service endpoint
-|*.hub.privatelink.test.migration.windowsazure.com | Azure Migrate hub endpoint to receive data from other Microsoft or external [independent software vendor (ISV)](./migrate-services-overview.md#isv-integration) offerings
+|*.disc.privatelink.prod.migration.windowsazure.com | Azure Migrate Discovery service endpoint
+|*.asm.privatelink.prod.migration.windowsazure.com | Azure Migrate Assessment service endpoint
+|*.hub.privatelink.prod.migration.windowsazure.com | Azure Migrate hub endpoint to receive data from other Microsoft or external [independent software vendor (ISV)](./migrate-services-overview.md#isv-integration) offerings
+|*.privatelink.siterecovery.windowsazure.com | Azure Site Recovery service endpoint to orchestrate replications
|*.vault.azure.net | Key Vault endpoint
|*.blob.core.windows.net | Storage account endpoint for dependency and performance data
network-watcher Network Watcher Packet Capture Manage Portal Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-portal-vmss.md
+
+ Title: Manage packet captures in Virtual machine scale sets with Azure Network Watcher - Azure portal
+
+description: Learn how to manage the packet capture feature of Network Watcher in virtual machine scale set using the Azure portal.
+
+documentationcenter: na
+++
+ na
+ Last updated : 01/07/2021+++
+# Manage packet captures in Virtual machine scale sets with Azure Network Watcher using the portal
+
+Network Watcher packet capture allows you to create capture sessions to track traffic to and from virtual machine scale set instances. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, debugging client-server communication, and much more. Being able to remotely trigger packet captures eases the burden of running a packet capture manually on the desired instances, which saves valuable time.
+
+In this article, you learn to start, stop, download, and delete a packet capture.
+
+## Before you begin
+
+Packet capture requires the following outbound TCP connectivity:
+- to the chosen storage account over port 443
+- to 169.254.169.254 over port 80
+- to 168.63.129.16 over port 8037
+
+> [!NOTE]
+> The ports mentioned in the latter two cases above are common across all Network Watcher features that involve the Network Watcher extension and might occasionally change.
++
+If a network security group is associated with the network interface, or with the subnet that the network interface is in, ensure that rules exist to allow the previous ports. Similarly, adding user-defined traffic routes to your network may prevent connectivity to the previously mentioned IPs and ports. Ensure they're reachable.
+
+## Start a packet capture
+
+1. In your browser, navigate to the [Azure portal](https://portal.azure.com) and select **All services**, and then select **Network Watcher** in the **Networking section**.
+2. Select **Packet capture** under **Network diagnostic tools**. Any existing packet captures are listed, regardless of their status.
+3. Select **Add** to create a packet capture. You can select values for the following properties:
+ - **Subscription**: The subscription that the virtual machine scale set you want to create the packet capture for is in.
+ - **Resource group**: The resource group of the virtual machine scale set.
+ - **Target Type**: Choose **Virtual Machine Scale Set** from the drop-down.
+ - **Target Instance**: The specific instance(s) to run captures on. Choose **Select all** if you want to run captures on all instances.
+ - **Packet capture name**: The name is autopopulated and can be overwritten as needed.
+ - **Storage account or file**: Select **Storage account**, **File**, or both. A storage account is the recommended option. If you select **File**, the capture is written to a path within the virtual machine instance.
+ - **Storage accounts**: Select an existing storage account. This option is only available if you selected **Storage account**.
+ - **Local file path**: The local path on the virtual machine where the packet capture will be saved (valid only when *File* is selected). The path must be a valid path. If you're using a Linux virtual machine scale set, the path must start with */var/captures*.
+
+
+ > [!NOTE]
+ > Premium storage accounts are currently not supported for storing packet captures.
+
+ - **Maximum bytes per packet**: The number of bytes from each packet that are captured. If left blank, all bytes are captured.
+ - **Maximum bytes per session**: The total number of bytes that are captured. By default, the value is 1.07 GB.
+ - **Time limit (seconds)**: The time limit before the packet capture is stopped. The default is 18,000 seconds (5 hours).
+ - Filtering (Optional). Select **+ Add filter**
+ - **Protocol**: The protocol to filter for the packet capture. The available values are TCP, UDP, and Any.
+ - **Local IP address**: Filters the packet capture for packets where the local IP address matches this value.
+ - **Local port**: Filters the packet capture for packets where the local port matches this value.
+ - **Remote IP address**: Filters the packet capture for packets where the remote IP address matches this value.
+ - **Remote port**: Filters the packet capture for packets where the remote port matches this value.
+
+ > [!NOTE]
+ > Port and IP address values can be a single value or a range, such as 80-1024 for port. You can define as many filters as you need.
+
+4. Select **OK**.
+
+After the time limit set on the packet capture has expired, the packet capture is stopped, and can be reviewed. You can also manually stop a packet capture session.
+
+> [!NOTE]
+> The portal automatically:
+> * Creates a network watcher in the same region as the virtual machine scale set you selected, if the region doesn't already have one.
+> * Adds the *AzureNetworkWatcherExtension* Linux or Windows extension to the virtual machine scale set, if it's not already installed.
+
+## Delete a packet capture
+
+1. In the packet capture view, select **...** on the right-side of the packet capture, or right-click an existing packet capture, and select **Delete**.
+2. You're asked to confirm you want to delete the packet capture. Select **Yes**.
+
+> [!NOTE]
+> Deleting a packet capture does not delete the capture file in the storage account or on the virtual machine scale set instance(s).
+
+## Stop a packet capture
+
+In the packet capture view, select **...** on the right side of the packet capture, or right-click an existing packet capture, and select **Stop**.
+
+## Download a packet capture
+
+Once your packet capture session has completed, the capture file is uploaded to blob storage or to a local file on the virtual machine scale set instance. The storage location of the packet capture is defined during creation of the packet capture. A convenient tool to access capture files saved to a storage account is Microsoft Azure Storage Explorer, which you can [download](https://storageexplorer.com/).
+
+If a storage account is specified, packet capture files are saved to a storage account at the following location:
+
+```
+https://{storageAccountName}.blob.core.windows.net/network-watcher-logs/subscriptions/{subscriptionId}/resourcegroups/{storageAccountResourceGroup}/providers/microsoft.compute/virtualmachines/{VMName}/{year}/{month}/{day}/packetCapture_{creationTime}.cap
+```
+
+If you selected **File** when you created the capture, you can view or download the file from the path you configured on the virtual machine scale set instance.
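+
+If the capture file is in a storage account, a minimal PowerShell sketch like the following can download it (the resource group, account name, and blob path here are placeholders you'd replace with your own values):
+
+```powershell
+# Placeholder values - substitute your own storage account details and the
+# blob path reported for your capture session.
+$storageContext = (Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount").Context
+
+# Download the capture file from the network-watcher-logs container to the local disk.
+Get-AzStorageBlobContent -Context $storageContext `
+    -Container "network-watcher-logs" `
+    -Blob "<blob path from the location shown above>" `
+    -Destination ".\packetCapture.cap"
+```
+
+You can then open the downloaded `.cap` file in a tool such as Wireshark.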
+
+## Next steps
+
+- To determine whether specific traffic is allowed in or out of a virtual machine/ virtual machine scale set, see [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md).
network-watcher Network Watcher Packet Capture Manage Powershell Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell-vmss.md
+
+ Title: Manage packet captures in Virtual machine scale sets - Azure PowerShell
+
+description: This page explains how to manage the packet capture feature of Network Watcher in a virtual machine scale set using PowerShell
+
+documentationcenter: na
+++
+ na
+ Last updated : 01/07/2021++++
+# Manage packet captures in virtual machine scale sets with Azure Network Watcher using PowerShell
+
+> [!div class="op_single_selector"]
+> - [Azure portal](network-watcher-packet-capture-manage-portal-vmss.md)
+> - [Azure REST API](network-watcher-packet-capture-manage-rest-vmss.md)
+> - [PowerShell](network-watcher-packet-capture-manage-powershell-vmss.md)
++
+Network Watcher packet capture allows you to create capture sessions to track traffic to and from virtual machine scale set instances. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies, both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, and debugging client-server communication. Being able to remotely trigger packet captures eases the burden of running a packet capture manually on the desired virtual machine scale set instances, which saves valuable time.
+
+This article takes you through the different management tasks that are currently available for packet capture.
+
+- [**Start a packet capture**](#start-a-packet-capture)
+- [**Stop a packet capture**](#stop-a-packet-capture)
+- [**Delete a packet capture**](#delete-a-packet-capture)
+- [**Download a packet capture**](#download-a-packet-capture)
+++
+## Before you begin
+
+This article assumes you have the following resources:
+
+* An instance of Network Watcher in the region where you want to create a packet capture
+
+> [!IMPORTANT]
+> Packet capture requires a virtual machine scale set extension `AzureNetworkWatcherExtension`. To install the extension on a Windows VM, see [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md), and for a Linux VM, see [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md).
+
+## Install virtual machine scale set extension
+
+### Step 1
+
+Retrieve the virtual machine scale set to operate on:
+
+```powershell
+$vmss = Get-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet"
+```
+
+### Step 2
+
+Install the Network Watcher agent extension on the virtual machine scale set, then update the scale set and its instances:
++
+```powershell
+Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "AzureNetworkWatcherExtension" -Publisher "Microsoft.Azure.NetworkWatcher" -Type "NetworkWatcherAgentWindows" -TypeHandlerVersion "1.4" -AutoUpgradeMinorVersion $True
+
+Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
+Update-AzVmssInstance -ResourceGroupName $vmss.ResourceGroupName -VMScaleSetName $vmss.Name -InstanceId 0
+```
+
+> [!NOTE]
+> The `Update-AzVmss` cmdlet may take several minutes to complete.
+
+### Step 3
+
+To verify that the agent is installed, retrieve the scale set again:
++
+```powershell
+Get-AzVmss -ResourceGroupName $vmss.ResourceGroupName -VMScaleSetName $vmss.Name
+```
++
+## Start a packet capture
+
+Once the preceding steps are complete, the packet capture agent is installed on the virtual machine scale set.
+
+### Step 1
+
+The next step is to retrieve the Network Watcher instance. This variable is passed to the `New-AzNetworkWatcherPacketCaptureV2` cmdlet in step 5.
+
+```powershell
+$networkWatcher = Get-AzNetworkWatcher | Where {$_.Location -eq "westcentralus" }
+```
+
+### Step 2
+
+Retrieve a storage account. This storage account is used to store the packet capture file.
+
+```powershell
+$storageAccount = Get-AzStorageAccount -ResourceGroupName testrg -Name testrgsa123
+```
+
+### Step 3
+
+Filters can be used to limit the data that is stored by the packet capture. The following example sets up two filters. One filter collects outgoing TCP traffic only from local IP 10.0.0.3 to destination ports 20, 80 and 443. The second filter collects only UDP traffic.
+
+```powershell
+$filter1 = New-AzPacketCaptureFilterConfig -Protocol TCP -RemoteIPAddress "1.1.1.1-255.255.255.255" -LocalIPAddress "10.0.0.3" -LocalPort "1-65535" -RemotePort "20;80;443"
+$filter2 = New-AzPacketCaptureFilterConfig -Protocol UDP
+```
+
+> [!NOTE]
+> Multiple filters can be defined for a packet capture.
+
+### Step 4
+
+Create a scope for the packet capture. The scope specifies the scale set instance IDs to include:
+
+```powershell
+$s1 = New-AzPacketCaptureScopeConfig -Include "0", "1"
+```
++
+### Step 5
+
+Run the `New-AzNetworkWatcherPacketCaptureV2` cmdlet to start the packet capture process, passing the required values retrieved in the preceding steps.
+```powershell
+$pcName = "PacketCaptureTest"
+New-AzNetworkWatcherPacketCaptureV2 -NetworkWatcher $networkWatcher -PacketCaptureName $pcName -TargetId $vmss.Id -TargetType "azurevmss" -StorageAccountId $storageAccount.Id -Filter $filter1, $filter2
+```
+## Get a packet capture
+
+Running the `Get-AzNetworkWatcherPacketCapture` cmdlet retrieves the status of a currently running or completed packet capture.
+
+```powershell
+Get-AzNetworkWatcherPacketCapture -NetworkWatcher $networkWatcher -PacketCaptureName "PacketCaptureTest"
+```
+
+The following example shows the output of the `Get-AzNetworkWatcherPacketCapture` cmdlet after the capture is complete. The PacketCaptureStatus value is Stopped, with a StopReason of TimeExceeded, which indicates that the packet capture succeeded and ran for its full duration.
+```
+Name : PacketCaptureTest
+Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatcher
+ s/NetworkWatcher_westcentralus/packetCaptures/PacketCaptureTest
+Etag : W/"4b9a81ed-dc63-472e-869e-96d7166ccb9b"
+ProvisioningState : Succeeded
+Target : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Compute/virtualMachines/testvm1
+BytesToCapturePerPacket : 0
+TotalBytesPerSession : 1073741824
+TimeLimitInSeconds : 60
+StorageLocation : {
+ "StorageId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Storage/storageA
+ ccounts/examplestorage",
+ "StoragePath": "https://examplestorage.blob.core.windows.net/network-watcher-logs/subscriptions/00000000-0000-0000-0000-00000
+ 0000000/resourcegroups/testrg/providers/microsoft.compute/virtualmachines/testvm1/2017/02/01/packetcapture_22_42_48_238.cap"
+ }
+Filters : [
+ {
+ "Protocol": "TCP",
+ "RemoteIPAddress": "1.1.1.1-255.255.255.255",
+ "LocalIPAddress": "10.0.0.3",
+ "LocalPort": "1-65535",
+ "RemotePort": "20;80;443"
+ },
+ {
+ "Protocol": "UDP",
+ "RemoteIPAddress": "",
+ "LocalIPAddress": "",
+ "LocalPort": "",
+ "RemotePort": ""
+ }
+ ]
+CaptureStartTime : 2/1/2017 10:43:01 PM
+PacketCaptureStatus : Stopped
+StopReason : TimeExceeded
+PacketCaptureError : []
+```
+
+## Stop a packet capture
+
+If a capture session is in progress, running the `Stop-AzNetworkWatcherPacketCapture` cmdlet stops it.
+
+```powershell
+Stop-AzNetworkWatcherPacketCapture -NetworkWatcher $networkWatcher -PacketCaptureName "PacketCaptureTest"
+```
+
+> [!NOTE]
+> The cmdlet returns no response, whether it's run on a currently running capture session or on a session that has already stopped.
+
+## Delete a packet capture
+
+```powershell
+Remove-AzNetworkWatcherPacketCapture -NetworkWatcher $networkWatcher -PacketCaptureName "PacketCaptureTest"
+```
+
+> [!NOTE]
+> Deleting a packet capture does not delete the file in the storage account.
+
+## Download a packet capture
+
+Once your packet capture session has completed, the capture file can be uploaded to blob storage or saved to a local file on the scale set instance(s). The storage location of the packet capture is defined when the session is created. A convenient tool to access capture files saved to a storage account is Microsoft Azure Storage Explorer, which you can [download](https://storageexplorer.com/).
+
+If a storage account is specified, packet capture files are saved to a storage account at the following location:
+
+If multiple instances are selected:
+```
+https://{storageAccountName}.blob.core.windows.net/network-watcher-logs/subscriptions/{subscriptionId}/resourcegroups/{storageAccountResourceGroup}/providers/microsoft.compute/virtualmachinescalesets/{VMSSName}/{year}/{month}/{day}/packetCapture_{creationTime}
+```
+If a single instance is selected:
+```
+https://{storageAccountName}.blob.core.windows.net/network-watcher-logs/subscriptions/{subscriptionId}/resourcegroups/{storageAccountResourceGroup}/providers/microsoft.compute/virtualmachinescalesets/{VMSSName}/virtualMachines/{instance}/{year}/{month}/{day}/packetCapture_{creationTime}.cap
+```
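+
+As a sketch, you can list the capture files saved for a scale set with PowerShell (the resource group and storage account names are placeholders matching the earlier examples):
+
+```powershell
+$storageContext = (Get-AzStorageAccount -ResourceGroupName "testrg" -Name "testrgsa123").Context
+
+# List all capture blobs saved for virtual machine scale sets in the
+# network-watcher-logs container.
+Get-AzStorageBlob -Context $storageContext -Container "network-watcher-logs" |
+    Where-Object { $_.Name -like "*virtualmachinescalesets*" } |
+    Select-Object Name, LastModified
+```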
+++
+## Next steps
+
+To determine whether specific traffic is allowed in or out of your virtual machine scale set, see [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md).
+
+<!-- Image references -->
network-watcher Network Watcher Packet Capture Manage Rest Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest-vmss.md
+
+ Title: Manage packet captures in Virtual machine scale sets with Azure Network Watcher- REST API | Microsoft Docs
+description: This page explains how to manage the packet capture feature of Network Watcher in a virtual machine scale set using the Azure REST API
+
+documentationcenter: na
+++
+ na
+ Last updated : 01/07/2021+++++
+# Manage packet captures in virtual machine scale sets with Azure Network Watcher using the Azure REST API
+
+> [!div class="op_single_selector"]
+> - [Azure portal](network-watcher-packet-capture-manage-portal-vmss.md)
+> - [PowerShell](network-watcher-packet-capture-manage-powershell-vmss.md)
+> - [Azure REST API](network-watcher-packet-capture-manage-rest-vmss.md)
+
+Network Watcher packet capture allows you to create capture sessions to track traffic to and from virtual machine scale set instances. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies, both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, and debugging client-server communication. Being able to remotely trigger packet captures eases the burden of running a packet capture manually on the desired virtual machine scale set instances, which saves valuable time.
+
+This article takes you through the different management tasks that are currently available for packet capture.
+
+- [**Get a packet capture**](#get-a-packet-capture)
+- [**List all packet captures**](#list-all-packet-captures)
+- [**Query the status of a packet capture**](#query-packet-capture-status)
+- [**Start a packet capture**](#start-packet-capture)
+- [**Stop a packet capture**](#stop-packet-capture)
+- [**Delete a packet capture**](#delete-packet-capture)
+++
+## Before you begin
+
+ARMClient is used to call the REST API from PowerShell. You can install ARMClient from Chocolatey: [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient).
+
+This scenario assumes you've already followed the steps in [Create a Network Watcher](network-watcher-create.md) to create a Network Watcher.
+
+> [!IMPORTANT]
+> Packet capture requires a virtual machine scale set extension `AzureNetworkWatcherExtension`. To install the extension on a Windows VM, see [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md), and for a Linux VM, see [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md).
+
+## Log in with ARMClient
+
+```powershell
+armclient login
+```
+
+## Retrieve a virtual machine scale set
+
+Run the following commands to return a virtual machine scale set. This information is needed for starting a packet capture.
+
+The following code needs variables:
+
+- **subscriptionId** - The subscription id can also be retrieved with the **Get-AzSubscription** cmdlet.
+- **resourceGroupName** - The name of a resource group that contains virtual machine scale sets.
+
+```powershell
+$subscriptionId = "<subscription id>"
+$resourceGroupName = "<resource group name>"
+
+# Get a list of all virtual machine scale sets in a resource group
+armclient get "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets?api-version=2022-03-01"
+
+# Display information about a virtual machine scale set
+armclient get "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}?api-version=2022-03-01"
+```
++
+## Get a packet capture
+
+The following example gets the status of a single packet capture
+
+```powershell
+$subscriptionId = "<subscription id>"
+$resourceGroupName = "NetworkWatcherRG"
+$networkWatcherName = "NetworkWatcher_westcentralus"
+$packetCaptureName = "TestPacketCapture5"
+armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}/querystatus?api-version=2016-12-01"
+```
+
+The following responses are examples of a typical response returned when querying the status of a packet capture.
+
+```json
+{
+ "name": "TestPacketCapture5",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture6",
+ "captureStartTime": "2016-12-06T17:20:01.5671279Z",
+ "packetCaptureStatus": "Running",
+ "packetCaptureError": []
+}
+```
+
+```json
+{
+ "name": "TestPacketCapture5",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture6",
+ "captureStartTime": "2016-12-06T17:20:01.5671279Z",
+ "packetCaptureStatus": "Stopped",
+ "stopReason": "TimeExceeded",
+ "packetCaptureError": []
+}
+```
+
+## List all packet captures
+
+The following example gets all packet capture sessions in a region.
+
+```powershell
+$subscriptionId = "<subscription id>"
+$resourceGroupName = "NetworkWatcherRG"
+$networkWatcherName = "NetworkWatcher_westcentralus"
+armclient get "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures?api-version=2016-12-01"
+```
+
+The following response is an example of a typical response returned when getting all packet captures
+
+```json
+{
+ "value": [
+ {
+ "name": "TestPacketCapture6",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture6",
+ "etag": "W/\"091762e1-c23f-448b-89d5-37cf56e4c045\"",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "target": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Compute/virtualMachines/ContosoVM",
+ "bytesToCapturePerPacket": 0,
+ "totalBytesPerSession": 1073741824,
+ "timeLimitInSeconds": 60,
+ "storageLocation": {
+ "storageId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Storage/storageAccounts/contosoexamplergdiag374",
+ "storagePath": "https://contosoexamplergdiag374.blob.core.windows.net/network-watcher-logs/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/contosoexamplerg/providers/microsoft.compute/virtualmachines/contosovm/2016/12/06/packetcap
+ture_17_19_53_056.cap",
+ "filePath": "c:\\temp\\packetcapture.cap"
+ },
+ "filters": [
+ {
+ "protocol": "Any",
+ "localIPAddress": "",
+ "localPort": "",
+ "remoteIPAddress": "",
+ "remotePort": ""
+ }
+ ]
+ }
+ },
+ {
+ "name": "TestPacketCapture7",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture7",
+ "etag": "W/\"091762e1-c23f-448b-89d5-37cf56e4c045\"",
+ "properties": {
+ "provisioningState": "Failed",
+ "target": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Compute/virtualMachines/ContosoVM",
+ "bytesToCapturePerPacket": 0,
+ "totalBytesPerSession": 1073741824,
+ "timeLimitInSeconds": 60,
+ "storageLocation": {
+ "storageId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Storage/storageAccounts/contosoexamplergdiag374",
+ "storagePath": "https://contosoexamplergdiag374.blob.core.windows.net/network-watcher-logs/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/contosoexamplerg/providers/microsoft.compute/virtualmachines/contosovm/2016/12/06/packetcap
+ture_17_23_15_364.cap",
+ "filePath": "c:\\temp\\packetcapture.cap"
+ },
+ "filters": [
+ {
+ "protocol": "Any",
+ "localIPAddress": "",
+ "localPort": "",
+ "remoteIPAddress": "",
+ "remotePort": ""
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+## Query packet capture status
+
+The following example queries the status of a single packet capture.
+
+```powershell
+$subscriptionId = "<subscription id>"
+$resourceGroupName = "NetworkWatcherRG"
+$networkWatcherName = "NetworkWatcher_westcentralus"
+$packetCaptureName = "TestPacketCapture5"
+armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}/querystatus?api-version=2016-12-01"
+```
+
+The following response is an example of a typical response returned when querying the status of a packet capture.
+
+```json
+{
+ "name": "vm1PacketCapture",
+ "id": "/subscriptions/{guid}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/packetCaptures/{packetCaptureName}",
+ "captureStartTime" : "9/7/2016 12:35:24PM",
+ "packetCaptureStatus" : "Stopped",
+ "stopReason" : "TimeExceeded",
+ "packetCaptureError" : [ ]
+}
+```
+
+## Start packet capture
+
+The following example creates a packet capture on a virtual machine scale set. The example is parameterized to allow for flexibility.
+
+```powershell
+$subscriptionId = '<subscription id>'
+$resourceGroupName = "NetworkWatcherRG"
+$networkWatcherName = "NetworkWatcher_westcentralus"
+$packetCaptureName = "TestPacketCapture5"
+$storageaccountname = "contosoexamplergdiag374"
+$vmssName = "ContosoVMSS"
+$targetType = "AzureVMSS"
+$bytestoCaptureperPacket = "0"
+$bytesPerSession = "1073741824"
+$captureTimeinSeconds = "60"
+$localIP = ""
+$localPort = "" # Examples are: 80, or 80-120
+$remoteIP = ""
+$remotePort = "" # Examples are: 80, or 80-120
+$protocol = "" # Valid values are TCP, UDP and Any.
+$targetUri = "" # Example: /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssName
+$storageId = "" #Example "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Storage/storageAccounts/contosoexamplergdiag374"
+$storagePath = "" # Example: "https://mytestaccountname.blob.core.windows.net/capture/vm1Capture.cap"
+$localFilePath = "c:\\temp\\packetcapture.cap" # Example: "d:\capture\vm1Capture.cap"
+
+$requestBody = @"
+{
+ 'properties': {
+ 'target': '${targetUri}',
+ 'targetType': '${targetType}',
+ 'bytesToCapturePerPacket': '${bytestoCaptureperPacket}',
+ 'totalBytesPerSession': '${bytesPerSession}',
+ 'scope': {
+ 'include': [ "1", "2" ],
+ 'exclude': [ "3", "4" ]
+ },
+ 'timeLimitinSeconds': '${captureTimeinSeconds}',
+ 'storageLocation': {
+ 'storageId': '${storageId}',
+ 'storagePath': '${storagePath}',
+ 'filePath': '${localFilePath}'
+ },
+ 'filters': [
+ {
+ 'protocol': '${protocol}',
+ 'localIPAddress': '${localIP}',
+ 'localPort': '${localPort}',
+ 'remoteIPAddress': '${remoteIP}',
+ 'remotePort': '${remotePort}'
+ }
+ ]
+ }
+}
+"@
+
+armclient PUT "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}?api-version=2016-07-01" $requestbody
+
+```
+
+## Stop packet capture
+
+The following example stops a packet capture on a virtual machine scale set. The example is parameterized to allow for flexibility.
+
+```powershell
+$subscriptionId = '<subscription id>'
+$resourceGroupName = "NetworkWatcherRG"
+$networkWatcherName = "NetworkWatcher_westcentralus"
+$packetCaptureName = "TestPacketCapture5"
+armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}/stop?api-version=2016-12-01"
+```
+
+## Delete packet capture
+
+The following example deletes a packet capture on a virtual machine scale set. The example is parameterized to allow for flexibility.
+
+```powershell
+$subscriptionId = '<subscription id>'
+$resourceGroupName = "NetworkWatcherRG"
+$networkWatcherName = "NetworkWatcher_westcentralus"
+$packetCaptureName = "TestPacketCapture5"
+
+armclient delete "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}?api-version=2016-12-01"
+```
+
+> [!NOTE]
+> Deleting a packet capture does not delete the file in the storage account
+
+## Next steps
+
+For instructions on downloading files from Azure storage accounts, refer to [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md). Another tool that can be used is Storage Explorer. For more information, see [Storage Explorer](https://storageexplorer.com/).
network-watcher Network Watcher Packet Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-overview.md
Title: Introduction to Packet capture in Azure Network Watcher | Microsoft Docs
-description: This page provides an overview of the Network Watcher packet capture capability
+description: This page provides an overview of the Network Watcher packet capture capability
documentationcenter: na
# Introduction to variable packet capture in Azure Network Watcher
+>[!IMPORTANT]
+> Packet capture is now also available for virtual machine scale sets. To learn more, see [Manage packet capture in the Azure portal for virtual machine scale sets](network-watcher-packet-capture-manage-portal-vmss.md).
+ Network Watcher variable packet capture allows you to create packet capture sessions to track traffic to and from a virtual machine. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, to debug client-server communications and much more.
-Packet capture is a virtual machine extension that is remotely started through Network Watcher. This capability eases the burden of running a packet capture manually on the desired virtual machine, which saves valuable time. Packet capture can be triggered through the portal, PowerShell, CLI, or REST API. One example of how packet capture can be triggered is with Virtual Machine alerts. Filters are provided for the capture session to ensure you capture traffic you want to monitor. Filters are based on 5-tuple (protocol, local IP address, remote IP address, local port, and remote port) information. The captured data is stored in the local disk or a storage blob.
+Packet capture is an extension that is remotely started through Network Watcher. This capability eases the burden of running a packet capture manually on the desired virtual machine or virtual machine scale set instance, which saves valuable time. Packet capture can be triggered through the portal, PowerShell, CLI, or REST API. One example of how packet capture can be triggered is with Virtual Machine alerts. Filters are provided for the capture session to ensure you capture only the traffic you want to monitor. Filters are based on 5-tuple (protocol, local IP address, remote IP address, local port, and remote port) information. The captured data is stored on the local disk or in a storage blob.
> [!IMPORTANT] > Packet capture requires a virtual machine extension `AzureNetworkWatcherExtension`. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md).
-To reduce the information you capture to only the information you want, the following options are available for a packet capture session:
+To reduce the captured information to only the information you want, the following options are available for a packet capture session:
**Capture configuration**
To reduce the information you capture to only the information you want, the foll
## Considerations
-There is a limit of 10,000 parallel packet capture sessions per region per subscription. This limit applies only to the sessions and does not apply to the saved packet capture files either locally on the VM or in a storage account. See the [Network Watcher service limits page](../azure-resource-manager/management/azure-subscription-service-limits.md#network-watcher-limits) for a full list of limits.
+There's a limit of 10,000 parallel packet capture sessions per region per subscription. This limit applies only to the sessions and doesn't apply to the saved packet capture files either locally on the VM or in a storage account. See the [Network Watcher service limits page](../azure-resource-manager/management/azure-subscription-service-limits.md#network-watcher-limits) for a full list of limits.
### Next steps
-Learn how you can manage packet captures through the portal by visiting [Manage packet capture in the Azure portal](network-watcher-packet-capture-manage-portal.md) or with PowerShell by visiting [Manage Packet Capture with PowerShell](network-watcher-packet-capture-manage-powershell.md).
+Learn how you can manage packet captures through the portal by visiting [Manage packet capture in the Azure portal for VMs](network-watcher-packet-capture-manage-portal.md) and [Manage packet capture in the Azure portal for virtual machine scale sets](network-watcher-packet-capture-manage-portal-vmss.md), or with PowerShell by visiting [Manage Packet Capture with PowerShell for VMs](network-watcher-packet-capture-manage-powershell.md) and [Manage Packet Capture with PowerShell for virtual machine scale sets](network-watcher-packet-capture-manage-powershell-vmss.md).
Learn how to create proactive packet captures based on virtual machine alerts by visiting [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md)
openshift Howto Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-custom-dns.md
The output:
machineconfig.machineconfiguration.openshift.io "25-machineconfig-master-reboot" deleted ```
-Wait for all of the master nodes to reboot and return to a Ready state.
+Wait for all of the master nodes to reboot and return to a Ready state.
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
Title: Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster
-description: Deploy a Java application with Red Hat JBoss Enterprise Application Platform on an Azure Red Hat OpenShift 4 cluster.
+ Title: Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift (ARO) 4 cluster
+description: Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift (ARO) 4 cluster.
Previously updated : 01/11/2022 Last updated : 06/06/2022 keywords: java, jakartaee, microprofile, EAP, JBoss EAP, ARO, OpenShift, JBoss Enterprise Application Platform
-# Deploy a Java application with Red Hat JBoss Enterprise Application Platform on an Azure Red Hat OpenShift 4 cluster
+# Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift (ARO) 4 cluster
-This article shows you how to deploy a Red Hat JBoss Enterprise Application Platform (JBoss EAP) app to an Azure Red Hat OpenShift (ARO) 4 cluster. The application is a Jakarta EE application that uses Microsoft SQL server database. The app is deployed using [JBoss EAP Helm Charts](https://jbossas.github.io/eap-charts).
+This article shows you how to deploy a Red Hat JBoss Enterprise Application Platform (JBoss EAP) app to an Azure Red Hat OpenShift (ARO) 4 cluster. The application is a Jakarta EE application backed by an SQL database. The app is deployed using [JBoss EAP Helm Charts](https://jbossas.github.io/eap-charts).
The guide takes a traditional Jakarta EE application and walks you through the process of migrating it to a container orchestrator such as Azure Red Hat OpenShift. First, it describes how you can package your application as a [Bootable JAR](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/the-bootable-jar_default) to run it locally. Finally, it shows you how you can deploy on OpenShift with three replicas of the JBoss EAP application by using Helm Charts.
The application is a stateful application that stores information in an HTTP Ses
* Jakarta Persistence * MicroProfile Health
-> [!IMPORTANT]
-> This article assumes you have access to a Microsoft SQL Server instance accessible to your ARO cluster. Please review the [support policy for SQL Server Containers](https://support.microsoft.com/help/4047326/support-policy-for-microsoft-sql-server) to ensure that you are running on a supported configuration.
- > [!IMPORTANT] > This article deploys an application by using JBoss EAP Helm Charts. At the time of writing, this feature is still offered as a [Technology Preview](https://access.redhat.com/articles/6290611). Before choosing to deploy applications with JBoss EAP Helm Charts on production environments, ensure that this feature is a supported feature for your JBoss EAP/XP product version.
The application is a stateful application that stores information in an HTTP Ses
[!INCLUDE [aro-quota](includes/aro-quota.md)]
-1. Prepare a local machine with a Unix-like operating system that is supported by the various products installed.
+1. Prepare a local machine with a Unix-like operating system that is supported by the various products installed (such as [WSL](/windows/wsl/) on Windows).
1. Install a Java SE implementation (for example, [Oracle JDK 11](https://www.oracle.com/java/technologies/downloads/#java11)). 1. Install [Maven](https://maven.apache.org/download.cgi) 3.6.3 or higher. 1. Install [Azure CLI](/cli/azure/install-azure-cli) 2.29.2 or later.
The application is a stateful application that stores information in an HTTP Ses
1. Execute the following command to create the OpenShift project for this demo application: ```bash
- $ oc new-project eap-demo
- Now using project "eap-demo" on server "https://api.zhbq0jig.northeurope.aroapp.io:6443".
-
- You can add applications to this project with the 'new-app' command. For example, try:
-
- oc new-app rails-postgresql-example
-
- to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
-
- kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname
+ oc new-project eap-demo
``` 1. Execute the following command to add the view role to the default service account. This role is needed so the application can discover other pods and form a cluster with them: ```bash
- $ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
- clusterrole.rbac.authorization.k8s.io/view added: "system:serviceaccount:eap-demo:default"
+ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
``` ## Prepare the application
However, when you are targeting OpenShift, you might want to trim the capabiliti
Navigate to your demo application local repository and change the branch to `bootable-jar`: ```bash
-$ git checkout bootable-jar
-Switched to branch 'bootable-jar'
-$
+git checkout bootable-jar
```
-Let's do a quick review about what we have changed:
+Let's do a quick review of what we changed in this branch:
* We have added the `wildfly-jar-maven` plugin to provision the server and the application in a single executable JAR file. The OpenShift deployment unit is our server with our application. * On the maven plugin, we have specified a set of Galleon layers. This configuration allows us to trim the server capabilities to only what we need. For complete documentation on Galleon, see [the WildFly documentation](https://docs.wildfly.org/galleon/).
Let's do a quick review about what we have changed:
## Run the application locally
-Before deploying the application on OpenShift, we are going to run it locally to verify how it works. The following steps assume you have a Microsoft SQL Server running and available from your local environment. This database must be created using the following information:
-
-* Database name: `todos_db`
-* SA password: `Passw0rd!`
+Before deploying the application on OpenShift, we are going to run it locally to verify how it works. The following steps assume you have a Microsoft SQL Server running and available from your local environment.
To create the database, follow the steps in [Quickstart: Create an Azure SQL Database single database](/azure/azure-sql/database/single-database-create-quickstart?tabs=azure-portal), but use the following substitutions. * For **Database name** use `todos_db`.
+* For **Server admin login** use `azureuser`.
* For **Password** use `Passw0rd!`.
+* In the **Firewall rules** section, toggle the **Allow Azure services and resources to access this server** to **Yes**.
+
+For all of the other settings, you can safely use the values from the linked article.
On the **Additional settings** page, you don't have to choose the option to pre-populate the database with sample data, but there is no harm in doing so.
-Once the database has been created with the above database name and password, obtain the value for the `MSSQLSERVER_HOST` from the overview page for the database resource in the portal. Hover the mouse over the value of the **Server name** field and select the copy icon that appears beside the value. Save this aside for use later.
+Once the database has been created with the above database name, Server admin login and password, get the value for the server name from the overview page for the newly created database resource in the portal. Hover the mouse over the value of the **Server name** field and select the copy icon that appears beside the value. Save this aside for use later (we will set a variable named `MSSQLSERVER_HOST` to this value).
+
+> [!NOTE]
+> To keep monetary costs low, the Quickstart directs the reader to select the serverless compute tier. This tier scales to zero when there is no activity. When this happens, the database is not immediately responsive. If, at any point when executing the steps in this article, you observe database problems, consider disabling Auto-pause. To learn how, search for Auto-pause in [Azure SQL Database serverless](/azure/azure-sql/database/serverless-tier-overview).
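As noted above, the server name will be used later through the `MSSQLSERVER_HOST` variable. If you work in a shell, you can stash the value now; a minimal sketch (the host value below is a hypothetical placeholder — substitute the server name you copied from the portal):

```shell
# Hypothetical server name; replace with the value copied from the portal overview page
export MSSQLSERVER_HOST=contoso-sqlserver.database.windows.net
echo "Database host: ${MSSQLSERVER_HOST}"
```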
Follow the next steps to build and run the application locally. 1. Build the Bootable JAR. When we are building the Bootable JAR, we need to specify the database driver version we want to use: ```bash
- $ MSSQLSERVER_DRIVER_VERSION=7.4.1.jre11 \
+ export MSSQLSERVER_DRIVER_VERSION=7.4.1.jre11
mvn clean package ```
-1. Launch the Bootable JAR by using the following command. When we are launching the application, we need to pass the required environment variables to configure the data source:
+1. Launch the Bootable JAR by using the following commands.
+
+   You must ensure that the remote MSSQL database permits network traffic from the host on which this server is running. Because you selected **Add current client IP address** when performing the steps in [Quickstart: Create an Azure SQL Database single database](/azure/azure-sql/database/single-database-create-quickstart), the network traffic should be permitted if the server is running on the same host from which your browser connects to the Azure portal. If the server is running on a different host, refer to [Use the Azure portal to manage server-level IP firewall rules](/azure/azure-sql/database/firewall-configure?view=azuresql&preserve-view=true#use-the-azure-portal-to-manage-server-level-ip-firewall-rules).
+
+ When we are launching the application, we need to pass the required environment variables to configure the data source:
```bash
- $ MSSQLSERVER_USER=SA \
- MSSQLSERVER_PASSWORD=Passw0rd! \
- MSSQLSERVER_JNDI=java:/comp/env/jdbc/mssqlds \
- MSSQLSERVER_DATABASE=todos_db \
- MSSQLSERVER_HOST=<server name saved aside earlier> \
- MSSQLSERVER_PORT=1433 \
+ export MSSQLSERVER_USER=azureuser
+ export MSSQLSERVER_PASSWORD='Passw0rd!'
+ export MSSQLSERVER_JNDI=java:/comp/env/jdbc/mssqlds
+ export MSSQLSERVER_DATABASE=todos_db
+ export MSSQLSERVER_HOST=<server name saved aside earlier>
+ export MSSQLSERVER_PORT=1433
mvn wildfly-jar:run ```
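Under the hood, these variables describe a standard SQL Server JDBC connection. As a rough sketch of how the pieces fit together (illustrative only — not necessarily the exact string the feature pack generates):

```shell
# Assemble a JDBC-style SQL Server URL from the same variables (hypothetical host value)
MSSQLSERVER_HOST=contoso-sqlserver.database.windows.net
MSSQLSERVER_PORT=1433
MSSQLSERVER_DATABASE=todos_db
echo "jdbc:sqlserver://${MSSQLSERVER_HOST}:${MSSQLSERVER_PORT};databaseName=${MSSQLSERVER_DATABASE}"
# → jdbc:sqlserver://contoso-sqlserver.database.windows.net:1433;databaseName=todos_db
```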
- Check the [Galleon Feature Pack for integrating datasources](https://github.com/jbossas/eap-datasources-galleon-pack/blob/main/doc/mssqlserver/README.md) documentation to get a complete list of available environment variables. For details on the concept of feature-pack, see [the WildFly documentation](https://docs.wildfly.org/galleon/#_feature_packs).
+ If you want to learn more about the underlying runtime used by this demo, the [Galleon Feature Pack for integrating datasources](https://github.com/jbossas/eap-datasources-galleon-pack/blob/main/doc/mssqlserver/README.md) documentation has a complete list of available environment variables. For details on the concept of feature-pack, see [the WildFly documentation](https://docs.wildfly.org/galleon/#_feature_packs).
+
+ If you receive an error with text similar to the following:
+
+ ```bash
+ Cannot open server '<your prefix>mysqlserver' requested by the login. Client with IP address 'XXX.XXX.XXX.XXX' is not allowed to access the server.
+ ```
+
+   This error means the earlier steps to permit network traffic were ineffective. Ensure the IP address from the error message is included in the firewall rules.
+
+   If you receive a message with text similar to the following:
+
+ ```bash
+ Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: There is already an object named 'TODOS' in the database.
+ ```
+
+   This message indicates the sample data is already in the database; you can safely ignore it.
1. (Optional) If you want to verify the clustering capabilities, you can also launch more instances of the same application by passing to the Bootable JAR the `jboss.node.name` argument and, to avoid conflicts with the port numbers, shifting the port numbers by using `jboss.socket.binding.port-offset`. For example, to launch a second instance that will represent a new pod on OpenShift, you can execute the following command in a new terminal window: ```bash
- $ MSSQLSERVER_USER=SA \
- MSSQLSERVER_PASSWORD=Passw0rd! \
- MSSQLSERVER_JNDI=java:/comp/env/jdbc/mssqlds \
- MSSQLSERVER_DATABASE=todos_db \
- MSSQLSERVER_HOST=<server name saved aside earlier> \
- MSSQLSERVER_PORT=1433 \
+ export MSSQLSERVER_USER=azureuser
+ export MSSQLSERVER_PASSWORD='Passw0rd!'
+ export MSSQLSERVER_JNDI=java:/comp/env/jdbc/mssqlds
+ export MSSQLSERVER_DATABASE=todos_db
+ export MSSQLSERVER_HOST=<server name saved aside earlier>
+ export MSSQLSERVER_PORT=1433
mvn wildfly-jar:run -Dwildfly.bootable.arguments="-Djboss.node.name=node2 -Djboss.socket.binding.port-offset=1000" ```
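The `jboss.socket.binding.port-offset` argument shifts every socket binding by the same amount, which is why the second instance is reachable on port 9080. A quick sketch of the arithmetic:

```shell
# -Djboss.socket.binding.port-offset=1000 shifts the defaults (8080 HTTP, 9990 management)
offset=1000
echo "HTTP port: $((8080 + offset))"
echo "Management port: $((9990 + offset))"
```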
Follow the next steps to build and run the application locally.
``` > [!NOTE]
- > By default the Bootable JAR configures the JGroups subsystem to use the UDP protocol and sends messages to discover other cluster members to the 230.0.0.4 multicast address. To properly verify the clustering capabilities on your local machine, your Operating System should be capable of sending and receiving multicast datagrams and route them to the 230.0.0.4 IP through your ethernet interface. If you see warnings related to the cluster on the server logs, check your network configuration and verify whether is working with the multicast address.
+ > By default the Bootable JAR configures the JGroups subsystem to use the UDP protocol and sends messages to discover other cluster members to the 230.0.0.4 multicast address. To properly verify the clustering capabilities on your local machine, your Operating System should be capable of sending and receiving multicast datagrams and route them to the 230.0.0.4 IP through your ethernet interface. If you see warnings related to the cluster on the server logs, check your network configuration and verify it supports multicast on that address.
1. Open `http://localhost:8080/` in your browser to visit the application home page. If you have created more instances, you can access them by shifting the port number, for example `http://localhost:9080/`. The application will look similar to the following image: :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/todo-demo-application.png" alt-text="Screenshot of ToDo EAP demo Application.":::
-1. Check the application health endpoints (live and ready). These endpoints will be used by OpenShift to verify when your pod is live and ready to receive user requests:
+1. Check the liveness and readiness probes for the application. These endpoints will be used by OpenShift to verify when your pod is live and ready to receive user requests:
- ```bash
- $ curl http://localhost:9990/health/live
- {"status":"UP","checks":[{"name":"SuccessfulCheck","status":"UP"}]}
+ To check the status of liveness, run:
- $ curl http://localhost:9990/health/ready
+ ```bash
+ curl http://localhost:9990/health/live
+ ```
+
+ You should see this output:
+
+ ```json
+ {"status":"UP","checks":[{"name":"SuccessfulCheck","status":"UP"}]}
+ ```
+
+   To check the status of readiness, run:
+
+ ```bash
+ curl http://localhost:9990/health/ready
+ ```
+
+ You should see this output:
+
+ ```json
{"status":"UP","checks":[{"name":"deployments-status","status":"UP","data":{"todo-list.war":"OK"}},{"name":"server-state","status":"UP","data":{"value":"running"}},{"name":"boot-errors","status":"UP"},{"name":"DBConnectionHealthCheck","status":"UP"}]}
- ```
+ ```
1. Press **Control-C** to stop the application. ## Deploy to OpenShift
-To deploy the application, we are going to use the JBoss EAP Helm Charts already available in ARO. We also need to supply the desired configuration, for example, the database user, the database password, the driver version we want to use, and the connection information used by the data source. The following steps assume you have a MicrosoftSQL database server running and exposed by an OpenShift service, and you have stored the database user name, password and database name in an [OpenShift Secret object](https://docs.openshift.com/container-platform/4.8/nodes/pods/nodes-pods-secrets.html#nodes-pods-secrets-about_nodes-pods-secrets) under the following name `mssqlserver-secret`.
-
-> [!NOTE]
-> You can also use the [JBoss EAP Operator](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html/getting_started_with_jboss_eap_for_openshift_container_platform/eap-operator-for-automating-application-deployment-on-openshift_default) to deploy this example, however, notice that the JBoss EAP Operator will deploy the application as `StatefulSets`. Use the JBoss EAP Operator if your application requires one or more one of the following.
->
-> * Stable, unique network identifiers.
-> * Stable, persistent storage.
-> * Ordered, graceful deployment and scaling.
-> * Ordered, automated rolling updates.
-> * Transaction recovery facility when a pod is scaled down or crashes.
+To deploy the application, we are going to use the JBoss EAP Helm Charts already available in ARO. We also need to supply the desired configuration, for example, the database user, the database password, the driver version we want to use, and the connection information used by the data source. The following steps assume you have a Microsoft SQL database server running and accessible from your OpenShift cluster, and you have stored the database user name, password, hostname, port, and database name in an [OpenShift Secret object](https://docs.openshift.com/container-platform/4.8/nodes/pods/nodes-pods-secrets.html#nodes-pods-secrets-about_nodes-pods-secrets) named `mssqlserver-secret`.
Navigate to your demo application local repository and change the current branch to `bootable-jar-openshift`: ```bash
-$ git checkout bootable-jar-openshift
-Switched to branch 'bootable-jar-openshift'
-$
+git checkout bootable-jar-openshift
```
-Let's do a quick review about what we have changed:
+Let's do a quick review of what we have changed in this branch:
-- We have added a new maven profile named `bootable-jar-openshift` that prepares the Bootable JAR with a specific configuration for running the server on the cloud, for example, it enables the JGroups subsystem to use TCP requests to discover other pods by using the KUBE_PING protocol.-- We have added a set of configuration files in the _jboss-on-aro-jakartaee/deployment_ directory. In this directory, you will find the configuration files to deploy the application.
+* We have added a new maven profile named `bootable-jar-openshift` that prepares the Bootable JAR with a specific configuration for running the server on the cloud. For example, it enables the JGroups subsystem to use TCP requests to discover other pods by using the KUBE_PING protocol.
+* We have added a set of configuration files in the _jboss-on-aro-jakartaee/deployment_ directory. In this directory, you will find the configuration files to deploy the application.
### Deploy the application on OpenShift
-We can deploy the demo application via JBoss EAP Helm Charts. The Helm Chart application configuration file is available at _deployment/application/todo-list-helm-chart.yaml_. You could deploy this file via the command line; however, to do so you would need to have Helm Charts installed on your local machine. Instead of using the command line, the next steps explain how you can deploy this Helm Chart by using the OpenShift web console.
+The next steps explain how you can deploy the application with a Helm chart using the OpenShift web console. To avoid hard coding sensitive values into your Helm chart, use a feature called "secrets". A secret is simply a collection of name=value pairs, where the values are specified in some known place in advance of when they are needed. In our case, the Helm chart uses two secrets, with the following name=value pairs from each.
-Before deploying the application, let's create the expected Secret object that will hold specific application configuration. The Helm Chart will get the database user, password and name from a secret named `mssqlserver-secret`, and the driver version, the datasource JNDI name and the cluster password from the following Secret:
+* `mssqlserver-secret`
-1. Execute the following to create the OpenShift secret object that will hold the application configuration:
+ * `db-host` conveys the value of `MSSQLSERVER_HOST`.
+   * `db-name` conveys the value of `MSSQLSERVER_DATABASE`.
+   * `db-password` conveys the value of `MSSQLSERVER_PASSWORD`.
+ * `db-port` conveys the value of `MSSQLSERVER_PORT`.
+ * `db-user` conveys the value of `MSSQLSERVER_USER`.
+
+* `todo-list-secret`
+
+   * `app-cluster-password` conveys an arbitrary, user-specified password so that only pods under your control can join the JBoss EAP cluster.
+ * `app-driver-version` conveys the value of `MSSQLSERVER_DRIVER_VERSION`.
+ * `app-ds-jndi` conveys the value of `MSSQLSERVER_JNDI`.
+
+1. Create `mssqlserver-secret`.
```bash
- $ oc create secret generic todo-list-secret \
- --from-literal app-driver-version=7.4.1.jre11 \
- --from-literal app-ds-jndi=java:/comp/env/jdbc/mssqlds \
- --from-literal app-cluster-password=mut2UTG6gDwNDcVW
+ oc create secret generic mssqlserver-secret \
+ --from-literal db-host=${MSSQLSERVER_HOST} \
+ --from-literal db-name=${MSSQLSERVER_DATABASE} \
+ --from-literal db-password=${MSSQLSERVER_PASSWORD} \
+ --from-literal db-port=${MSSQLSERVER_PORT} \
+ --from-literal db-user=${MSSQLSERVER_USER}
```
- > [!NOTE]
- > You decide the cluster password you want to use, the pods that want to join to your cluster need such a password. Using a password prevents that any pods that are not under your control can join to your JBoss EAP cluster.
+1. Create `todo-list-secret`.
- > [!NOTE]
- > You may have noticed from the above Secret that we are not supplying the database Hostname and Port. That's not necessary. If you take a closer look at the Helm Chart application file, you will see that the database Hostname and Port are passed by using the following notations \$(MSSQLSERVER_SERVICE_HOST) and \$(MSSQLSERVER_SERVICE_PORT). This is a standard OpenShift notation that will ensure the application variables (MSSQLSERVER_HOST, MSSQLSERVER_PORT) get assigned to the values of the pod environment variables (MSSQLSERVER_SERVICE_HOST, MSSQLSERVER_SERVICE_PORT) that are available at runtime. These pod environment variables are passed by OpenShift when the pod is launched. These variables are available to any pod when you create an OpenShift service exposing the database server.
+ ```bash
+ oc create secret generic todo-list-secret \
+ --from-literal app-cluster-password=mut2UTG6gDwNDcVW \
+ --from-literal app-driver-version=${MSSQLSERVER_DRIVER_VERSION} \
+ --from-literal app-ds-jndi=${MSSQLSERVER_JNDI}
+ ```
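OpenShift stores secret values base64-encoded, so if you later inspect these secrets (for example with `oc get secret mssqlserver-secret -o yaml`) the values will look scrambled. The round trip is plain base64, as this sketch shows with the JNDI value:

```shell
# Secrets are stored base64-encoded; encode and decode the JNDI value as a demonstration
value='java:/comp/env/jdbc/mssqlds'
encoded=$(printf '%s' "$value" | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
# → java:/comp/env/jdbc/mssqlds
```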
-2. Open the OpenShift console and navigate to the developer view (in the **</> Developer** perspective in the left hand menu)
+1. Open the OpenShift console and navigate to the developer view. Select the **</> Developer** perspective from the drop down menu at the top of the navigation pane.
:::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-developer-view.png" alt-text="Screenshot of OpenShift console developer view.":::
-3. Once you are in the **</> Developer** perspective, ensure you have selected the **eap-demo** project at the **Project** combo box.
+1. In the **</> Developer** perspective, select the **eap-demo** project from the **Project** drop down menu.
:::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-project-combo-box.png" alt-text="Screenshot of OpenShift console project combo box.":::
-4. Go to **+Add**, then select **Helm Chart**. You will arrive at the Helm Chart catalog available on your ARO cluster. Write **eap** on the filter input box to filter all the Helm Charts and get the EAP ones. At this stage, you should see two options:
+1. Select **+Add**. In the **Developer Catalog** section, select **Helm Chart**. You'll arrive at the Helm Chart catalog available on your ARO cluster. In the **Filter by keyword** box, type **eap**. You should see two options similar to this:
:::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-eap-helm-charts.png" alt-text="Screenshot of OpenShift console EAP Helm Charts.":::
-5. Since our application uses MicroProfile capabilities, we are going to select for this demo the Helm Chart for EAP XP (at the time of this writing, the exact version of the Helm Chart is **EAP Xp3 v1.0.0**). The `Xp3` stands for Expansion Pack version 3.0.0. With the JBoss Enterprise Application Platform expansion pack, developers can use Eclipse MicroProfile application programming interfaces (APIs) to build and deploy microservices-based applications.
+ Because our application uses MicroProfile capabilities, we'll select the Helm Chart for EAP Xp. The `Xp` stands for Expansion Pack. With the JBoss Enterprise Application Platform expansion pack, developers can use Eclipse MicroProfile application programming interfaces (APIs) to build and deploy microservices-based applications.
-6. Open the **EAP Xp** Helm Chart, and then select **Install Helm Chart**.
+1. Select the **EAP Xp** Helm Chart, and then select **Install Helm Chart**.
-At this point, we need to configure the chart to be able to build and deploy the application:
+At this point, we need to configure the chart to build and deploy the application:
1. Change the name of the release to **eap-todo-list-demo**.
-1. We can configure the Helm Chart either using a **Form View** or a **YAML View**. Select **YAML View** in the **Configure via** box.
-1. Then, change the YAML content to configure the Helm Chart by copying the content of the Helm Chart file available at _deployment/application/todo-list-helm-chart.yaml_ instead of the existing content:
+1. We can configure the Helm Chart either using a **Form View** or a **YAML View**. In the section labeled **Configure via**, select **YAML View**.
+1. Change the YAML content to configure the Helm Chart by copying and pasting the content of the Helm Chart file available at _deployment/application/todo-list-helm-chart.yaml_ instead of the existing content:
:::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-eap-helm-charts-yaml-content-inline.png" alt-text="OpenShift console EAP Helm Chart YAML content" lightbox="media/howto-deploy-java-enterprise-application-platform-app/console-eap-helm-charts-yaml-content-expanded.png":::
At this point, we need to configure the chart to be able to build and deploy the
The Helm Release (abbreviated **HR**) is named **eap-todo-list-demo**. It includes a Deployment resource (abbreviated **D**) also named **eap-todo-list-demo**.
-1. When the build is finished (the bottom-left icon will display a green check) and the application is deployed (the circle outline is in dark blue), you can go to application the URL (using the top-right icon) from the route associated to the deployment.
+ If you select the icon with two arrows in a circle at the lower left of the **D** box, you will be taken to the **Logs** pane. Here you can observe the progress of the build. To return to the topology view, select **Topology** in the left navigation pane.
+
+1. When the build is finished (the bottom-left icon will display a green check) and the application is deployed (the circle outline is in dark blue), you can go to the application URL (using the top-right icon) from the route associated with the deployment.
:::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-open-application.png" alt-text="Screenshot of OpenShift console open application.":::
You can learn more from references used in this guide:
* [Red Hat JBoss Enterprise Application Platform](https://www.redhat.com/en/technologies/jboss-middleware/application-platform) * [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/) * [JBoss EAP Helm Charts](https://jbossas.github.io/eap-charts/)
-* [JBoss EAP Bootable JAR](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html-single/using_jboss_eap_xp_3.0.0/index#the-bootable-jar_default)
+* [JBoss EAP Bootable JAR](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html-single/using_jboss_eap_xp_3.0.0/index#the-bootable-jar_default)
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
+
+ Title: Create Dynatrace application - Azure partner solutions
+description: This article describes how to use the Azure portal to create an instance of Dynatrace.
+++ Last updated : 06/07/2022+++
+# Quickstart: Get started with Dynatrace
+
+In this quickstart, you create a new instance of Dynatrace. You can either create a new Dynatrace environment or [link to an existing Dynatrace environment](dynatrace-link-to-existing.md#link-to-existing-dynatrace-environment).
+
+When you use the integrated Dynatrace experience in Azure portal, the following entities are created and mapped for monitoring and billing purposes.
++
+- **Dynatrace resource in Azure** - Using the Dynatrace resource, you can manage the Dynatrace environment in Azure. The resource is created in the Azure subscription and resource group that you select during the create or linking process.
+- **Dynatrace environment** - This is the Dynatrace environment on Dynatrace SaaS. When you choose to create a new environment, the environment on Dynatrace SaaS is automatically created, in addition to the Dynatrace resource in Azure. The resource is created in the Azure subscription and resource group that you selected when you created the environment or linked to an existing environment.
+- **Marketplace SaaS resource** - The SaaS resource is created automatically, based on the plan you select from the Dynatrace Marketplace offer. This resource is used for billing purposes.
+
+## Prerequisites
+
+Before creating your first instance of Dynatrace in Azure, configure your environment. These steps must be completed before continuing with the next steps in this quickstart.
+
+### Find Offer
+
+Use the Azure portal to find the Dynatrace for Azure application.
+
+1. Go to the [Azure portal](https://portal.azure.com) and sign in.
+
+1. If you've visited the **Marketplace** in a recent session, select the icon from the available options. Otherwise, search for *Marketplace*.
+
+ :::image type="content" source="media/dynatrace-create/dynatrace-search-marketplace.png" alt-text="Screenshot showing a search for Marketplace in the Azure portal.":::
+
+1. In the Marketplace, search for _Dynatrace_.
+
+ :::image type="content" source="media/dynatrace-create/dynatrace-subscribe.png" alt-text="Screenshot showing Dynatrace in the working pane to create a subscription.":::
+
+1. Select **Subscribe**.
+
+## Create a Dynatrace resource in Azure
+
+1. When creating a Dynatrace resource, you see two options: one to create a new Dynatrace environment, and another to link your Azure subscription to an existing Dynatrace environment.
+
+ :::image type="content" source="media/dynatrace-create/dynatrace-create.png" alt-text="Screenshot offering to create a Dynatrace resource.":::
+
+1. If you want to create a new Dynatrace environment, select the **Create** action under the **Create a new Dynatrace environment** option.
+ :::image type="content" source="media/dynatrace-create/dynatrace-create-new-link-existing.png" alt-text="Screenshot showing two options: new Dynatrace or existing Dynatrace.":::
+
+1. You see a form to create a Dynatrace resource in the working pane.
+
+ :::image type="content" source="media/dynatrace-create/dynatrace-basic-properties.png" alt-text="Screenshot of basic properties needed for new Dynatrace instance.":::
+
+1. Provide the following values:
+
+ | **Property** | **Description** |
+ |--|-|
+ | Subscription | Select the Azure subscription you want to use for creating the Dynatrace resource. You must have owner or contributor access.|
+ | Resource group | Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure/azure-resource-manager/management/overview) is a container that holds related resources for an Azure solution. |
+ | Resource name | Specify a name for the Dynatrace resource. This name will be the friendly name of the new Dynatrace environment.|
+ | Location | Select the region. Both the Dynatrace resource in Azure and Dynatrace environment will be created in the selected region.|
+ | Pricing plan | Select from the list of available plans. |
+
+### Configure metrics and logs
+
+1. Your next step is to configure metrics and logs. When creating the Dynatrace resource, you can set up automatic log forwarding for two types of logs:
+
+ :::image type="content" source="media/dynatrace-create/dynatrace-metrics-and-logs.png" alt-text="Screenshot showing options for metrics and logs.":::
+
+ - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription.
+
+ - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+
+1. To send subscription level logs to Dynatrace, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Dynatrace.
+
+ To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure/azure-monitor/essentials/resource-logs-categories). To filter the set of Azure resources sending logs to Dynatrace, use inclusion and exclusion rules and set the Azure resource tags.
+
+ Rules for sending resource logs:
+
+ - When the checkbox for Azure resource logs is selected, by default, logs are forwarded for all resources.
+ - Azure resources with Include tags send logs to Dynatrace.
+ - Azure resources with Exclude tags don't send logs to Dynatrace.
+ - If there's a conflict between inclusion and exclusion rules, the exclusion rule applies.
+
+ The logs sent to Dynatrace are charged by Azure. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
+
+ > [!NOTE]
+ > Metrics for virtual machines and App Services can be collected by installing the Dynatrace OneAgent after the Dynatrace resource has been created.
+
+1. Once you have completed configuring metrics and logs, select **Next: Single sign-on**.
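
The tag-based forwarding rules above can be sketched as a small decision function. This is an illustrative sketch only, assuming a simple set-of-tags representation; the function name and parameters are hypothetical and not part of any Azure or Dynatrace API:

```python
def sends_logs(resource_tags, include_tags, exclude_tags, forward_all=True):
    """Decide whether a resource forwards logs, per the rules above.

    - With the resource-logs checkbox selected, all resources forward by default.
    - Include tags opt a resource in; Exclude tags opt it out.
    - On a conflict between the two, exclusion wins.
    All names here are illustrative, not a real API.
    """
    tags = set(resource_tags)
    excluded = bool(tags & set(exclude_tags))
    included = bool(tags & set(include_tags))
    if excluded:
        return False      # exclusion always wins on conflict
    if include_tags:      # when include rules exist, only tagged resources send
        return included
    return forward_all    # default: forward for all resources
```

For example, a resource carrying both an include and an exclude tag would not forward logs, because the exclusion rule takes precedence.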
+
+### Configure single sign-on
+
+1. You can establish single sign-on to Dynatrace from the Azure portal when your organization uses Azure Active Directory as its identity provider. If your organization uses a different identity provider or you don't want to establish single sign-on at this time, you can skip this section.
+
+ :::image type="content" source="media/dynatrace-create/dynatrace-single-sign-on.png" alt-text="Screenshot showing options for single sign-on.":::
+
+1. To establish single sign-on through Azure Active directory, select the checkbox for **Enable single sign-on through Azure Active Directory**.
+
+ The Azure portal retrieves the appropriate Dynatrace application from Azure Active Directory. The app matches the Enterprise app you provided in an earlier step.
+
+## Next steps
+
+- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
partner-solutions Dynatrace How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-configure-prereqs.md
+
+ Title: Configure pre-deployment to use Dynatrace with Azure.
+description: This article describes how to complete the prerequisites for Dynatrace on the Azure portal.
+++ Last updated : 06/07/2022+++
+# Configure pre-deployment
+
+This article describes the prerequisites that must be completed before you create your first Dynatrace resource in Azure.
+
+## Access control
+
+To set up the Azure Dynatrace integration, you must have **Owner** or **Contributor** access on the Azure subscription. [Confirm that you have the appropriate access](/azure/role-based-access-control/check-access) before starting the setup.
+
+## Add enterprise application
+
+To use the Security Assertion Markup Language (SAML) based single sign-on (SSO) feature within the Dynatrace resource, you must set up an enterprise application. To add an enterprise application, you need one of these roles: Global administrator, Cloud Application Administrator, or Application Administrator.
+
+1. Go to the Azure portal. Select **Azure Active Directory**, then **Enterprise applications**, and then **New application**.
+
+1. Under **Add from the gallery**, type in `Dynatrace`. Select the search result, and then select **Create**.
+
+ :::image type="content" source="media/dynatrace-how-to-configure-prereqs/dynatrace-gallery.png" alt-text="Screenshot of the Dynatrace service in the Marketplace gallery.":::
+
+1. Once the app is created, go to properties from the side panel, and set the **User assignment required?** to **No**, then select **Save**.
+
+ :::image type="content" source="media/dynatrace-how-to-configure-prereqs/dynatrace-properties.png" alt-text="Screenshot of the Dynatrace service properties.":::
+
+1. Go to **Single sign-on** from the side panel. Then select **SAML**.
+
+ :::image type="content" source="media/dynatrace-how-to-configure-prereqs/dynatrace-single-sign-on.png" alt-text="Screenshot of the Dynatrace single sign-on settings.":::
+
+1. Select **Yes** when prompted to **Save single sign-on settings**.
+
+ :::image type="content" source="media/dynatrace-how-to-configure-prereqs/dynatrace-saml-sign-on.png" alt-text="Screenshot of the Dynatrace S A M L settings.":::
+
+## Next steps
+
+- [Quickstart: Create a new Dynatrace environment](dynatrace-create.md)
partner-solutions Dynatrace How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md
+
+ Title: Manage your Dynatrace for Azure integration
+description: This article describes how to manage Dynatrace on the Azure portal.
+++ Last updated : 06/07/2022+++
+# Manage the Dynatrace integration with Azure
+
+This article describes how to manage the settings for your Azure integration with Dynatrace.
+
+## Resource overview
+
+To see the details of your Dynatrace resource, select **Overview** in the left pane.
++
+The details include:
+
+- Resource group name
+- Region
+- Subscription
+- Tags
+- Single sign-on link to Dynatrace environment
+- Dynatrace billing plan
+- Billing term
+
+At the bottom, you see two tabs:
+
+- The **Get started** tab provides links to Dynatrace dashboards, logs, and Smartscape Topology.
+- The **Monitoring** tab provides a summary of the resources sending logs to Dynatrace.
+
+If you select the **Monitoring** pane, you see a table with information about the Dynatrace resource.
++
+The columns in the table denote important information for your resource:
+
+- **Resource type** - Azure resource type.
+- **Total resources** - Count of all resources for the resource type.
+- **Logs to Dynatrace** - Count of resources sending logs to Dynatrace through the integration.
+
+## Reconfigure rules for logs
+
+To change the configuration rules for logs, select **Metrics and logs** in the Resource menu on the left.
++
+For more information, see [Configure metrics and logs](dynatrace-create.md#configure-metrics-and-logs).
+
+## View monitored resources
+
+To see the list of resources emitting logs to Dynatrace, select Monitored Resources in the left pane.
++
+You can filter the list of resources by resource type, resource group name, region and whether the resource is sending logs.
+
+The column **Logs to Dynatrace** indicates whether the resource is sending logs to Dynatrace. If the resource isn't sending logs, this field indicates why logs aren't being sent. The reasons could be:
+
+- _Resource doesn't support sending logs_ - Only resource types with monitoring log categories can be configured to send logs. See [supported categories](/azure/azure-monitor/essentials/resource-logs-categories).
+- _Limit of five diagnostic settings reached_ - Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings).
+- _Error_ - The resource is configured to send logs to Dynatrace, but is blocked by an error.
+- _Logs not configured_ - Only Azure resources that have the appropriate resource tags are configured to send logs to Dynatrace.
+- _Agent not configured_ - Virtual machines without the Dynatrace OneAgent installed don't emit logs to Dynatrace.
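
The reasons above amount to an ordered set of checks. The following is a minimal sketch of that decision order, assuming a hypothetical dictionary of resource attributes; the keys and function name are illustrative only, not an Azure SDK surface:

```python
def log_status_reason(resource):
    """Return why a resource isn't sending logs, mirroring the reasons above.

    `resource` is a hypothetical dict of attributes; all keys are illustrative.
    """
    if not resource.get("supports_log_categories"):
        return "Resource doesn't support sending logs"
    if resource.get("diagnostic_settings_count", 0) >= 5:
        return "Limit of five diagnostic settings reached"
    if resource.get("error"):
        return "Error"
    if not resource.get("has_include_tag"):
        return "Logs not configured"
    return "Sending logs"
```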
+
+## Monitor virtual machines using Dynatrace OneAgent
+
+You can install Dynatrace OneAgent on virtual machines as an extension. Select **Virtual Machines** under **Dynatrace environment config** in the Resource menu. In the working pane, you see a list of all virtual machines in the subscription.
+
+For each virtual machine, the following info is displayed:
+
+| Column header | Definition of column |
+|||
+| **Resource Name** | Virtual machine name |
+| **Resource Status** | Indicates whether the virtual machine is stopped or running. Dynatrace OneAgent can only be installed on virtual machines that are running. If the virtual machine is stopped, the option to install Dynatrace OneAgent is disabled. |
+| **Agent status** | Whether the Dynatrace OneAgent is running on the virtual machine |
+| **Agent version** | The Dynatrace OneAgent version number |
+| **Auto-update** | Whether auto-update has been enabled for the OneAgent |
+| **Log analytics** | Whether log monitoring option was selected when OneAgent was installed |
+| **Monitoring mode** | Whether the Dynatrace OneAgent is monitoring hosts in [full-stack monitoring mode or infrastructure monitoring mode](https://www.dynatrace.com/support/help/how-to-use-dynatrace/hosts/basic-concepts/get-started-with-infrastructure-monitoring) |
+
+> [!NOTE]
+> If a virtual machine shows that an agent has been configured, but the options to manage the agent through extension are disabled, it means that the agent has been configured through a different Dynatrace resource in the same Azure subscription.
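
Taken together, the table and note above imply two conditions for managing the agent through this screen. The sketch below captures them with a hypothetical dictionary of VM attributes; the names are illustrative, not an Azure SDK object:

```python
def can_install_oneagent(vm):
    """OneAgent can only be installed on running virtual machines, and the
    extension controls are disabled when the agent is already configured
    through a different Dynatrace resource in the same subscription.
    `vm` is an illustrative dict, not a real API object."""
    if vm.get("status") != "running":
        return False
    if vm.get("managed_by_other_dynatrace_resource"):
        return False
    return True
```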
+
+## Monitor App Services using Dynatrace OneAgent
+
+You can install Dynatrace OneAgent on App Services as an extension. Select **App Services** in the Resource menu. In the working pane, you see a list of all App Services in the subscription.
+
+For each app service, the following information is displayed:
+
+| Column header | Definition of column |
+|||
+| **Resource name** | App service name |
+| **Resource status** | Indicates whether the App service is running or stopped. Dynatrace OneAgent can only be installed on app services that are running. |
+| **App Service plan** | The plan configured for the app service |
+| **Agent version** | The Dynatrace OneAgent version |
+| **Agent status** | Status of the agent |
+
+To install the Dynatrace OneAgent, select the app service and select **Install Extension.** The application settings for the selected app service are updated and the app service is restarted to complete the configuration of the Dynatrace OneAgent.
+
+> [!NOTE]
+>App Service extensions are currently supported only for App Services that are running on Windows OS. App Services using the Linux OS are not shown in the list.
+
+> [!NOTE]
+> This screen currently only shows App Services of type Web App. Managing agents for Function apps is not supported at this time.
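
The two notes above, plus the running requirement from the table, define which App Services are eligible for the extension. A minimal sketch of that eligibility check, using hypothetical attribute names rather than a real API:

```python
def can_install_app_service_extension(app):
    """Per the notes above: the extension applies only to App Services that
    are running, on Windows, and of type Web App (not Function apps).
    `app` is an illustrative dict; the keys are assumptions for this sketch."""
    return (
        app.get("status") == "running"
        and app.get("os") == "windows"
        and app.get("kind") == "webapp"
    )
```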
+
+## Reconfigure single sign-on
+
+If you would like to reconfigure single sign-on, select **Single sign-on** in the left pane.
+
+If single sign-on was already configured, you can disable it.
+
+To establish single sign-on or change the application, select **Enable single sign-on through Azure Active Directory**. The portal retrieves the Dynatrace application from Azure Active Directory. The app comes from the enterprise app name selected during the [pre-configuration steps](dynatrace-how-to-configure-prereqs.md).
+
+## Delete Dynatrace resource
+
+Select **Overview** in Resource menu. Then, select **Delete**. Confirm that you want to delete the Dynatrace resource. Select **Delete**.
++
+If only one Dynatrace resource is mapped to a Dynatrace environment, logs are no longer sent to Dynatrace. All billing through Azure Marketplace stops for Dynatrace.
+
+If more than one Dynatrace resource is mapped to the Dynatrace environment using the link Azure subscription option, deleting one Dynatrace resource only stops sending logs for that resource. Because the Dynatrace environment is still linked to other Dynatrace resources, billing continues through the Azure Marketplace.
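
The billing behavior described above depends only on how many Dynatrace resources remain mapped to the environment. A minimal sketch, with hypothetical names, of what happens when one resource is deleted:

```python
def after_delete(linked_resource_count):
    """Sketch of the delete behavior described above. `linked_resource_count`
    is the number of Dynatrace resources mapped to the environment before the
    delete; the function name and return keys are illustrative only."""
    remaining = linked_resource_count - 1
    return {
        # Logs always stop for the deleted resource itself.
        "logs_stop_for_deleted_resource": True,
        # Marketplace billing stops only when no mapped resources remain.
        "marketplace_billing_stops": remaining == 0,
    }
```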
+
+## Next steps
+
+For help with troubleshooting, see [Troubleshooting Dynatrace integration with Azure](dynatrace-troubleshoot.md).
partner-solutions Dynatrace Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md
+
+ Title: Linking to an existing Dynatrace for Azure resource
+description: This article describes how to use the Azure portal to link to an instance of Dynatrace.
+++ Last updated : 06/07/2022+++
+# Quickstart: Link to an existing Dynatrace environment
+
+In this quickstart, you link an Azure subscription to an existing Dynatrace environment. After you link to the Dynatrace environment, you can monitor the linked Azure subscription and the resources in that subscription using the Dynatrace environment.
+
+When you use the integrated experience for Dynatrace in the Azure portal, your billing and monitoring for the following entities is tracked in the portal.
++
+- **Dynatrace resource in Azure** - Using the Dynatrace resource, you can manage the Dynatrace environment in Azure. The resource is created in the Azure subscription and resource group that you select during the linking process.
+- **Dynatrace environment** - the Dynatrace environment on Dynatrace SaaS. When you choose to link an existing environment, a new Dynatrace resource is created in Azure. The Dynatrace environment and the Dynatrace resource must reside in the same region.
+- **Marketplace SaaS resource** - the SaaS resource is used for billing purposes. The SaaS resource typically resides in a different Azure subscription from where the Dynatrace environment was first created.
+
+## Prerequisites
+
+Before you link the subscription to a Dynatrace environment, [complete pre-deployment configuration](dynatrace-how-to-configure-prereqs.md).
+
+### Find Offer
+
+1. Use the Azure portal to find Dynatrace.
+
+1. Go to the [Azure portal](https://portal.azure.com) and sign in.
+
+1. If you've visited the Marketplace in a recent session, select the icon from the available options. Otherwise, search for Marketplace.
+
+ :::image type="content" source="media/dynatrace-link-to-existing/dynatrace-search-marketplace.png" alt-text="Screenshot showing a search for Dynatrace in Marketplace.":::
+
+1. In the Marketplace, search for _Dynatrace_.
+
+ :::image type="content" source="media/dynatrace-link-to-existing/dynatrace-subscribe.png" alt-text="Screenshot showing Dynatrace in the working pane to create a subscription.":::
+
+1. In the working pane, select **Subscribe**.
+
+## Link to existing Dynatrace environment
+
+1. When creating a Dynatrace resource, you see two options: one creates a new Dynatrace environment, and the other links an Azure subscription to an existing Dynatrace environment.
+
+1. If you're linking the Azure subscription to an existing Dynatrace environment, select **Create** under the **Link Azure subscription to an existing Dynatrace environment** option.
+
+ :::image type="content" source="media/dynatrace-link-to-existing/dynatrace-create-new-link-existing.png" alt-text="Screenshot where creating a link to an existing Dynatrace environment is highlighted.":::
+
+1. The process creates a new Dynatrace resource in Azure and links it to an existing Dynatrace environment hosted on Azure. You see a form to create the Dynatrace resource in the working pane.
+
+1. Provide the following values.
+
+ |**Property** | **Description** |
+ |||
    | Subscription | Select the Azure subscription you want to use for creating the Dynatrace resource. This subscription will be linked to the environment for monitoring purposes. |
+ | Resource Group | Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure/azure-resource-manager/management/overview#resource-groups) is a container that holds related resources for an Azure solution. |
+ | Resource name | Specify a name for the Dynatrace resource. |
+ | Region | Select the Azure region where the Dynatrace resource should be created. |
+ | Dynatrace | The Azure portal displays a list of existing environments that can be linked. Select the desired environment from the available options. |
+
+ > [!NOTE]
+ > Linking requires that the environment and the Dynatrace resource reside in the same Azure region. The user that is performing the linking action should have administrator permissions on the Dynatrace environment being linked. If the environment that you want to link to does not appear in the dropdown list, check if any of these conditions are not satisfied.
+
+1. Select **Next: Metrics and logs** to configure metrics and logs.
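
The note above gives two conditions that must both hold before an environment appears in the dropdown. They can be sketched as a simple validation, with hypothetical names that are not part of any real API:

```python
def can_link(environment, resource_region, user_is_env_admin):
    """Linking eligibility per the note above: the Dynatrace environment and
    the Dynatrace resource must be in the same Azure region, and the user
    performing the link must be an administrator on that environment.
    `environment` is an illustrative dict; all names are assumptions."""
    return environment["region"] == resource_region and user_is_env_admin
```

If an environment is missing from the dropdown, checking these two conditions first is the quickest way to narrow down the cause.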
+
+### Configure metrics and logs
+
+1. Your next step is to configure metrics and logs. When linking an existing Dynatrace environment, you can set up automatic log forwarding for two types of logs:
+
+ :::image type="content" source="media/dynatrace-link-to-existing/dynatrace-metrics-and-logs.png" alt-text="Screenshot showing options for metrics and logs.":::
+
   - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription.
+
+ - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+
+1. To send subscription level logs to Dynatrace, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Dynatrace.
+
+ To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure/azure-monitor/essentials/resource-logs-categories). To filter the set of Azure resources sending logs to Dynatrace, use inclusion and exclusion rules and set the Azure resource tags.
+
+ Rules for sending resource logs are:
+
+ - When the checkbox for Azure resource logs is selected, by default, logs are forwarded for all resources.
+ - Azure resources with Include tags send logs to Dynatrace.
+ - Azure resources with Exclude tags don't send logs to Dynatrace.
   - If there's a conflict between inclusion and exclusion rules, the exclusion rule applies.
+
   The logs sent to Dynatrace are charged by Azure. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
+
+ Metrics for virtual machines and App Services can be collected by installing the Dynatrace OneAgent after the Dynatrace resource has been created and an existing Dynatrace environment has been linked to it.
+
+1. Once you have completed configuring metrics and logs, select **Next: Single sign-on**.
+
+### Configure single sign-on
+
+1. At this point, you see the next part of the form for **Single Sign-on**. If you're linking the Dynatrace resource to an existing Dynatrace environment, you cannot set up single sign-on at this step.
+
+ > [!NOTE]
+ > You cannot set up single sign-on when linking the Dynatrace resource to an existing Dynatrace environment.
+
+1. Instead, you can set up single sign-on after creating the Dynatrace resource. For more information, see [Reconfigure single sign-on](dynatrace-how-to-manage.md#reconfigure-single-sign-on).
+
+1. Select **Next: Tags**.
+
+### Add tags
+
+1. You can add tags for your new Dynatrace resource. Provide name and value pairs for the tags to apply to the Dynatrace resource.
+
+ :::image type="content" source="media/dynatrace-link-to-existing/dynatrace-custom-tags.png" alt-text="Screenshot showing list of tags for a Dynatrace resource.":::
+
+1. When you've finished adding tags, select **Next: Review+Create.**
+
+### Review and create
+
+1. Review your selections and the terms of use. After validation completes, select **Create.**
+
+ :::image type="content" source="media/dynatrace-link-to-existing/dynatrace-review-and-create.png" alt-text="Screenshot showing form to review and create a link to a Dynatrace environment.":::
+
+1. Azure deploys the Dynatrace resource. When the process completes, select **Go to resource** to see the Dynatrace resource.
+
+## Next steps
+
+- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)
partner-solutions Dynatrace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md
+
+ Title: Dynatrace integration overview - Azure partner solutions
+description: Learn about using the Dynatrace Cloud-Native Observability Platform in the Azure Marketplace.
+++ Last updated : 06/07/2022+++
+# What is Dynatrace integration with Azure?
+
+Dynatrace is a popular monitoring solution that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities in Azure.
+
+The Dynatrace for Azure offering in the Azure Marketplace enables you to create and manage Dynatrace environments using the Azure portal with a seamlessly integrated experience. You can use Dynatrace as a monitoring solution for your Azure workloads through a streamlined workflow that covers everything from procurement to configuration and management.
+
+You can create and manage the Dynatrace resources using the Azure portal through a resource provider named `Dynatrace.Observability`. Dynatrace owns and runs the software as a service (SaaS) application including the Dynatrace environments created through this experience.
+
+> [!NOTE]
+> Dynatrace for Azure only stores and processes customer data in the region where the service was deployed. No data is stored outside of that region.
+
+## Capabilities
+
+Dynatrace for Azure provides the following capabilities:
+
+- **Seamless onboarding** - Easily onboard and use Dynatrace as a natively integrated service on Azure.
+
+- **Unified billing** - Get a single bill for all the resources you consume on Azure, including Dynatrace.
+
+- **Single sign-on to Dynatrace** - You don't need to sign up or sign in separately to Dynatrace. Sign in once in the Azure portal and seamlessly transition to the Dynatrace portal when needed.
+
+- **Log forwarder** - Enables automated forwarding of subscription activity and resource logs to Dynatrace.
+
+- **Manage Dynatrace OneAgent on VMs and App Services** - Provides a single experience to install and uninstall Dynatrace OneAgent on virtual machines and App Services.
+
+## Dynatrace Links
+
+For more help using the Dynatrace for Azure service, see the [Dynatrace](https://aka.ms/partners/Dynatrace/PartnerDocs) documentation.
+
+## Next steps
+
+To create an instance of Dynatrace, see [QuickStart: Get started with Dynatrace](dynatrace-create.md).
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
+
+ Title: Troubleshooting Dynatrace - Azure partner solutions
+description: This article provides information about troubleshooting Dynatrace integration with Azure
++++ Last updated : 06/07/2022+++
+# Troubleshoot Dynatrace for Azure
+
+This article describes how to contact support when working with a Dynatrace resource. Before contacting support, see [Fix common errors](#fix-common-errors).
+
+## Contact support
+
+To contact support about the Azure Dynatrace integration, select **New Support request** in the left pane. Select the link to the Dynatrace support website.
++
+## Fix common errors
+
+This document contains information about troubleshooting your solutions that use Dynatrace.
+
+### Purchase error
+
+- Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription.
+
+ - Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](/azure/cost-management-billing/manage/change-credit-card).
+
+- The EA subscription doesn't allow _Marketplace_ purchases.
+ - Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Dynatrace support](https://support.dynatrace.com/).
+
+### Unable to create Dynatrace resource
+
+- To set up the Azure Dynatrace integration, you must have **Owner** or **Contributor** access on the Azure subscription. Ensure you have the appropriate access before starting the setup.
+
+- Create fails because Last Name is empty. This error happens when the user info in Azure AD is incomplete and doesn't contain a last name. Contact your Azure tenant's global administrator to rectify the user info and try again.
+
+### Single sign-on errors
+
+- **Single sign-on configuration indicates lack of permissions** - This error happens when the user that is trying to configure single sign-on doesn't have the required permissions on the Azure Active Directory tenant.
+- **Unable to save single sign-on settings** - This error happens when there's another Enterprise app that is using the Dynatrace SAML identifier. To find which app is using it, select **Edit** on the Basic **SAML** configuration section.
+ To resolve this issue, either disable the other app or use the other app as the Enterprise app to set up SAML SSO.
+
+- **App not showing in the Single sign-on settings page** - First, search for the application ID. If no result is shown, check the SAML settings of the app. The grid only shows apps with correct SAML settings.
+
+## Next steps
+
+- Learn about [managing your instance](dynatrace-how-to-manage.md) of Dynatrace.
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
Title: Offerings from partners - Azure partner solutions
description: Learn about solutions offered by partners on Azure. Previously updated : 05/12/2022 Last updated : 06/07/2022
Partner solutions are available through the Marketplace.
| [Datadog](./datadog/overview.md) | Monitor your servers, clouds, metrics, and apps in one place. | | [Elastic](./elastic/overview.md) | Monitor the health and performance of your Azure environment. | | [Logz.io](./logzio/overview.md) | Monitor the health and performance of your Azure environment. |
+| [Dynatrace for Azure](./dynatrace/dynatrace-overview.md) | Use Dynatrace for Azure to create and manage Dynatrace environments using the Azure portal. |
| [NGINX for Azure (preview)](./nginx/nginx-overview.md) | Use NGINX for Azure (preview) as a reverse proxy within your Azure environment. |
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (Preview) >[!NOTE]
-> Single Server to Flexible Server migration tool is in public preview.
+> Single Server to Flexible Server migration tool is in private preview.
Azure Database for PostgreSQL Flexible Server provides zone-redundant high availability, control over price, and control over the maintenance window. The Single to Flexible Server migration tool enables customers to migrate their databases from Single Server to Flexible Server. See this [documentation](../flexible-server/concepts-compare-single-server-flexible-server.md) to understand the differences between Single and Flexible servers. Customers can initiate migrations for multiple servers and databases in a repeatable fashion using this migration tool. The tool automates most of the steps needed for the migration, making the migration journey across Azure platforms as seamless as possible. The tool is provided free of cost to customers.
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
Last updated 05/09/2022
# Migrate Single Server to Flexible Server PostgreSQL using Azure CLI >[!NOTE]
-> Single Server to Flexible Server migration tool is in public preview.
+> Single Server to Flexible Server migration tool is in private preview.
This quickstart article shows you how to use the Single to Flexible Server migration tool to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
Last updated 05/09/2022
# Migrate Single Server to Flexible Server PostgreSQL using the Azure portal
+>[!NOTE]
+> Single Server to Flexible Server migration tool is in private preview.
+ This guide shows you how to use the Single to Flexible Server migration tool to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server. ## Before you begin
search Search Howto Connecting Azure Sql Database To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md
Previously updated : 05/03/2022 Last updated : 06/07/2022 # Index data from Azure SQL
In this article, learn how to configure an [**indexer**](search-indexer-overview
This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Azure SQL. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+> [!NOTE]
+> [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) columns are not currently supported by Cognitive Search indexers.
+ ## Prerequisites + An [Azure SQL database](/azure/azure-sql/database/sql-database-paas-overview) with data in a single table or view. Use a table if you want the ability to [index incremental updates](#CaptureChangedRows) using SQL's native change detection capabilities.
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
-+ If you're using the [Azure portal](https://portal.azure.com/) to create the data source, make sure that access to all public networks is enabled in the Azure SQL firewall while going through the instructions below. Otherwise, you need to enable access to all public networks during this setup and then disable it again, or instead, you must use REST API from a device with an authorized IP in the firewall rules, to perform these operations. If the Azure SQL firewall has public networks access disabled, there will be errors when connecting from the portal to it.
++ If you're using the [Azure portal](https://portal.azure.com/) to create the data source, make sure that access to all public networks is enabled in the Azure SQL firewall. Alternatively, you can use REST API from a device with an authorized IP in the firewall rules to perform these operations. If the Azure SQL firewall has public networks access disabled, there will be errors when connecting from the portal to it. <!-- Real-time data synchronization must not be an application requirement. An indexer can reindex your table at most every five minutes. If your data changes frequently, and those changes need to be reflected in the index within seconds or single minutes, we recommend using the [REST API](/rest/api/searchservice/AddUpdate-or-Delete-Documents) or [.NET SDK](search-get-started-dotnet.md) to push updated rows directly.
Yes. However, you need to allow your search service to connect to your database.
**Q: Can I use Azure SQL indexer with SQL databases running on-premises?**
-Not directly. We do not recommend or support a direct connection, as doing so would require you to open your databases to Internet traffic. Customers have succeeded with this scenario using bridge technologies like Azure Data Factory. For more information, see [Push data to an Azure Cognitive Search index using Azure Data Factory](../data-factory/v1/data-factory-azure-search-connector.md).
+Not directly. We don't recommend or support a direct connection, as doing so would require you to open your databases to Internet traffic. Customers have succeeded with this scenario using bridge technologies like Azure Data Factory. For more information, see [Push data to an Azure Cognitive Search index using Azure Data Factory](../data-factory/v1/data-factory-azure-search-connector.md).
**Q: Can I use a secondary replica in a [failover cluster](/azure/azure-sql/database/auto-failover-group-overview) as a data source?**
It's not recommended. Only **rowversion** allows for reliable data synchronizati
+ You can ensure that when the indexer runs, there are no outstanding transactions on the table that's being indexed (for example, all table updates happen as a batch on a schedule, and the Azure Cognitive Search indexer schedule is set to avoid overlapping with the table update schedule).
-+ You periodically do a full reindex to pick up any missed rows.
-
-**Q: Can I use Always Encrypted feature when indexing from Azure SQL database?
-
-[Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) columns are not currently supported by Cognitive Search indexers.
++ You periodically do a full reindex to pick up any missed rows.
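To make the rowversion-based synchronization concrete, here's a minimal, hypothetical sketch of high-water-mark change detection: each run picks up only rows whose rowversion exceeds the last value seen, which is why in-flight transactions at run time can cause rows to be skipped until the next full reindex. The function and data shapes are illustrative, not an Azure API.

```python
def rows_to_index(table, high_water_mark):
    """Return rows changed since the last run, plus the new mark.

    `table` is a list of dicts with a monotonically increasing
    `rowversion` value (an illustrative stand-in for a SQL table).
    """
    changed = [row for row in table if row["rowversion"] > high_water_mark]
    new_mark = max((row["rowversion"] for row in changed), default=high_water_mark)
    return changed, new_mark

# Example: only rows "b" and "c" changed since the last mark of 1.
rows = [
    {"id": "a", "rowversion": 1},
    {"id": "b", "rowversion": 2},
    {"id": "c", "rowversion": 3},
]
changed, mark = rows_to_index(rows, high_water_mark=1)
```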
search Search Howto Connecting Azure Sql Iaas To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md
Previously updated : 03/19/2021 Last updated : 06/07/2022 # Indexer connections to SQL Server on an Azure virtual machine
A connection from Azure Cognitive Search to SQL Server on a virtual machine is a
+ Install the certificate on the virtual machine, and then enable and configure encrypted connections on the VM using the instructions in this article.
+> [!NOTE]
+> [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) columns are not currently supported by Cognitive Search indexers.
+ ## Enable encrypted connections Azure Cognitive Search requires an encrypted channel for all indexer requests over a public internet connection. This section lists the steps to make this work.
To get the portal IP address, ping `stamp2.ext.search.windows.net`, which is the
Clusters in different regions connect to different traffic managers. Regardless of the domain name, the IP address returned from the ping is the correct one to use when defining an inbound firewall rule for the Azure portal in your region. - ## Next steps
-With configuration out of the way, you can now specify a SQL Server on Azure VM as the data source for an Azure Cognitive Search indexer. For more information, see [Connecting Azure SQL Database to Azure Cognitive Search using indexers](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
--
-## FAQ
-
-**Q: Can I use Always Encrypted feature when indexing from SQL Server?
-
-[Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) columns are not currently supported by Cognitive Search indexers.
-
+With configuration out of the way, you can now specify a SQL Server on Azure VM as the data source for an Azure Cognitive Search indexer. For more information, see [Connecting Azure SQL Database to Azure Cognitive Search using indexers](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
search Search Howto Connecting Azure Sql Mi To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md
Previously updated : 05/24/2022 Last updated : 06/07/2022 # Indexer connections to Azure SQL Managed Instance through a public endpoint
If you are setting up an [Azure SQL indexer](search-howto-connecting-azure-sql-d
This article provides basic steps that include collecting information necessary for data source configuration. For more information and methodologies, see [Configure public endpoint in Azure SQL Managed Instance](/azure/azure-sql/managed-instance/public-endpoint-configure).
+> [!NOTE]
+> [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) columns are not currently supported by Cognitive Search indexers.
+ ## Enable a public endpoint For a new SQL Managed Instance, create the resource with the **Enable public endpoint** option selected.
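As a hedged example of what the resulting data source connection string looks like (the instance name, DNS zone, and database are placeholders), note that a SQL Managed Instance public endpoint uses the `.public.` host name and listens on port 3342 rather than 1433:

```python
instance = "my-managed-instance"   # placeholder instance name
dns_zone = "a1b2c3d4e5f6"          # placeholder DNS zone from the MI host name
database = "mydb"

# Public endpoint connections target <instance>.public.<zone>... on port 3342.
connection_string = (
    f"Server=tcp:{instance}.public.{dns_zone}.database.windows.net,3342;"
    f"Database={database};User ID=<user>;Password=<password>;"
    "Encrypt=True;Connection Timeout=30;"
)
print(connection_string)
```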
Copy the connection string to use in the search indexer's data source connection
## Next steps With configuration out of the way, you can now specify a [SQL Managed Instance as an indexer data source](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).-
-## FAQ
-
-**Q: Can I use Always Encrypted feature when indexing from SQL Managed Instance?
-
-[Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) columns are not currently supported by Cognitive Search indexers.
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-overview.md
Previously updated : 02/01/2022 Last updated : 06/06/2022 # Indexers in Azure Cognitive Search
Indexers are cloud-only, with individual indexers for [supported data sources](#
You can run indexers on demand or on a recurring data refresh schedule that runs as often as every five minutes. More frequent updates require a ['push model'](search-what-is-data-import.md) that simultaneously updates data in both Azure Cognitive Search and your external data source.
-## How to use indexers
+## Indexer scenarios and use cases
You can use an indexer as the sole means for data ingestion, or as part of a combination of techniques that load and optionally transform or enrich content along the way. The following table summarizes the main scenarios.
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 05/27/2022 Last updated : 06/07/2022 # What's new in Azure Cognitive Search
-Learn what's new in the service. Bookmark this page to keep up to date with service updates. Check out the [**Preview feature list**](search-api-preview.md) for an itemized list of features that are not yet approved for production workloads.
+Learn what's new in the service. Bookmark this page to keep up to date with service updates.
+
+* [**Preview features**](search-api-preview.md) is a list of current features that haven't been approved for production workloads.
+* [**Previous versions**](/previous-versions/azure/search/) is an archive of earlier feature announcements.
## May 2022
Learn what's new in the service. Bookmark this page to keep up to date with serv
||--|| | [Index aliases](search-how-to-alias.md) | An index alias is a secondary name that can be used to refer to an index for querying, indexing, and other operations. You can create an alias that maps to a search index and substitute the alias name in places where you would otherwise reference an index name. This gives you added flexibility if you ever need to change which index your application is pointing to. Instead of updating the references to the index name in your application, you can just update the mapping for your alias. | Public preview REST APIs (no portal support at this time).|
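To sketch how an alias decouples an application from an index name, the snippet below builds a request for creating an alias with the preview REST API. The service, alias, and index names are hypothetical, and the api-version shown is an assumption based on the preview APIs current at the time of this announcement.

```python
import json

service = "my-search-service"        # placeholder service name
alias = {
    "name": "my-alias",              # the name the application queries
    "indexes": ["hotel-index-v2"],   # the index the alias currently maps to
}

# PUT https://{service}.search.windows.net/aliases/{alias-name}
url = (f"https://{service}.search.windows.net/aliases/{alias['name']}"
       "?api-version=2021-04-30-Preview")
print(url)
print(json.dumps(alias))
```

Repointing the application at a new index then means updating the `indexes` value in this body, rather than changing every index reference in client code.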
-## December 2021
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||--||
-| [Enhanced configuration for semantic search](semantic-how-to-query-request.md#create-a-semantic-configuration) | Semantic configurations are a multi-part specification of the fields used during semantic ranking, captions, and answers. This is a new addition to the 2021-04-30-Preview API, and are now required for semantic queries. | Public preview in the portal and preview REST APIs.|
-
-## November 2021
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||--||
-| [Azure Files indexer (preview)](./search-file-storage-integration.md) | Adds REST API support for creating indexers for [Azure Files](https://azure.microsoft.com/services/storage/files/) | Public preview in the portal and preview REST APIs.|
-
-## July 2021
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||--||
-| [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview) | Adds REST API support for indexer connections made using [managed identities](search-howto-managed-identities-data-sources.md) and Azure Active Directory (Azure AD) authentication. | Public preview |
-| [Role-based access control for data plane (preview)](search-security-rbac.md) | Authenticate using Azure Active Directory and new built-in roles for data plane access to indexes and indexing, eliminating or reducing the dependency on hard-coded API keys on connections. | Public preview ([by request](./search-security-rbac.md?tabs=config-svc-portal%2croles-portal%2ctest-portal#step-1-preview-sign-up)). After your subscription is on-boarded, use Azure portal or the Management REST API version 2021-04-01-Preview to configure a search service for data plane authentication.|
-| [Management REST API 2021-04-01-Preview](/rest/api/searchmanagement/) | Modifies [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to support new [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). | Public preview |
-
-## May 2021
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||--||
-| [Power Query connector support (preview)](search-how-to-index-power-query-data-sources.md) | Indexers can now index from other cloud platforms. If you are using an indexer to crawl external data sources for indexing, you can now use Power Query connectors to connect to Amazon Redshift, Elasticsearch, PostgreSQL, Salesforce Objects, Salesforce Reports, Smartsheet, and Snowflake. </br></br>[Announcement (techcommunity blog)](https://techcommunity.microsoft.com/t5/azure-ai/azure-cognitive-search-indexers-allow-you-to-ingest-data-from/ba-p/2381988) | Public preview ([by request](https://aka.ms/azure-cognitive-search/indexer-preview)), using REST api-version=2020-06-30-Preview and Azure portal. |
-|[Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md) | The ADLS Gen2 data source used by indexers is now generally available. | Generally available, using REST api-version=2020-06-30 and Azure portal. |
-|[MySQL support (preview)](search-howto-index-mysql.md) | For indexer-based indexing, announcing preview data source support for Azure MySQL. | Public preview, REST api-version=2020-06-30-Preview, [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. |
-| [More queryLanguages for spell check and semantic results](/rest/api/searchservice/preview-api/search-documents#queryLanguage) | For query requests that invoke spell check or queryType=semantic, you can now set the queryLanguage to a non-English language for [38 languages](/rest/api/searchservice/preview-api/search-documents#queryLanguage). </br></br>[Announcement (techcommunity blog)](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multilingual-support-for-semantic-search-on-azure/ba-p/2385110) | Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). </br></br>Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview, [Azure.Search.Documents 11.3.0-beta.2](https://www.nuget.org/packages/Azure.Search.Documents/11.3.0-beta.2), or [Search explorer](search-explorer.md) in Azure portal. </br></br>[Region and tier](semantic-search-overview.md#availability-and-pricing) restrictions apply. |
-| [More regions for double encryption](search-security-manage-encryption-keys.md#double-encryption) | For search indexes and objects that are encrypted through customer-managed keys, double encryption (encryption of both static and temporary disks) is now implemented in all supported regions. | In all regions, subject to [service creation dates](search-security-manage-encryption-keys.md#double-encryption). |
-
-## April 2021
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||||
-| [Gremlin API support (preview)](search-howto-index-cosmosdb-gremlin.md) | For indexer-based indexing, you can now create a data source that retrieves content from Cosmos DB accessed through the Gremlin API. | Public preview ([by request](https://aka.ms/azure-cognitive-search/indexer-preview)), using api-version=2020-06-30-Preview. |
-
-## March 2021
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||||
-| [Semantic search (preview)](semantic-search-overview.md) | A collection of query-related features that significantly improve the relevance of search results through minimal adjustments to a query request. </br></br>[Semantic ranking](semantic-ranking.md) computes relevance scores using the semantic meaning behind words and content. </br></br>[Semantic captions](semantic-how-to-query-request.md) return relevant passages from the document that best summarize the document, with highlights over the most important terms or phrases. </br></br>[Semantic answers](semantic-answers.md) return key passages, extracted from a search document, that are formulated as a direct answer to a query that looks like a question. | Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). </br></br>Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview or [Search explorer](search-explorer.md) in Azure portal. </br></br>Region and tier restrictions apply. |
-| [Spell check query terms (preview)](speller-how-to-add.md) | Before query terms reach the search engine, you can have them checked for spelling errors. The `speller` option works with any query type (simple, full, or semantic). | Public preview, REST only, api-version=2020-06-30-Preview|
-| [SharePoint indexer (preview)](search-howto-index-sharepoint-online.md) | This indexer connects you to a SharePoint site so that you can index content from a document library. | Public preview, REST only, api-version=2020-06-30-Preview |
-| [Normalizers (preview)](search-normalizers.md) | Normalizers provide simple text pre-processing: consistent casing, accent removal, and ASCII folding, without invoking the full text analysis chain.| Public preview, REST only, api-version=2020-06-30-Preview |
-| [Custom Entity Lookup skill](cognitive-search-skill-custom-entity-lookup.md ) | A cognitive skill that looks for text from a custom, user-defined list of words and phrases. From this list, the skill labels all documents with any matching entities. The skill also supports a degree of fuzzy matching that can be applied to find matches that are similar but not quite exact. | Generally available. |
-
-## February 2021
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||||
-| [Reset Documents (preview)](search-howto-run-reset-indexers.md) | Reprocesses individually selected search documents in indexer workloads. | [Search REST API 2020-06-30-Preview](/rest/api/searchservice/index-preview) |
-| [Availability Zones](search-performance-optimization.md#availability-zones)| Search services with two or more replicas in certain regions, as listed in [Scale for performance](search-performance-optimization.md#availability-zones), gain resiliency by having replicas in two or more distinct physical locations. | The region and date of search service creation determine availability. See the Scale for performance article for details. |
-| [Azure CLI](/cli/azure/search) </br>[Azure PowerShell](/powershell/module/az.search/) | New revisions now provide the full range of operations in the Management REST API 2020-08-01, including support for IP firewall rules and private endpoint. | Generally available. |
-
-## January 2021
-
-|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
-||-||
-| [Solution accelerator for Azure Cognitive Search and QnA Maker](https://github.com/Azure-Samples/search-qna-maker-accelerator) | Pulls questions and answers out of the document and suggest the most relevant answers. A live demo app can be found at [https://aka.ms/qnaWithAzureSearchDemo](https://aka.ms/qnaWithAzureSearchDemo). | Open-source project (no SLA) |
-
-## 2020 Archive
-
-| Month | Feature | Description |
-|-||-|
-| November | [Customer-managed key encryption (extended)](search-security-manage-encryption-keys.md) | Extends customer-managed encryption over the full range of assets created and managed by a search service. Generally available.|
-| September | [Visual Studio Code extension for Azure Cognitive Search](search-get-started-vs-code.md) | Adds a workspace, navigation, intellisense, and templates for creating indexes, indexers, data sources, and skillsets. This feature is currently in public preview.|
-| September | [System managed service identity (indexers)](search-howto-managed-identities-data-sources.md) | Generally available. |
-| September | [Outbound requests using a private link](search-indexer-howto-access-private.md) | Generally available. |
-| September | [Management REST API (2020-08-01)](/rest/api/searchmanagement/management-api-versions) | Generally available. |
-| September | [Management REST API (2020-08-01-Preview)](/rest/api/searchmanagement/management-api-versions) | Adds shared private link resource for Azure Functions and Azure SQL for MySQL Databases. |
-| September | [Management .NET SDK 4.0](/dotnet/api/overview/azure/search/management) | Azure SDK update for the management SDK, targeted REST API version 2020-08-01. Generally available.|
-| August | [double encryption](search-security-overview.md#encryption) | Generally available on all search services created after August 1, 2020 in these regions: West US 2, East US, South Central US, US Gov Virginia, US Gov Arizona. |
-| July | [Azure.Search.Documents client library](/dotnet/api/overview/azure/search.documents-readme) | Azure SDK for .NET, generally available. |
-| July | [azure.search.documents client library](/python/api/overview/azure/search-documents-readme) | Azure SDK for Python, generally available. |
-| July | [@azure/search-documents client library](/javascript/api/overview/azure/search-documents-readme) | Azure SDK for JavaScript, generally available. |
-| June | [Knowledge store](knowledge-store-concept-intro.md) | Generally available. |
-| June | [Search REST API 2020-06-30](/rest/api/searchservice/) | Generally available. |
-| June | [Search REST API 2020-06-30-Preview](/rest/api/searchservice/) | Adds Reset Skillset to selectively reprocess skills, and incremental enrichment. |
-| June | [Okapi BM25 relevance algorithm](index-ranking-similarity.md) | Generally available. |
-| June | **executionEnvironment** (applies to search services using Azure Private Link.) | Generally available. |
-| June | [AML skill (preview)](cognitive-search-aml-skill.md) | A cognitive skill that extends AI enrichment with a custom Azure Machine Learning (AML) model. |
-| May | [Debug sessions (preview)](cognitive-search-debug-session.md) | Skillset debugger in the portal. |
-| May | [IP rules for in-bound firewall support](service-configure-firewall.md) | Generally available. |
-| May | [Azure Private Link for a private search endpoint](service-create-private-endpoint.md) | Generally available. |
-| May | [Managed service identity (indexers) - (preview)](search-howto-managed-identities-data-sources.md) | Connect to Azure data sources using a managed identity. |
-| May | [sessionId query parameter](index-similarity-and-scoring.md), [scoringStatistics=global parameter](index-similarity-and-scoring.md#scoring-statistics) | Global search statistics, useful for [machine learning (LearnToRank) models for search relevance](https://github.com/Azure-Samples/search-ranking-tutorial). |
-| May | [featuresMode relevance score expansion (preview)](index-similarity-and-scoring.md#featuresMode-param) | |
-|March | [Native blob soft delete (preview)](search-howto-index-changed-deleted-blobs.md) | Deletes search documents if the source blob is soft-deleted in blob storage. |
-|March | [Management REST API (2020-03-13)](/rest/api/searchmanagement/management-api-versions) | Generally available. |
-|February | [PII Detection skill](cognitive-search-skill-pii-detection.md) | A cognitive skill that extracts and masks personal information. |
-|February | [Custom Entity Lookup skill](cognitive-search-skill-custom-entity-lookup.md) | A cognitive skill that finds words and phrases from a list and labels all documents with matching entities. |
-|January | [Customer-managed key encryption](search-security-manage-encryption-keys.md) | Generally available |
-|January | [IP rules for in-bound firewall support](service-configure-firewall.md) | New **IpRule** and **NetworkRuleSet** properties in [CreateOrUpdate API](/rest/api/searchmanagement/2020-08-01/services/create-or-update). |
-|January | [Create a private endpoint](service-create-private-endpoint.md) | Set up a Private Link for secure connections to your search service. This preview feature has a dependency [Azure Private Link](../private-link/private-link-overview.md) and [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) as part of the solution. |
-
-## 2019 Archive
+## 2021 Archive
| Month | Feature | Description | |-||-|
-|December | [Create Demo App](search-create-app-portal.md) | A wizard that generates a downloadable HTML file with query (read-only) access to an index, intended as a validation and testing tool rather than a short cut to a full client app.|
-|November | [Incremental enrichment (preview)](cognitive-search-incremental-indexing-conceptual.md) | Caches skillset processing for future reuse. |
-|November | [Document Extraction skill](cognitive-search-skill-document-extraction.md) | A cognitive skill to extract the contents of a file from within a skillset.|
-|November | [Text Translation skill](cognitive-search-skill-text-translation.md) | A cognitive skill used during indexing that evaluates and translates text. Generally available.|
-|November | [Power BI templates](https://github.com/Azure-Samples/cognitive-search-templates/blob/master/README.md) | Template for visualizing content in knowledge store |
-|November | [Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md) and [Cosmos DB Gremlin API (preview)](search-howto-index-cosmosdb.md) | New indexer data sources in public preview. |
-|July | [Azure Government Cloud support](https://azure.microsoft.com/global-infrastructure/services/?regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&products=search) | Generally available.|
+| December | [Enhanced configuration for semantic search](semantic-how-to-query-request.md#create-a-semantic-configuration) | Semantic configurations are a new addition to the 2021-04-30-Preview API and are now required for semantic queries. Public preview in the portal and preview REST APIs.|
+| November | [Azure Files indexer (preview)](./search-file-storage-integration.md) | Public preview in the portal and preview REST APIs.|
+| July | [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview) | Public preview announcement. |
+| July | [Role-based access control for data plane (preview)](search-security-rbac.md) | Public preview announcement. |
+| July | [Management REST API 2021-04-01-Preview](/rest/api/searchmanagement/) | Modifies [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to support new [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). Public preview announcement. |
+| May | [Power Query connector support (preview)](search-how-to-index-power-query-data-sources.md) | Public preview announcement. |
+| May | [Azure Data Lake Storage Gen2 indexer](search-howto-index-azure-data-lake-storage.md) | Generally available, using REST api-version=2020-06-30 and Azure portal. |
+| May | [Azure MySQL indexer (preview)](search-howto-index-mysql.md) | Public preview, REST api-version=2020-06-30-Preview, [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. |
+| May | [More queryLanguages for spell check and semantic results](/rest/api/searchservice/preview-api/search-documents#queryLanguage) | See [Announcement (techcommunity blog)](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multilingual-support-for-semantic-search-on-azure/ba-p/2385110). Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview, [Azure.Search.Documents 11.3.0-beta.2](https://www.nuget.org/packages/Azure.Search.Documents/11.3.0-beta.2), or [Search explorer](search-explorer.md) in Azure portal. |
+| May| [More regions for double encryption](search-security-manage-encryption-keys.md#double-encryption) | Generally available in all regions, subject to [service creation dates](search-security-manage-encryption-keys.md#double-encryption). |
+| April | [Gremlin API support (preview)](search-howto-index-cosmosdb-gremlin.md) | Public preview ([by request](https://aka.ms/azure-cognitive-search/indexer-preview)), using api-version=2020-06-30-Preview. |
+| March | [Semantic search (preview)](semantic-search-overview.md) | Search results relevance scoring based on semantic models. Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview or [Search explorer](search-explorer.md) in Azure portal. Region and tier restrictions apply. |
+| March | [Spell check query terms (preview)](speller-how-to-add.md) | The `speller` option works with any query type (simple, full, or semantic). Public preview, REST only, api-version=2020-06-30-Preview|
+| March | [SharePoint indexer (preview)](search-howto-index-sharepoint-online.md) | Public preview, REST only, api-version=2020-06-30-Preview |
+| March | [Normalizers (preview)](search-normalizers.md) | Public preview, REST only, api-version=2020-06-30-Preview |
+| March | [Custom Entity Lookup skill](cognitive-search-skill-custom-entity-lookup.md ) | Scans for strings specified in a custom, user-defined list of words and phrases. Generally available. |
+| February | [Reset Documents (preview)](search-howto-run-reset-indexers.md) | Available in the [Search REST API 2020-06-30-Preview](/rest/api/searchservice/index-preview). |
+| February | [Availability Zones](search-performance-optimization.md#availability-zones) | Search services with two or more replicas in certain regions, as listed in [Scale for performance](search-performance-optimization.md#availability-zones), gain resiliency by having replicas in two or more distinct physical locations. The region and date of search service creation determine availability. |
+| February | [Azure CLI](/cli/azure/search) </br>[Azure PowerShell](/powershell/module/az.search/) | New revisions now provide the full range of operations in the Management REST API 2020-08-01, including support for IP firewall rules and private endpoint. Generally available. |
+| January | [Solution accelerator for Azure Cognitive Search and QnA Maker](https://github.com/Azure-Samples/search-qna-maker-accelerator) | Pulls questions and answers out of the document and suggests the most relevant answers. A live demo app can be found at [https://aka.ms/qnaWithAzureSearchDemo](https://aka.ms/qnaWithAzureSearchDemo). This feature is an open-source project (no SLA). |
<a name="new-service-name"></a>
-## New service name
+## Service re-brand
Azure Search was renamed to **Azure Cognitive Search** in October 2019 to reflect the expanded (yet optional) use of cognitive skills and AI processing in service operations. API versions, NuGet packages, namespaces, and endpoints are unchanged. New and existing search solutions are unaffected by the service name change.
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd.md
Smart deployments is a back-end capability that improves the performance of depl
While smart deployments is enabled by default on newly created connections, we understand that some customers would prefer all their source control content to be deployed every time a deployment is triggered, regardless of whether that content was modified or not. You can modify your workflow to disable smart deployments to have your connection deploy all content regardless of its modification status. See [Customize the deployment workflow](#customize-the-deployment-workflow) for more details. > [!NOTE]
- > This capapbilty was launched in public preview on April 20th, 2022. Connections created prior to launch would need to be updated or recreated for smart deployments to be turned on.
+ > This capability was launched in public preview on April 20th, 2022. Connections created prior to launch would need to be updated or recreated for smart deployments to be turned on.
> ### Customize the deployment workflow
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-solution.md
View Defender for IoT alerts in the Microsoft Sentinel **Logs** area.
SecurityAlert | where ProductName == "Azure Security Center for IoT"
- | where ProductComponentName == " PROTOCOL_VIOLATION"
+ | where ProductComponentName == "PROTOCOL_VIOLATION"
SecurityAlert | where ProductName == "Azure Security Center for IoT"
- | where ProductComponentName == " POLICY_VIOLATION"
+ | where ProductComponentName == "POLICY_VIOLATION"
SecurityAlert | where ProductName == "Azure Security Center for IoT"
sentinel Migration Arcsight Historical Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-arcsight-historical-data.md
Use the lacat utility to export data from ArcSight Logger. lacat exports CEF rec
To export data with the lacat utility:
-1. [Download the lacat utility](https://github.com/hpsec/lacat).
+1. [Download the lacat utility](https://github.com/hpsec/lacat). For large volumes of data, we suggest that you modify the script for better performance. [Use the modified version](https://aka.ms/lacatmicrosoft).
1. Follow the examples in the lacat repository on how to run the script. ## Next steps - [Select a target Azure platform to host the exported historical data](migration-ingestion-target-platform.md) - [Select a data ingestion tool](migration-ingestion-tool.md)-- [Ingest historical data into your target platform](migration-export-ingest.md)
+- [Ingest historical data into your target platform](migration-export-ingest.md)
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
For example:
| **FQDN** | `appserver.contoso.com` | \<empty\> |
-When the value provided by the source is an FQDN, or when the value may be either and FQDN or a short hostname, the parser should calculate the 4 values. The following code snippet would perform this calculation, in this case setting the `Dvc` fields based on ab input in the `Host` field
-
-``` KQL
- | extend SplitHostname = split(Host,".")
- | extend
- DvcDomain = tostring(strcat_array(array_slice(SplitHostname, 1, -1), '.')),
- DvcFQDN = iif (array_length(SplitHostname) > 1, Hostname, ''),
- DvcDomainType = iif (array_length(SplitHostname) > 1, 'FQDN', '')
- | extend
- DvcHostname = tostring(SplitHostname[0])
- | project-away SplitHostname
-```
+When the value provided by the source is an FQDN, or when the value may be either an FQDN or a short hostname, the parser should calculate all four values. Use the ASIM helper functions `_ASIM_ResolveFQDN`, `_ASIM_ResolveSrcFQDN`, `_ASIM_ResolveDstFQDN`, and `_ASIM_ResolveDvcFQDN` to easily set all four fields based on a single input value. For more information, see [ASIM helper functions](normalization-functions.md).
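The split-based logic that these helper functions encapsulate can be sketched outside KQL as well. The following Python sketch is an illustration of the logic only (the function name is hypothetical; actual parsers use the KQL helpers above):

```python
def resolve_fqdn(value: str) -> dict:
    """Derive hostname, domain, domain type, and FQDN from a single input.

    Mirrors the documented behavior: a dotted name is treated as an FQDN,
    a name without dots as a short hostname with the other fields empty.
    """
    parts = value.split(".")
    hostname = parts[0]
    if len(parts) > 1:
        return {
            "Hostname": hostname,
            "Domain": ".".join(parts[1:]),
            "DomainType": "FQDN",
            "FQDN": value,
        }
    return {"Hostname": hostname, "Domain": "", "DomainType": "", "FQDN": ""}
```

For example, `resolve_fqdn("server1.microsoft.com")` yields hostname `server1`, domain `microsoft.com`, and domain type `FQDN`, while `resolve_fqdn("server1")` leaves the domain fields empty.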
#### The device ID
sentinel Normalization Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-content.md
Title: Advanced Security Information Model (ASIM) security content | Microsoft Docs description: This article outlines the Microsoft Sentinel security content that uses the Advanced Security Information Model (ASIM). -+ Last updated 11/09/2021
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-develop-parsers.md
The following workflow describes the high level steps in developing a custom ASI
1. Identify the schemas or schemas that the events sent from the source represent. For more information, see [Schema overview](normalization-about-schemas.md).
-1. [Map](#mapping) the source event fields to the identified schema or schemas.
+1. [Map](#planning-mapping) the source event fields to the identified schema or schemas.
1. [Develop](#developing-parsers) one or more ASIM parsers for your source. You'll need to develop a filtering parser and a parameter-less parser for each schema relevant to the source.
A representative set of logs should include:
>
-## Mapping
+## Planning mapping
Before you develop a parser, map the information available in the source event or events to the schema you identified:
Event | where Source == "Microsoft-Windows-Sysmon" and EventID == 1
In some cases, the event itself does not contain information that would allow filtering for specific source types.
-For example, Infoblox DNS events are sent as Syslog messages, and are hard to distinguish from Syslog messages sent from other sources. In such cases, the parser relies on a list of sources that defines the relevant events. This list is maintained in the **ASimSourceType** watchlist.
+For example, Infoblox DNS events are sent as Syslog messages, and are hard to distinguish from Syslog messages sent from other sources. In such cases, the parser relies on a list of sources that defines the relevant events. This list is maintained in the [**Sources_by_SourceType**](normalization-manage-parsers.md#configure-the-sources-relevant-to-a-source-specific-parser) watchlist.
-**To use the ASimSourceType watchlist in your parsers**:
-
-1. Include the following line at the beginning of your parser:
-
-```KQL
- let Sources_by_SourceType=(sourcetype:string){_GetWatchlist('ASimSourceType') | where SearchKey == tostring(sourcetype) | extend Source=column_ifexists('Source','') | where isnotempty(Source)| distinct Source };
-```
-
-2. Add a filter that uses the watchlist in the parser filtering section. For example, the Infoblox DNS parser includes the following in the filtering section:
+To use the ASimSourceType watchlist in your parsers, use the `_ASIM_GetSourceBySourceType` function in the parser filtering section. For example, the Infoblox DNS parser includes the following in the filtering section:
```KQL
- | where Computer in (Sources_by_SourceType('InfobloxNIOS'))
+ | where Computer in (_ASIM_GetSourceBySourceType('InfobloxNIOS'))
``` To use this sample in your parser:
The KQL operators that perform parsing are listed below, ordered by their perfor
|[parse_xml](/azure/data-explorer/kusto/query/parse-xmlfunction) | Parse the values in a string formatted as XML. If only a few values are needed from the XML, using `parse`, `extract`, or `extract_all` provides better performance. |
-In addition to parsing string, the parsing phase may require more processing of the original values, including:
+### Normalizing
+
+#### Mapping field names
-- **Formatting and type conversion**. The source field, once extracted, may need to be formatted to fit the target schema field. For example, you may need to convert a string representing date and time to a datetime field. Functions such as `todatetime` and `tohex` are helpful in these cases.
+The simplest form of normalization is renaming an original field to its normalized name. Use the `project-rename` operator for that. Using `project-rename` ensures that the field is still managed as a physical field, which makes handling the field more performant. For example:
-- **Value lookup**. The value of the source field, once extracted, may need to be mapped to the set of values specified for the target schema field. For example, some sources report numeric DNS response codes, while the schema mandates the more common text response codes. The functions `iff` and `case` can be helpful to map a few values.
+```KQL
+ | project-rename
+ ActorUserId = InitiatingProcessAccountSid,
+ ActorUserAadId = InitiatingProcessAccountObjectId,
+ ActorUserUpn = InitiatingProcessAccountUpn,
+```
- For example, the Microsoft DNS parser assigns the `EventResult` field based on the Event ID and Response Code using an `iff` statement, as follows:
+#### Normalizing fields format and type
- ```kusto
- extend EventResult = iff(EventId==257 and ResponseCode==0 ,'Success','Failure')
- ```
+In many cases, the original value extracted needs to be normalized. For example, in ASIM a MAC address uses colons as separators, while the source may send a hyphen-delimited MAC address. The primary operator for transforming values is `extend`, alongside a broad set of KQL string, numerical, and date functions.
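The hyphen-to-colon MAC transformation mentioned above amounts to a simple string substitution. A minimal Python sketch of that logic (illustrative only; the uppercase convention is an assumption, not a requirement stated here):

```python
def normalize_mac(mac: str) -> str:
    """Replace hyphen separators with the colons ASIM uses.

    Uppercasing is an illustrative choice, not mandated by the text above.
    """
    return mac.strip().replace("-", ":").upper()
```

For example, `normalize_mac("00-0c-29-ab-cd-ef")` returns `"00:0C:29:AB:CD:EF"`.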
- For several values, use `datatable` and `lookup`, as demonstrated in the same DNS parser:
+Also, ensuring that parser output fields match the types defined in the schema is critical for parsers to work. For example, you may need to convert a string representing date and time to a datetime field. Functions such as `todatetime` and `tohex` are helpful in these cases.
- ```kusto
- let RCodeTable = datatable(ResponseCode:int,ResponseCodeName:string) [ 0, 'NOERROR', 1, 'FORMERR'....];
- ...
- | lookup RCodeTable on ResponseCode
- | extend EventResultDetails = case (
- isnotempty(ResponseCodeName), ResponseCodeName,
- ResponseCode between (3841 .. 4095), 'Reserved for Private Use',
- 'Unassigned')
- ```
+For example, the original unique event ID may be sent as an integer, but ASIM requires the value to be a string, to ensure broad compatibility among data sources. Therefore, when assigning the source field, use `extend` and `tostring` instead of `project-rename`:
-> [!NOTE]
-> The transformation does not allow using only `lookup`, as multiple values are mapped to `Reserved for Private Use` or `Unassigned`, and therefore the query uses both lookup and case.
-> Even so, the query is still much more efficient than using `case` for all values.
->
+```KQL
+ | extend EventOriginalUid = tostring(ReportId),
+```
+
+#### Derived fields and values
-### Mapping values
+The value of the source field, once extracted, may need to be mapped to the set of values specified for the target schema field. The functions `iff`, `case`, and `lookup` can be helpful to map available data to target values.
-In many cases, the original value extracted needs to be normalized. For example, in ASIM a MAC address uses colons as separator, while the source may send a hyphen delimited MAC address. The primary operator for transforming values is `extend`, alongside a broad set of KQL string, numerical and date functions, as demonstrated in the [Parsing](#parsing) section above.
+For example, the Microsoft DNS parser assigns the `EventResult` field based on the Event ID and Response Code using an `iff` statement, as follows:
-Use `case`, `iff`, and `lookup` statements when there is a need to map a set of values to the values allowed by the target field.
+```KQL
+ extend EventResult = iff(EventId==257 and ResponseCode==0 ,'Success','Failure')
+```
-When each source value maps to a target value, define the mapping using the `datatable` operator and `lookup` to map. For example
+To map several values, define the mapping using the `datatable` operator and use `lookup` to perform the mapping. For example, some sources report numeric DNS response codes and the network protocol, while the schema mandates the more common text-label representation for both. The following example demonstrates how to derive the needed values using `datatable` and `lookup`:
```KQL let NetworkProtocolLookup = datatable(Proto:real, NetworkProtocol:string)[
When each source value maps to a target value, define the mapping using the `dat
Notice that `lookup` is useful and efficient even when the mapping has only two possible values.
-When the mapping conditions are more complex use the `iff` or `case` functions. The `iff` function enables mapping two values:
-
-```KQL
-| extend EventResult =
- iff(EventId==257 and ResponseCode==0,'Success','Failure')
-```
-
-The `case` function supports more than two target values. The example below shows how to combine `lookup` and `case`. The `lookup` example above returns an empty value in the field `DnsResponseCodeName` if the lookup value is not found. The `case` example below augments it by using the result of the `lookup` operation if available, and specifying additional conditions otherwise.
+When the mapping conditions are more complex, combine `iff`, `case`, and `lookup`. The example below shows how to combine `lookup` and `case`. The `lookup` example above returns an empty value in the field `DnsResponseCodeName` if the lookup value is not found. The `case` example below augments it by using the result of the `lookup` operation if available, and specifying additional conditions otherwise.
```KQL | extend DnsResponseCodeName =
The `case` function supports more than two target values. The example below show
```
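The lookup-then-fallback pattern combined here can be sketched in Python as well (an illustration of the logic only, not part of ASIM; the response-code table is a partial excerpt, and the 3841-4095 private-use range follows the example above):

```python
# Partial RCODE table for illustration; names follow the IANA registry.
RCODE_NAMES = {0: "NOERROR", 1: "FORMERR", 2: "SERVFAIL", 3: "NXDOMAIN"}

def response_code_name(code: int) -> str:
    """Lookup first; fall back to range conditions, as in the KQL case()."""
    name = RCODE_NAMES.get(code)
    if name:
        return name
    if 3841 <= code <= 4095:
        return "Reserved for Private Use"
    return "Unassigned"
```

For example, `response_code_name(0)` returns `"NOERROR"`, while `response_code_name(4000)` falls through the lookup and returns `"Reserved for Private Use"`.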
-### Prepare fields in the result set
+Microsoft Sentinel provides handy functions for common lookup values. For example, the `DnsResponseCodeName` lookup above can be implemented using one of the following functions:
+
+```KQL
+
+| extend DnsResponseCodeName = _ASIM_LookupDnsResponseCode(DnsResponseCode)
+
+| invoke _ASIM_ResolveDnsResponseCode('DnsResponseCode')
+```
+
+The first option accepts the value to look up as a parameter and lets you choose the output field, which makes it useful as a general lookup function. The second option is geared more toward parsers: it takes the name of the source field as input and updates the needed ASIM field, in this case `DnsResponseCodeName`.
-The parser must prepare the fields in the results set to ensure that the normalized fields are used.
+For a full list of ASIM helper functions, refer to [ASIM functions](normalization-functions.md).
-The following KQL operators are used to prepare fields in your results set:
+
+#### Enrichment fields
+
+In addition to the fields available from the source, a resulting ASIM event includes enrichment fields that the parser should generate. In many cases, the parsers can assign a constant value to the fields, for example:
+
+```KQL
+ | extend
+ EventCount = int(1),
+ EventProduct = 'M365 Defender for Endpoint',
+ EventVendor = 'Microsoft',
+ EventSchemaVersion = '0.1.0',
+ EventSchema = 'ProcessEvent'
+```
+
+Your parsers should also set type fields, another kind of enrichment field. A type field designates the type of the value stored in a related field. For example, the `SrcUsernameType` field designates the type of value stored in the `SrcUsername` field. You can find more information about type fields in the [entities description](normalization-about-schemas.md#entities).
+
+In most cases, types are also assigned a constant value. However, in some cases the type has to be determined based on the actual value, for example:
+
+```KQL
+ DomainType = iif (array_length(SplitHostname) > 1, 'FQDN', '')
+```
+
+<a name="resolvefqnd"></a>Microsoft Sentinel provides useful functions for handling enrichment. For example, use the following function to automatically assign the fields `SrcHostname`, `SrcDomain`, `SrcDomainType` and `SrcFQDN` based on the value in the field `Computer`.
+
+```KQL
+ | invoke _ASIM_ResolveSrcFQDN('Computer')
+```
+
+This function will set the fields as follows:
+
+| Computer field | Output fields |
+| -- | - |
+| server1 | SrcHostname: server1<br>SrcDomain, SrcDomainType, SrcFQDN all empty |
+| server1.microsoft.com | SrcHostname: server1<br>SrcDomain: microsoft.com<br> SrcDomainType: FQDN<br>SrcFQDN:server1.microsoft.com |
++
+The functions `_ASIM_ResolveDstFQDN` and `_ASIM_ResolveDvcFQDN` perform a similar task, populating the related `Dst` and `Dvc` fields. For a full list of ASIM helper functions, refer to [ASIM functions](normalization-functions.md).
+
+### Select fields in the result set
+
+The parser can optionally select fields in the result set. Removing unneeded fields can improve performance and add clarity by avoiding confusion between normalized fields and remaining source fields.
+
+The following KQL operators are used to select fields in your results set:
|Operator | Description | When to use in a parser | ||||
-|**project-rename** | Renames fields. | If a field exists in the actual event and only needs to be renamed, use `project-rename`. <br><br>The renamed field still behaves like a built-in field, and operations on the field have much better performance. |
|**project-away** | Removes fields. | Use `project-away` for specific fields that you want to remove from the result set. We recommend not removing the original fields that are not normalized from the result set, unless they create confusion or are very large and may have performance implications. | |**project** | Selects fields that existed before, or were created as part of the statement, and removes all other fields. | Not recommended for use in a parser, as the parser should not remove any other fields that are not normalized. <br><br>If you need to remove specific fields, such as temporary values used during parsing, use `project-away` to remove them from the results. |
-|**extend** | Add aliases. | Aside from its role in generating calculated fields, the `extend` operator is also used to create aliases. |
+
+For example, when parsing a custom log table, use the following to remove the remaining original fields that still have a type descriptor:
+
+```KQL
+ | project-away
+ *_d, *_s, *_b, *_g
+```
### Handle parsing variants
sentinel Normalization Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-functions.md
+
+ Title: Advanced Security Information Model (ASIM) helper functions | Microsoft Docs
+description: This article outlines the Microsoft Sentinel Advanced Security Information Model (ASIM) helper functions.
++ Last updated : 06/07/2021+++
+# Advanced Security Information Model (ASIM) helper functions (Public preview)
++
+Advanced Security Information Model (ASIM) helper functions extend the KQL language, providing functionality that helps interact with normalized data and write parsers. The following is a list of ASIM helper functions:
+
+## Scalar functions
+
+Scalar functions are used in expressions and are typically invoked as part of an `extend` statement.
+
+| Function | Input parameters | Output | Description |
+| -- | - | | -- |
+| _ASIM_GetSourceBySourceType | SourceType (String) | List of sources (dynamic) | Retrieve the list of sources associated with the input source type from the `SourceBySourceType` Watchlist. This function is intended for use by parser writers. |
+| _ASIM_LookupDnsQueryType | QueryType (Integer) | Query Type Name | Translate a numeric DNS resource record (RR) type to its name, as defined by [IANA](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4) |
+| _ASIM_LookupDnsResponseCode | ResponseCode (Integer) | Response Code Name | Translate a numeric DNS response code (RCODE) to its name, as defined by [IANA](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-6) |
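As a sketch of the kind of translation `_ASIM_LookupDnsQueryType` performs, the following Python snippet maps a small excerpt of the IANA DNS RR type registry (illustrative only; the real function covers the full registry):

```python
# Small excerpt of the IANA DNS RR type registry, for illustration only.
DNS_QUERY_TYPES = {1: "A", 2: "NS", 5: "CNAME", 12: "PTR", 15: "MX", 28: "AAAA"}

def lookup_dns_query_type(query_type: int) -> str:
    """Translate a numeric DNS RR type to its IANA name; empty if unknown."""
    return DNS_QUERY_TYPES.get(query_type, "")
```

For example, `lookup_dns_query_type(1)` returns `"A"` and `lookup_dns_query_type(28)` returns `"AAAA"`.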
++
+## Tabular functions
+
+Tabular functions are invoked using the `invoke` operator and return their value by adding fields to the data set, as if performing an `extend`.
+
+| Function | Input parameters | Extended fields | Description |
+| -- | - | | -- |
+| _ASIM_ResolveDnsQueryType | field (String) | `DnsQueryTypeName` | Translates a numeric DNS resource record (RR) type stored in the specified field to its name, as defined by [IANA](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4), and assigns the result to the field `DnsQueryTypeName` |
+| _ASIM_ResolveDnsResponseCode | field (String) | `DnsResponseCodeName` | Translates a numeric DNS response code (RCODE) stored in the specified field to its name, as defined by [IANA](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-6), and assigns the result to the field `DnsResponseCodeName` |
+| _ASIM_ResolveFQDN | field (String) | - `ExtractedHostname`<br> - `Domain`<br> - `DomainType` <br> - `FQDN` | Analyzes the value in the specified field and sets the output fields accordingly. For more information, see the [example](normalization-develop-parsers.md#resolvefqnd) in the article about developing parsers. |
+| _ASIM_ResolveSrcFQDN | field (String) | - `SrcHostname`<br> - `SrcDomain`<br> - `SrcDomainType`<br> - `SrcFQDN` | Similar to _ASIM_ResolveFQDN, but sets the `Src` fields |
+| _ASIM_ResolveDstFQDN | field (String) | - `DstHostname`<br> - `DstDomain`<br> - `DstDomainType`<br> - `DstFQDN` | Similar to _ASIM_ResolveFQDN, but sets the `Dst` fields |
+| _ASIM_ResolveDvcFQDN | field (String) | - `DvcHostname`<br> - `DvcDomain`<br> - `DvcDomainType`<br> - `DvcFQDN` | Similar to _ASIM_ResolveFQDN, but sets the `Dvc` fields |
+++
+## <a name="next-steps"></a>Next steps
+
+This article discusses the Advanced Security Information Model (ASIM) helper functions.
+
+For more information, see:
+
+- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
+- [Advanced Security Information Model (ASIM) overview](normalization.md)
+- [Advanced Security Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced Security Information Model (ASIM) parsers](normalization-about-parsers.md)
+- [Using the Advanced Security Information Model (ASIM)](normalization-about-parsers.md)
+- [Modifying Microsoft Sentinel content to use the Advanced Security Information Model (ASIM) parsers](normalization-modify-content.md)
sentinel Normalization Manage Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-manage-parsers.md
For example, the following code shows a DNS filtering unifying parser, having re
## Configure the sources relevant to a source-specific parser
-Some parsers requires you to update the list of sources that are relevant to the parser. For example, a parser that uses Syslog data, may not be able to determine what Syslog events are relevant to the parser. Such a parser may use the ASimSourceType watchlist to determine which sources send information relevant to the parser. For such parses add a record for each relevant source to the watchlist:
+Some parsers require you to update the list of sources that are relevant to the parser. For example, a parser that uses Syslog data may not be able to determine which Syslog events are relevant to it. Such a parser may use the `Sources_by_SourceType` watchlist to determine which sources send information relevant to the parser. For such parsers, add a record for each relevant source to the watchlist:
- Set the `SourceType` field to the parser specific value specified in the parser documentation. - Set the `Source` field to the identifier of the source used in the events. You may need to query the original table, such as Syslog, to determine the correct value.
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
- [Learn more about the IntSights integration with Microsoft Sentinel @IntSights](https://intsights.com/resources/intsights-microsoft-azure-sentinel) - To connect Microsoft Sentinel to the IntSights TAXII Server, obtain the API Root, Collection ID, Username and Password from the IntSights portal after you configure a policy of the data you wish to send to Microsoft Sentinel.
-### ThreatConnect
+### Pulsedive
-- [Learn more about STIX and TAXII @ThreatConnect](https://threatconnect.com/stix-taxii/)-- [TAXII Services documentation @ThreatConnect](https://docs.threatconnect.com/en/latest/rest_api/taxii/taxii_2.1.html)
+- [Learn about Pulsedive integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-pulsedive-feed-into-microsoft-sentinel/ba-p/3478953)
### ReversingLabs
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
- [Learn about SEKOIA.IO integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bring-threat-intelligence-from-sekoia-io-using-taxii-data/ba-p/3302497)
+### ThreatConnect
+
+- [Learn more about STIX and TAXII @ThreatConnect](https://threatconnect.com/stix-taxii/)
+- [TAXII Services documentation @ThreatConnect](https://docs.threatconnect.com/en/latest/rest_api/taxii/taxii_2.1.html)
+ ## Integrated threat intelligence platform products To connect to Threat Intelligence Platform (TIP) feeds, follow the instructions to [connect Threat Intelligence platforms to Microsoft Sentinel](connect-threat-intelligence-tip.md). The second part of these instructions calls for you to enter information into your TIP solution. See the links below for more information.
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
Previously updated : 04/22/2022 Last updated : 06/06/2022 # Configure the minimum TLS version for a Service Bus namespace using ARM (Preview)
-To configure the minimum TLS version for a Service Bus namespace, set the `MinimumTlsVersion` version property. When you create a Service Bus namespace with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version.
+Azure Service Bus namespaces permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Service Bus namespace to require that clients send and receive data with a newer version of TLS. If a Service Bus namespace requires a minimum version of TLS, then any requests made with an older version will fail. For conceptual information about this feature, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace](transport-layer-security-enforce-minimum-version.md).
-> [!NOTE]
-> Namespaces created using an api-version prior to 2022-01-01-preview will have 1.0 as the value for `MinimumTlsVersion`. This behavior was the prior default, and is still there for backwards compatibility.
+You can configure the minimum TLS version using the Azure portal or Azure Resource Manager (ARM) template.
+
+## Specify the minimum TLS version in the Azure portal
+You can specify the minimum TLS version when creating a Service Bus namespace in the Azure portal on the **Advanced** tab.
++
+You can also specify the minimum TLS version for an existing namespace on the **Configuration** page.
+ ## Create a template to configure the minimum TLS version
+To configure the minimum TLS version for a Service Bus namespace, set the `MinimumTlsVersion` version property to 1.0, 1.1, or 1.2. When you create a Service Bus namespace with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version.
+
+> [!NOTE]
+> Namespaces created using an api-version prior to 2022-01-01-preview will have 1.0 as the value for `MinimumTlsVersion`. This behavior was the prior default, and is still there for backwards compatibility.
-To configure the minimum TLS version for a Service Bus namespace with a template, create a template with the `MinimumTlsVersion` property set to 1.0, 1.1, or 1.2. The following steps describe how to create a template in the Azure portal.
+The following steps describe how to create a template in the Azure portal.
1. In the Azure portal, choose **Create a resource**. 2. In **Search the Marketplace** , type **custom deployment** , and then press **ENTER**.
service-fabric Service Fabric Cluster Creation Via Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-via-portal.md
Title: Create a Service Fabric cluster in the Azure portal
description: Learn how to set up a secure Service Fabric cluster in Azure using the Azure portal and Azure Key Vault. Previously updated : 09/06/2018 Last updated : 06/06/2022 # Create a Service Fabric cluster in Azure using the Azure portal > [!div class="op_single_selector"]
To serve these purposes, the certificate must meet the following requirements:
* The certificate must contain a private key. * The certificate must be created for key exchange, exportable to a Personal Information Exchange (.pfx) file. * The certificate's **subject name must match the domain** used to access the Service Fabric cluster. This is required to provide TLS for the cluster's HTTPS management endpoints and Service Fabric Explorer. You cannot obtain a TLS/SSL certificate from a certificate authority (CA) for the `.cloudapp.azure.com` domain. Acquire a custom domain name for your cluster. When you request a certificate from a CA the certificate's subject name must match the custom domain name used for your cluster.
+* The certificate's list of DNS names must include the Fully Qualified Domain Name (FQDN) of the cluster.
#### Client authentication certificates Additional client certificates authenticate administrators for cluster management tasks. Service Fabric has two access levels: **admin** and **read-only user**. At minimum, a single certificate for administrative access should be used. For additional user-level access, a separate certificate must be provided. For more information on access roles, see [role-based access control for Service Fabric clients][service-fabric-cluster-security-roles].
site-recovery Failover Failback Overview Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/failover-failback-overview-preview.md
To reprotect and fail back VMware machines and physical servers from Azure to on
- You can select any of the Azure Site Recovery replication appliances registered under a vault to re-protect to on-premises. You do not require a separate Process server in Azure for the re-protect operation, or a scale-out Master Target server for Linux VMs. - The replication appliance doesn't require additional network connections/ports (as compared with forward protection) during failback. The same appliance can be used for forward and backward protection if it is in a healthy state; this should not impact the performance of the replications.-- Ensure that the data store or the host selected is the one where appliance is situated and can be accessed by the appliance selected.
+- When selecting the target datastore, ensure that the ESXi host where the replication appliance is located can access it.
> [!NOTE] > Storage vMotion of replication appliance is not supported after re-protect operation.
site-recovery Site Recovery Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-workload.md
Site Recovery contributes to application-level protection and recovery as follow
Site Recovery can replicate any app running on a supported machine. We've partnered with product teams to do additional testing for the apps specified in the following table.
-| **Workload** |**Replicate Azure VMs to Azure** |**Replicate Hyper-V VMs to a secondary site** | **Replicate Hyper-V VMs to Azure** | **Replicate VMware VMs to a secondary site** | **Replicate VMware VMs to Azure** |
-| | | | | ||
-| Active Directory, DNS |Yes |Yes |Yes |Yes |Yes|
-| Web apps (IIS, SQL) |Yes |Yes |Yes |Yes |Yes|
-| System Center Operations Manager |Yes |Yes |Yes |Yes |Yes|
-| SharePoint |Yes |Yes |Yes |Yes |Yes|
-| SAP<br/><br/>Replicate SAP site to Azure for non-cluster |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft)|
-| Exchange (non-DAG) |Yes |Yes |Yes |Yes |Yes|
-| Remote Desktop/VDI |Yes |Yes |Yes |Yes |Yes|
-| Linux (operating system and apps) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft)|
-| Dynamics AX |Yes |Yes |Yes |Yes |Yes|
-| Windows File Server |Yes |Yes |Yes |Yes |Yes|
-| Citrix XenApp and XenDesktop |No|N/A |No |N/A |No |
+|**Workload** |**Replicate Azure VMs to Azure** |**Replicate Hyper-V VMs to a secondary site** |**Replicate Hyper-V VMs to Azure** |**Replicate VMware VMs to Azure** |
+| -- | -- | -- | -- | -- |
+| Active Directory, DNS |Yes |Yes |Yes |Yes|
+| Web apps (IIS, SQL) |Yes |Yes |Yes |Yes|
+| System Center Operations Manager |Yes |Yes |Yes |Yes|
+| SharePoint |Yes |Yes |Yes |Yes|
+| SAP<br/><br/>Replicate SAP site to Azure for non-cluster |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft)|
+| Exchange (non-DAG) |Yes |Yes |Yes |Yes|
+| Remote Desktop/VDI |Yes |Yes |Yes |Yes|
+| Linux (operating system and apps) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft) |Yes (tested by Microsoft)|
+| Dynamics AX |Yes |Yes |Yes |Yes|
+| Windows File Server |Yes |Yes |Yes |Yes|
+| Citrix XenApp and XenDesktop |No|N/A |No |No |
## Replicate Active Directory and DNS
site-recovery Vmware Azure Install Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-mobility-service.md
On each Linux machine that you want to protect, do the following:
11. On the **Manage Accounts** tab, select **Add Account**. 12. Add the account you created. 13. Enter the credentials you use when you enable replication for a computer.
-1. Additional step for updating or protecting SUSE Linux Enterprise Server 11 SP3 OR RHEL 5 or CentOS 5 or Debian 7 machines. [Ensure the latest version is available in the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-rhel-5-debian-7-server).
+1. Additional step for updating or protecting SUSE Linux Enterprise Server 11 SP3 OR RHEL 5 or CentOS 5 or Debian 7 machines. [Ensure the latest version is available in the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-oracle-linux-6-and-ubuntu-1404-server).
## Anti-virus on replicated machines
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Linux Red Hat Enterprise | 5.2 to 5.11</b><br/> 6.1 to 6.10</b> </br> 7.0, 7.1,
Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 <br/><br/> Few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions) Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions); Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 is not supported.), Debian 10 [(Review supported kernel versions)](#debian-kernel-versions)
-SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-rhel-5-debian-7-server). </br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 is not supported. To upgrade, disable replication and re-enable after the upgrade. <br/>|
+SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-oracle-linux-6-and-ubuntu-1404-server). </br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 is not supported. To upgrade, disable replication and re-enable after the upgrade. <br/>|
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409/), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5 <br/><br/> Running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4 & 5 (UEK3, UEK4, UEK5)<br/><br/>8.1<br/>Running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/) > [!Note]
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
You set up mobility agent on your server when you use Azure Site Recovery for di
## Update mobility service from Azure portal 1. Before you start ensure that the configuration server, scale-out process servers, and any master target servers that are a part of your deployment are updated before you update the Mobility Service on protected machines.
- 1. From 9.36 version onwards, for SUSE Linux Enterprise Server 11 SP3, RHEL 5, CentOS 5, Debian 7 ensure the latest installer is [available on the configuration server and scale-out process server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-rhel-5-debian-7-server).
+ 1. From 9.36 version onwards, for SUSE Linux Enterprise Server 11 SP3, RHEL 5, CentOS 5, Debian 7 ensure the latest installer is [available on the configuration server and scale-out process server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-oracle-linux-6-and-ubuntu-1404-server).
1. In the portal open the vault > **Replicated items**. 1. If the configuration server is the latest version, you see a notification that reads "New Site recovery replication agent update is available. Click to install."
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Push installation is an integral part of the job that's run from the Azure porta
- Ensure that all push installation [prerequisites](vmware-azure-install-mobility-service.md) are met. - Ensure that all server configurations meet the criteria in the [Support matrix for disaster recovery of VMware VMs and physical servers to Azure](vmware-physical-azure-support-matrix.md).-- From 9.36 version onwards, ensure the latest installer for SUSE Linux Enterprise Server 11 SP3, RHEL 5, CentOS 5, Debian 7 is [available on the configuration server and scale-out process server](#download-latest-mobility-agent-installer-for-suse-11-sp3-rhel-5-debian-7-server).
+- From 9.36 version onwards, ensure the latest installer for SUSE Linux Enterprise Server 11 SP3, SUSE Linux Enterprise Server 11 SP4, RHEL 5, CentOS 5, Debian 7, Debian 8, Oracle Linux 6, and Ubuntu 14.04 is [available on the configuration server and scale-out process server](#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-oracle-linux-6-and-ubuntu-1404-server).
The push installation workflow is described in the following sections:
Installer file | Operating system (64-bit only)
`Microsoft-ASR_UA_version_RHEL7-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 7 </br> CentOS 7 `Microsoft-ASR_UA_version_RHEL8-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 8 </br> CentOS 8 `Microsoft-ASR_UA_version_SLES12-64_GA_date_release.tar.gz` | SUSE Linux Enterprise Server 12 SP1 </br> Includes SP2 and SP3.
-[To be downloaded and placed in this folder manually](#suse-11-sp3-server) | SUSE Linux Enterprise Server 11 SP3
+[To be downloaded and placed in this folder manually](#suse-11-sp3-or-suse-11-sp4-server) | SUSE Linux Enterprise Server 11 SP3
`Microsoft-ASR_UA_version_SLES11-SP4-64_GA_date_release.tar.gz` | SUSE Linux Enterprise Server 11 SP4 `Microsoft-ASR_UA_version_SLES15-64_GA_date_release.tar.gz` | SUSE Linux Enterprise Server 15 `Microsoft-ASR_UA_version_OL6-64_GA_date_release.tar.gz` | Oracle Enterprise Linux 6.4 </br> Oracle Enterprise Linux 6.5
Installer file | Operating system (64-bit only)
`Microsoft-ASR_UA_version_UBUNTU-16.04-64_GA_date_release.tar.gz` | Ubuntu Linux 16.04 LTS server `Microsoft-ASR_UA_version_UBUNTU-18.04-64_GA_date_release.tar.gz` | Ubuntu Linux 18.04 LTS server `Microsoft-ASR_UA_version_UBUNTU-20.04-64_GA_date_release.tar.gz` | Ubuntu Linux 20.04 LTS server
-[To be downloaded and placed in this folder manually](#debian-7-server) | Debian 7
+[To be downloaded and placed in this folder manually](#debian-7-or-debian-8-server) | Debian 7
`Microsoft-ASR_UA_version_DEBIAN8-64_GA_date_release.tar.gz` | Debian 8 `Microsoft-ASR_UA_version_DEBIAN9-64_GA_date_release.tar.gz` | Debian 9
-## Download latest mobility agent installer for SUSE 11 SP3, RHEL 5, Debian 7 server
+## Download latest mobility agent installer for SUSE 11 SP3, SUSE 11 SP4, RHEL 5, Cent OS 5, Debian 7, Debian 8, Oracle Linux 6 and Ubuntu 14.04 server
-### SUSE 11 SP3 server
+### SUSE 11 SP3 or SUSE 11 SP4 server
-As a **prerequisite to update or protect SUSE Linux Enterprise Server 11 SP3 machines** from 9.36 version onwards:
+As a **prerequisite to update or protect SUSE Linux Enterprise Server 11 SP3 or SUSE 11 SP4 machines** from 9.36 version onwards:
1. Ensure latest mobility agent installer is downloaded from Microsoft Download Center and placed in push installer repository on configuration server and all scale out process servers
-2. [Download](site-recovery-whats-new.md) the latest SUSE Linux Enterprise Server 11 SP3 agent installer.
-3. Navigate to Configuration server, copy the SUSE Linux Enterprise Server 11 SP3 agent installer on the path - INSTALL_DIR\home\svsystems\pushinstallsvc\repository
+2. [Download](site-recovery-whats-new.md) the latest SUSE Linux Enterprise Server 11 SP3 or SUSE 11 SP4 agent installer.
+3. Navigate to Configuration server, copy the SUSE Linux Enterprise Server 11 SP3 or SUSE 11 SP4 agent installer on the path - INSTALL_DIR\home\svsystems\pushinstallsvc\repository
1. After copying the latest installer, restart InMage PushInstall service. 1. Now, navigate to associated scale-out process servers, repeat step 3 and step 4. 1. **For example**, if install path is C:\Program Files (x86)\Microsoft Azure Site Recovery, then the above mentioned directories will be
As a **prerequisite to update or protect RHEL 5 machines** from 9.36 version onw
1. **For example**, if install path is C:\Program Files (x86)\Microsoft Azure Site Recovery, then the above mentioned directories will be 1. C:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository
-## Debian 7 server
+### Debian 7 or Debian 8 server
-As a **prerequisite to update or protect Debian 7 machines** from 9.36 version onwards:
+As a **prerequisite to update or protect Debian 7 or Debian 8 machines** from 9.36 version onwards:
1. Ensure latest mobility agent installer is downloaded from Microsoft Download Center and placed in push installer repository on configuration server and all scale out process servers
-2. [Download](site-recovery-whats-new.md) the latest Debian 7 agent installer.
-3. Navigate to Configuration server, copy the Debian 7 agent installer on the path - INSTALL_DIR\home\svsystems\pushinstallsvc\repository
+2. [Download](site-recovery-whats-new.md) the latest Debian 7 or Debian 8 agent installer.
+3. Navigate to Configuration server, copy the Debian 7 or Debian 8 agent installer on the path - INSTALL_DIR\home\svsystems\pushinstallsvc\repository
1. After copying the latest installer, restart InMage PushInstall service. 1. Now, navigate to associated scale-out process servers, repeat step 3 and step 4. 1. **For example**, if install path is C:\Program Files (x86)\Microsoft Azure Site Recovery, then the above mentioned directories will be 1. C:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository
+### Ubuntu 14.04 server
+
+As a **prerequisite to update or protect Ubuntu 14.04 machines** from 9.42 version onwards:
+
+1. Ensure the latest mobility agent installer is downloaded from the Microsoft Download Center and placed in the push installer repository on the configuration server and all scale-out process servers.
+2. [Download](site-recovery-whats-new.md) the latest Ubuntu 14.04 agent installer.
+3. Navigate to the configuration server and copy the Ubuntu 14.04 agent installer to the path INSTALL_DIR\home\svsystems\pushinstallsvc\repository.
+1. After copying the latest installer, restart InMage PushInstall service.
+1. Now, navigate to associated scale-out process servers, repeat step 3 and step 4.
+1. **For example**, if install path is C:\Program Files (x86)\Microsoft Azure Site Recovery, then the above mentioned directories will be
+ 1. C:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository
++ ## Install the Mobility service using UI (preview) >[!NOTE]
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The items that appear in these tables will change over time as support continues
| Storage feature | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | ||-|||--| | [Access tier - archive](access-tiers-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Access tier - cold](access-tiers-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)| ![Yes](../media/icons/yes-icon.png) |
+| [Access tier - cool](access-tiers-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)| ![Yes](../media/icons/yes-icon.png) |
| [Access tier - hot](access-tiers-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Anonymous public access](anonymous-read-access-configure.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)| ![Yes](../media/icons/yes-icon.png) | | [Azure Active Directory security](authorize-access-azure-active-directory.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
The items that appear in these tables will change over time as support continues
| Storage feature | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | ||-|||--| | [Access tier - archive](access-tiers-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Access tier - cold](access-tiers-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Access tier - cool](access-tiers-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Access tier - hot](access-tiers-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Anonymous public access](anonymous-read-access-configure.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | [Azure Active Directory security](authorize-access-azure-active-directory.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
The following table provides an overview of how to switch from each type of repl
|--|-||-|| | <b>…from LRS</b> | N/A | Use Azure portal, PowerShell, or CLI to change the replication setting<sup>1,2</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Request a live migration<sup>5</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Switch to GRS/RA-GRS first and then request a live migration<sup>3</sup> | | <b>…from GRS/RA-GRS</b> | Use Azure portal, PowerShell, or CLI to change the replication setting | N/A | Perform a manual migration <br /><br /> OR <br /><br /> Switch to LRS first and then request a live migration<sup>3</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Request a live migration<sup>3</sup> |
-| <b>…from ZRS</b> | Perform a manual migration | Perform a manual migration | N/A | Request a live migration<sup>3</sup> <br /><br /> OR <br /><br /> Use PowerShell or Azure CLI to change the replication setting as part of a failback operation only<sup>4</sup> |
+| <b>…from ZRS</b> | Perform a manual migration | Perform a manual migration | N/A | Use Azure portal, PowerShell, or Azure CLI to change the replication setting as part of a failback operation only<sup>4</sup> |
| <b>…from GZRS/RA-GZRS</b> | Perform a manual migration | Perform a manual migration | Use Azure portal, PowerShell, or CLI to change the replication setting | N/A | <sup>1</sup> Incurs a one-time egress charge.<br />
storage Storage Ref Azcopy Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-sync.md
The sync command differs from the copy command in several ways:
1. By default, the recursive flag is true and sync copies all subdirectories. Sync only copies the top-level files inside a directory if the recursive flag is false. 2. When syncing between virtual directories, add a trailing slash to the path (refer to examples) if there's a blob with the same name as one of the virtual directories.
- 3. If the 'deleteDestination' flag is set to true or prompt, then sync will delete files and blobs at the destination that aren't present at the source.
+ 3. If the 'delete-destination' flag is set to true or prompt, then sync will delete files and blobs at the destination that aren't present at the source.
Advanced:
Note: if include and exclude flags are used together, only files matching the in
`--cpk-by-value` Client provided key by name let clients that make requests against Azure Blob storage an option to provide an encryption key on a per-request basis. Provided key and its hash will be fetched from environment variables
-`--delete-destinatio` (string) Defines whether to delete extra files from the destination that aren't present at the source. Could be set to true, false, or prompt. If set to prompt, the user will be asked a question before scheduling files and blobs for deletion. (default 'false'). (default "false")
+`--delete-destination` (string) Defines whether to delete extra files from the destination that aren't present at the source. Could be set to true, false, or prompt. If set to prompt, the user will be asked a question before scheduling files and blobs for deletion. (default 'false'). (default "false")
`--dry-run` Prints the path of files that would be copied or removed by the sync command. This flag doesn't copy or remove the actual files.
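The effect of `--delete-destination` can be sketched with a small model. This is a minimal illustration of the documented semantics only; `plan_sync` is a hypothetical helper, not part of AzCopy:

```python
def plan_sync(source: set, destination: set, delete_destination: str = "false"):
    """Model which files exist at the destination after `azcopy sync`.

    Files present at the source are copied over. With
    --delete-destination=true, destination-only files are removed;
    with 'prompt', AzCopy asks before scheduling each deletion
    (modeled here the same as 'false', since nothing is deleted yet).
    """
    if delete_destination == "true":
        return set(source)                 # extras at the destination removed
    return set(destination) | set(source)  # extras left in place

# Hypothetical file sets for illustration:
print(plan_sync({"a.txt", "b.txt"}, {"b.txt", "old.txt"}))
# {'a.txt', 'b.txt', 'old.txt'}  (default: 'old.txt' survives)
print(plan_sync({"a.txt", "b.txt"}, {"b.txt", "old.txt"}, "true"))
# {'a.txt', 'b.txt'}  ('old.txt' is deleted at the destination)
```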
storage File Sync Cloud Tiering Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-policy.md
description: Details on how the date and volume free space policies work togethe
Previously updated : 04/13/2021 Last updated : 06/07/2022
Cloud tiering has two policies that determine which files are tiered to the clou
The **volume free space policy** ensures that a specified percentage of the local volume the server endpoint is located on is always kept free.
-The **date policy** tiers files last accessed x days ago or later. The volume free space policy will always take precedence; when there isn't enough free space on the volume to store as many days worth of files as described by the date policy, Azure File Sync will override the date policy and continue tiering the coldest files until the volume free space percentage is met.
+The **date policy** tiers files last accessed x days ago or later. The volume free space policy will always take precedence. When there isn't enough free space on the volume to store as many days worth of files as described by the date policy, Azure File Sync will override the date policy and continue tiering the coldest files until the volume free space percentage is met.
## How both policies work together
-We'll use an example to illustrate how these policies work: Let's say you configured Azure File Sync on a 500 GiB local volume, and cloud tiering was never enabled. These are the files in your file share:
+We'll use an example to illustrate how these policies work: Let's say you configured Azure File Sync on a 500-GiB local volume, and cloud tiering was never enabled. These are the files in your file share:
|File Name |Last Access Time |File Size |Stored In | |-||--|-| |File 1 | 2 days ago | 10 GiB | Server and Azure file share |File 2 | 10 days ago | 30 GiB | Server and Azure file share |File 3 | 1 year ago | 200 GiB | Server and Azure file share
-|File 4 | 1 year, 2 days ago | 130 GiB | Server and Azure file share
+|File 4 | 1 year, 2 days ago | 120 GiB | Server and Azure file share
|File 5 | 2 years, 1 day ago | 140 GiB | Server and Azure file share **Change 1:** You enabled cloud tiering, set a volume free space policy of 20%, and kept the date policy disabled. With that configuration, cloud tiering ensures 20% (in this case 100 GiB) of space is kept free and available on the local machine. As a result, the total capacity of the local cache is 400 GiB. That 400 GiB will store the most recently and frequently accessed files on the local volume.
-With this configuration, only files 1 through 4 would be stored in the local cache, and file 5 would be tiered. This is only 370 GiB out of the 400 GiB that could be used. File 5 is 140 GiB and would exceed the 400 GiB limit if it was locally cached.
+With this configuration, only files 1 through 4 would be stored in the local cache, and file 5 would be tiered. This only accounts for 360 GiB out of the 400 GiB that could be used. File 5 is 140 GiB and would exceed the 400-GiB limit if it was locally cached.
-**Change 2:** Say a user accesses file 5. This makes file 5 the most recently accessed file in the share. As a result, File 5 would be stored in the local cache and to fit under the 400 GiB limit, file 4 would be tiered. The following table shows where the files are stored, with these updates:
+**Change 2:** Say a user accesses file 5. This makes file 5 the most recently accessed file in the share. As a result, File 5 would be stored in the local cache and to fit under the 400-GiB limit, file 4 would be tiered. The following table shows where the files are stored, with these updates:
|File Name |Last Access Time |File Size |Stored In | |-||--|-|
With this configuration, only files 1 through 4 would be stored in the local cac
|File 1 | 2 days ago | 10 GiB | Server and Azure file share |File 2 | 10 days ago | 30 GiB | Server and Azure file share |File 3 | 1 year ago | 200 GiB | Server and Azure file share
-|File 4 | 1 year, 2 days ago | 130 GiB | Azure file share, tiered locally
+|File 4 | 1 year, 2 days ago | 120 GiB | Azure file share, tiered locally
-**Change 3:** Let's say you updated the policies so that the date-based tiering policy is 60 days and the volume free space policy is 70%. Now, only up to 150 GiB can be stored in the local cache. Although File 2 has been accessed less than 60 days ago, the volume free space policy will override the date policy, and file 2 is tiered to maintain the 70% local free space.
+**Change 3:** Imagine you updated the policies so that the date-based tiering policy is 60 days and the volume free space policy is 70%. Now, only up to 150 GiB can be stored in the local cache. Although File 2 has been accessed less than 60 days ago, the volume free space policy will override the date policy, and file 2 is tiered to maintain the 70% local free space.
**Change 4:** If you changed the volume free space policy to 20% and then used `Invoke-StorageSyncFileRecall` to recall all the files that fit on the local drive while adhering to the cloud tiering policies, the table would look like this:
With this configuration, only files 1 through 4 would be stored in the local cac
|File 1 | 2 days ago | 10 GiB | Server and Azure file share |File 2 | 10 days ago | 30 GiB | Server and Azure file share |File 3 | 1 year ago | 200 GiB | Azure file share, tiered locally
-|File 4 | 1 year, 2 days ago | 130 GiB | Azure file share, tiered locally
+|File 4 | 1 year, 2 days ago | 120 GiB | Azure file share, tiered locally
In this case, files 1, 2 and 5 would be locally cached and files 3 and 4 would be tiered. Because the date policy is 60 days, files 3 and 4 are tiered, even though the volume free space policy allows for up to 400 GiB locally.
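The interaction of the two policies in the example above can be sketched with a greedy hottest-first model. This is illustrative only, under the assumption that the coldest files are tiered first; `plan_tiering` is a hypothetical helper, not an Azure File Sync API:

```python
from typing import List, Optional, Tuple

def plan_tiering(files: List[Tuple[str, int, int]], volume_gib: int,
                 free_space_pct: int,
                 date_policy_days: Optional[int] = None):
    """Return (cached, tiered) lists of file names.

    files: (name, days_since_last_access, size_gib) tuples.
    The volume free space policy caps local usage at
    volume_gib * (1 - free_space_pct / 100); the date policy tiers any
    file last accessed more than date_policy_days ago. The free space
    policy always takes precedence over the date policy.
    """
    budget_gib = volume_gib * (100 - free_space_pct) / 100
    cached, tiered, used = [], [], 0
    # Keep the hottest (most recently accessed) files local first.
    for name, age_days, size in sorted(files, key=lambda f: f[1]):
        too_old = date_policy_days is not None and age_days > date_policy_days
        if not too_old and used + size <= budget_gib:
            cached.append(name)
            used += size
        else:
            tiered.append(name)
    return cached, tiered

# State after change 2: file 5 was just recalled, so it is the hottest file.
files = [("File 1", 2, 10), ("File 2", 10, 30), ("File 3", 365, 200),
         ("File 4", 367, 120), ("File 5", 0, 140)]

# Change 4: 20% volume free space policy plus a 60-day date policy.
cached, tiered = plan_tiering(files, volume_gib=500, free_space_pct=20,
                              date_policy_days=60)
# cached -> ["File 5", "File 1", "File 2"]; tiered -> ["File 3", "File 4"]
```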
In this case, files 1, 2 and 5 would be locally cached and files 3 and 4 would b
## Multiple server endpoints on a local volume
-Cloud tiering can be enabled for multiple server endpoints on a single local volume. For this configuration, you should set the volume free space to the same amount for all the server endpoints on the same volume. If you set different volume free space policies for several server endpoints on the same volume, the largest volume free space percentage will take precedence. This is called the **effective volume free space policy**. For example, if you have three server endpoints on the same local volume, one set to 15%, another set to 20%, and a third set to 30%, they will all begin to tier the coldest files when they have less than 30% free space available.
+Cloud tiering can be enabled for multiple server endpoints on a single local volume. For this configuration, you should set the volume free space to the same amount for all the server endpoints on the same volume. If you set different volume free space policies for several server endpoints on the same volume, the largest volume free space percentage will take precedence. This is called the **effective volume free space policy**. For example, if you have three server endpoints on the same local volume, one set to 15%, another set to 20%, and a third set to 30%, they'll all begin to tier the coldest files when they have less than 30% free space available.
## Next steps
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Previously updated : 5/24/2022 Last updated : 6/06/2022
The following release notes are for version 15.0.0.0 of the Azure File Sync agen
```powershell Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll" Debug-StorageSyncServer -AFSDiag -OutputDirectory C:\output -KernelModeTraceLevel Verbose -UserModeTraceLevel Verbose
- ```
-- Immediately run server change enumeration to detect files changes that were missed by USN journal
- - Azure File Sync uses the Windows USN journal feature on Windows Server to immediately detect files that were changed and upload them to the Azure file share. If files changed are missed due to journal wrap or other issues, the files will not sync to the Azure file share until the changes are detected. Azure File Sync has a server change enumeration job that runs every 24 hours on the server endpoint path to detect changes that were missed by the USN journal. If you don't want to wait until the next server change enumeration job runs, you can now use the Invoke-StorageSyncServerChangeDetection PowerShell cmdlet to immediately run server change enumeration on a server endpoint path.
-
- To immediately run server change enumeration on a server endpoint path, run the following PowerShell commands:
- ```powershell
- Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
- Invoke-StorageSyncServerChangeDetection -ServerEndpointPath <path>
- ```
- > [!Note]
- >By default, the server change enumeration scan will only check the modified timestamp. To perform a deeper check, use the -DeepScan parameter.
-
+ ```
- Miscellaneous improvements - Reliability and telemetry improvements for cloud tiering and sync.
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
Title: Frequently asked questions (FAQ) for Azure Files | Microsoft Docs
description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. Previously updated : 02/09/2022 Last updated : 06/06/2022
* <a id="cross-domain-sync"></a> **Can I have domain-joined and non-domain-joined servers in the same sync group?**
- Yes. A sync group can contain server endpoints that have different Active Directory memberships, even if they are not domain-joined. Although this configuration technically works, we do not recommend this as a typical configuration because access control lists (ACLs) that are defined for files and folders on one server might not be able to be enforced by other servers in the sync group. For best results, we recommend syncing between servers that are in the same Active Directory forest, between servers that are in different Active Directory forests but which have established trust relationships, or between servers that are not in a domain. We recommend that you avoid using a mix of these configurations.
+ Yes. A sync group can contain server endpoints that have different Active Directory memberships, even if they are not domain-joined. Although this configuration technically works, we do not recommend this as a typical configuration because access control lists (ACLs) that are defined for files and folders on one server might not be able to be enforced by other servers in the sync group. For best results, we recommend syncing between servers that are in the same Active Directory forest, between servers that are in different Active Directory forests but have established trust relationships, or between servers that aren't in a domain. We recommend that you avoid using a mix of these configurations.
* <a id="afs-change-detection"></a> **I created a file directly in my Azure file share by using SMB or in the portal. How long does it take for the file to sync to the servers in the sync group?**
* <a id="afs-conflict-resolution"></a> **If the same file is changed on two servers at approximately the same time, what happens?**
- Azure File Sync uses a simple conflict-resolution strategy: we keep both changes to files that are changed in two endpoints at the same time. The most recently written change keeps the original file name. The older file (determined by LastWriteTime) has the endpoint name and the conflict number appended to the filename. For server endpoints, the endpoint name is the name of the server. For cloud endpoints, the endpoint name is **Cloud**. The name follows this taxonomy:
+ Azure File Sync uses a simple conflict-resolution strategy: we keep both changes to files that are changed in two endpoints at the same time. The most recently written change keeps the original file name. The older file (determined by LastWriteTime) has the endpoint name and the conflict number appended to the filename. For server endpoints, the endpoint name is the name of the server. For cloud endpoints, the endpoint name is **Cloud**. The name follows this taxonomy:
\<FileNameWithoutExtension\>-\<endpointName\>\[-#\].\<ext\>
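As a quick illustration, the taxonomy above can be sketched with a hypothetical helper (the function and file names here are illustrative, not part of Azure File Sync):

```python
from pathlib import Path

def conflict_name(original, endpoint, conflict_number=None):
    """Build a conflict file name following
    <FileNameWithoutExtension>-<endpointName>[-#].<ext>."""
    p = Path(original)
    number = f"-{conflict_number}" if conflict_number is not None else ""
    return f"{p.stem}-{endpoint}{number}{p.suffix}"

# A report changed on the server "CentralServer" and in the cloud endpoint:
print(conflict_name("CompanyReport.docx", "CentralServer"))  # CompanyReport-CentralServer.docx
print(conflict_name("CompanyReport.docx", "Cloud", 1))       # CompanyReport-Cloud-1.docx
```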
**Why are my tiered files not showing thumbnails or previews in Windows Explorer?** For tiered files, thumbnails and previews won't be visible at your server endpoint. This behavior is expected since the thumbnail cache feature in Windows intentionally skips reading files with the offline attribute. With Cloud Tiering enabled, reading through tiered files would cause them to be downloaded (recalled).
- This behavior is not specific to Azure File Sync, Windows Explorer displays a "grey X" for any files that have the offline attribute set. You will see the X icon when accessing files over SMB. For a detailed explanation of this behavior, refer to [Why donΓÇÖt I get thumbnails for files that are marked offline?](https://devblogs.microsoft.com/oldnewthing/20170503-00/?p=96105)
+ This behavior isn't specific to Azure File Sync. Windows Explorer displays a "grey X" for any files that have the offline attribute set. You'll see the X icon when accessing files over SMB. For a detailed explanation of this behavior, refer to [Why don't I get thumbnails for files that are marked offline?](https://devblogs.microsoft.com/oldnewthing/20170503-00/?p=96105)
- For questions on how to manage tiered files, please see [How to manage tiered files](../file-sync/file-sync-how-to-manage-tiered-files.md).
+ For questions on how to manage tiered files, see [How to manage tiered files](../file-sync/file-sync-how-to-manage-tiered-files.md).
* <a id="afs-tiered-files-out-of-endpoint"></a> **Why do tiered files exist outside of the server endpoint namespace?**
- Prior to Azure File Sync agent version 3, Azure File Sync blocked the move of tiered files outside the server endpoint but on the same volume as the server endpoint. Copy operations, moves of non-tiered files, and moves of tiered to other volumes were unaffected. The reason for this behavior was the implicit assumption that File Explorer and other Windows APIs have that move operations on the same volume are (nearly) instantaneous rename operations. This means moves will make File Explorer or other move methods (such as command line or PowerShell) appear unresponsive while Azure File Sync recalls the data from the cloud. Starting with [Azure File Sync agent version 3.0.12.0](../file-sync/file-sync-release-notes.md#supported-versions), Azure File Sync will allow you to move a tiered file outside of the server endpoint. We avoid the negative effects previously mentioned by allowing the tiered file to exist as a tiered file outside of the server endpoint and then recalling the file in the background. This means that moves on the same volume are instantaneous, and we do all the work to recall the file to disk after the move has completed.
+ Prior to Azure File Sync agent version 3, Azure File Sync blocked the move of tiered files outside the server endpoint but on the same volume as the server endpoint. Copy operations, moves of non-tiered files, and moves of tiered files to other volumes were unaffected. The reason for this behavior was the implicit assumption that File Explorer and other Windows APIs have that move operations on the same volume are (nearly) instantaneous rename operations. This means moves will make File Explorer or other move methods (such as command line or PowerShell) appear unresponsive while Azure File Sync recalls the data from the cloud. Starting with [Azure File Sync agent version 3.0.12.0](../file-sync/file-sync-release-notes.md#supported-versions), Azure File Sync will allow you to move a tiered file outside of the server endpoint. We avoid the negative effects previously mentioned by allowing the tiered file to exist as a tiered file outside of the server endpoint and then recalling the file in the background. This means that moves on the same volume are instantaneous, and we do all the work to recall the file to disk after the move has completed.
* <a id="afs-do-not-delete-server-endpoint"></a> **I'm having an issue with Azure File Sync on my server (sync, cloud tiering, etc.). Should I remove and recreate my server endpoint?**
* <a id="afs-ntfs-acls"></a> **Does Azure File Sync preserve directory/file level NTFS ACLs along with data stored in Azure Files?**
- As of February 24th, 2020, new and existing ACLs tiered by Azure file sync will be persisted in NTFS format, and ACL modifications made directly to the Azure file share will sync to all servers in the sync group. Any changes on ACLs made to Azure Files will sync down via Azure file sync. When copying data to Azure Files, make sure you use a copy tool that supports the necessary "fidelity" to copy attributes, timestamps and ACLs into an Azure file share - either via SMB or REST. When using Azure copy tools, such as AzCopy, it is important to use the latest version. Check the [file copy tools table](storage-files-migration-overview.md#file-copy-tools) to get an overview of Azure copy tools to ensure you can copy all of the important metadata of a file.
+ As of February 24, 2020, new and existing ACLs tiered by Azure File Sync will be persisted in NTFS format, and ACL modifications made directly to the Azure file share will sync to all servers in the sync group. Any changes on ACLs made to Azure Files will sync down via Azure File Sync. When copying data to Azure Files, make sure you use a copy tool that supports the necessary "fidelity" to copy attributes, timestamps, and ACLs into an Azure file share - either via SMB or REST. When using Azure copy tools, such as AzCopy, it's important to use the latest version. Check the [file copy tools table](storage-files-migration-overview.md#file-copy-tools) to get an overview of Azure copy tools to ensure you can copy all of the important metadata of a file.
If you have enabled Azure Backup on your file sync managed file shares, file ACLs can continue to be restored as part of the backup restore workflow. This works either for the entire share or individual files/directories.
- If you are using snapshots as part of the self-managed backup solution for file shares managed by file sync, your ACLs may not be restored properly to NTFS ACLs if the snapshots were taken prior to February 24th, 2020. If this occurs, consider contacting Azure Support.
+ If you're using snapshots as part of the self-managed backup solution for file shares managed by Azure File Sync, your ACLs may not be restored properly to NTFS ACLs if the snapshots were taken before February 24, 2020. If this occurs, consider contacting Azure Support.
* <a id="afs-lastwritetime"></a> **Does Azure File Sync sync the LastWriteTime for directories?**
- No, Azure File Sync does not sync the LastWriteTime for directories.
+ No, Azure File Sync doesn't sync the LastWriteTime for directories.
## Security, authentication, and access control
There are two options that provide auditing functionality for Azure Files: - If users are accessing the Azure file share directly, [Azure Storage logs](../blobs/monitor-blob-storage.md?tabs=azure-powershell#analyzing-logs) can be used to track file changes and user access. These logs can be used for troubleshooting purposes and the requests are logged on a best-effort basis.
- - If users are accessing the Azure file share via a Windows Server that has the Azure File Sync agent installed, use an [audit policy](/windows/security/threat-protection/auditing/apply-a-basic-audit-policy-on-a-file-or-folder) or 3rd party product to track file changes and user access on the Windows Server.
+ - If users are accessing the Azure file share via a Windows Server that has the Azure File Sync agent installed, use an [audit policy](/windows/security/threat-protection/auditing/apply-a-basic-audit-policy-on-a-file-or-folder) or third-party product to track file changes and user access on the Windows Server.
+
+* <a id="access-based-enumeration"></a>
+**Does Azure Files support using Access-Based Enumeration (ABE) to control the visibility of the files and folders in SMB Azure file shares?**
+
+ No, this scenario isn't supported.
+ ### AD DS & Azure AD DS Authentication * <a id="ad-support-devices"></a> **Does Azure Active Directory Domain Services (Azure AD DS) support SMB access using Azure AD credentials from devices joined to or registered with Azure AD?**
- No, this scenario is not supported.
+ No, this scenario isn't supported.
* <a id="ad-vm-subscription"></a> **Can I access Azure file shares with Azure AD credentials from a VM under a different subscription?**
No, Azure Files only supports Azure AD DS or on-premises AD DS integration with an Azure AD tenant that resides in the same subscription as the file share. Only one subscription can be associated with an Azure AD tenant. This limitation applies to both Azure AD DS and on-premises AD DS authentication methods. When using on-premises AD DS for authentication, [the AD DS credential must be synced to the Azure AD](../../active-directory/hybrid/how-to-connect-install-roadmap.md) that the storage account is associated with. * <a id="ad-multiple-forest"></a>
-**Does on-premises AD DS authentication for Azure file shares support integration with an AD DS environment using multiple forests?**
+**Does on-premises AD DS authentication for Azure file shares support integration with an AD DS environment using multiple forests?**
Azure Files on-premises AD DS authentication only integrates with the forest of the domain service that the storage account is registered to. To support authentication from another forest, your environment must have a forest trust configured correctly. The way Azure Files registers in AD DS is almost the same as a regular file server: it creates an identity (computer or service logon account) in AD DS for authentication. The only difference is that the registered SPN of the storage account ends with "file.core.windows.net", which doesn't match the domain suffix. Consult your domain administrator to see if any update to your suffix routing policy is required to enable multiple-forest authentication due to the different domain suffix. We provide an example below of how to configure the suffix routing policy.
- Example: When users in forest A domain want to reach an file share with the storage account registered against a domain in forest B, this will not automatically work because the service principal of the storage account does not have a suffix matching the suffix of any domain in forest A. We can address this issue by manually configuring a suffix routing rule from forest A to forest B for a custom suffix of "file.core.windows.net".
- First, you must add a new custom suffix on forest B. Make sure you have the appropriate administrative permissions to change the configuration, then follow these steps:
- 1. Logon to a machine domain joined to forest B
- 2. Open up "Active Directory Domains and Trusts" console
- 3. Right click on "Active Directory Domains and Trusts"
- 4. Click on "Properties"
- 5. Click on "Add"
- 6. Add "file.core.windows.net" as the UPN Suffixes
- 7. Click on "Apply", then "OK" to close the wizard
+ Example: When users in forest A domain want to reach a file share with the storage account registered against a domain in forest B, this won't automatically work because the service principal of the storage account doesn't have a suffix matching the suffix of any domain in forest A. We can address this issue by manually configuring a suffix routing rule from forest A to forest B for a custom suffix of "file.core.windows.net".
+
+ First, you must add a new custom suffix on forest B. Make sure you have the appropriate administrative permissions to change the configuration, then follow these steps:
+
+ 1. Log on to a machine that is domain-joined to forest B.
+ 2. Open the **Active Directory Domains and Trusts** console.
+ 3. Right-click on **Active Directory Domains and Trusts**.
+ 4. Select **Properties**.
+ 5. Select **Add**.
+ 6. Add "file.core.windows.net" as a UPN suffix.
+ 7. Select **Apply**, then **OK** to close the wizard.
Next, add the suffix routing rule on forest A, so that it redirects to forest B.
- 1. Logon to a machine domain joined to forest A
- 2. Open up "Active Directory Domains and Trusts" console
- 3. Right-click on the domain that you want to access the file share, then click on the "Trusts" tab and select forest B domain from outgoing trusts. If you haven't configure trust between the two forests, you need to setup the trust first
- 4. Click on "Properties…" then "Name Suffix Routing"
- 5. Check if the "*.file.core.windows.net" suffix shows up. If not, click on 'Refresh'
- 6. Select "*.file.core.windows.net", then click on "Enable" and "Apply"
+
+ 1. Log on to a machine that is domain-joined to forest A.
+ 2. Open the **Active Directory Domains and Trusts** console.
+ 3. Right-click on the domain from which you want to access the file share, then select the **Trusts** tab and select the forest B domain from outgoing trusts. If you haven't configured a trust between the two forests, you need to set up the trust first.
+ 4. Select **Properties** and then **Name Suffix Routing**.
+ 5. Check if the "*.file.core.windows.net" suffix shows up. If not, select **Refresh**.
+ 6. Select "*.file.core.windows.net", then select **Enable** and **Apply**.
* <a id="ad-aad-smb-files"></a> **Is there any difference in creating a computer account or service logon account to represent my storage account in AD?**
- Creating either a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) (default) or a [service logon account](/windows/win32/ad/about-service-logon-accounts) has no difference on how the authentication would work with Azure Files. You can make your own choice on how to represent a storage account as an identity in your AD environment. The default DomainAccountType set in Join-AzStorageAccountForAuth cmdlet is computer account. However, the password expiration age configured in your AD environment can be different for computer or service logon account and you need to take that into consideration for [Update the password of your storage account identity in AD](./storage-files-identity-ad-ds-update-password.md).
+ Creating either a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) (default) or a [service logon account](/windows/win32/ad/about-service-logon-accounts) has no difference on how the authentication would work with Azure Files. You can make your own choice on how to represent a storage account as an identity in your AD environment. The default DomainAccountType set in `Join-AzStorageAccountForAuth` cmdlet is computer account. However, the password expiration age configured in your AD environment can be different for computer or service logon account and you need to take that into consideration for [Update the password of your storage account identity in AD](./storage-files-identity-ad-ds-update-password.md).
* <a id="ad-support-rest-apis"></a> **How to remove cached credentials with storage account key and delete existing SMB connections before initializing new connection with Azure AD or AD credentials?**
- You can follow the two step process below to remove the saved credential associated with the storage account key and remove the SMB connection:
+ You can follow the two-step process below to remove the saved credential associated with the storage account key and remove the SMB connection:
+ 1. Run the following command in Windows Cmd.exe to remove the credential. If you can't find one, it means that you haven't persisted the credential and can skip this step: `cmdkey /delete:Domain:target=storage-account-name.file.core.windows.net`
* <a id="migrate-nfs-data"></a> **Can I migrate existing data to an NFS share?**
- Within a region, you can use standard tools like scp, rsync, or SSHFS to move data. Because Azure Files NFS can be accessed from multiple compute instances concurrently, you can improve copying speeds with parallel uploads. If you want to bring data from outside of a region, use a VPN or a Expressroute to mount to your file system from your on-premises data center.
+ Within a region, you can use standard tools like scp, rsync, or SSHFS to move data. Because Azure Files NFS can be accessed from multiple compute instances concurrently, you can improve copying speeds with parallel uploads. If you want to bring data from outside of a region, use a VPN or ExpressRoute to mount your file system from your on-premises datacenter.
* <a id=nfs-ibm-mq-support></a> **Can you run IBM MQ (including multi-instance) on Azure Files NFS?**
- * Azure Files NFS v4.1 file shares meets the three requirements set by IBM MQ
+ * Azure Files NFS v4.1 file shares meet the three requirements set by IBM MQ:
- https://www.ibm.com/docs/en/ibm-mq/9.2?topic=multiplatforms-requirements-shared-file-systems + Data write integrity + Guaranteed exclusive access to files + Release locks on failure
- * The following test cases run successfully
+ * The following test cases run successfully:
1. https://www.ibm.com/docs/en/ibm-mq/9.2?topic=multiplatforms-verifying-shared-file-system-behavior 2. https://www.ibm.com/docs/en/ibm-mq/9.2?topic=multiplatforms-running-amqsfhac-test-message-integrity
* <a id="geo-redundant-snaphsots"></a> **Are my share snapshots geo-redundant?**
- Share snapshots have the same redundancy as the Azure file share for which they were taken. If you have selected geo-redundant storage for your account, your share snapshot also is stored redundantly in the paired region.
+ Share snapshots have the same redundancy as the Azure file share for which they were taken. If you've selected geo-redundant storage for your account, your share snapshot also is stored redundantly in the paired region.
### Clean up share snapshots * <a id="delete-share-keep-snapshots"></a> **Can I delete my share but not delete my share snapshots?**
- If you have active share snapshots on your share, you cannot delete your share. You can use an API to delete share snapshots, along with the share. You also can delete both the share snapshots and the share in the Azure portal.
+ If you have active share snapshots on your share, you can't delete your share. You can use an API to delete share snapshots, along with the share. You also can delete both the share snapshots and the share in the Azure portal.
## Billing and pricing * <a id="share-snapshot-price"></a> **How much do share snapshots cost?**
- Share snapshots are incremental in nature. The base share snapshot is the share itself. All subsequent share snapshots are incremental and store only the difference from the preceding share snapshot. You are billed only for the changed content. If you have a share with 100 GiB of data but only 5 GiB has changed since your last share snapshot, the share snapshot consumes only 5 additional GiB, and you are billed for 105 GiB. For more information about transaction and standard egress charges, see the [Pricing page](https://azure.microsoft.com/pricing/details/storage/files/).
+ Share snapshots are incremental in nature. The base share snapshot is the share itself. All subsequent share snapshots are incremental and store only the difference from the preceding share snapshot. You're billed only for the changed content. If you have a share with 100 GiB of data but only 5 GiB has changed since your last share snapshot, the share snapshot consumes only 5 additional GiB, and you're billed for 105 GiB. For more information about transaction and standard egress charges, see the [Pricing page](https://azure.microsoft.com/pricing/details/storage/files/).
## See also * [Troubleshoot Azure Files in Windows](storage-troubleshoot-windows-file-connection-problems.md)
storsimple Storsimple 8000 Aad Registration Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-aad-registration-key.md
If using a StorSimple 8000 series device, use the following table to determine w
| If your device is running| Take the following action | |--||
-| Update 5.0 or earlier and the device is offline. | Transport Layer Security (TLS) 1.2 is being enforced by the StorSimple Device Manager service.<br>Install Update 5.1 (or higher):<ol><li>[Connect to Windows PowerShell on the StorSimple 8000 series device](storsimple-8000-deployment-walkthrough-u2.md#use-putty-to-connect-to-the-device-serial-console), or connect directly to the appliance via serial cable.</li><li>Use [Start-HcsUpdate](/powershell/module/hcs/start-hcsupdate?view=winserver2012r2-ps) to update the device. For steps, see [Install regular updates via Windows PowerShell](storsimple-update-device.md#to-install-regular-updates-via-windows-powershell-for-storsimple). This update is non-disruptive.</li><li>If `Start-HcsUpdate` doesnΓÇÖt work because of firewall issues, [install Update 5.1 (or higher) via the hotfix method](storsimple-8000-install-update-51.md#install-update-51-as-a-hotfix).</li></ol> |
+| Update 5.0 or earlier and the device is offline. | Transport Layer Security (TLS) 1.2 is being enforced by the StorSimple Device Manager service.<br>Install Update 5.1 (or higher):<ol><li>[Connect to Windows PowerShell on the StorSimple 8000 series device](storsimple-8000-deployment-walkthrough-u2.md#use-putty-to-connect-to-the-device-serial-console), or connect directly to the appliance via serial cable.</li><li>Use [Start-HcsUpdate](/powershell/module/hcs/start-hcsupdate?view=winserver2012r2-ps&preserve-view=true) to update the device. For steps, see [Install regular updates via Windows PowerShell](storsimple-update-device.md#to-install-regular-updates-via-windows-powershell-for-storsimple). This update is non-disruptive.</li><li>If `Start-HcsUpdate` doesnΓÇÖt work because of firewall issues, [install Update 5.1 (or higher) via the hotfix method](storsimple-8000-install-update-51.md#install-update-51-as-a-hotfix).</li></ol> |
| Update 5 or later and the device is offline. <br> You see an alert that the URL is not approved.|<ol><li>Modify the firewall rules to include the authentication URL. See [authentication URLs](#url-changes-for-azure-ad-authentication).</li><li>[Get the Azure AD registration key from the service](#azure-ad-based-registration-keys).</li><li>[Connect to the Windows PowerShell interface of the StorSimple 8000 series device](storsimple-8000-deployment-walkthrough-u2.md#use-putty-to-connect-to-the-device-serial-console).</li><li>Use `Redo-DeviceRegistration` cmdlet to register the device through the Windows PowerShell. Supply the key you got in the previous step.</li></ol> | | Update 4 or earlier and the device is offline. |<ol><li>Modify the firewall rules to include the authentication URL.</li><li>[Download Update 5 through catalog server](storsimple-8000-install-update-5.md#download-updates-for-your-device).</li><li>[Apply Update 5 through the hotfix method](storsimple-8000-install-update-5.md#install-update-5-as-a-hotfix).</li><li>[Get the Azure AD registration key from the service](#azure-ad-based-registration-keys).</li><li>[Connect to the Windows PowerShell interface of the StorSimple 8000 series device](storsimple-8000-deployment-walkthrough-u2.md#use-putty-to-connect-to-the-device-serial-console).</li><li>Use `Redo-DeviceRegistration` cmdlet to register the device through the Windows PowerShell. Supply the key you got in the previous step.</li></ol> | | Update 4 or earlier and the device is online. |Modify the firewall rules to include the authentication URL.<br> Install Update 5 through the Azure portal. |
stream-analytics Blob Storage Azure Data Lake Gen2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/blob-storage-azure-data-lake-gen2-output.md
Previously updated : 12/15/2021 Last updated : 06/06/2022 # Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics
Data Lake Storage Gen2 makes Azure Storage the foundation for building enterpris
Azure Blob storage offers a cost-effective and scalable solution for storing large amounts of unstructured data in the cloud. For an introduction on Blob storage and its usage, see [Upload, download, and list blobs with the Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md).
+>[!NOTE]
+> For details on the behaviors specific to the AVRO and Parquet formats, see the related sections in the [overview](stream-analytics-define-outputs.md).
+ ## Output configuration The following table lists the property names and their descriptions for creating a blob or ADLS Gen2 output.
stream-analytics Stream Analytics Define Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-outputs.md
Previously updated : 01/14/2022 Last updated : 06/06/2022 # Outputs from Azure Stream Analytics
Additionally, for more advanced tuning of the partitions, the number of output w
All outputs support batching, but only some support batch size explicitly. Azure Stream Analytics uses variable-size batches to process events and write to outputs. Typically the Stream Analytics engine doesn't write one message at a time, and uses batches for efficiency. When the rate of both the incoming and outgoing events is high, Stream Analytics uses larger batches. When the egress rate is low, it uses smaller batches to keep latency low.
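As a rough sketch of this adaptive behavior (an illustrative heuristic only, not the actual Stream Analytics algorithm), batch size can grow while events queue up faster than they drain and shrink when traffic is light:

```python
def next_batch_size(current, queue_depth, min_size=1, max_size=500):
    # Illustrative heuristic: grow batches under heavy ingress for
    # throughput; shrink them under light ingress to keep latency low.
    if queue_depth > current:
        return min(current * 2, max_size)
    return max(current // 2, min_size)

print(next_batch_size(10, 50))  # 20 (high ingress: larger batches)
print(next_batch_size(10, 2))   # 5 (low ingress: smaller batches)
```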
+## Avro and Parquet file splitting behavior
+
+A Stream Analytics query can generate multiple schemas for a given output. The list of projected columns, and their types, can change on a row-by-row basis.
+By design, the Avro and Parquet formats do not support variable schemas in a single file.
+
+The following behaviors may occur when directing a stream with variable schemas to an output using these formats:
+
+- If the schema change can be detected, the current output file will be closed, and a new one initialized on the new schema. Splitting files this way will severely slow down the output when schema changes happen frequently. Through back pressure, this will in turn severely impact the overall performance of the job.
+- If the schema change can't be detected, the row will most likely be rejected, and the job will become stuck because the row can't be output. Nested columns and multi-type arrays are situations where the change won't be detected and rows will be rejected.
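The first case (splitting the output file whenever the detected schema changes) can be sketched as follows. This is a simplified model for illustration, not the actual engine logic:

```python
# Formats like Avro and Parquet need one schema per file, so when the
# projected columns change, the current file is closed and a new one
# is started.

def split_by_schema(rows):
    """Yield (schema, rows) chunks, starting a new chunk whenever the
    projected columns change between consecutive rows."""
    chunk, current = [], None
    for row in rows:
        schema = tuple(row.keys())
        if schema != current:
            if chunk:
                yield current, chunk
            chunk, current = [], schema
        chunk.append(row)
    if chunk:
        yield current, chunk

rows = [
    {"id": 1, "temp": 20.5},
    {"id": 2, "temp": 21.0},
    {"id": 3, "temp": 21.2, "alert": True},  # schema change -> new file
]
files = list(split_by_schema(rows))
print(len(files))  # 2
```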
+
+We highly recommend treating outputs that use the Avro or Parquet format as strongly typed, or schema-on-write, and writing queries that target them accordingly (with explicit conversions and projections for a uniform schema).
+
+If multiple schemas need to be generated, consider creating multiple outputs and splitting records into each destination by using a `WHERE` clause.
+ ## Parquet output batching window properties When using Azure Resource Manager template deployment or the REST API, the two batching window properties are:
synapse-analytics Apache Spark Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/apache-spark-applications.md
Previously updated : 04/15/2020 Last updated : 06/01/2022 -+ # Use Synapse Studio to monitor your Apache Spark applications
With Azure Synapse Analytics, you can use Apache Spark to run notebooks, jobs, a
This article explains how to monitor your Apache Spark applications, allowing you to keep an eye on the latest status, issues, and progress. ## View Apache Spark applications + You can view all Apache Spark applications from **Monitor** -> **Apache Spark applications**.
- ![apache spark applications](./media/how-to-monitor-spark-applications/apache-spark-applications.png)
-## View completed Apache Spark application
+ ![Screenshot of Apache Spark applications.](./media/how-to-monitor-spark-applications/apache-spark-applications.png)
+
+## View completed Apache Spark applications
-Open **Monitor**, then select **Apache Spark applications**. To view the details about the completed Apache Spark applications, select the Apache Spark application and view the details.
+Open **Monitor**, then select **Apache Spark applications**. To view the details about the completed Apache Spark applications, select the Apache Spark application.
- ![select completed job](./media/how-to-monitor-spark-applications/select-completed-job.png)
+ ![Screenshot of completed job details.](./media/how-to-monitor-spark-applications/select-completed-job.png)
1. Check the **Completed tasks**, **Status**, and **Total duration**.
Open **Monitor**, then select **Apache Spark applications**. To view the details
11. Use the scroll bar to zoom in and out on the job graph. You can also select **Zoom to Fit** to make it fit the screen.
- [![view completed job](./media/how-to-monitor-spark-applications/view-completed-job.png)](./media/how-to-monitor-spark-applications/view-completed-job.png#lightbox)
-
+ [![Screenshot of completed job.](./media/how-to-monitor-spark-applications/view-completed-job.png)](./media/how-to-monitor-spark-applications/view-completed-job.png#lightbox)
12. The job graph node displays the following information of each stage:
Open **Monitor**, then select **Apache Spark applications**. To view the details
- Data written: the sum of output size and shuffle writes size - Stage number
- ![job graph node](./media/how-to-monitor-spark-applications/job-graph-node.png)
+ ![Screenshot of job graph node.](./media/how-to-monitor-spark-applications/job-graph-node.png)
13. Hover the mouse over a job, and the job details will be displayed in the tooltip:
- - Icon of job status: If the job status is successful, it will be displayed as a green "√"; if the job detects a problem, it will display a yellow "!".
- - Job ID.
+ - Icon of job status: If the job status is successful, it will be displayed as a green "√"; if the job detects a problem, it will display a yellow "!"
+ - Job ID
- General part: - Progress - Duration time
Open **Monitor**, then select **Apache Spark applications**. To view the details
- Time skew - Stage number
- ![hover a job](./media/how-to-monitor-spark-applications/hover-a-job.png)
+ ![Screenshot of tooltip hovering over a job.](./media/how-to-monitor-spark-applications/hover-a-job.png)
14. Click **Stage number** to expand all the stages contained in the job. Click **Collapse** next to the Job ID to collapse all the stages in the job.
-15. Click on **View details** in a stage graph,then the details for stage will show out.
+15. Click on **View details** in a stage graph, then the details for a stage will appear.
- [![expand all the stages](./media/how-to-monitor-spark-applications/expand-all-the-stages.png)](./media/how-to-monitor-spark-applications/expand-all-the-stages.png#lightbox)
+ [![Screenshot of stages expanded.](./media/how-to-monitor-spark-applications/expand-all-the-stages.png)](./media/how-to-monitor-spark-applications/expand-all-the-stages.png#lightbox)
-## Monitor running Apache Spark application
+## Monitor Apache Spark application progress
-Open **Monitor**, then select **Apache Spark applications**. To view the details about the Apache Spark applications that are running, select the submitting Apache Spark application and view the details. If the Apache Spark application is still running, you can monitor the progress.
+Open **Monitor**, then select **Apache Spark applications**. To view the details about the Apache Spark applications that are running, select the submitted Apache Spark application. If the Apache Spark application is still running, you can monitor the progress.
- ![select running job](./media/how-to-monitor-spark-applications/select-running-job.png)
+ ![Screenshot of selected running job.](./media/how-to-monitor-spark-applications/select-running-job.png)
1. Check the **Completed tasks**, **Status**, and **Total duration**.
Open **Monitor**, then select **Apache Spark applications**. To view the details
4. Click the **Spark UI** button to go to the Spark Job page.
-5. For **Job graph**, **Summary**, **Diagnostics**, **Logs**. You can see an overview of your job in the generated job graph. Refer to step 5 - 15 of [View completed Apache Spark application](#view-completed-apache-spark-application).
+5. For **Job graph**, **Summary**, **Diagnostics**, **Logs**. You can see an overview of your job in the generated job graph. Refer to steps 5 - 15 of [View completed Apache Spark applications](#view-completed-apache-spark-applications).
- [![view running job](./media/how-to-monitor-spark-applications/view-running-job.png)](./media/how-to-monitor-spark-applications/view-running-job.png#lightbox)
+ [![Screenshot of running job.](./media/how-to-monitor-spark-applications/view-running-job.png)](./media/how-to-monitor-spark-applications/view-running-job.png#lightbox)
-## View canceled Apache Spark application
+## View canceled Apache Spark applications
-Open **Monitor**, then select **Apache Spark applications**. To view the details about the canceled Apache Spark applications, select the Apache Spark application and view the details.
+Open **Monitor**, then select **Apache Spark applications**. To view the details about the canceled Apache Spark applications, select the Apache Spark application.
- ![select cancelled job](./media/how-to-monitor-spark-applications/select-cancelled-job.png)
+ ![Screenshot of canceled job.](./media/how-to-monitor-spark-applications/select-cancelled-job.png)
1. Check the **Completed tasks**, **Status**, and **Total duration**.
Open **Monitor**, then select **Apache Spark applications**. To view the details
4. Open the Apache history server link by clicking **Spark history server**.
-5. View the graph. You can see an overview of your job in the generated job graph. Refer to step 5 - 15 of [View completed Apache Spark application](#view-completed-apache-spark-application).
+5. View the graph. You can see an overview of your job in the generated job graph. Refer to steps 5 - 15 of [View completed Apache Spark applications](#view-completed-apache-spark-applications).
- [![view cancelled job](./media/how-to-monitor-spark-applications/view-cancelled-job.png)](./media/how-to-monitor-spark-applications/view-cancelled-job.png#lightbox)
+ [![Screenshot of canceled job details.](./media/how-to-monitor-spark-applications/view-cancelled-job.png)](./media/how-to-monitor-spark-applications/view-cancelled-job.png#lightbox)
## Debug failed Apache Spark application
-Open **Monitor**, then select **Apache Spark applications**. To view the details about the failed Apache Spark applications, select the Apache Spark application and view the details.
+Open **Monitor**, then select **Apache Spark applications**. To view the details about the failed Apache Spark applications, select the Apache Spark application.
-![select failed job](./media/how-to-monitor-spark-applications/select-failed-job.png)
+ ![Screenshot of failed job.](./media/how-to-monitor-spark-applications/select-failed-job.png)
1. Check the **Completed tasks**, **Status**, and **Total duration**.
Open **Monitor**, then select **Apache Spark applications**. To view the details
4. Open the Apache history server link by clicking **Spark history server**.
-5. View the graph. You can see an overview of your job in the generated job graph. Refer to step 5 - 15 of [View completed Apache Spark application](#view-completed-apache-spark-application).
+5. View the graph. You can see an overview of your job in the generated job graph. Refer to steps 5 - 15 of [View completed Apache Spark applications](#view-completed-apache-spark-applications).
- [![failed job info](./media/how-to-monitor-spark-applications/failed-job-info.png)](./media/how-to-monitor-spark-applications/failed-job-info.png#lightbox)
+ [![Screenshot of failed job details.](./media/how-to-monitor-spark-applications/failed-job-info.png)](./media/how-to-monitor-spark-applications/failed-job-info.png#lightbox)
-## View input data/output data for Apache Spark Application
+## View input data/output data
-Select an Apache Spark application, and click on **Input data/Output data tab** to view dates of the input and output for Apache Spark application. This function can better help you debug the Spark job. And the data source supports three storage methods: gen1, gen2, and blob.
+Select an Apache Spark application, and click on the **Input data/Output data** tab to view the data of the input and output for the Apache Spark application. This function can help you debug the Spark job. The data source supports three storage methods: gen1, gen2, and blob.
**Input data tab**
Select an Apache Spark application, and click on **Input data/Output data tab**
4. You can sort the input files by clicking **Name**, **Read format**, and **path**.
-5. Use the mouse hover on an input file, the icon of the **Download/Copy path/More** button will show out.
+5. Use the mouse to hover over an input file, and the **Download/Copy path/More** button icons will appear.
- ![input tab](./media/how-to-monitor-spark-applications/input-tab.png)
+ ![Screenshot of input tab.](./media/how-to-monitor-spark-applications/input-tab.png)
-6. Click on **More** button, the **Copy path/Show in explorer/Properties** show the context menu.
+6. Click on the **More** button. **Copy path/Show in explorer/Properties** will appear in the context menu.
- ![input more](./media/how-to-monitor-spark-applications/input-more.png)
+ ![Screenshot of more input menu.](./media/how-to-monitor-spark-applications/input-more.png)
* Copy path: can copy **Full path** and **Relative path**.
* Show in explorer: can jump to the linked storage account (Data->Linked).
* Properties: show the basic properties of the file (File name/File path/Read format/Size/Modified).
- ![properties image](./media/how-to-monitor-spark-applications/properties.png)
+ ![Screenshot of properties.](./media/how-to-monitor-spark-applications/properties.png)
**Output data tab**
- Have the same features as the input.
+ Displays the same features as the input tab.
- ![output-image](./media/how-to-monitor-spark-applications/output.png)
+ ![Screenshot of output data.](./media/how-to-monitor-spark-applications/output.png)
## Compare Apache Spark Applications
-There are two ways to compare applications. You can compare by choose a **Compare Application**, or click the **Compare in notebook** button to view it in the notebook.
+There are two ways to compare applications. You can compare by choosing **Compare applications**, or click the **Compare in notebook** button to view the comparison in a notebook.
-### Compare by choosing an application
+### Compare by application
-Click on **Compare applications** button and choose an application to compare performance, you can intuitively see the difference between the two applications.
+Click on the **Compare applications** button and choose an application to compare performance. You can see the difference between the two applications.
-![compare applications](./media/how-to-monitor-spark-applications/compare-applications.png)
+![Screenshot of compare applications.](./media/how-to-monitor-spark-applications/compare-applications.png)
-![details compare applications](./media/how-to-monitor-spark-applications/details-compare-applications.png)
+![Screenshot of details to compare applications.](./media/how-to-monitor-spark-applications/details-compare-applications.png)
-1. Use the mouse to hover on an application, and then the **Compare applications** icon is displayed.
+1. Use the mouse to hover over an application, and then the **Compare applications** icon is displayed.
2. Click on the **Compare applications** icon, and the Compare applications page will pop up.
Click on **Compare applications** button and choose an application to compare pe
4. When choosing the comparison application, you need to either enter the application URL, or choose from the recurring list. Then, click **OK** button.
- ![choose comparison application](./media/how-to-monitor-spark-applications/choose-comparison-application.png)
+ ![Screenshot of choose comparison application.](./media/how-to-monitor-spark-applications/choose-comparison-application.png)
5. The comparison result will be displayed on the compare applications page.
- ![comparison result](./media/how-to-monitor-spark-applications/comparison-result.png)
+ ![Screenshot of comparison result.](./media/how-to-monitor-spark-applications/comparison-result.png)
-### Compare by Compare in notebook
+### Compare in notebook
Click the **Compare in Notebook** button on the **Compare applications** page to open the notebook. The default name of the *.ipynb* file is **Recurrent Application Analytics**.
-![compare in notebook](./media/how-to-monitor-spark-applications/compare-in-notebook.png)
+![Screenshot of compare in notebook.](./media/how-to-monitor-spark-applications/compare-in-notebook.png)
You can run the **Recurrent Application Analytics** notebook file directly after setting the Spark pool and Language.
-![recurrent application analytics](./media/how-to-monitor-spark-applications/recurrent-application-analytics.png)
+![Screenshot of recurrent application analytics.](./media/how-to-monitor-spark-applications/recurrent-application-analytics.png)
## Next steps
synapse-analytics Sql Data Warehouse Manage Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
ORDER BY
nbr_files desc, gb_processed desc;
```
+## Retrieve query text from waiting and blocking queries
+
+The following query provides the query text and identifier for the waiting and blocking queries to make troubleshooting easier.
+
+```sql
+
+-- To retrieve query text from waiting and blocking queries
+
+SELECT waiting.session_id AS WaitingSessionId,
+ waiting.request_id AS WaitingRequestId,
+ COALESCE(waiting_exec_request.command,waiting_exec_request.command2) AS WaitingExecRequestText,
+ blocking.session_id AS BlockingSessionId,
+ blocking.request_id AS BlockingRequestId,
+ COALESCE(blocking_exec_request.command,blocking_exec_request.command2) AS BlockingExecRequestText,
+ waiting.object_name AS Blocking_Object_Name,
+ waiting.object_type AS Blocking_Object_Type,
+ waiting.type AS Lock_Type,
+ waiting.request_time AS Lock_Request_Time,
+ datediff(ms, waiting.request_time, getdate())/1000.0 AS Blocking_Time_sec
+FROM sys.dm_pdw_waits waiting
+ INNER JOIN sys.dm_pdw_waits blocking
+ ON waiting.object_type = blocking.object_type
+ AND waiting.object_name = blocking.object_name
+ INNER JOIN sys.dm_pdw_exec_requests blocking_exec_request
+ ON blocking.request_id = blocking_exec_request.request_id
+ INNER JOIN sys.dm_pdw_exec_requests waiting_exec_request
+ ON waiting.request_id = waiting_exec_request.request_id
+WHERE waiting.state = 'Queued'
+ AND blocking.state = 'Granted'
+ORDER BY Lock_Request_Time DESC;
+```
## Next steps
synapse-analytics Sql Data Warehouse Tables Partition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-partition.md
WITH
,20030101,20040101,20050101 ) )
-)
-;
+);
```

## Migrate partitions from SQL Server
To migrate SQL Server partition definitions to dedicated SQL pool simply:
- Eliminate the SQL Server [partition scheme](/sql/t-sql/statements/create-partition-scheme-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
- Add the [partition function](/sql/t-sql/statements/create-partition-function-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) definition to your CREATE TABLE.
-If you are migrating a partitioned table from a SQL Server instance, the following SQL can help you to figure out the number of rows that in each partition. Keep in mind that if the same partitioning granularity is used in dedicated SQL pool, the number of rows per partition decreases by a factor of 60.
+If you are migrating a partitioned table from a SQL Server instance, the following SQL can help you to figure out the number of rows in each partition. Keep in mind that if the same partitioning granularity is used in dedicated SQL pool, the number of rows per partition decreases by a factor of 60.
```sql
-- Partition information for a SQL Server Database
GROUP BY s.[name]
, p.[partition_number] , p.[rows] , rv.[value]
-, p.[data_compression_desc]
-;
+, p.[data_compression_desc];
```

## Partition switching

Dedicated SQL pool supports partition splitting, merging, and switching. Each of these functions is executed using the [ALTER TABLE](/sql/t-sql/statements/alter-table-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) statement.
-To switch partitions between two tables, you must ensure that the partitions align on their respective boundaries and that the table definitions match. As check constraints are not available to enforce the range of values in a table, the source table must contain the same partition boundaries as the target table. If the partition boundaries are not then same, then the partition switch will fail as the partition metadata will not be synchronized.
+To switch partitions between two tables, you must ensure that the partitions align on their respective boundaries and that the table definitions match. As check constraints are not available to enforce the range of values in a table, the source table must contain the same partition boundaries as the target table. If the partition boundaries are not the same, then the partition switch will fail as the partition metadata will not be synchronized.
A partition split requires the respective partition (not necessarily the whole table) to be empty if the table has a clustered columnstore index (CCI). Other partitions in the same table can contain data. A partition that contains data cannot be split; attempting to do so results in the error: `ALTER PARTITION statement failed because the partition is not empty. Only empty partitions can be split in when a columnstore index exists on the table. Consider disabling the columnstore index before issuing the ALTER PARTITION statement, then rebuilding the columnstore index after ALTER PARTITION is complete.` As a workaround to split a partition containing data, see [How to split a partition that contains data](#how-to-split-a-partition-that-contains-data).
WITH
(20000101 ) )
-)
-;
+);
INSERT INTO dbo.FactInternetSales VALUES (1,19990101,1,1,1,1,1,1);
+ INSERT INTO dbo.FactInternetSales VALUES (1,20000101,1,1,1,1,1,1);
```
JOIN sys.tables t ON p.[object_id] = t.[object_id]
JOIN sys.schemas s ON t.[schema_id] = s.[schema_id] JOIN sys.indexes i ON p.[object_id] = i.[object_Id] AND p.[index_Id] = i.[index_Id]
-WHERE t.[name] = 'FactInternetSales'
-;
+WHERE t.[name] = 'FactInternetSales';
```

The following split command receives an error message:
However, you can use `CTAS` to create a new table to hold the data.
```sql
CREATE TABLE dbo.FactInternetSales_20000101
WITH ( DISTRIBUTION = HASH(ProductKey)
- , CLUSTERED COLUMNSTORE INDEX
+ , CLUSTERED COLUMNSTORE INDEX
, PARTITION ( [OrderDateKey] RANGE RIGHT FOR VALUES (20000101 ) )
- )
+)
AS SELECT * FROM FactInternetSales
-WHERE 1=2
-;
+WHERE 1=2;
```

As the partition boundaries are aligned, a switch is permitted. This will leave the source table with an empty partition that you can subsequently split.

```sql
-ALTER TABLE FactInternetSales SWITCH PARTITION 2 TO FactInternetSales_20000101 PARTITION 2;
+ALTER TABLE FactInternetSales SWITCH PARTITION 2 TO FactInternetSales_20000101 PARTITION 2;
ALTER TABLE FactInternetSales SPLIT RANGE (20010101); ```
AS
SELECT * FROM [dbo].[FactInternetSales_20000101] WHERE [OrderDateKey] >= 20000101
-AND [OrderDateKey] < 20010101
-;
+AND [OrderDateKey] < 20010101;
ALTER TABLE dbo.FactInternetSales_20000101_20010101 SWITCH PARTITION 2 TO dbo.FactInternetSales PARTITION 2; ```
Once you have completed the movement of the data, it is a good idea to refresh t
```sql
UPDATE STATISTICS [dbo].[FactInternetSales];
```
+Finally, in the case of a one-time partition switch to move data, you could drop the tables created for the partition switch, `FactInternetSales_20000101_20010101` and `FactInternetSales_20000101`. Alternatively, you may want to keep empty tables for regular, automated partition switches.
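For the one-time cleanup described above, a minimal sketch using the staging table names from this walkthrough could look like:

```sql
-- Optional cleanup after a one-time partition switch:
-- drop the staging tables once their data has been moved.
DROP TABLE [dbo].[FactInternetSales_20000101_20010101];
DROP TABLE [dbo].[FactInternetSales_20000101];
```

If you run regular, automated partition switches instead, keep the empty staging tables and reuse them on each cycle.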
### Load new data into partitions that contain data in one step
To avoid your table definition from **rusting** in your source control system, y
( CLUSTERED COLUMNSTORE INDEX , DISTRIBUTION = HASH([ProductKey]) , PARTITION ( [OrderDateKey] RANGE RIGHT FOR VALUES () )
- )
- ;
+ );
```

1. `SPLIT` the table as part of the deployment process:
To avoid your table definition from **rusting** in your source control system, y
SELECT CAST(20030101 AS INT) UNION ALL SELECT CAST(20040101 AS INT)
- ) a
- ;
+ ) a;
-- Iterate over the partition boundaries and split the table
To avoid your table definition from **rusting** in your source control system, y
, @q NVARCHAR(4000) --query , @p NVARCHAR(20) = N'' --partition_number , @s NVARCHAR(128) = N'dbo' --schema
- , @t NVARCHAR(128) = N'FactInternetSales' --table
- ;
+ , @t NVARCHAR(128) = N'FactInternetSales'; --table
WHILE @i <= @c BEGIN
virtual-desktop Data Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/data-locations.md
description: A brief overview of which locations Azure Virtual Desktop's data an
Previously updated : 06/30/2021 Last updated : 06/07/2022

# Data locations for Azure Virtual Desktop
-Azure Virtual Desktop is currently available for all geographical locations. Administrators can choose the location to store user data when they create the host pool virtual machines and associated services, such as file servers. Learn more about Azure geographies at [Data residency in Azure](https://azure.microsoft.com/global-infrastructure/data-residency/#overview).
+Azure Virtual Desktop is available in many Azure regions, which are grouped by geography. When Azure Virtual Desktop resources are deployed, you have to specify the Azure region they'll be created in. The location of the resource determines where its information will be stored and the geography where related information will be stored. Azure Virtual Desktop itself is a non-regional service where there's no dependency on a specific Azure region. Learn more about [Data residency in Azure](https://azure.microsoft.com/global-infrastructure/data-residency/#overview) and [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/).
->[!NOTE]
->Microsoft doesn't control or limit the regions where you or your users can access your user and app-specific data.
+Azure Virtual Desktop stores various information for service objects, such as host pool names, application group names, workspace names, and user principal names. Data is categorized into different types, such as customer input, customer data, diagnostic data, and service-generated data. For more information about data category definitions, see [How Microsoft categorizes data for online services](https://www.microsoft.com/trust-center/privacy/customer-data-definitions).
->[!IMPORTANT]
->Azure Virtual Desktop stores various types of information like host pool names, app group names, workspace names, and user principal names in a datacenter. While creating any of the service objects, the customer has to enter the location where the object needs to be created. The location of this object determines where the information for the object will be stored. The customer will choose an Azure region and the related information will be stored in the associated geography. Customers also choose a region for the Session host Virtual Machines in an additional step in the deployment process. This region can be any Azure region, hence it can be the same region as the service objects or a separate region. For a list of all Azure regions and related geographies, visit [https://azure.microsoft.com/global-infrastructure/geographies/](https://azure.microsoft.com/global-infrastructure/geographies/).
-
-This article describes which information the Azure Virtual Desktop service stores. To learn more about the customer data definitions, see [How Microsoft categorizes data for online services](https://www.microsoft.com/trust-center/privacy/customer-data-definitions).
+> [!NOTE]
+> Microsoft doesn't control or limit the regions where you or your users can access your user and app-specific data.
## Customer input
-To set up the Azure Virtual Desktop service, the customer must create host pools and other service objects. During configuration, the customer must give information like the host pool name, application group name, and so on. This information is considered customer input. Customer input is stored in the geography associated with the region the object is created in. Azure Resource Manager paths to the objects are considered organizational information, so data residency doesn't apply to them. Data about Azure Resource Manager paths will be stored outside of the chosen geography.
+To set up Azure Virtual Desktop, you must create host pools and other service objects. During configuration, you must enter information such as the host pool name, application group name, and so on. This information is considered *customer input*. Customer input is stored in the geography associated with the Azure region the resource is created in. Azure Resource Manager paths to the objects are considered organizational information, so data residency doesn't apply to them. Data about Azure Resource Manager paths will be stored outside of the chosen geography.
## Customer data
-The service doesn't directly store any user created or app-related information, but it does store customer data like application names and user principal names because they're part of the object setup process. This information is stored in the geography associated with the region the customer created the object in.
+The Azure Virtual Desktop service doesn't directly store any user-created or app-related information, but it does store customer data, such as application names and user principal names, because they're part of the resource deployment process. This information is stored in the geography associated with the region you created the resource in.
## Diagnostic data
-Azure Virtual Desktop gathers service-generated diagnostic data whenever the customer or user interacts with the service. This data is only used for troubleshooting, support, and checking the health of the service in aggregate form. For example, from the session host side, when a VM registers to the service, we generate information that includes the virtual machine (VM) name, which host pool the VM belongs to, and so on. This information is stored in the geography associated with the region the host pool is created in. Also, when a user connects to the service and launches a remote desktop, we generate diagnostic information that includes the user principal name, client location, client IP address, which host pool the user is connecting to, and so on. This information is sent to two different locations:
+Diagnostic data is generated by the Azure Virtual Desktop service and is gathered whenever administrators or users interact with the service. This data is only used for troubleshooting, support, and checking the health of the service in aggregate form. For example, when a session host VM is registered to a host pool, information is generated that includes the virtual machine (VM) name, which host pool the VM belongs to, and so on. This information is stored in the geography associated with the Azure region the host pool is created in. Also, when a user connects to the service and launches a session, diagnostic information is generated that includes the user principal name, client location, client IP address, which host pool the user is connecting to, and so on. This information is sent to two different locations:
-- The location closest to the user where the service infrastructure (client traces, user traces, diagnostic data) is present.
+- The location closest to the user where the service infrastructure (client traces, user traces, and diagnostic data) is present.
- The location where the host pool is located.

## Service-generated data
-To keep Azure Virtual Desktop reliable and scalable, we aggregate traffic patterns and usage to check the health and performance of the infrastructure control plane. For example, to understand how to ramp up regional infrastructure capacity as service usage increases, we process service usage log data. We then review the logs for peak times and decide which data centers to add to meet this capacity.
+To keep Azure Virtual Desktop reliable and scalable, traffic patterns and usage are aggregated to check the health and performance of the infrastructure control plane. For example, to help us understand how to ramp up regional infrastructure capacity as service usage increases, we process service usage log data. We then review the logs for peak times and decide where to increase capacity.
-We currently support storing the aforementioned data in the following locations:
+Storing service-generated data is currently supported in the following geographies:
- United States (US)
- Europe (EU)
- United Kingdom (UK)
- Canada (CA)
-- Japan (JP) (Public Preview)
+- Japan (JP) *(in Public Preview)*
+
+In addition, service-generated data is aggregated from all locations where the service infrastructure is, and sent to the US geography. The data sent to the US includes scrubbed data, but not customer data.
-In addition we aggregate service-generated from all locations where the service infrastructure is, then send it to the US geography. The data sent to the US region includes scrubbed data, but not customer data.
+## Data storage
-More geographies will be added as the service grows. The stored information is encrypted at rest, and geo-redundant mirrors are maintained within the geography. Customer data, such as app settings and user data, resides in the location the customer chooses and isn't managed by the service.
+Stored information is encrypted at rest, and geo-redundant mirrors are maintained within the geography. Data generated by the Azure Virtual Desktop service is replicated within the Azure geography for disaster recovery purposes.
-The outlined data is replicated within the Azure geography for disaster recovery purposes.
+User-created or app-related information, such as app settings and user data, resides in the Azure region you choose and isn't managed by the Azure Virtual Desktop service.
virtual-desktop Connect Windows 7 10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-windows-7-10.md
Last updated 01/27/2022
+adobe-target: true
# Connect with the Windows Desktop client
virtual-machines Dav4 Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dav4-dasv4-series.md
Dasv4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
[VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported <br>
<br>

| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max burst cached and temp storage throughput: IOPS / MBps<sup>1</sup> | Max uncached disk throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) |
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
description: Learn about ultra disks for Azure VMs
Previously updated : 12/07/2021 Last updated : 06/06/2022
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
az vm extension set \
If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the SAS token for accessing the script in your storage account. The result is that the initial deployment works, but when the storage account's SAS token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
-We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0), the [Azure CLI](/cli/azure/vmss/extension), or an [Azure Resource Manager template](/azure/templates/microsoft.compute/virtualmachinescalesets/extensions) when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the SAS token for accessing the script in your storage account for as long as you need.
+We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0&preserve-view=true), the [Azure CLI](/cli/azure/vmss/extension), or an [Azure Resource Manager template](/azure/templates/microsoft.compute/virtualmachinescalesets/extensions) when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the SAS token for accessing the script in your storage account for as long as you need.
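As a sketch of the CLI approach, the extension can be attached to a scale set with `az vmss extension set`. The resource names and script URL below are placeholders, and the `managedIdentity` protected setting assumes the scale set's system-assigned identity has been granted read access to the storage account:

```azurecli
# Sketch: deploy the Custom Script Extension on a scale set, fetching the
# script with the VM's managed identity instead of a SAS token.
# Resource names and the script URL are placeholders.
az vmss extension set \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --settings '{"fileUris": ["https://mystorage.blob.core.windows.net/scripts/setup.sh"]}' \
  --protected-settings '{"commandToExecute": "bash setup.sh", "managedIdentity": {}}'
```

Because no SAS token is involved, later scaling operations don't depend on a token that can expire.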
## Troubleshooting

When the Custom Script Extension runs, the script is created or downloaded into a directory that's similar to the following example. The command output is also saved into this directory in `stdout` and `stderr` files.
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-windows.md
The response content cannot be parsed because the Internet Explorer engine is no
If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the SAS token for accessing the script in your storage account. The result is that the initial deployment works, but when the storage account's SAS token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
-We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0), the [Azure CLI](/cli/azure/vmss/extension), or an Azure Resource Manager template when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the SAS token for accessing the script in your storage account for as long as you need.
+We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension?view=azps-7.0.0&preserve-view=true), the [Azure CLI](/cli/azure/vmss/extension), or an Azure Resource Manager template when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the SAS token for accessing the script in your storage account for as long as you need.
## Classic VMs
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
Then run installation commands specific for your distribution.
> [!NOTE]
> The example below shows the CUDA package path for Ubuntu 16.04. Replace the path specific to the version you plan to use.
>
- > Visit the [Nvidia Download Center] (https://developer.download.nvidia.com/compute/cuda/repos/) for the full path specific to each version.
+ > Visit the [Nvidia Download Center](https://developer.download.nvidia.com/compute/cuda/repos/) for the full path specific to each version.
> ```bash CUDA_REPO_PKG=cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
sudo reboot
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm sudo yum install dkms
- wget https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo /etc/yum.repos.d/cuda-rhel7.repo
+ sudo wget https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo -O /etc/yum.repos.d/cuda-rhel7.repo
sudo yum install cuda-drivers ```
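Put together, the RHEL/CentOS 7 steps above amount to the following sequence (a sketch consolidating the commands shown; run with sudo and reboot once installation completes):

```shell
# RHEL/CentOS 7: enable EPEL, install DKMS, add NVIDIA's CUDA repo,
# then install the drivers and reboot to load the new kernel modules.
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install -y dkms
sudo wget https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo -O /etc/yum.repos.d/cuda-rhel7.repo
sudo yum install -y cuda-drivers
sudo reboot
```

The RHEL/CentOS 8 sequence below is identical apart from the EPEL release package and the `rhel8` repo path.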
For example, CentOS 8 and RHEL 8 will need the following steps.
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm sudo yum install dkms
- wget https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo /etc/yum.repos.d/cuda-rhel8.repo
+ sudo wget https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo -O /etc/yum.repos.d/cuda-rhel8.repo
sudo yum install cuda-drivers ```
virtual-machines Hybrid Use Benefit Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/hybrid-use-benefit-licensing.md
Title: Azure Hybrid Benefit for Windows Server description: Learn how to maximize your Windows Software Assurance benefits to bring on-premises licenses to Azure.- Last updated 4/22/2018- ms.devlang: azurecli
az vm get-instance-view -g MyResourceGroup -n MyVM --query "[?licenseType=='Wind
To see and count all virtual machines and virtual machine scale sets deployed with Azure Hybrid Benefit for Windows Server, you can run the following command from your subscription: ### Portal
-From the Virtual Machine or Virtual machine scale sets resource blade, you can view a list of all your VM(s) and licensing type by configuring the table column to include "Azure Hybrid Benefit". The VM setting can either be in "Enabled", "Not enabled" or "Not supported" state.
+From the Virtual Machine or Virtual machine scale sets resource blade, you can view a list of all your VM(s) and licensing type by configuring the table column to include "OS licensing benefit". The VM setting can either be in **Azure Hybrid Benefit for Windows**, **Not enabled**, or **Windows client with multi-tenant hosting** state.
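Besides the portal column, the same license information can be queried from the Azure CLI. A sketch, using the standard `licenseType` property (the resource group name is a placeholder):

```shell
# Sketch: list VMs deployed with Azure Hybrid Benefit for Windows Server,
# then count them across the subscription.
az vm list -g MyResourceGroup \
  --query "[?licenseType=='Windows_Server'].{Name:name, License:licenseType}" -o table
az vm list --query "length([?licenseType=='Windows_Server'])" -o tsv
```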
### PowerShell For virtual machines:
virtual-machines Prepare For Upload Vhd Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/prepare-for-upload-vhd-image.md
generalized disk, see
## Convert the virtual disk to a fixed size VHD > [!NOTE]
-> If you're going to use Azure PowerShell to [upload your disk to Azure](disks-upload-vhd-to-managed-disk-powershell.md) and you have [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) enabled, this step is optional. [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0) will perform it for you.
+> If you're going to use Azure PowerShell to [upload your disk to Azure](disks-upload-vhd-to-managed-disk-powershell.md) and you have [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) enabled, this step is optional. [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) will perform it for you.
Use one of the methods in this section to convert and resize your virtual disk to the required format for Azure:
You can convert a virtual disk using the [Convert-VHD](/powershell/module/hyper-
cmdlet in PowerShell. If you need information about installing this cmdlet see [Install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server). > [!NOTE]
-> If you're going to use Azure PowerShell to [upload your disk to Azure](disks-upload-vhd-to-managed-disk-powershell.md) and you have [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) enabled, this step is optional. [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0) will perform it for you.
+> If you're going to use Azure PowerShell to [upload your disk to Azure](disks-upload-vhd-to-managed-disk-powershell.md) and you have [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) enabled, this step is optional. [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) will perform it for you.
The following example converts the disk from VHDX to VHD. It also converts the disk from a dynamically expanding disk to a fixed-size disk.
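A minimal sketch of that conversion, assuming the source disk is `MyVM.vhdx` in the current directory (adjust paths for your environment):

```powershell
# Convert a dynamically expanding VHDX to a fixed-size VHD for Azure upload.
Convert-VHD -Path .\MyVM.vhdx -DestinationPath .\MyVM.vhd -VHDType Fixed
```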
disk.
### Use Hyper-V Manager to resize the disk > [!NOTE]
-> If you're going to use Azure PowerShell to [upload your disk to Azure](disks-upload-vhd-to-managed-disk-powershell.md) and you have [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) enabled, this step is optional. [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0) will perform it for you.
+> If you're going to use Azure PowerShell to [upload your disk to Azure](disks-upload-vhd-to-managed-disk-powershell.md) and you have [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) enabled, this step is optional. [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) will perform it for you.
1. Open Hyper-V Manager and select your local computer on the left. In the menu above the computer list, select **Action** > **Edit Disk**.
disk.
### Use PowerShell to resize the disk > [!NOTE]
-> If you're going to use Azure PowerShell to [upload your disk to Azure](disks-upload-vhd-to-managed-disk-powershell.md) and you have [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) enabled, this step is optional. [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0) will perform it for you.
+> If you're going to use Azure PowerShell to [upload your disk to Azure](disks-upload-vhd-to-managed-disk-powershell.md) and you have [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) enabled, this step is optional. [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) will perform it for you.
You can resize a virtual disk using the [Resize-VHD](/powershell/module/hyper-v/resize-vhd) cmdlet in PowerShell. If you need information about installing this cmdlet see [Install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).
virtual-machines Redhat Imagelist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-imagelist.md
Azure offers various RHEL images for different use cases.
## List of RHEL images This section provides list of RHEL images available in Azure. Unless otherwise stated, all images are LVM-partitioned and attached to regular RHEL repositories (not EUS, not E4S). The following images are currently available for general use:
-> [!NOTE]
-> RAW images are no longer being produced in favor of LVM-partitioned images. LVM provides several advantages over the older raw (non-LVM) partitioning scheme, including significantly more flexible partition resizing options.
+### RHEL x64 architecture images
Offer| SKU | Partitioning | Provisioning | Notes :-|:-|:-|:-|:--
RHEL | 6.7 | RAW | Linux Agent | Extended Lifecycle Support ava
| | 81-ci-gen2| LVM | Linux Agent | Hyper-V Generation 2 - Attached to EUS repositories as of November 2020. | | 8.2 | LVM | Linux Agent | Attached to EUS repositories as of November 2020. | | 82gen2 | LVM | Linux Agent | Hyper-V Generation 2 - Attached to EUS repositories as of November 2020.
-| | 8.3 | LVM | Linux Agent | Attached to regular repositories (EUS unavailable for RHEL 8.3)
-| | 83-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to regular repositories (EUS unavailable for RHEL 8.3)
-| | 8.4 | LVM | Linux Agent | Attached to EUS repositories
-| | 84-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Hyper-V Generation 2 - Attached to EUS repositories
-| | 8.5 | LVM | Linux Agent | Attached to regular repositories (EUS unavailable for RHEL 8.5)
-| | 85-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to regular repositories (EUS unavailable for RHEL 8.5)
-| | 8.6 | LVM | Linux Agent | Attached to EUS repositories
-| | 86-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Hyper-V Generation 2 - Attached to EUS repositories
-RHEL-SAP-APPS | 6.8 | RAW | Linux Agent | RHEL 6.8 for SAP Business Applications. Outdated in favor of the RHEL-SAP images.
-| | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP Business Applications. Outdated in favor of the RHEL-SAP images.
-| | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP Business Applications
-| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP Business Applications
-| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP Business Applications
-| | 77-gen2 | LVM | Linux Agent | RHEL 7.7 for SAP Business Applications. Generation 2 image
-| | 8.1 | LVM | Linux Agent | RHEL 8.1 for SAP Business Applications
-| | 81-gen2 | LVM | Linux Agent | RHEL 8.1 for SAP Business Applications. Generation 2 image
-| | 8.2 | LVM | Linux Agent | RHEL 8.2 for SAP Business Applications
-| | 82-gen2 | LVM | Linux Agent | RHEL 8.2 for SAP Business Applications. Generation 2 image
-| | 8.4 | LVM | Linux Agent | RHEL 8.4 for SAP Business Applications
-| | 84-gen2 | LVM | Linux Agent | RHEL 8.4 for SAP Business Applications. Generation 2 image
-| | 8.6 | LVM | Linux Agent | RHEL 8.6 for SAP Business Applications
-| | 86-gen2 | LVM | Linux Agent | RHEL 8.6 for SAP Business Applications. Generation 2 image
-RHEL-SAP-HA | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 8.3 | LVM | Linux Agent | Attached to regular repositories (EUS unavailable for RHEL 8.3)
+| | 83-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to regular repositories (EUS unavailable for RHEL 8.3)
+| | 8.4 | LVM | Linux Agent | Attached to EUS repositories
+| | 84-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to EUS repositories
+| | 8.5 | LVM | Linux Agent | Attached to regular repositories (EUS unavailable for RHEL 8.5)
+| | 85-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to regular repositories (EUS unavailable for RHEL 8.5)
+| | 8.6 | LVM | Linux Agent | Attached to EUS repositories
+| | 86-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to EUS repositories
+| | 9.0 | LVM | Linux Agent | Attached to EUS repositories
+| | 90-gen2 | LVM | Linux Agent |Hyper-V Generation 2 - Attached to EUS repositories
+RHEL-SAP-APPS | 6.8 | RAW | Linux Agent | RHEL 6.8 for SAP Business Applications. Outdated in favor of the RHEL-SAP images.
+| | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP Business Applications. Outdated in favor of the RHEL-SAP images.
+| | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP Business Applications
+| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP Business Applications
+| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP Business Applications
+| | 77-gen2 | LVM | Linux Agent | RHEL 7.7 for SAP Business Applications. Generation 2 image
+| | 8.1 | LVM | Linux Agent | RHEL 8.1 for SAP Business Applications
+| | 81-gen2 | LVM | Linux Agent | RHEL 8.1 for SAP Business Applications. Generation 2 image
+| | 8.2 | LVM | Linux Agent | RHEL 8.2 for SAP Business Applications
+| | 82-gen2 | LVM | Linux Agent | RHEL 8.2 for SAP Business Applications. Generation 2 image
+| | 8.4 | LVM | Linux Agent | RHEL 8.4 for SAP Business Applications
+| | 84-gen2 | LVM | Linux Agent | RHEL 8.4 for SAP Business Applications. Generation 2 image
+| | 8.6 | LVM | Linux Agent | RHEL 8.6 for SAP Business Applications
+| | 86-gen2 | LVM | Linux Agent | RHEL 8.6 for SAP Business Applications. Generation 2 image
+RHEL-SAP-HA | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
| | 74sapha-gen2 | LVM | Linux Agent | RHEL 7.4 for SAP with HA and Update Services. Generation 2 image Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
-| | 7.5 | LVM | Linux Agent | RHEL 7.5 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
-| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 7.5 | LVM | Linux Agent | RHEL 7.5 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
| | 76sapha-gen2 | LVM | Linux Agent | RHEL 7.6 for SAP with HA and Update Services. Generation 2 image Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
-| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
| | 77sapha-gen2 | LVM | Linux Agent | RHEL 7.7 for SAP with HA and Update Services. Generation 2 image Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
-| | 8.1 | LVM | Linux Agent | RHEL 8.1 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 8.1 | LVM | Linux Agent | RHEL 8.1 for SAP with HA and Update Services. Images are attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
| | 81sapha-gen2 | LVM | Linux Agent | RHEL 8.1 for SAP with HA and Update Services. Generation 2 images Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
-| | 8.2 | LVM | Linux Agent | RHEL 8.2 for SAP with HA and Update Services. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 8.2 | LVM | Linux Agent | RHEL 8.2 for SAP with HA and Update Services. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
| | 82sapha-gen2 | LVM | Linux Agent | RHEL 8.2 for SAP with HA and Update Services. Generation 2 images Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
-| | 8.4 | LVM | Linux Agent | RHEL 8.4 for SAP with HA and Update Services. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 8.4 | LVM | Linux Agent | RHEL 8.4 for SAP with HA and Update Services. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
| | 84sapha-gen2 | LVM | Linux Agent | RHEL 8.4 for SAP with HA and Update Services. Generation 2 images Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
-| | 8.6 | LVM | Linux Agent | RHEL 8.6 for SAP with HA and Update Services. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
+| | 8.6 | LVM | Linux Agent | RHEL 8.6 for SAP with HA and Update Services. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees
| | 86sapha-gen2 | LVM | Linux Agent | RHEL 8.6 for SAP with HA and Update Services. Generation 2 images Attached to E4S repositories. Will charge a premium for SAP and HA repositories and RHEL, on top of the base compute fees rhel-byos |rhel-lvm74| LVM | Linux Agent | RHEL 7.4 BYOS images, not attached to any source of updates, won't charge an RHEL premium | |rhel-lvm75| LVM | Linux Agent | RHEL 7.5 BYOS images, not attached to any source of updates, won't charge an RHEL premium
rhel-byos |rhel-lvm74| LVM | Linux Agent | RHEL 7.4 BYOS images, not atta
| |rhel-lvm86-gen2 | LVM | Linux Agent | RHEL 8.6 Generation 2 BYOS images, not attached to any source of updates, won't charge an RHEL premium RHEL-SAP (out of support) | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP HANA and Business Apps. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee | | 74sap-gen2| LVM | Linux Agent | RHEL 7.4 for SAP HANA and Business Apps. Generation 2 image. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
-| | 7.5 | LVM | Linux Agent | RHEL 7.5 for SAP HANA and Business Apps. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
+| | 7.5 | LVM | Linux Agent | RHEL 7.5 for SAP HANA and Business Apps. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
| | 75sap-gen2| LVM | Linux Agent | RHEL 7.5 for SAP HANA and Business Apps. Generation 2 image. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
-| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP HANA and Business Apps. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
+| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP HANA and Business Apps. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
| | 76sap-gen2| LVM | Linux Agent | RHEL 7.6 for SAP HANA and Business Apps. Generation 2 image. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
-| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP HANA and Business Apps. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
+| | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP HANA and Business Apps. Images are attached to E4S repositories, will charge a premium for SAP and RHEL and the base compute fee
RHEL-SAP-HANA (out of support) | 6.7 | RAW | Linux Agent | RHEL 6.7 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271)
-| | 7.2 | LVM | Linux Agent | RHEL 7.2 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271)
-| | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271)
+| | 7.2 | LVM | Linux Agent | RHEL 7.2 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271)
+| | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271)
RHEL-HA (out of support) | 7.4 | LVM | Linux Agent | RHEL 7.4 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images
-| | 7.5 | LVM | Linux Agent | RHEL 7.5 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images
-| | 7.6 | LVM | Linux Agent | RHEL 7.6 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images
+| | 7.5 | LVM | Linux Agent | RHEL 7.5 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images
+| | 7.6 | LVM | Linux Agent | RHEL 7.6 with HA Add-On. Will charge a premium for HA and RHEL on top of the base compute fee. Outdated in favor of the RHEL-SAP-HA images
> [!NOTE] > The RHEL-SAP-HANA product offering is considered end of life by Red Hat. Existing deployments will continue to work normally, but Red Hat recommends that customers migrate from the RHEL-SAP-HANA images to the RHEL-SAP-HA images which includes the SAP HANA repositories and the HA add-on. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271).
+### RHEL ARM64 architecture images
+
+Offer| SKU | Partitioning | Provisioning | Notes
+:-|:-|:-|:-|:--
+RHEL | 8_6-arm64 | LVM | Linux Agent | Hyper-V Generation 2 - Attached to EUS repositories
+ ## Next steps * Learn more about the [Red Hat images in Azure](./redhat-images.md). * Learn more about the [Red Hat Update Infrastructure](./redhat-rhui.md). * Learn more about the [RHEL BYOS offer](./byos.md).
-* Information on Red Hat support policies for all versions of RHEL can be found on the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
+* Information on Red Hat support policies for all versions of RHEL can be found on the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
>[!NOTE] > Hub Routing Intent is currently in gated public preview. >
-> This preview is provided without a service-level agreement and isn't recommended for production workloads. Some features might be unsupported or have constrained capabilities. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> The preview for Hub Routing Intent impacts routing and route advertisements for **all** connections to the Virtual Hub (Point-to-site VPN, Site-to-site VPN, ExpressRoute, NVA, Virtual Network).
+
+This preview is provided without a service-level agreement and isn't recommended for production workloads. Some features might be unsupported or have constrained capabilities. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
-> Inspecting inter-hub traffic via Azure Firewall or NVA between Virtual Hubs deployed in **different** Azure regions is available in select Azure Regions. Please reach out to previewinterhub@microsoft.com for more details.
->
> To obtain access to the preview, please deploy any Virtual WAN hubs and gateways (Site-to-site VPN Gateways, Point-to-site Gateways and ExpressRoute Gateways) and then reach out to previewinterhub@microsoft.com with the Virtual WAN ID, Subscription ID and Azure Region you wish to configure Routing Intent in. Expect a response within 48 business hours (Monday-Friday) with confirmation of feature enablement. Please note that any gateways created after feature enablement will need to be upgraded by the Virtual WAN team. ## Background
Routing Intent and Routing policies allow you to specify how the Virtual WAN hub
While Private Traffic includes both branch and Virtual Network address prefixes, Routing Policies considers them as one entity within the Routing Intent Concepts. >[!NOTE]
-> Inter-region traffic can be inspected by Azure Firewall or NVA for Virtual Hubs deployed in select Azure regions. For available regions, please contact previewinterhub@microsoft.com.
+> Inter-region traffic **cannot** be inspected by Azure Firewall or NVA.
* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (User VPN (Point-to-site VPN), Site-to-site VPN, and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub will forward Internet-bound traffic to the Azure Firewall resource, Third-Party Security provider or **Network Virtual Appliance** specified as part of the Routing Policy.
While Private Traffic includes both branch and Virtual Network address prefixes
## Key considerations * You will **not** be able to enable routing policies on your deployments with existing Custom Route tables configured or if there are static routes configured in your Default Route Table.
-* Currently, Private Traffic Routing Policies are not supported in Hubs with Encrypted ExpressRoute connections (Site-to-site VPN Tunnel running over ExpressRoute Private connectivity).
-* In the gated public preview of Virtual WAN Hub routing policies, inter-regional traffic is only inspected by Azure Firewall or Network Virtual Appliances deployed in the Virtual WAN Hub for traffic between select Azure regions. For more information, reach out to previewinterhub@microsoft.com.
-* Routing Intent and Routing Policies currently must be configured via the custom portal link provided in Step 3 of **Prerequisites**. Routing Intents and Policies are not supported via Terraform, PowerShell, and CLI.
+* Currently, Private Traffic Routing Policies are not supported in Hubs with Encrypted ExpressRoute connections (Site-to-site VPN Tunnel running over ExpressRoute Private connectivity).
+* In the gated public preview of Virtual WAN Hub routing policies, inter-hub traffic between hubs in different Azure regions is dropped.
+* Routing Intent and Routing Policies currently must be configured via the custom portal link provided in Step 3 of **Prerequisites**. Routing Intents and Policies are not supported via Terraform, PowerShell, and CLI.
+ ## Prerequisites
Consider the following configuration where Hub 1 (Normal) and Hub 2 (Secured) ar
The following section describes common issues encountered when you configure Routing Policies on your Virtual WAN Hub. Read the below sections and if your issue is still unresolved, reach out to previewinterhub@microsoft.com for support. Expect a response within 48 business hours (Monday through Friday). ### Troubleshooting configuration issues * Make sure that you have gotten confirmation from previewinterhub@microsoft.com that access to the gated public preview has been granted to your subscription and chosen region. You will **not** be able to configure routing policies without being granted access to the preview. * After enabling the Routing Policy feature on your deployment, ensure you **only** use the custom portal link provided as part of your confirmation email. Do not use PowerShell, CLI, or REST API calls to manage your Virtual WAN deployments. This includes creating new Branch (Site-to-site VPN, Point-to-site VPN or ExpressRoute) connections.
This scenario is not supported in the gated public preview. However, reach out
No. Currently, branches and Virtual Networks will egress to the internet using an Azure Firewall deployed inside of the Virtual WAN hub the branches and Virtual Networks are connected to. You cannot configure a connection to access the Internet via the Firewall in a remote hub.
-### Why do I see RFC1918 prefixes advertised to my on-premises devices?
+### Why do I see RFC1918 aggregate prefixes advertised to my on-premises devices?
When Private Traffic Routing Policies are configured, Virtual WAN Gateways will automatically advertise static routes that are in the default route table (RFC1918 prefixes: 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16) in addition to the explicit branch and Virtual Network prefixes.
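To illustrate why on-premises devices see these aggregates: any branch or virtual network prefix that falls inside one of the three RFC1918 ranges is already covered by the advertised summaries. A minimal Python sketch of that containment check (illustrative only, not part of any Azure SDK):

```python
import ipaddress

# The RFC1918 aggregate prefixes that Virtual WAN gateways advertise when
# Private Traffic Routing Policies are configured (per the article above).
RFC1918_AGGREGATES = [
    ipaddress.ip_network(p)
    for p in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def covered_by_rfc1918(prefix: str) -> bool:
    """Return True if the prefix is already covered by an advertised aggregate."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(agg) for agg in RFC1918_AGGREGATES)

print(covered_by_rfc1918("10.1.2.0/24"))     # True
print(covered_by_rfc1918("192.168.5.0/24"))  # True
print(covered_by_rfc1918("172.32.0.0/16"))   # False: outside 172.16.0.0/12
```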
virtual-wan Virtual Wan About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-about.md
Previously updated : 05/20/2022 Last updated : 06/07/2022 # Customer intent: As someone with a networking background, I want to understand what Virtual WAN is and if it is the right choice for my Azure network.
Connectivity between the virtual network connections assumes, by default, a maxi
Virtual WAN allows transit connectivity between VPN and ExpressRoute. This implies that VPN-connected sites or remote users can communicate with ExpressRoute-connected sites. There is also an implicit assumption that the **Branch-to-branch flag** is enabled and BGP is supported in VPN and ExpressRoute connections. This flag can be located in the Azure Virtual WAN settings in Azure portal. All route management is provided by the virtual hub router, which also enables transit connectivity between virtual networks.
-### <a name="routing"></a>Custom Routing
+### <a name="routing"></a>Custom routing
Virtual WAN provides advanced routing enhancements: the ability to set up custom route tables, optimize virtual network routing with route association and propagation, logically group route tables with labels, and simplify numerous network virtual appliance (NVA) or shared-services routing scenarios.
If you have pre-existing routes in Routing section for the hub in the Azure port
## Gated public preview
-The following features are currently in gated public preview.
+The following features are currently in gated public preview. If, after working with the listed articles, you have questions or require support, please reach out to the contact alias that corresponds to the feature.
-| Feature | Description |
-| - | |
-| Routing intent and policies enabling Inter-hub security | This feature allows customers to configure internet-bound, private, or inter-hub traffic flow through the Azure Firewall. For more information, see [Routing intent and policies](../virtual-wan/how-to-routing-policies.md).|
-| Hub-to-hub over ER preview link | This feature allows traffic between 2 hubs traverse through the Azure Virtual WAN router in each hub and uses a hub-to-hub path instead of the ExpressRoute path (which traverses through the Microsoft edge routers/MSEE). For more information, see [Hub-to-hub over ER preview link](virtual-wan-faq.md#expressroute-bow-tie).|
-| BGP peering with a virtual hub | This feature provides the ability for the virtual hub to pair with and directly exchange routing information through Border Gateway Protocol (BGP) routing protocol. For more information, see [BGP peering with a virtual hub](create-bgp-peering-hub-portal.md) and [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).|
+| Feature | Description | Contact alias |
+| - | | |
+| Routing intent and policies enabling Inter-hub security | This feature allows you to configure internet-bound, private, or inter-hub traffic flow through the Azure Firewall. For more information, see [Routing intent and policies](../virtual-wan/how-to-routing-policies.md).| previewinterhub@microsoft.com |
+| Hub-to-hub over ER preview link | This feature allows traffic between two hubs to traverse the Azure Virtual WAN router in each hub, using a hub-to-hub path instead of the ExpressRoute path (which traverses through the Microsoft edge routers/MSEE). For more information, see [Hub-to-hub over ER preview link](virtual-wan-faq.md#expressroute-bow-tie).| previewpreferh2h@microsoft.com |
+| BGP peering with a virtual hub | This feature provides the ability for the virtual hub to pair with and directly exchange routing information through Border Gateway Protocol (BGP) routing protocol. For more information, see [BGP peering with a virtual hub](create-bgp-peering-hub-portal.md) and [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).| previewbgpwithvhub@microsoft.com |
+| Virtual hub routing preference | This feature allows you to influence routing decisions for the virtual hub router. For more information, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md). | Coming soon |
## <a name="faq"></a>FAQ